Sahithyan's S3 — Operating Systems
Threads
A thread is the basic unit of CPU utilization, consisting of a program counter, a stack, and a set of registers. Threads are sometimes called lightweight processes because they have some of the properties of processes but are more efficient to create and manage.
Single vs Multi-Threaded Processes
A traditional process has a single thread of control, while a multi-threaded process has multiple threads of control within the same address space.
Thread Components
Each thread includes:
- Thread ID
- Program counter
- Register set
- Stack
Threads within the same process share:
- Code section
- Data section
- OS resources such as open files and signals
Benefits of Multithreading
- Responsiveness: An application can remain responsive to user input even while another thread performs a long-running or blocking operation.
- Resource sharing: Threads share the memory and resources of the process they belong to, making communication more efficient.
- Economy: Creating and managing threads requires fewer system resources than creating processes.
- Scalability: Multithreaded applications can take advantage of multiprocessor architectures more effectively.
Thread Models
User-Level Threads (ULT)
- Managed entirely in user space by a thread library, with no kernel support (a thread API such as POSIX Pthreads may be implemented as either a user-level or a kernel-level library; early JVM "green threads" were user-level)
- The kernel is not aware of the existence of these threads
- Fast to create and manage, as no kernel intervention is required
- However, if one ULT performs a blocking system call, the entire process blocks
Kernel-Level Threads (KLT)
- Supported and managed directly by the operating system
- Creation and management are more expensive than ULTs
- If one thread blocks, another thread can still be scheduled
- Provides true parallelism on multiprocessor systems
Hybrid Models
- Combine user-level threads with kernel-level threads
- Multiple user-level threads are multiplexed onto a smaller or equal number of kernel threads — the Many-to-Many model
- Contrast with the Many-to-One model (all user threads mapped to a single kernel thread, i.e. pure ULT) and the One-to-One model (each user thread backed by its own kernel thread, i.e. pure KLT)
Thread Synchronization
Since threads share resources, synchronization is critical:
- Race Conditions: Occur when multiple threads access shared data concurrently, with at least one thread modifying the data.
- Critical Section: A code segment where shared resources are accessed.
- Mutual Exclusion: Ensuring only one thread executes in the critical section at a time.
Synchronization Mechanisms
- Mutex Locks: Basic synchronization tool ensuring mutual exclusion
- Semaphores: More sophisticated synchronization constructs that can also manage resource allocation
- Monitors: High-level synchronization constructs that encapsulate both data and operations
- Condition Variables: Allow threads to wait for specific conditions to be met
Thread Scheduling
Scheduling threads introduces new considerations beyond process scheduling:
- Contention Scope: How threads compete for CPU time
  - Process-contention scope (PCS): Threads compete within the process
  - System-contention scope (SCS): Threads compete system-wide
- Allocation Domain: Where threads can be scheduled
  - Local scheduling: Threads are bound to specific processors
  - Global scheduling: Threads can be scheduled on any available processor
Thread Implementation Challenges
- Thread Local Storage: Providing per-thread data storage
- Thread Cancellation: Safely terminating threads
- Signal Handling: Determining which thread should handle signals
- Thread Pooling: Pre-creating threads to reduce overhead
- Thread Priority Inversion: Lower priority threads holding resources needed by higher priority threads
Thread Libraries
- POSIX Threads (Pthreads): IEEE standard, widely used in UNIX systems
- Win32 Threads: Native Windows threading API
- Java Threads: Part of the Java language, with built-in synchronization support
Emerging Concepts
- Fibers: User-mode scheduled threads with cooperative multitasking
- Green Threads: User-level threads scheduled by a virtual machine
- Coroutines: Computer program components that generalize subroutines for non-preemptive multitasking