Threads

So far, we have assumed that a process has a single thread of execution, i.e. a single sequence of executed instructions. In many applications, however, several parts of the same process can run in parallel. These parts share the memory address space and other resources of the process. In a multithreading system, a process may therefore be composed of several threads that run in parallel. Memory space, I/O devices, etc. are assigned to the process as a whole, whereas CPU time is assigned to an individual thread. Each thread has its own state and context.

Advantages of Multithreading

- A process can remain responsive even if one part of it is blocked or performing a long operation.
- Creating a new thread within an existing process requires less time and fewer resources than spawning a new process. Context switching between threads of the same process is also faster than switching to another process.
- In a multiprocessor system, threads can be assigned to different processors (or cores), thus speeding up the execution of a given process.

Multiprocessor Scheduling

We consider the bus-based architecture shown below, which is suitable for a small number of processors.

[Figure: two CPUs, each with a private cache, connected to a shared RAM over a common bus.]

Two possible approaches:

- Master-slave: a master CPU runs the kernel processes; the other CPUs run user processes and direct their system calls to the master. This simplifies coordination between processors, but the master becomes a performance bottleneck.
- Symmetric multiprocessing (SMP): the code and data of the OS are stored in shared memory, and any CPU can run the system calls. This avoids overloading a master, but more complex coordination is required. For example, what happens if two CPUs select the same shared memory area?

For multiprocessor scheduling, a simple approach is to have a common ready queue (or multilevel queue) for all processors. As soon as a processor becomes idle, it extracts a ready process from this queue. This method has the advantage of load balancing: no CPU remains idle while others are overloaded.

Example: A dual-processor system uses FCFS scheduling with a common ready queue. Both processors become idle when the queue contains processes with processing times of 2.4, 6, 6.7, 7.5, and 9.7. Find the average waiting time for these processes and compare it with the case of a single processor of double speed.

A process will repeatedly become blocked and then run again. Migrating a process from one CPU to another usually incurs the overhead of refilling the cache. Thus, processor affinity may be preferred: i.e. an attempt is made to keep each process on the same processor. We then face two problems:

- Assignment of processes (or threads) to CPUs.
- How each CPU schedules its own share of processes.

Load balancing is more difficult in this case.
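As an illustration of the threads discussed at the start of these notes, the following minimal sketch (assuming a POSIX system with the pthreads library; the worker function is purely illustrative) creates a second thread inside a process. The new thread shares the address space of the process, and creating it is much cheaper than spawning a new process.

    #include <pthread.h>
    #include <stdio.h>

    /* Thread body: runs in parallel with main(), sharing its address space. */
    static void *worker(void *arg)
    {
        int id = *(int *)arg;
        printf("worker thread %d running\n", id);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        int id = 1;

        /* Creating a thread is cheaper than spawning a process: no new
           address space or resource set has to be built. */
        if (pthread_create(&tid, NULL, worker, &id) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
        pthread_join(tid, NULL);    /* wait for the worker thread to finish */
        return 0;
    }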
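Referring back to the SMP coordination problem and the common ready queue: one common remedy is to protect the shared queue with a lock, so that two CPUs can never dequeue the same process. A minimal sketch, assuming a hypothetical pcb structure, a linked-list ready queue, and a pthreads mutex standing in for a kernel spinlock:

    #include <pthread.h>
    #include <stddef.h>

    /* Hypothetical process control block and shared ready queue. */
    struct pcb {
        int pid;
        struct pcb *next;
    };

    static struct pcb *ready_head;   /* head of the common ready queue (FIFO) */
    static pthread_mutex_t queue_lock = PTHREAD_MUTEX_INITIALIZER;

    /* Each idle CPU calls this; the lock ensures that two CPUs never
       extract the same PCB from the shared queue. */
    struct pcb *dequeue_ready(void)
    {
        pthread_mutex_lock(&queue_lock);
        struct pcb *p = ready_head;
        if (p != NULL)
            ready_head = p->next;
        pthread_mutex_unlock(&queue_lock);
        return p;                    /* NULL if the queue was empty */
    }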
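One possible working of the dual-processor FCFS example above, assuming both CPUs are idle at time 0 and the processes are dispatched in the listed order: CPU 1 takes the 2.4 job and CPU 2 takes the 6 job (waiting times 0 and 0); at time 2.4 CPU 1 takes the 6.7 job (wait 2.4) and finishes at 9.1; at time 6 CPU 2 takes the 7.5 job (wait 6); at time 9.1 CPU 1 takes the 9.7 job (wait 9.1). The average waiting time is (0 + 0 + 2.4 + 6 + 9.1)/5 = 3.5. With a single processor of double speed the service times halve to 1.2, 3, 3.35, 3.75, and 4.85, giving waiting times 0, 1.2, 4.2, 7.55, and 11.3, i.e. an average of 24.25/5 = 4.85. Under these assumptions the two slower processors give the shorter average waiting time.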
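Processor affinity, discussed on the last slide, is usually handled inside the scheduler, but it can also be requested explicitly by user code. The sketch below assumes a Linux system and uses sched_setaffinity() to pin the calling process to CPU 0; other systems provide different interfaces.

    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>

    int main(void)
    {
        cpu_set_t mask;

        CPU_ZERO(&mask);
        CPU_SET(0, &mask);                  /* allow only CPU 0 */

        /* pid 0 means "the calling process". */
        if (sched_setaffinity(0, sizeof(mask), &mask) != 0) {
            perror("sched_setaffinity");
            return 1;
        }
        printf("process pinned to CPU 0\n");
        return 0;
    }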