
Chapter 3, Processes




  1. Chapter 3, Processes

  2. 3.1 Process Concept • The process is the unit of work in a system. Both user and system work is divided into individual jobs, or processes. • As already defined, a process is a program in execution, or a program that has been given a footprint in memory and can be scheduled to run.

  3. Recall what multi-processing means • The importance of processes may not be immediately apparent because of terminology and also because of the progress of technology. • Keep in mind that multi-processing refers to multiple physical processors. • Also, most recent general purpose computer chips are in fact multi-core, which means that at the physical level, they are multi-processor systems on a single chip.

  4. Why processes are important • The importance of processes stems from the fact that all modern, general purpose systems are multi-tasking. • For the purposes of clarity, in this course the main topic is multi-tasking on a single physical processor. • The point is this: • In a multi-tasking system, each individual task exists as a process.

  5. Defining an O/S by means of processes • Chapter 1 concerned itself with a definition of an operating system. • Given the fundamental nature of processes, another possible definition presents itself: • The operating system is that collection of processes which manages and coordinates all of the processes on a machine.

  6. From the point of view of making the system run, the fact that the operating system is able to manage itself is fundamental. • From the point of view of getting any useful work done, the fact that the operating system manages user processes is fundamental.

  7. Why we are considering multi-tasking on one processor rather than multi-processing • One final note on the big picture before going on: • Managing processes requires managing memory and secondary storage, but it will become clear soon that getting work done means scheduling processes on the CPU. • As mentioned, we are restricting our attention to scheduling multiple processes, one after the other, on a single physical processor.

  8. In multiple core systems, to the extent possible, the problems of scheduling multiple jobs concurrently on more than one processor are handled in hardware. • However, the operating system for such a system would have to be “multiple-core” aware.

  9. This is a way of saying that modern operating systems are more complex because they are at least in part multi-processor operating systems. • The point is that you can’t begin to address the complexities of multi-processing until you’ve examined and come to an understanding of operating system functions in a uni-processing environment.

  10. What is a Process? • A process is a running or runnable program. • It has the six aspects listed on the next overhead. • In other words, a process is in a sense defined by a certain set of data values, and by certain resources which have been allocated to it. • At various times in the life of a process, the values representing these characteristics may be stored for future reference, or the process may be in active possession of them, using them.

  11. Text section = the program code • Program counter = instruction pointer = address or id of the current/next instruction • Register contents = current state of the machine • Process stack = method parameters, return addresses, local variables, etc. • Data section = global variables • Heap = dynamically allocated memory

  12. The term state has two meanings • The first meaning was given above as point 3. • Machine state = current contents of cpu/hardware (registers…) for a given process. • Although one of the aspects of a process, do not confuse machine state with process state.

  13. Process state refers to the scheduling status of the process • Systems may vary in the exact number and names of scheduling states. • As presented in this course, a straightforward operating system would have the five process (scheduling) states listed on the next overhead.

  14. Process scheduling states • New • Running • Waiting • Ready • Terminated

  15. Process life cycle • A process begins in the new state and ends in the terminated state. • In order to get from one to the other it has to pass through other states. • It may pass through the other states more than one time, cycling through periods when it is scheduled to run and periods when it is not running.

  16. In a classic system, there are six fundamental actions which trigger state transitions; they are listed on the following overheads. • The relationship between states and transitions is summarized in the state transition diagram which follows that list.

  17. 1. The operating system is responsible for bringing processes in initially. 2. It is also responsible for bringing jobs to an end, whether they completed successfully or not. 3. Interrupts can be viewed as temporarily ending the running of a given process.

  18. 4. Processes are scheduled to run by the operating system. 5. Processes “voluntarily” relinquish the processor and wait when they issue a request for I/O from secondary storage. 6. The successful completion of an I/O request makes the requesting processes eligible to run again.

  19. Simple State (Transition) Diagram
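The states and transitions in the diagram can be sketched in Java (the book’s language of choice). This is only an illustration of the bookkeeping, not code from any real kernel; the enum and method names are hypothetical.

```java
import java.util.EnumMap;
import java.util.EnumSet;
import java.util.Map;

// The five scheduling states, with the six classic transitions:
// admit, dispatch, interrupt, I/O request, I/O completion, exit.
enum ProcessState {
    NEW, READY, RUNNING, WAITING, TERMINATED;

    private static final Map<ProcessState, EnumSet<ProcessState>> LEGAL =
        new EnumMap<>(ProcessState.class);
    static {
        LEGAL.put(NEW,        EnumSet.of(READY));      // 1. admitted by the O/S
        LEGAL.put(READY,      EnumSet.of(RUNNING));    // 4. dispatched by the scheduler
        LEGAL.put(RUNNING,    EnumSet.of(READY,        // 3. interrupted
                                         WAITING,      // 5. issued an I/O request
                                         TERMINATED)); // 2. brought to an end
        LEGAL.put(WAITING,    EnumSet.of(READY));      // 6. I/O request completed
        LEGAL.put(TERMINATED, EnumSet.noneOf(ProcessState.class));
    }

    boolean canTransitionTo(ProcessState next) {
        return LEGAL.get(this).contains(next);
    }
}
```

Note that there is no edge from READY or WAITING back to NEW, and no edge out of TERMINATED, just as in the diagram.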

  20. How does the operating system keep track of processes and states? • In a sense, what the operating system does is manage processes. • Inside the operating system software it is necessary to maintain representations of processes. • In other words, it’s necessary to have data structures which contain the following data: • The definition of the process—its aspects and resources • The process’s state—what state it is in, as managed by the operating system in its scheduling role

  21. What is a process control block? • The Process Control Block (PCB) is the representation of a process in the O/S. • In other words, it is a data structure (like an object) containing fields (instance variables) which define the process and its state. • As will soon become apparent, PCB’s don’t exist in isolation. • They may be stored in linked collections of PCB’s where the collection and the linking implicitly define the process’s state.

  22. The PCB contains the following 7 pieces of information. • In effect, these 7 pieces consist of technical representations of the 6 items which define a process, plus process state. • Current process state = new, running, waiting, ready, terminated • Program counter value = current/next instruction • CPU general purpose register contents = machine state—saved and restored upon interrupt

  23. 4. CPU scheduling info = process priority and pointers to scheduling queues 5. Memory management info = values of base and limit registers 6. Accounting info = job id, user id, time limit, time used, etc. 7. I/O status info = I/O devices allocated to process, open files, etc.
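The seven pieces of information above map naturally onto the fields of a data structure. The following is a hedged sketch of such a PCB in Java; every field name is illustrative, and a real O/S would hold considerably more per field.

```java
import java.util.List;

// Illustrative Process Control Block: one field (or group of fields)
// per numbered item in the list above. All names are hypothetical.
class ProcessControlBlock {
    enum State { NEW, READY, RUNNING, WAITING, TERMINATED }

    State state;                          // 1. current process state
    long programCounter;                  // 2. current/next instruction
    long[] registers = new long[16];      // 3. machine state, saved/restored on interrupt
    int priority;                         // 4. CPU scheduling info
    long baseRegister, limitRegister;     // 5. memory management info
    int jobId, userId; long cpuTimeUsed;  // 6. accounting info
    List<String> openFiles;               // 7. I/O status info
    ProcessControlBlock next;             // link to the next PCB in its queue
}
```

The `next` field is what allows PCB’s to be chained into the linked queue collections described on the previous overheads.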

  24. This is a graphical representation of a PCB, indicating how it might be linked with others

  25. Threads • You may already have encountered the term thread in the context of Java programming. • Threads come up in this operating systems course for two reasons: • The thread concept exists in modern operating systems • This is an operating systems book which relies on knowledge of Java rather than C

  26. On the one hand, this is an advantage. • Threads are a concept which is directly accessible in Java. • On the other hand, it means that threads sort of drop in out of the blue. • Consider the following on that point…

  27. Processes and threads • What has been referred to up to this point as a process can also be called a heavyweight thread. • It is also possible to refer to lightweight threads. • Lightweight threads are what is meant when simply using the term thread in Java. • Not all systems necessarily support lightweight threads, but the ubiquity of Java tells you how widespread lightweight threads are in system software.

  28. What is a lightweight thread? • The term (lightweight) thread means that more than one execution path can be started through the code of a process (heavyweight thread). • Each lightweight thread will have its own data, but it will share the same code with other lightweight threads.
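The “own data, shared code” idea can be seen directly in a few lines of Java. Both threads below execute the same run() method, but each Runnable object carries its own counter; the class name is made up for this sketch.

```java
// Two lightweight threads start separate execution paths through the
// *same* run() code; each Counter object holds its own data.
class ThreadDemo {
    static class Counter implements Runnable {
        long hits;                               // each thread's own data
        public void run() {                      // code shared by both threads
            for (int i = 0; i < 1_000; i++) hits++;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter a = new Counter(), b = new Counter();
        Thread t1 = new Thread(a), t2 = new Thread(b);
        t1.start(); t2.start();                  // two execution paths begin
        t1.join();  t2.join();                   // wait for both to finish
        System.out.println(a.hits + " " + b.hits); // prints "1000 1000"
    }
}
```

Each thread incremented only its own `hits` field even though both ran identical code: two activations of the same program, as in the warp-and-woof picture on the next overhead.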

  29. The origin of the terminology and its meaning can be envisioned pictorially. • Let the picture below represent the warp (vertical threads) and woof (horizontal threads) of woven cloth.

  30. The woof corresponds to the lines of code in a program. • The warp corresponds to the so-called “threads”, the multiple execution paths through the code • This picture represents two activations of the same program.

  31. A concrete example: A word processor might have separate threads for character entry, spell checking, etc. • It is not that the character entry routine/module (method) calls spell checking, for example. • When the user opens a document, a thread becomes active for character entry.

  32. When the user selects the spell checking option in the menu, a separate thread of execution in a different part of the same program is started. • These two threads can run concurrently. • They don’t run simultaneously, but the user enters characters so slowly that it is possible to run spell checking “at the same time”.

  33. The relationship between process scheduling and thread scheduling • In effect, threads are like processes in microcosm. • This accounts for the lightweight/heavyweight thread terminology. • They differ in the fact that processes run different program code while threads share program code.

  34. The operating system schedules processes so that they run concurrently. • They do not run simultaneously. • Each process runs for a short span of time. • It then waits while another process runs for a short span of time. • From the user’s (human speed) point of view, multiple processes are running “at the same time”.

  35. The point is that an operating system can also support threads. • The implementation of the JVM on a given system depends on that system’s implementation of threads. • Within each process, threads are run concurrently, just as the processes themselves are run concurrently.

  36. To repeat, threads are processes in microcosm. • Again, this is the one key advantage of learning operating systems from a book which uses Java instead of C. • You can’t write operating system internals.

  37. However, you can write threaded code with a familiar programming language API, rather than having to learn an operating system API. • All of the challenges of correct scheduling exist for Java programs, and the tools for achieving this are built into Java. • You can learn some of the deeper aspects of actual Java programming at the same time that you learn the concepts which they are based on, which come from operating system theory.
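One of the built-in Java tools alluded to above is the synchronized keyword. The sketch below shows the kind of scheduling hazard that arises when two threads update shared data; without synchronization, interleaved count++ operations can lose updates. The class name is hypothetical.

```java
// Java's built-in mutual exclusion: synchronized methods guarantee
// that only one thread at a time updates the shared counter.
class SyncDemo {
    private long count = 0;

    synchronized void increment() { count++; }   // one thread at a time
    synchronized long count()     { return count; }

    public static void main(String[] args) throws InterruptedException {
        SyncDemo d = new SyncDemo();
        Runnable work = () -> { for (int i = 0; i < 100_000; i++) d.increment(); };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(d.count());   // always 200000 with synchronized
    }
}
```

Remove `synchronized` from increment() and the printed total can come out below 200000, which is exactly the correctness challenge that O/S-level scheduling theory addresses.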

  38. 3.2 Process Scheduling • Multi-programming (= concurrent batch jobs) objective = maximum CPU utilization—have a process running at all times • Multi-tasking (= interactive time sharing) objective = switch between jobs quickly enough to support multiple users in real time • Process scheduler = the part of the O/S that picks the next job to run

  39. One aspect of scheduling is system driven, not policy driven: Interrupts force a change in what job is running • Aside from handling interrupts as they occur, it is O/S policy, the scheduling algorithm, that determines what job is scheduled • The O/S maintains data structures, including PCB’s, which define current scheduling state • There are privileged machine instructions which the O/S can call in order to switch the context (move one job out and another one in)

  40. Scheduling queues = typically some type of linked list data structure • Job queue = all processes in the system—some may still be in secondary storage—may not have been given a memory footprint yet • Ready queue = processes in main memory that are ready and waiting to execute (not waiting for I/O, etc.) • I/O device (wait) queues = processes either in possession of or waiting for I/O device service
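The ready queue and a device wait queue can be modeled as linked structures of PCB-like records. This is a sketch only; the names are invented, and a real kernel links the PCB’s themselves rather than using a library collection.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Sketch of two scheduling queues and the movement between them.
class SchedulingQueues {
    static class Pcb { final int pid; Pcb(int pid) { this.pid = pid; } }

    static Deque<Pcb> readyQueue    = new ArrayDeque<>(); // ready to execute
    static Deque<Pcb> diskWaitQueue = new ArrayDeque<>(); // waiting on device I/O

    // A running process that issues an I/O request joins a device queue...
    static void requestIo(Pcb running) { diskWaitQueue.addLast(running); }

    // ...and rejoins the ready queue when the I/O completes.
    static void ioComplete() { readyQueue.addLast(diskWaitQueue.removeFirst()); }
}
```

The job queue is not shown; it would hold every PCB in the system, including those not yet given a memory footprint.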

  41. Queuing Diagram of Process Scheduling

  42. Diagram key • Rectangles represent queues • Circles represent resources • Ovals represent events external to the process • Events internal to the process which trigger a transition are simply indicated by the queue that the process ends up in • Upon termination the O/S removes a process’s PCB from all queues and deallocates all resources held

  43. General Structure of Individual O/S Queues

  44. Schedulers • The term scheduler refers to a part of the O/S software • In a monolithic system it may be implemented as a module or routine. • In a non-monolithic system, a scheduler may run as a separate process.

  45. Long term scheduler—this is the scheduler you usually think of second, not first, although it acts first • Picks jobs from secondary storage to enter CPU ready queue • Controls degree of multiprogramming (total # of jobs in system) • Responsible for stability—number of jobs entering should = number of jobs finishing • Responsible for job mix, CPU bound vs. I/O bound • Runs infrequently; can take some time to choose well

  46. Short term scheduler, a.k.a. the CPU scheduler, the scheduler you usually think of first • This module implements the algorithm for picking processes from the ready queue to give the CPU to • This is the heart of interactive multi-tasking • This runs relatively frequently • It has to be fast so you don’t waste CPU time on switching overhead
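The short term scheduler’s core decision, pick the next process from the ready queue and give it the CPU, can be sketched as follows. FIFO round-robin stands in here for whatever algorithm a given O/S actually implements, and all names are hypothetical.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Core loop of a short term (CPU) scheduler, sketched as round-robin.
class ShortTermScheduler {
    private final Deque<Integer> readyQueue = new ArrayDeque<>();

    void makeReady(int pid)      { readyQueue.addLast(pid); }

    // Dispatch: give the CPU to the process at the head of the ready queue.
    int dispatch()               { return readyQueue.removeFirst(); }

    // Time quantum expires: the interrupted process goes to the back.
    void quantumExpired(int pid) { readyQueue.addLast(pid); }
}
```

Because dispatch() runs on every switch, it must be cheap: here it is a constant-time queue removal, which reflects the slide’s point that the short term scheduler has to be fast.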

  47. Medium term scheduler—the one you usually think of last • Allows jobs to be swapped out to secondary storage if multi-programming level is too high • Not all systems have to have long or medium term schedulers • Simple Unix just had a short term scheduler. • The multi-programming level was determined by the number of attached terminals

  48. The relationship between the short, medium, and long term schedulers

  49. Context Switch—Switching CPU from Process to Process—The Short Term Scheduler at Work

  50. Context Switching is the Heart of Short Term Scheduling • Context switching has to be fast. • It is pure overhead cost • In simple terms, it is supported by machine instructions which load and save all register values for a process at one time • It frequently has hardware support—such as multiple physical registers on the chip, so that a context switch means switching between register sets, not reading and writing memory
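What the privileged save/restore instructions accomplish can be modeled in a few lines. This is a toy: a real context switch is done by machine instructions, often with hardware register-set switching, not by copying arrays in Java.

```java
// Toy model of a context switch: save the outgoing process's
// "registers" into its PCB, then load the incoming process's values.
class ContextSwitch {
    static long[] cpuRegisters = new long[8];   // stand-in for the register set

    static class Pcb { long[] savedRegisters = new long[8]; }

    static void switchContext(Pcb out, Pcb in) {
        System.arraycopy(cpuRegisters, 0, out.savedRegisters, 0, 8); // save state
        System.arraycopy(in.savedRegisters, 0, cpuRegisters, 0, 8);  // restore state
    }
}
```

Every instruction spent in switchContext is pure overhead: no user process makes progress while it runs, which is why hardware support for switching whole register sets at once matters.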
