Thread


Presentation Transcript


  1. Distributed operating system By, LAKSHMI.S, ASSISTANT PROFESSOR, DEPARTMENT OF COMPUTER SCIENCE, SRI ADI CHUNCHANAGIRI WOMEN’S COLLEGE, CUMBUM.

  2. Introduction Multiprocessor Operating System. A multiprocessor system contains more than one processor, working in parallel to perform the required operations. The processors share physical memory, computer buses, clocks, and peripheral devices. Structure of multiprocessing operating systems: an operating system that uses more than one CPU within a single computer system to improve performance is called a multiprocessor operating system.

  3. Multiple CPUs are interconnected so that a job can be divided among them for faster execution. When a job finishes, the results from all CPUs are collected and compiled into the final output. Jobs may need to share main memory, and they may also share other system resources. Multiple CPUs can also be used to run multiple jobs simultaneously. For example, UNIX is one of the most widely used multiprocessing operating systems. Types The different types of multiprocessor operating systems are as follows −
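The divide-the-job, collect-the-results pattern described above can be sketched with Python's `multiprocessing` module, which runs workers as separate OS processes. This is a minimal illustration; the function names (`partial_sum`, `parallel_sum`) are invented for the example.

```python
from multiprocessing import Pool

def partial_sum(chunk):
    # Each worker process computes its share of the job.
    return sum(chunk)

def parallel_sum(data, workers=4):
    # Split the job into roughly equal chunks, one per worker.
    size = max(1, len(data) // workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with Pool(workers) as pool:
        # Results from all workers are collected and combined
        # into the final output.
        return sum(pool.map(partial_sum, chunks))
```

On a multiprocessor machine the chunks genuinely execute on different CPUs; on a single CPU the same code still runs correctly, just without the speedup.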

  4. Asymmetric Multiprocessor − Each processor is assigned specific tasks, and a master processor controls the entire system; the processors are organized in a master-slave relationship. Symmetric Multiprocessor − Every processor runs an identical copy of the OS, and the processors communicate with one another. All processors are connected as peers, so there is no master-slave relationship. Shared Memory Multiprocessor − As the name indicates, all central processing units share a common memory. Uniform Memory Access Multiprocessor (UMA) − All processors access all of memory at a uniform speed.

  5. Distributed Memory Multiprocessor − A computer system consisting of a number of processors, each with its own local memory, connected through a network; that is, every processor has its own private memory. NUMA Multiprocessor − NUMA stands for Non-Uniform Memory Access Multiprocessor. Some regions of memory can be accessed at a faster rate than others, depending on which processor makes the access. Operating system design issues The operating system may be implemented with the help of several structures. The structure of the operating system is mostly determined by how its many common components are integrated and merged into the kernel. Various structures are used in the design of the operating system.

  6. Threads: A thread is a single sequential flow of execution of tasks within a process, so it is also known as a thread of execution or a thread of control. Threads execute inside a process in any operating system. • Target program: The target program is the output of the code generator. The output can be: • a) Assembly language: allows subprograms to be compiled separately. • b) Relocatable machine language: makes the process of code generation easier. • c) Absolute machine language: can be placed at a fixed location in memory and executed immediately. • Memory management: • During code generation, symbol-table entries have to be mapped to actual memory addresses, and labels have to be mapped to instruction addresses.

  7. Mapping names in the source program to addresses of data is done cooperatively by the front end and the code generator. • Local variables are stack-allocated in the activation record, while global variables are kept in the static area. • Instruction selection: • The instruction set of the target machine should be complete and uniform. • When considering the efficiency of the target machine, instruction speed and machine idioms are important factors. • The quality of the generated code is determined by its speed and size. • Example: • The three-address code is: • a := b + c • d := a + e
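The two three-address instructions above can be evaluated mechanically, one statement at a time. The sketch below is a toy evaluator for this `x := y + z` form only (the helper name `run_three_address` is invented for the example, not part of any real compiler):

```python
def run_three_address(code, env):
    # Each instruction has the form "x := y + z".
    for line in code:
        dest, expr = line.split(":=")
        y, z = expr.split("+")
        # Store the result of y + z into the destination name.
        env[dest.strip()] = env[y.strip()] + env[z.strip()]
    return env

# With b = 1, c = 2, e = 4: a becomes 3, then d becomes 3 + 4 = 7.
env = run_three_address(["a := b + c", "d := a + e"],
                        {"b": 1, "c": 2, "e": 4})
```

Note that the second instruction reads `a`, which the first instruction produced, so the evaluation order matters, which motivates the next slide.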

  8. Evaluation order: • The efficiency of the target code can be affected by the order in which the computations are performed. Some computation orders need fewer registers to hold intermediate results than others. • Threads • A thread is a single sequential flow of execution of tasks within a process, so it is also known as a thread of execution or a thread of control. Threads execute inside a process in any operating system, and a process can contain more than one thread. Each thread of the same process uses a separate program counter and its own stack of activation records and control blocks. A thread is often referred to as a lightweight process.

  9. A process can be split into many threads. For example, in a browser, each tab can be viewed as a thread; MS Word uses many threads, formatting text in one thread, processing input in another, and so on. Types of Threads In the operating system, there are two types of threads: kernel-level threads and user-level threads. User-level thread The operating system does not recognize user-level threads. User threads are easy to implement, and they are implemented entirely in user space. If a user-level thread performs a blocking operation, the whole process is blocked. The kernel knows nothing about user-level threads and manages them as if they were single-threaded processes. Examples: Java threads, POSIX threads, etc.
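The browser-tab and word-processor analogy can be illustrated with Python's `threading` module: one process, two concurrent flows of control sharing the same address space. This is a minimal sketch (in CPython these are kernel-backed threads, not pure user-level threads); the function names are invented for the example.

```python
import threading

results = {}  # shared memory: both threads write into one dict

def format_text(doc):
    # One thread handles formatting.
    results["formatted"] = doc.upper()

def count_words(doc):
    # Another thread processes input concurrently.
    results["words"] = len(doc.split())

doc = "hello threaded world"
workers = [threading.Thread(target=format_text, args=(doc,)),
           threading.Thread(target=count_words, args=(doc,))]
for t in workers:
    t.start()
for t in workers:
    t.join()  # wait for both threads to finish
```

Both threads read the same `doc` and write to the same `results` without any copying, which is exactly what makes threads lighter than separate processes.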

  10. Advantages of user-level threads User-level threads are easier to implement than kernel-level threads. They can be used on operating systems that do not support threads at the kernel level. They are faster and more efficient: context-switch time is shorter than for kernel-level threads. They do not require modifications to the operating system. Their representation is very simple: the registers, program counter, stack, and a small thread control block are stored in the address space of the user-level process. Threads can be created, switched, and synchronized without kernel intervention.

  11. Kernel-level thread Kernel-level threads are recognized and managed by the operating system. The system keeps a thread control block for each thread and a process control block for each process. The kernel-level thread is implemented by the operating system: the kernel knows about all the threads and manages them, and it offers system calls to create and manage threads from user space. Kernel threads are more difficult to implement than user threads, and context-switch time is longer for them. If a kernel thread performs a blocking operation, another thread in the same process can continue execution. Examples: Windows, Solaris.
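The key property above, that one kernel-level thread can block while its siblings keep running, can be observed directly with Python's kernel-backed threads. A minimal sketch, where a 0.2-second sleep stands in for any blocking operation:

```python
import threading
import time

events = []  # ordered record of which thread finished when

def blocker():
    # This thread performs a blocking operation.
    time.sleep(0.2)
    events.append("blocker done")

def worker():
    # A sibling thread, independently scheduled by the kernel,
    # keeps running while the other thread is blocked.
    events.append("worker done")

t1 = threading.Thread(target=blocker)
t2 = threading.Thread(target=worker)
t1.start()
t2.start()
t1.join()
t2.join()
```

The worker finishes first even though it was started second, because the kernel schedules it while the blocker sleeps. With pure user-level threads and no kernel support, the sleep would have stalled the whole process.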

  12. Advantages of kernel-level threads The kernel is fully aware of all threads, so the scheduler may decide to give more CPU time to a process with a large number of threads. Kernel-level threads are good for applications that block frequently. Scheduler Activation Scheduler activations are a threading mechanism that, when implemented in an operating system's process scheduler, provides kernel-level thread functionality with user-level thread flexibility and performance.

  13. One technique for communication between the user-level thread library and the kernel is known as scheduler activation. It works as follows: the kernel provides an application with a set of virtual processors (LWPs), and the application can schedule user threads onto any available virtual processor. The kernel must also inform the application about certain events; this procedure is known as an upcall. Upcalls are handled by the thread library with an upcall handler, and upcall handlers must run on a virtual processor. One event that triggers an upcall occurs when an application thread is about to block. In this scenario, the kernel makes an upcall to the application, informing it that a thread is about to block and identifying the specific thread. The kernel then allocates a new virtual processor to the application. The application runs an upcall handler on this new virtual processor, which saves the state of the blocking thread and relinquishes the virtual processor on which the blocking thread was running. The upcall handler then schedules another eligible thread on the new virtual processor. When the event that the blocking thread was waiting for occurs, the kernel makes another upcall to the thread library, informing it that the previously blocked thread is now eligible to run.
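The upcall flow above can be modelled as a toy simulation. Everything here is invented for illustration (`Kernel`, `ThreadLibrary`, and the method names are not real APIs); real scheduler activations are a kernel mechanism, and the point of the sketch is only the direction of the calls: the kernel calls *up* into the thread library on blocking events.

```python
class ThreadLibrary:
    """User-level thread library that receives upcalls."""
    def __init__(self):
        self.log = []

    def upcall_block(self, thread_id):
        # Upcall handler: save the blocking thread's state and
        # schedule another eligible thread on the virtual processor.
        self.log.append(f"thread {thread_id} blocked; scheduling another")

    def upcall_unblock(self, thread_id):
        # Upcall handler: the previously blocked thread is runnable again.
        self.log.append(f"thread {thread_id} eligible to run")

class Kernel:
    """Simulated kernel that notifies the library via upcalls."""
    def __init__(self, library):
        self.library = library

    def thread_blocks(self, thread_id):
        # The kernel detects an imminent block and makes an upcall,
        # identifying the specific thread.
        self.library.upcall_block(thread_id)

    def event_completes(self, thread_id):
        # The awaited event occurred: another upcall marks the
        # thread eligible to run.
        self.library.upcall_unblock(thread_id)

lib = ThreadLibrary()
kernel = Kernel(lib)
kernel.thread_blocks(7)
kernel.event_completes(7)
```

This inversion, kernel calling into user space, is what distinguishes scheduler activations from the usual system-call pattern where user space always calls into the kernel.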

  14. Thank you
