Salsa Theory Debrief 2


Presentation Transcript


1. Salsa Theory Debrief 2
COMP346 - Operating Systems, Tutorial 6
Edition 1.1, June 19, 2002
Serguei A. Mokhov, mokhov@cs.concordia.ca

2. Topics
• Device Drivers
• Mutual Exclusion
• Some Deadlock
• Processes
• fork() and exec() System Calls
• Processes vs. Threads, Take II
• Message Passing vs. System Calls

3. Device Drivers
• Provide a common interface (a layer of abstraction)
• Execute privileged instructions
• Are part of the OS (why? :-))
• Have two halves:
  • Upper half: exposes the common interface (item 1) to the rest of the OS
  • Lower half:
    • Provides the device-specific implementation of that interface
    • Handles interrupts

4. Device Drivers (2)
• Maps the device-independent I/O functions onto the device-dependent ones:
  • Upper half: works at the CPU's speed; open(), close(), read(), write(), …
  • Buffers between the halves keep the data in sync
  • Lower half: operates at the device's speed; interrupt handling, dev_open(), dev_close(), …

5. Mutual Exclusion
• Possible implementations:
  • General (counting) semaphores on top of binary ones
  • Using interrupts
• Drawbacks:
  • Mutual exclusion (ME) violation
  • Starvation

6. Drivers
• Example: the LP (line printer) driver

7. Mutual Exclusion
• Binary semaphores were implemented as separate entities for efficiency reasons [1].
• General semaphores were implemented using binary ones.

8. Mutual Exclusion

P() {
  wait(mutex);
  count = count - 1;
  if (count < 0) {
    signal(mutex);
    wait(delay);
  } else {
    signal(mutex);
  }
}

V() {
  wait(mutex);
  count = count + 1;
  if (count <= 0) {
    signal(delay);
  }
  signal(mutex);
}

• What are the problems with the above?
• Consider two V() and two P() operations executing concurrently: PVPV

9. Mutual Exclusion
(Same P() and V() code as on the previous slide.)
• One P ran and got interrupted (after signal(mutex), but before wait(delay)).
• Then one V ran till completion.
• Then the second P did as the first one.
• Then the second V ran till completion.
• Then… ? Imagine 100 P's and V's…

10. Mutual Exclusion
• To avoid the possible problem of the previous solution, we introduce interrupt operations (cli disables interrupts, sti re-enables them).
• Looks OK, everybody is happy. BUT… what about multiprocessor systems?

P() {
  cli;
  count = count - 1;
  if (count < 0) {
    wait(delay);
  }
  sti;
}

V() {
  cli;
  count = count + 1;
  if (count <= 0) {
    signal(delay);
  }
  sti;
}

11. Some Scheduling
• The scheduler places processes blocked on a semaphore in a queue.
• Semaphore queues may be handled using:
  • FIFO
  • A stack (LIFO)
• How about the fairness and starvation properties of each method?

12. Memory Stuff
• Tutorials 7 and 8 from Paul and Tony
• Locality of Reference model:
  • Keep in memory the pages for a given scope of code (e.g., a function), a locality, together with all the instructions and the variables (local or global) used by those instructions, to avoid thrashing.
  • There can be several localities.
  • Localities change over time and can overlap.

13. Memory Stuff
• Locality: self-referencing code (loops).
• Prof. Aiman Hanna: "Locality in this context just means that the process is referencing the same piece of code (that is why the page that is used will be used many times). The most obvious way to have code locality is loops, since the executed lines may go again and again. Since these lines belong to the same page(s), these pages will be kept (not considered for replacement). This is why LFU will behave best at these locality points."

14. Deadlock
• A system:
  • Three concurrent processes
  • Five resources of the same type
  • Each process needs to acquire three resources to terminate.
  • Upon termination a process releases all its resources back to the system.
• Is there a possibility of a deadlock?

15. Deadlock [diagram: p1, p2, p3]

16. Deadlock [diagram: p1, p2, p3]

17. Deadlock [diagram: p1, p2, p3]

18. Deadlock [diagram: p1, p2, p3]

19. Deadlock [diagram: p1, p2, p3, with two pending requests marked "?"]

20. Processes Again
• fork() and exec(): review the COMP229 slides from the past term.

21. Threads vs. Processes, Take II
• Do threads really "speed up" a process's execution?
  • Define "speed up" and the kind of threads (user, kernel, LWP) you are talking about.
  • Consider the task of the running program: does it exploit parallelism?

22. Threads vs. Processes, Take II
• In which circumstances is one better off with processes rather than threads? Answer these:
  • What is your task and how parallel is it?
  • Robustness? (e.g., the PostgreSQL RDBMS) What if a thread "crashes"?
  • Complexity of implementation?
  • Synchronization?
  • Is a "speed up" really achieved using threads?

23. Message Passing vs. System Calls
• Where does one outperform the other?
  • A good question, because they do different things: message passing is an IPC facility, while system calls have a wider spectrum of uses (a subset may be used for IPC, obviously).
• Consider this (assuming IPC):
  • Message-passing services run in user space, outside of the kernel.
  • System calls execute in the kernel on behalf of the process.
  • Messages have limitations on size.
  • System calls imply a mode switch and more overhead.
  • Messages are not necessarily received immediately.
• Now, looking at the above, tell: which one is more secure? Which one is more flexible? What are the space/time requirements of each?
