
Chapter 7 Memory Management



Presentation Transcript


  1. Chapter 7: Memory Management

  2. Today
• Exam I
  • Thursday thru Saturday
  • 79 Questions (50 match, 29 guess)
  • 1 Page Handwritten Notes
• No Class Monday or Tuesday
• Project 3 – Jurassic Park
• Chapter 7 – Memory Management

  3. Project 3 Assignment: Step 1 – Delta Clock
• Implement delta clock.
  • Design data structure to hold delta times/events.
  • Program an insert delta clock function:
    insertDeltaClock(int time, Semaphore* sem);
    • High priority, mutex protected
  • Add 1/10-second function to decrement the top event and semSignal its semaphore when it reaches 0:
    • pollInterrupts
    • High priority, mutex protected
  • Thoroughly test the operation of your delta clock before proceeding.
• os345p3.c
  • Print Delta Clock (dc): int P3_dc(int argc, char* argv[]);
  • Test Delta Clock (tdc): int P3_tdc(int argc, char* argv[]);
  • int dcMonitorTask(int argc, char* argv[]);
  • int timeTask(int argc, char* argv[]);

  4. Project 3 Assignment: Step 2 – Car Tasks
• Implement simple car task.
• Design car functionality and Jurassic Park interface. (Don't worry about passengers or drivers yet.)

Semaphore* fillSeat[NUM_CARS];    SWAP;
Semaphore* seatFilled[NUM_CARS];  SWAP;
Semaphore* rideOver[NUM_CARS];    SWAP;

  5. Project 3 Assignment: Step 2 – Car Tasks (example)

// For each car, do 3 times:
{
    SEM_WAIT(fillSeat[carID]); SWAP;      // wait for available seat
    SEM_SIGNAL(getPassenger); SWAP;       // signal for visitor
    SEM_WAIT(seatTaken); SWAP;            // wait for visitor to reply
    ... save passenger ride over semaphore ...
    SEM_SIGNAL(passengerSeated); SWAP;    // signal visitor in seat

    // if last passenger, get driver
    {
        SEM_WAIT(needDriverMutex); SWAP;
        SEM_SIGNAL(wakeupDriver); SWAP;   // wakeup attendant
        ... save driver ride over semaphore ...
        // got driver (mutex)
        SEM_SIGNAL(needDriverMutex); SWAP;
    }
    SEM_SIGNAL(seatFilled[carID]); SWAP;  // signal next seat ready
}
SEM_WAIT(rideOver[myID]); SWAP;           // wait for ride over
... release passengers and driver ...

  6. Project 3 Assignment: Step 3 – Visitor Tasks
• Design visitor functionality and car task interface. (Don't worry about tickets yet.)
• Each visitor task should create its own timing semaphore, which is used for timing functions (i.e., arrival delay, standing in lines, time in gift shop or museum). The delta clock should be used to SEM_SIGNAL these semaphores.
• Park visitors should randomly arrive at the park over a 10-second period. In addition, visitors should stand in lines for a random time before requesting a ticket or entrance to the museum or gift shop (3 seconds maximum).
• The "SWAP" directive should be inserted after every line of code in your Jurassic Park simulation. Park critical code must be protected by the parkMutex mutex.
• The park simulation creates a "lostVisitor" task which sums critical variables in the park to detect any lost visitors.

  7. Project 3 Assignment: Step 3 – Visitor Tasks
• Use resource semaphores (counting) to control access to the park, the number of tickets available, and the number of people allowed in the gift shop and museum.
• Use mutex semaphores (binary) to protect any critical sections of code within your implementation, such as when updating the delta clock, acquiring a driver to buy a ticket or drive a tour car, accessing global data, or sampling the state of a semaphore.
• Use semaphores (binary) to synchronize and communicate events between tasks, such as to awaken a driver, signal data is valid, signal a mode change, etc.

  8. Semaphores
• Use resource semaphores (counting) to control access to the park, the number of tickets available, and the number of people allowed in the gift shop and museum.

// create MAX_TICKETS tickets using counting semaphore
tickets = createSemaphore("tickets", COUNTING, MAX_TICKETS); SWAP;
// buy a ticket (consume)
SEM_WAIT(tickets); SWAP;
// resell ticket (produce)
SEM_SIGNAL(tickets); SWAP;

Lab 3 – Jurassic Park

  9. Semaphores
• Use mutex semaphores (binary) to protect any critical sections of code, such as when updating the delta clock, acquiring a driver to buy a ticket or drive a tour car, accessing global data, or sampling the state of a semaphore.

// need ticket, wait for driver (mutex)
SEM_WAIT(needDriverMutex); SWAP;
{
    // signal need ticket (signal, put hand up)
    …
}
// release driver (mutex)
SEM_SIGNAL(needDriverMutex); SWAP;

  10. Semaphores
• Use signal semaphores (binary) to synchronize and communicate events between tasks, such as to awaken a driver, signal data is valid, etc.

// signal need ticket (signal, put hand up)
SEM_SIGNAL(needTicket); SWAP;
{
    // wakeup driver (signal)
    SEM_SIGNAL(wakeupDriver); SWAP;
    // wait ticket available (signal)
    SEM_WAIT(ticketReady); SWAP;
    // buy ticket (signal)
    SEM_SIGNAL(buyTicket); SWAP;
}
// put hand down (signal)
SEM_WAIT(needTicket); SWAP;

  11. Shared Memory
• Shared memory can be implemented using C global memory when protected with mutex semaphores.

// protect shared memory access
SEM_WAIT(parkMutex); SWAP;
// access inside park variables
myPark.numOutsidePark--; SWAP;
myPark.numInPark++; SWAP;
// release shared memory access
SEM_SIGNAL(parkMutex); SWAP;

  12. Passing Semaphores
• A semaphore can be passed between tasks through a global variable, protected with a mutex semaphore.

// pass my semaphore through protected global
SEM_WAIT(resourceMutex); SWAP;
SEM_WAIT(needPassenger); SWAP;
gSemaphore = mySemaphore; SWAP;
SEM_SIGNAL(resourceReady); SWAP;
SEM_WAIT(resourceAcquired); SWAP;
SEM_SIGNAL(resourceMutex); SWAP;

// signal resource ready
SEM_SIGNAL(needPassenger); SWAP;

  13. Jurassic Park struct

typedef struct
{
    int numOutsidePark;       // # outside of park
    int numInPark;            // # in park (P=#)
    int numTicketsAvailable;  // # left to sell (T=#)
    int numRidesTaken;        // # of tour rides taken (S=#)
    int numExitedPark;        // # who have exited the park
    int numInTicketLine;      // # in ticket line
    int numInMuseumLine;      // # in museum line
    int numInMuseum;          // # in museum
    int numInCarLine;         // # in tour car line
    int numInCars;            // # in tour cars
    int numInGiftLine;        // # in gift shop line
    int numInGiftShop;        // # in gift shop
    int drivers[NUM_DRIVERS]; // driver state (-1=T, 0=z, 1=A, 2=B, etc.)
    CAR cars[NUM_CARS];       // cars in park
} JPARK;

(Diagram: the park display labels map to struct fields — "# Waiting to Enter Park" = numOutsidePark, "Ticket Line" = numInTicketLine, "Tour Car Line" = numInCarLine, "# of Passengers" = park.cars[ ].passengers, "Driver Status" = park.drivers[ ].)

  14. Exam I
27. Why would early versions of the UNIX operating system be unsuitable for real-time applications?
• Because a process executing in kernel mode acts like a user function.
• Because a process executing in user mode may not be preempted.
• Because maximum latency to process an interrupt could not be guaranteed.
• Untrue. UNIX is well suited for real-time applications.
30. During its lifetime, a process moves among a number of states. The most important of these are
• Executing and Blocked.
• Idaho, Utah, Wyoming, and Nebraska.
• New, Running, and Suspended.
• Ready, Running, and Blocked.
62. What are the software contexts in which concurrency becomes an issue?
e. Multiprogramming, modularity, system software

  15. Project 3 Assignment: Step 4 – Driver Tasks
• Develop the driver task.
  • Design driver functionality and interface with visitor/car tasks.
  • Implement design and integrate with os345, visitor, and car tasks. (Now is the time to worry about ticket sales and driver duties.)
• Add ticket sales and driver responsibilities.
• When a driver is awakened, use the semTryLock function to determine if a driver or a ticket seller is needed.

  16. Project 3 Assignment: Driver Task

int driverTask(int argc, char* argv[])
{
    char buf[32];
    Semaphore* driverDone;
    int myID = atoi(argv[1]) - 1; SWAP;                 // get unique driver id
    printf("Starting driverTask%d\n", myID); SWAP;
    sprintf(buf, "driverDone%d", myID + 1); SWAP;
    driverDone = createSemaphore(buf, BINARY, 0); SWAP; // create notification event
    while (1)                                           // such is my life!!
    {
        mySEM_WAIT(wakeupDriver); SWAP;                 // go to sleep
        if (mySEM_TRYLOCK(needDriver))                  // i'm awake - driver needed?
        {                                               // yes
            driverDoneSemaphore = driverDone; SWAP;     // pass notification semaphore
            mySEM_SIGNAL(driverReady); SWAP;            // driver is awake
            mySEM_WAIT(carReady); SWAP;                 // wait for car ready to go
            mySEM_WAIT(driverDone); SWAP;               // drive ride
        }
        else if (mySEM_TRYLOCK(needTicket))             // someone need a ticket?
        {                                               // yes
            mySEM_WAIT(tickets); SWAP;                  // wait for ticket (counting)
            mySEM_SIGNAL(takeTicket); SWAP;             // print a ticket (binary)
        }
        else break;                                     // don't bother me!
    }
    return 0;
} // end driverTask

Should this be mutexed?

  17. CS 345 Memory Management

  18. Chapter 7 Learning Objectives
After studying this chapter, you should be able to:
• Discuss the principal requirements for memory management.
• Understand the reason for memory partitioning and explain the various techniques that are used.
• Understand and explain the concept of paging.
• Understand and explain the concept of segmentation.
• Assess the relative advantages of paging and segmentation.
• Summarize key security issues related to memory management.
• Describe the concepts of loading and linking.

  19. Memory Management Requirements
• Relocation
  • Users generally don't know where they will be placed in main memory
  • May swap in at a different place (pointers???)
  • Generally handled by hardware
• Protection
  • Prevent processes from interfering with the O.S. or other processes
  • Often integrated with relocation
• Sharing
  • Allow processes to share data/programs
• Logical Organization
  • Support modules, shared subroutines
• Physical Organization
  • Main memory versus secondary memory
  • Overlaying

  20. Address Binding
(Diagram: source → Compiler/Assembler → object → Linker → load module → Loader → executable)
• A process must be tied to a physical address at some point (bound)
• Binding can take place at 3 times:
  • Compile time: always loaded to the same memory address
  • Load time: relocatable code; stays in the same spot once loaded
  • Execution time: may be moved during execution; special hardware needed

  21. Memory Management Techniques
• Fixed Partitioning
  • Divide memory into partitions at boot time; partition sizes may be equal or unequal but don't change
  • Simple, but has internal fragmentation
• Dynamic Partitioning
  • Create partitions as programs are loaded
  • Avoids internal fragmentation, but must deal with external fragmentation
• Simple Paging
  • Divide memory into equal-size pages, load program into available pages
  • No external fragmentation, small amount of internal fragmentation

  22. Memory Management Techniques
• Simple Segmentation
  • Divide program into segments
  • No internal fragmentation, some external fragmentation
• Virtual-Memory Paging
  • Paging, but not all pages need to be in memory at one time
  • Allows large virtual memory space
  • More multiprogramming; more overhead
• Virtual-Memory Segmentation
  • Like simple segmentation, but not all segments need to be in memory at one time
  • Easy to share modules
  • More multiprogramming; more overhead

  23. Fixed Partitioning
• Main memory divided into static partitions
• Simple to implement
• Inefficient use of memory
  • Small programs use entire partition
  • Maximum active processes fixed
  • Internal fragmentation
(Diagram: operating system plus five equal 8M partitions.)

  24. Fixed Partitioning
(Diagram: operating system (8M) plus unequal partitions of 2M, 4M, 6M, 8M, 8M, and 12M.)
• Variable-sized partitions
  • Assign smaller programs to smaller partitions
  • Lessens the problem, but still a problem
• Placement
  • Which partition do we use?
  • Want to use the smallest possible partition
  • What if there are no large jobs waiting?
  • Can have a queue for each partition size, or one queue for all partitions
• Used by IBM OS/MFT, obsolete
• Smaller partition by using overlays

  25. Placement Algorithm w/Partitions
• Equal-size partitions
  • Because all partitions are of equal size, it does not matter which partition is used
• Unequal-size partitions
  • Can assign each process to the smallest partition within which it will fit
  • Queue for each partition
  • Processes are assigned in such a way as to minimize wasted memory within a partition

  26. Process Queues
(Diagram: new processes queued per partition vs. a single queue for all partitions.)
• When it's time to load a process into main memory, the smallest available partition that will hold the process is selected.

  27. Dynamic Partitioning
• Partitions are of variable length and number
• Process is allocated exactly as much memory as required
• Eventually get holes in the memory
  • External fragmentation
• Must use compaction to shift processes so they are contiguous and all free memory is in one block

  28. Allocation Strategies
• First Fit
  • Allocate the first spot in memory that is big enough to satisfy the requirements.
• Best Fit
  • Search through all the spots; allocate the spot in memory that most closely matches requirements.
• Next Fit
  • Scan memory from the location of the last placement and choose the next available block that is large enough.
• Worst Fit
  • The largest free block of memory is used for bringing in a process.

  29. Which Allocation Strategy?
• The first-fit algorithm is not only the simplest but usually the best and the fastest as well.
  • May litter the front end with small free partitions that must be searched over on subsequent first-fit passes.
• The next-fit algorithm will more frequently lead to an allocation from a free block at the end of memory.
  • Results in fragmenting the largest block of free memory.
  • Compaction may be required more frequently.
• Best fit is usually the worst performer.
  • Guarantees the fragment left behind is as small as possible.
  • Main memory is quickly littered by blocks too small to satisfy memory allocation requests.

  30. Dynamic Partitioning Placement Example
(Diagram: memory before allocation holds free blocks of 8K, 12K, 22K, 18K, 8K, 6K, 14K, and 36K, with the last allocated block at 14K. After the allocation: first fit leaves a 6K remainder, best fit a 2K remainder, and next fit a 20K remainder.)

  31. Memory Fragmentation
• As memory is allocated and deallocated, fragmentation occurs
  • External: enough space exists to launch a program, but it is not contiguous
  • Internal: more memory is allocated than asked for, to avoid having very small holes

  32. Memory Fragmentation
• Statistical analysis shows that given N allocated blocks, another 0.5N blocks will be lost due to fragmentation.
  • On average, 1/3 of memory is unusable (the 50-percent rule)
• Solution: compaction
  • Move allocated memory blocks so they are contiguous
  • Run compaction algorithm periodically
  • How often? When to schedule?

  33. Buddy System
• Tries to allow a variety of block sizes while avoiding excess fragmentation
• Blocks generally are of size 2^k, for a suitable range of k
• Initially, all memory is one block
• All sizes are rounded up to 2^s
  • If a block of size 2^s is available, allocate it
  • Else find a block of size 2^(s+1) and split it in half to create two buddies
  • If two buddies are both free, combine them into a larger block
• Largely replaced by paging
• Seen in parallel systems and Unix kernel memory allocation

  34. Addresses
• Logical
  • Reference to a memory location independent of the current assignment of data to memory
  • A translation must be made to the physical address
• Relative
  • Address expressed as a location relative to some known point
• Physical
  • The absolute address or actual location

  35. Hardware Support for Relocation
(Diagram: the relative address from the process image is added to the base register by an adder to form the absolute address; a comparator checks the access against the bounds register and raises an interrupt to the operating system on violation. The base and bounds values come from the process control block; the process image in main memory holds the program, data/stack, and kernel stack.)

  36. Base/Bounds Relocation
• Base Register
  • Holds beginning physical address
  • Added to all program addresses
• Bounds Register
  • Used to detect accesses beyond the end of the allocated memory
  • May hold a length instead of an end address
• Provides protection to the system
• Easy to move programs in memory
• These values are set when the process is loaded and when the process is swapped in
• Largely replaced by paging

  37. Paging
• Partition memory into small equal-size chunks and divide each process into the same size chunks
• The chunks of a process are called pages; the chunks of memory are called frames
• Operating system maintains a page table for each process
  • Contains the frame location for each page in the process
  • Memory address consists of a page number and an offset within the page

  38. Paging (continued)
(Diagram: page tables for processes A–D and a free frame list.)
• Page size typically a power of 2 to simplify the paging hardware
• Example (16-bit address, 1K pages): 010101 011011010
  • Top 6 bits (010101) = page #
  • Bottom 10 bits (011011010) = offset within page
• Common sizes: 512 bytes, 1K, 4K

  39. Paging
(Diagram: processes A–D divided into pages A.0–A.3, B.0–B.2, C.0–C.3, and D.0–D.4, loaded into frames 0–14 of main memory; a page table per process maps page numbers to frame numbers, and a free frame list tracks the unused frames.)

  40. Segmentation
• Program views memory as a set of segments of varying sizes
• Supports user view of memory
• Easy to handle growing data structures
• Easy to share libraries, memory
• Privileges can be applied to a segment
• Programs may use multiple segments
• Implemented with a segment table
  • Array of base-limit register pairs
    • Beginning address (segment base)
    • Size (segment limit)
    • Status bits (Present, Modified, Accessed, Permission, Protection)

  41. Segmentation/Paging
(Diagram: the CPU emits a logical address; the segmentation unit turns it into a linear address; the paging unit turns that into a physical address in physical memory.)
• In Pentium systems:
  • CPU generates logical addresses
  • Segmentation unit produces a linear address
  • Paging unit generates the physical address in memory
  • (Together equivalent to an MMU)

  42. Memory Management
