
Part II: OPERATING SYSTEMS


Presentation Transcript


  1. Ştefan Stăncescu
  Part II: OPERATING SYSTEMS
  LECTURE 10 – MEMORY MANAGEMENT

  2. MEMORY MANAGEMENT
  • Main memory – cf. the von Neumann architecture (volatile, but fast):
  • an array of cells where information is stored, modified, and kept
  • the code and data space in which programs run as processes
  • Memory management covers:
  • free/allocated memory space management
  • allocating/freeing memory space for running processes
  • swap => main memory – disk transfers
  • transfers between the hierarchical levels of memory
  • virtual memory management – page-fault mechanisms

  3. MEMORY MANAGEMENT
  • No memory management (NoMM)
  • MS-DOS – only one user program in memory – cooperative OS – one program after another, in batch processing
  • All resources available to the program – no security
  • Multiprocessing only with collaborating threads time-sharing the CPU
  • Typical of embedded systems
  [Figure: possible memory layouts for the OS, device drivers (I/O), and the user program]

  4. MEMORY MANAGEMENT
  • Fixed memory partitions management
  • CPU time-shared between separate processes – partitions allocated at boot
  • Each process runs only in its previously fixed space
  • In each fixed memory partition:
  • batch processing management per partition
  • a process queue managed at each fixed partition
  • process memory allocated at the partition whose resources (space) fit the process
  • Problem: some partitions sit empty with exhausted queues, while others have many processes waiting for memory space
  • => use only one queue for all partitions
  • but then space allocation may be inappropriate
  [Figure: main memory divided into fixed partitions 1–5]

  5. MEMORY MANAGEMENT
  • Swap memory – variable "partitions"
  • Programs reside on disk (external memory)
  • when the process scheduler needs a process to run,
  • its program is loaded from disk into main memory and run,
  • with exactly the space the process currently requires
  • when a process run ends,
  • all data that must survive is saved to disk and
  • its space in main memory is released
  • Disk <-> main memory transfers = swap
  • Allocated/released space is accounted for
  • Free space is compacted and managed
  [Figure: main memory divided into variable partitions 1–n]

  6. MEMORY MANAGEMENT
  • Variable memory partitions – swap
  • => free space management is required
  • Process memory space allocation must be optimized
  [Figure: successive memory snapshots as processes A–F are loaded and removed, leaving holes]

  7. MEMORY MANAGEMENT
  • The 50% rule
  • N processes  about N/2 holes
  • for 1 process: the neighbor above is a hole with probability ~50%,
  • and the neighbor below is a hole with probability ~50%
  • In total: about 2 processes per hole

  8. MEMORY MANAGEMENT
  • The unused memory rule
  • S = mean process size, kS = mean hole size, M = total memory size
  • N = number of processes, N/2 = number of holes (50% rule)
  • (N/2)·kS = M − N·S   // memory in holes = memory not occupied by processes
  • M = N·S·(1 + k/2)
  • f = fraction of unused space in memory
  • f = (N/2)·kS / M   // number of holes × hole size / total memory size
  • f = N·k·S / (2M) = N·k·S / (2·N·S·(1 + k/2))   // with M as computed above
  • f = k / (2 + k)   // f depends only on k
  • Ex.: k = 1/2 => f = (1/2)/(2 + 1/2) = 1/5 = 20%
  • k = 1/4 => f = (1/4)/(2 + 1/4) = 1/9 ≈ 11%
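The closed form f = k/(2 + k) is easy to check numerically; a minimal sketch (the function name is ours, not from the lecture):

```python
def unused_fraction(k):
    # f = k / (2 + k): fraction of memory lost to holes, where holes
    # average k times the mean process size (follows from the 50% rule:
    # N processes -> N/2 holes, so (N/2)*kS = M - NS).
    return k / (2 + k)

print(unused_fraction(0.5))    # k = 1/2 -> 0.2 (20%)
print(unused_fraction(0.25))   # k = 1/4 -> 1/9, about 11%
```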

  9. MEMORY MANAGEMENT
  • Allocating processes into holes using a hole list
  • 1. First fit => the first hole in the list big enough for the process => fast
  • 2. Next fit => first fit, but resuming from where the last search stopped => slightly worse than first fit
  • 3. Best fit => the hole closest in size in the list => the worst performer
  • 4. Worst fit => anti best fit: put a small process into the largest hole
  • All these algorithms are slow; the hole-list search time can be improved => sorting, separate lists, etc.
  • 5. Quick fit => separate hole lists for fixed common sizes
  • => problem => merging small holes back into usable big holes
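The first-fit and best-fit searches above can be sketched over a plain list of hole sizes (a toy model; the lecture does not prescribe this data structure):

```python
def first_fit(holes, size):
    """Return the index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= size:
            return i
    return None

def best_fit(holes, size):
    """Return the index of the smallest hole that still fits, or None."""
    best = None
    for i, h in enumerate(holes):
        if h >= size and (best is None or h < holes[best]):
            best = i
    return best

holes = [30, 10, 25, 12]
print(first_fit(holes, 12))   # -> 0 (the 30-unit hole comes first)
print(best_fit(holes, 12))    # -> 3 (the 12-unit hole fits exactly)
```

Best fit scans the whole list, and by always leaving the tightest remainder it tends to create many tiny, useless holes, which is why the slide ranks it worst.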

  10. MEMORY MANAGEMENT
  • Models for managing memory space
  • Bitmap
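A bitmap tracks memory in fixed allocation units, one bit per unit. A minimal sketch (class and method names are ours) using a first-fit scan for a run of free bits:

```python
class BitmapMemory:
    """Track allocation-unit usage with one bit per unit (1 = used)."""

    def __init__(self, units):
        self.bits = [0] * units

    def allocate(self, n):
        """First-fit scan for n consecutive free units; return start or None."""
        run = 0
        for i, b in enumerate(self.bits):
            run = run + 1 if b == 0 else 0
            if run == n:
                start = i - n + 1
                for j in range(start, i + 1):
                    self.bits[j] = 1
                return start
        return None

    def free(self, start, n):
        for j in range(start, start + n):
            self.bits[j] = 0

m = BitmapMemory(8)
a = m.allocate(3)   # -> 0
b = m.allocate(2)   # -> 3
m.free(a, 3)
c = m.allocate(2)   # -> 0 (reuses the freed run)
```

The trade-off the model exposes: allocation size is fixed by the unit granularity, and finding a run of k free units is a linear scan over the map.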

  11. MEMORY MANAGEMENT
  • Models for managing memory space
  • Linked lists:

  12. MEMORY MANAGEMENT
  • Models for managing memory space
  • Updates to the linked lists

  13. MEMORY MANAGEMENT
  • Models for managing memory space
  • Buddy system
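The buddy system keeps one free list per power-of-two block size: a request is rounded up to a power of two, larger blocks are split in halves ("buddies") as needed, and on release a block merges with its buddy whenever the buddy is free. A minimal sketch under those assumptions (all names are ours):

```python
def next_pow2(n):
    p = 1
    while p < n:
        p *= 2
    return p

class BuddyAllocator:
    """Minimal buddy system: one free list per power-of-two block size."""

    def __init__(self, total):        # total must be a power of two
        self.total = total
        self.free = {total: [0]}      # block size -> list of free offsets

    def allocate(self, size):
        size = max(next_pow2(size), 1)
        s = size
        while s <= self.total and not self.free.get(s):
            s *= 2                    # find a block big enough to split
        if s > self.total:
            return None
        off = self.free[s].pop()
        while s > size:               # split down, keeping the upper buddy free
            s //= 2
            self.free.setdefault(s, []).append(off + s)
        return off

    def release(self, off, size):
        size = max(next_pow2(size), 1)
        while size < self.total:
            buddy = off ^ size        # a buddy's offset differs in exactly one bit
            peers = self.free.get(size, [])
            if buddy in peers:
                peers.remove(buddy)   # coalesce with the free buddy
                off = min(off, buddy)
                size *= 2
            else:
                break
        self.free.setdefault(size, []).append(off)

b = BuddyAllocator(128)
x = b.allocate(32)   # -> 0   (128 splits into 64+64, then 32+32)
y = b.allocate(32)   # -> 32
b.release(x, 32)
b.release(y, 32)     # buddies coalesce back into one 128-unit block
```

Merging is cheap (the buddy's address is `offset XOR size`), which is exactly the property that makes hole coalescing easy here compared with general variable partitions.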

  14. MEMORY MANAGEMENT
  Page Replacement Algorithms at Page Faults
  • Optimal page replacement algorithm
  • Not Recently Used – NRU
  • First In First Out – FIFO
  • Second chance
  • Clock
  • Least Recently Used – LRU
  • Not Frequently Used – NFU
  • Not Frequently Used – NFU w/aging
  • Working set – WS
  • WS w/Clock

  15. MEMORY MANAGEMENT
  Page Replacement Algorithms at Page Faults
  • Optimal page replacement algorithm
  • The ideal one, not realizable in practice
  • Optimality: first run the program with fixed data
  • and record the list of page requests
  • Then decide the best replacements, keeping page faults to a minimum
  • (the page whose next use lies farthest in the future is the one replaced)
  • At each PF => find the page whose next use is most delayed
  • and replace that one with the demanded page
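Since the full reference string is known, the optimal (Belady) rule can be simulated directly; a sketch counting page faults (function and variable names are ours):

```python
def optimal_page_faults(refs, nframes):
    """Simulate optimal (Belady) replacement; return the page-fault count."""
    frames, faults = [], 0
    for t, page in enumerate(refs):
        if page in frames:
            continue                      # hit: nothing to do
        faults += 1
        if len(frames) < nframes:
            frames.append(page)           # free frame available
            continue

        def next_use(p):
            # distance to p's next reference; "never again" sorts last
            rest = refs[t + 1:]
            return rest.index(p) if p in rest else len(refs)

        victim = max(frames, key=next_use)  # farthest next use is evicted
        frames[frames.index(victim)] = page
    return faults

print(optimal_page_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))  # -> 6
```

This is unrealizable online (it needs the future), but it gives the lower bound against which the practical algorithms below are judged.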

  16. MEMORY MANAGEMENT
  Some hardware attaches to each physical page the information needed for page-replacement decisions.
  Reference/Modify (R/M) hardware bits per page:
  R M
  0 0  not recently referenced, not modified
  1 0  recently referenced, not modified
  0 1  not recently referenced, but modified earlier
  1 1  recently referenced and modified
  R is cleared at each clock interrupt ("forget" old, no-longer-relevant reads)

  17. MEMORY MANAGEMENT
  NRU (Not Recently Used) – uses the R/M bits to rank pages
  R M
  0 0  Class 3  max (eviction) priority
  1 0  Class 2
  0 1  Class 1
  1 1  Class 0  min (eviction) priority
  NRU => at a PF, eject a page at random from the highest-priority nonempty class
  Simple, not optimal, reasonably efficient, makes no big mistakes
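The class scan can be sketched in a few lines, following the slide's own ranking (R=0,M=0 evicted first, then R=1,M=0, then R=0,M=1, then R=1,M=1); the function name and the seeded RNG are ours:

```python
import random

# Eviction order from the slide's table: Class 3 down to Class 0.
NRU_CLASS_ORDER = [(0, 0), (1, 0), (0, 1), (1, 1)]

def nru_victim(pages, seed=0):
    """pages: list of (name, R, M) tuples.
    Return a random page from the highest-eviction-priority nonempty class."""
    rng = random.Random(seed)         # seeded only to keep the demo repeatable
    for r, m in NRU_CLASS_ORDER:
        cls = [name for name, pr, pm in pages if (pr, pm) == (r, m)]
        if cls:
            return rng.choice(cls)
    return None

print(nru_victim([("a", 1, 1), ("b", 1, 0), ("c", 0, 1)]))  # -> 'b'
```

Note that other presentations (e.g. with numeric class = 2R + M) evict unmodified-but-referenced pages after modified-but-unreferenced ones; the sketch keeps the ordering as this slide states it.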

  18. MEMORY MANAGEMENT
  FIFO (First In First Out)
  A FIFO queue holds the indices of the requested pages
  Alg: fill the queue; reuse a page already in the queue whenever it is requested again
  Once the queue is full – at a PF, eject the page at the front (the earliest loaded)
  Simple and efficient, but it may eject a page in current use that should be kept for coming intensive work, merely because that page has already spent a long time in memory
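A page-fault counter for plain FIFO is a few lines with a queue (a sketch; names are ours):

```python
from collections import deque

def fifo_page_faults(refs, nframes):
    """Count page faults with FIFO replacement: evict the earliest-loaded page."""
    queue, faults = deque(), 0
    for page in refs:
        if page in queue:
            continue                 # hit: FIFO order is NOT updated on reuse
        faults += 1
        if len(queue) == nframes:
            queue.popleft()          # evict the oldest resident page
        queue.append(page)
    return faults

print(fifo_page_faults([2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2], 3))  # -> 9
```

The comment on the hit path is the weakness the slide describes: time in memory, not recency of use, decides the victim.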

  19. MEMORY MANAGEMENT
  FIFO Second Chance
  A FIFO queue holds the indices of the requested pages, together with their R/M bits
  Alg.: at a PF,
  if (R == 0) then replace   // old, unreferenced page
  else set R = 0   // pretend it was not read, although R was 1
  and move it from the front to the end of the FIFO queue   // treat it as a newly loaded page
  then go on to the next position
  (if the same page comes around again, replace it without remorse)

  20. MEMORY MANAGEMENT
  FIFO Clock
  A hand points to the oldest page
  On a page fault, follow the hand to inspect pages
  Apply second chance along the clock:
  if (R == 1), set R = 0 and advance the hand
  if (R == 0), replace the page
  (if all pages have R == 1, the sweep clears them and the second chance falls on the first page)
  The round can be long
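The clock sweep above can be sketched as a victim-selection routine over a circular list (a toy model; names are ours):

```python
def clock_select(pages, hand):
    """pages: list of [name, R] entries arranged in a circle.
    Advance the hand, clearing R bits, until a page with R == 0 is found.
    Return (victim index, new hand position). Mutates the R bits."""
    n = len(pages)
    while True:
        _, r = pages[hand]
        if r == 0:
            return hand, (hand + 1) % n   # victim found; hand moves past it
        pages[hand][1] = 0                # second chance: clear R, move on
        hand = (hand + 1) % n

pages = [["a", 1], ["b", 0], ["c", 1]]
victim, hand = clock_select(pages, 0)
print(pages[victim][0])   # -> 'b' ('a' got its second chance, R cleared)
```

If every page has R = 1, the hand clears them all and comes back to evict the page it started from, the "long round" the slide mentions.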

  21. MEMORY MANAGEMENT
  Least Recently Used – LRU
  An NR counter/register is attached to each page to count accesses
  Alg: at each clock interrupt, every NR is incremented by its page's R bit (0/1)
  // each NR keeps the page's reference rank
  at a PF, the page with the smallest rank is ejected
  => pages from old heavy runs are kept as well! – wrong!
  Idea: aging => shift every NR 1 bit right and
  insert R (0/1) at the left (the MSB position of NR)
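One tick of the aging variant (shift right, insert R at the MSB) can be sketched directly; the 8-bit width and all names are our assumptions:

```python
def age_counters(counters, refs, bits=8):
    """One clock tick of aging: shift each page's counter one bit right
    and insert that page's R bit at the MSB (R bits are cleared after)."""
    msb = 1 << (bits - 1)
    return [(c >> 1) | (msb if r else 0) for c, r in zip(counters, refs)]

# hypothetical 3-page example with 8-bit counters
c = [0, 0, 0]
c = age_counters(c, [1, 0, 1])   # -> [128, 0, 128]
c = age_counters(c, [0, 1, 1])   # -> [64, 128, 192]
victim = min(range(len(c)), key=lambda i: c[i])
print(victim)                    # -> 0: smallest counter = least recently used
```

Because recent references land in the high bits, a page referenced long ago can never outrank one referenced in the last few ticks, which fixes the "old heavy runs are kept" defect of the plain counter.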

  22. MEMORY MANAGEMENT
  Least Recently Used – LRU
  [Figure: worked example of the aging counters for pages 0–3 over successive clock ticks]

  23. MEMORY MANAGEMENT
  Not Frequently Used – NFU
  An NR counter/register is attached to each page,
  plus one overall process time counter
  Alg: at each reference, increment the overall time counter and copy its contents into the referenced page's NR
  // NR keeps a time stamp of the page's last access
  at a PF, eject the page with the oldest time stamp (the lowest contents)
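The global-counter scheme from this slide can be sketched as follows (class and method names are ours):

```python
class TimestampRanker:
    """Per-page 'NR' registers fed from one global time counter, as on the
    slide: every reference copies the running counter into the page's
    register; the page with the lowest register is the eviction victim."""

    def __init__(self, pages):
        self.time = 0
        self.nr = {p: 0 for p in pages}   # page -> time stamp of last access

    def reference(self, page):
        self.time += 1
        self.nr[page] = self.time

    def victim(self):
        return min(self.nr, key=self.nr.get)

r = TimestampRanker(["a", "b", "c"])
for p in ["a", "b", "c", "a", "c"]:
    r.reference(p)
print(r.victim())   # -> 'b' (oldest time stamp)
```

Done in hardware this tracks exact recency, but in software the per-reference update is exactly the cost the next slide contrasts with per-interrupt schemes.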

  24. MEMORY MANAGEMENT
  Not Frequently Used – NFU
  [Figure: worked NFU example for pages 0–5]

  25. MEMORY MANAGEMENT
  Least Recently Used – LRU / Not Frequently Used – NFU
  LRU updates at each access (every instruction)
  NFU updates at each clock interrupt (many instructions between updates)
  a page may collect many references between interrupts,
  yet gets the same rank whether it was referenced a few or many times;
  pages referenced just after one interrupt end up with the same rank as pages referenced just before the next one
