
Operating Systems {week 11}



Presentation Transcript


  1. Rensselaer Polytechnic Institute CSCI-4210 – Operating Systems David Goldschmidt, Ph.D. Operating Systems {week 11}

  2. Hierarchical storage architecture • Storage forms a hierarchy, ranging from very fast, very small, volatile memory at the top to very slow, very large, non-volatile storage at the bottom

  3. Von Neumann architecture • Based on the von Neumann architecture, data and program instructions exist in physical memory • Repeatedly perform fetch-decode-execute cycles • The execute part often results in data fetch and store operations

  4. Main memory (i) • Locations in memory are identified by memory addresses • When compiled, programs consist of relocatable code • Other compiled modules also consist of relocatable code • Symbolic addresses in source code become relative addresses in object code

  5. Main memory (ii) • At load time, any additional libraries also consist of relocatable code • Physical addresses are generated by the loader

  6. Main memory (iii) • At run time, memory addresses of all object files are mapped to a single memory space in physical memory

  7. Dynamic loading and linking • Using dynamic loading, external libraries are not loaded when a process starts • Libraries are stored on disk in relocatable form • Libraries are loaded into memory only when needed • Using dynamic linking, external libraries can be preloaded into shared memory • When a process calls a library function, the corresponding physical address is determined
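As a concrete illustration of dynamic loading (not from the slides), POSIX systems expose it through the dlopen/dlsym interface; a minimal sketch in C, assuming libm.so.6 is present on the system:

```c
#include <dlfcn.h>   /* dlopen, dlsym, dlerror, dlclose */
#include <stdio.h>

int main(void) {
    /* Load the math library only when needed (dynamic loading). */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (handle == NULL) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve the symbol at run time; the loader determines its address. */
    double (*cosine)(double) = (double (*)(double)) dlsym(handle, "cos");
    if (cosine == NULL) {
        fprintf(stderr, "dlsym failed: %s\n", dlerror());
        dlclose(handle);
        return 1;
    }

    printf("cos(0.0) = %f\n", cosine(0.0));
    dlclose(handle);
    return 0;
}
```

Compile with something like gcc demo.c -ldl; the library is mapped into the process only when dlopen runs, and dlsym resolves the function's address at run time.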

  8. Contiguous memory allocation (i) • Main memory is partitioned and allocated to the resident operating system and user processes (fixed partitioning scheme)

  9. Contiguous memory allocation (ii) • A pair of base and limit registers defines the logical address space • Also known as relocation registers

  10. Contiguous memory allocation (iii) • The CPU generates logical memory addresses • A Memory-Management Unit (MMU) maps logical memory addresses to the physical address space • User programs never see physical memory addresses

  11. Contiguous memory allocation (iv) • Hardware protects against memory access outside of a process’s valid memory space
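A minimal sketch of how the relocation (base) and limit registers work together; the function name mmu_translate and the specific register values are illustrative assumptions, and real hardware performs this check on every memory access:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Relocation (base) and limit registers for the running process. */
static uint32_t base  = 0x00300000;  /* example load address               */
static uint32_t limit = 0x00040000;  /* size of the logical address space  */

/* Translate a logical address; trap on access outside the valid range. */
uint32_t mmu_translate(uint32_t logical) {
    if (logical >= limit) {
        fprintf(stderr, "trap: addressing error (logical 0x%x >= limit 0x%x)\n",
                logical, limit);
        exit(EXIT_FAILURE);   /* the OS would terminate or signal the process */
    }
    return base + logical;    /* relocation: physical = base + logical */
}

int main(void) {
    printf("logical 0x1000 -> physical 0x%x\n", mmu_translate(0x1000));
    mmu_translate(0x50000);   /* outside the limit: triggers the trap */
    return 0;
}
```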

  12. Dynamic partitioning • Variable-length or dynamic partitions • When a new process enters the system, it is allocated a single contiguous block of memory • The operating system maintains a list of allocated partitions and free partitions

  13. Placement algorithms • How can we place new process Pi in memory? • First-fit algorithm: allocate the first free block that’s large enough to accommodate Pi • Best-fit algorithm: allocate the smallest free block that’s large enough to accommodate Pi • Next-fit algorithm: allocate the next free block that’s large enough, searching from the last allocated block • Worst-fit algorithm: allocate the largest free block that’s large enough to accommodate Pi
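A minimal sketch of first-fit and best-fit over a list of free blocks, assuming the free list is kept as an array of {start, size} holes; the hole_t type and example sizes are illustrative, not from the slides:

```c
#include <stddef.h>
#include <stdio.h>

typedef struct {
    size_t start;   /* starting address of the free block */
    size_t size;    /* size of the free block in bytes    */
} hole_t;

/* First-fit: return the index of the first hole large enough, or -1. */
int first_fit(const hole_t *holes, int n, size_t request) {
    for (int i = 0; i < n; i++) {
        if (holes[i].size >= request) return i;
    }
    return -1;
}

/* Best-fit: return the index of the smallest hole large enough, or -1. */
int best_fit(const hole_t *holes, int n, size_t request) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (holes[i].size >= request &&
            (best == -1 || holes[i].size < holes[best].size)) {
            best = i;
        }
    }
    return best;
}

int main(void) {
    hole_t holes[] = { {0, 100}, {300, 40}, {500, 60} };
    printf("first-fit for 50 bytes: hole %d\n", first_fit(holes, 3, 50));
    printf("best-fit  for 50 bytes: hole %d\n", best_fit(holes, 3, 50));
    return 0;
}
```

Next-fit differs only in starting the scan from the position of the previous allocation, and worst-fit flips the comparison to select the largest qualifying hole.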

  14. Fragmentation (i) • Memory is wasted due to fragmentation, which can cause performance issues • Internal fragmentation is wasted memory within a partition or process memory • External fragmentation can reduce the number of runnable processes • Total memory space exists to satisfy a memory request, but the memory is not contiguous

  15. Fragmentation (ii) • Reduce external fragmentation by compaction or defragmentation • Rearrange memory contents to organize all free memory blocks together into one large contiguous block • Compaction is possible only if relocation is dynamic and is done at execution time • Compaction is expensive

  16. Noncontiguous allocation (i) • A noncontiguous memory allocation scheme avoids the external fragmentation problem • Slice up physical memory into fixed-sized blocks called frames • Sizes are powers of 2 (e.g. 2^14) • Slice up logical memory into fixed-sized blocks called pages • Allocate pages into frames • Note that frame size equals page size
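As a worked example (the 50,000-byte process size is an assumption, not from the slides): with a page size of 2^14 = 16,384 bytes, a 50,000-byte process needs ceil(50,000 / 16,384) = 4 pages; the last page holds only 50,000 - 3 x 16,384 = 848 bytes, so the remaining 15,536 bytes of that frame are internal fragmentation.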

  17. Noncontiguous allocation (ii) • When a process of size n pages is ready to run, the operating system finds n free frames • The OS keeps track of pages via a page table

  18. Paging via a page table (i) • Page tables map logical memory addresses to physical memory addresses

  19. Paging via a page table (ii) • Example: process Pi needs 16MB of logical memory • Page size is 4MB • Logical memory is mapped to a 32MB physical memory • Frame size is 4MB • The figure lists the 4MB-aligned addresses in binary: 0 ==> 000000, 4 ==> 000100, 8 ==> 001000, 12 ==> 001100, 16 ==> 010000, 20 ==> 010100, 24 ==> 011000, 28 ==> 011100
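Following the numbers on this slide: a 16MB logical space with 4MB pages gives 16MB / 4MB = 4 pages, and a 32MB physical memory with 4MB frames gives 32MB / 4MB = 8 frames, so Pi's page table has 4 entries, each naming one of the 8 possible frames.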

  20. Allocating a new process

  21. Address translation (i) • Every logical address is sliced into two distinct components: • Page number (p): used as an index into the page table to obtain the base physical memory address • Page offset (d): combined with the base address to identify the physical memory address

  22. Address translation (ii) • Covers a logical address space of size 2^m with page size 2^n • The page number p occupies the high-order (m – n) bits of the logical address and the page offset d occupies the low-order n bits
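A minimal sketch of the split plus the page-table lookup, assuming a 32-bit logical address (m = 32) and 4KB pages (n = 12); the constants and the flat page_table array are illustrative assumptions, not from the slides:

```c
#include <stdint.h>
#include <stdio.h>

#define OFFSET_BITS 12                          /* n: page size = 2^12 = 4KB  */
#define PAGE_SIZE   (1u << OFFSET_BITS)
#define NUM_PAGES   (1u << (32 - OFFSET_BITS))  /* 2^(m-n) page-table entries */

/* Flat page table: page number -> frame number (filled in by the OS). */
static uint32_t page_table[NUM_PAGES];

uint32_t translate(uint32_t logical) {
    uint32_t p = logical >> OFFSET_BITS;        /* high-order (m - n) bits    */
    uint32_t d = logical & (PAGE_SIZE - 1);     /* low-order n bits           */
    uint32_t frame = page_table[p];             /* memory access #1           */
    return frame * PAGE_SIZE + d;               /* accessing the data is #2   */
}

int main(void) {
    page_table[3] = 7;                          /* map page 3 to frame 7      */
    uint32_t logical = 3 * PAGE_SIZE + 0x2A;
    printf("logical 0x%x -> physical 0x%x\n", logical, translate(logical));
    return 0;
}
```

Note that this flat table has 2^(m - n) = 2^20 entries (about 4MB here), which is part of the motivation for the multilevel page tables on slide 27.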

  23. Address translation (iii)

  24. Address translation (iv) • The page table is in main memory • Every memory access request actually requires two memory accesses: one to read the page table entry, then one to access the data itself

  25. Translation look-aside buffer • Use page table caching at the hardware level to speed address translation • Hardware-level translation look-aside buffer (TLB)
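A minimal sketch of a TLB consulted before the page table, assuming a small fully associative table of {page, frame, valid} entries; the sizes and names are illustrative assumptions:

```c
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define TLB_ENTRIES 16

typedef struct {
    uint32_t page;    /* logical page number        */
    uint32_t frame;   /* cached frame number        */
    bool     valid;   /* entry holds a live mapping */
} tlb_entry_t;

static tlb_entry_t tlb[TLB_ENTRIES];

/* Look up a page number in the TLB; return true on a hit. */
bool tlb_lookup(uint32_t page, uint32_t *frame_out) {
    for (int i = 0; i < TLB_ENTRIES; i++) {
        if (tlb[i].valid && tlb[i].page == page) {
            *frame_out = tlb[i].frame;   /* hit: no page-table memory access */
            return true;
        }
    }
    return false;                        /* miss: must walk the page table   */
}

int main(void) {
    uint32_t frame;
    tlb[0] = (tlb_entry_t){ .page = 3, .frame = 7, .valid = true };
    if (tlb_lookup(3, &frame))
        printf("TLB hit: page 3 -> frame %u\n", frame);
    if (!tlb_lookup(5, &frame))
        printf("TLB miss: page 5 (walk the page table)\n");
    return 0;
}
```

On a miss, the page table entry is read from memory and installed in the TLB, evicting an existing entry if necessary.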

  26. Effective memory access time • Given: • Memory access time is 100 nanoseconds • TLB access time is 20 nanoseconds • TLB hit ratio is 80% • The effective memory-access time (EMAT) is • 0.80 x 120 ns + 0.20 x 220 ns = 140 ns • What is the effective memory-access time given a hit ratio of 99%? 50%?
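Working the same formula (a hit costs TLB plus one memory access = 20 + 100 = 120 ns; a miss costs TLB plus two memory accesses = 20 + 200 = 220 ns): with a 99% hit ratio, EMAT = 0.99 x 120 ns + 0.01 x 220 ns = 121 ns; with a 50% hit ratio, EMAT = 0.50 x 120 ns + 0.50 x 220 ns = 170 ns.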

  27. Multilevel page tables • For large page tables, use multiple page table levels • Slice up the logical address into multiple page-table indices
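A minimal sketch of slicing a 32-bit logical address for a two-level table, assuming the classic 10 + 10 + 12 split (two 10-bit indices plus a 12-bit offset); the layout is an assumption, not taken from the slides:

```c
#include <stdint.h>
#include <stdio.h>

/* Assumed layout: | outer index (10) | inner index (10) | offset (12) | */
#define OFFSET_BITS 12
#define INDEX_BITS  10
#define INDEX_MASK  ((1u << INDEX_BITS) - 1)

int main(void) {
    uint32_t logical = 0x00403ABC;   /* example logical address */

    uint32_t offset = logical & ((1u << OFFSET_BITS) - 1);
    uint32_t inner  = (logical >> OFFSET_BITS) & INDEX_MASK;                /* index into a second-level table */
    uint32_t outer  = (logical >> (OFFSET_BITS + INDEX_BITS)) & INDEX_MASK; /* index into the top-level table   */

    printf("outer=%u inner=%u offset=0x%x\n", outer, inner, offset);
    return 0;
}
```

The outer index selects a second-level table, the inner index selects the entry within it, and only the second-level tables actually in use need to be allocated, which keeps large, sparse address spaces manageable.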

  28. Swapping • Processes in the ready queue have memory images waiting on disk • Processes are swapped in and out of memory • Can suffer from slow data transfer times
