
Segmentation User Mode and Memory Management Case studies Pentium Unix Linux Windows


Presentation Transcript


  1. Memory Management - Part 3: Outline • Segmentation • User Mode and Memory Management • Case studies • Pentium • Unix • Linux • Windows Ben-Gurion University Operating Systems, 2013, Meni Adler, Michael Elhadad and Amnon Meisels

  2. Segmentation • Several address spaces per process • A compiler needs segments for • source text • symbol table • constants segment • stack • parse tree • compiler executable code • Most of these segments grow during execution (Figure: separate segments for the symbol table, source text, constant table, parse tree and call stack.)

  3. User's view of segments: Symbol table, Parse tree, Source text, Call stack, Constants. Segmented memory allows each table to grow or shrink independently of the other tables.

  4. Segmentation - segment table

  5. Segmentation Hardware

  6. Segmentation vs. Paging

  7. Segmentation pros and cons • Advantages: • Segments grow and shrink independently • Sharing between processes is simpler • Linking is easier • Protection is easier • Disadvantages: • Pure segmentation leads to external fragmentation • Segments may be very large. What if they don't fit into physical memory?

  8. Segmentation Architecture • Logical address composed of the pair <segment-number, offset> • Segment table – maps to linear address space; each table entry has: • base – contains the starting linear address where the segment resides in memory. • limit – specifies the length of the segment. • Segment-table base register (STBR) points to the segment table’s location in memory. • Segment-table length register (STLR) indicates number of segments used by a program; segment number s is legal if s < STLR.

  9. Segmentation Architecture (Cont.) • Protection: each segment table entry contains: • a validation bit (0 = illegal segment) • read/write/execute privileges • Protection bits are associated with segments; code sharing occurs at the segment level. • Since segments vary in length, memory allocation is a dynamic storage-allocation problem (external fragmentation problem)

  10. Sharing of segments

  11. Segmentation with Paging • Segments may be too large to keep in (physical) memory • They cause external fragmentation • The two approaches may be combined: • A segment table. • Pages inside a segment. • This solves the fragmentation problems. • Most systems today provide a combination of segmentation and paging

  12. Memory management, part 3: outline • Segmentation • User Mode and Memory Management • Case studies • Pentium • Unix • Linux • Windows

  13. User Mode and Memory Management • Heap Manager • The User Mode component in charge of handling malloc/free requests by the program. • The Heap Manager can trigger system calls to expand/allocate Heap Segments for the process. • Memory Mapped Files • Shared Memory • Locking Memory in RAM

  14. Heap Allocation vs. Growing Process RAM In User Mode, processes use a library function to allocate and free memory from the Heap. This function is called malloc and is used by new in C++ and internally in Java. The Heap Allocator must be thread-safe (since the heap is shared by multiple threads), which can be a source of performance issues. When a process is created, it is allocated a default heap. When malloc cannot find sufficient memory in the current heap, it must invoke a system call to increase the heap. This system call is brk in Linux/Unix and VirtualAlloc in Windows. User mode library functions (malloc, new, HeapCreate, HeapAlloc) can only allocate memory inside the current heap. When they need more memory, they all call brk/VirtualAlloc. See http://www.flounder.com/inside_storage_allocation.htm

  15. Heap Manager • The Heap Manager is part of the user mode program (not of the kernel). • It handles a block of memory (the Heap Segment) provided by the OS and places a header at the top. The heap segment header contains at least: • the size of the heap segment • a CRITICAL_SECTION to provide thread-safe access. • There can be multiple Heap Segments in one process memory. This may be useful to reduce contention in multi-threaded applications. • The Heap Manager divides the remaining memory into two classes of storage: • Allocated storage: blocks which are "owned" by the calling program • Free storage, which is owned by the allocator. • The Heap Manager is not permitted to do anything to allocated memory, except free it when asked to. It is free to do whatever it wants with the free storage. • The heap segment content is divided into memory blocks. Each memory block has a header at the front, which is not part of the visible space a user of the allocator sees, and there is always at least one such memory block. • When the allocator starts, all the storage is free, so there is a single heap segment header for the entire heap segment, and a single block header saying all the rest is free.

  16. Heap Manager: Allocate • When the program calls "malloc", the Heap Manager must locate a block large enough for the allocation, mark it as allocated and return a pointer to it. • Algorithms to locate a block include "first fit", "best fit", "quick fit" and "buddy". • "First fit" is the simplest: the list of free blocks is scanned until a free block large enough to satisfy the request is found. The block is split into two blocks; the first part is returned as the result of the allocation request, and the rest of it remains as a new, smaller, block on the free list. • If there is a 32K block at the front of the list and the program asks for a 1K allocation, the allocator does not search for a 1K free block, or for the smallest block not less than 1K (that is "best fit"), but simply splits the first block it finds that is large enough to satisfy the request, hence the name "first fit".

  17. Heap Manager: Free • The programmer returns a block to the Heap Manager free list by calling the "free" method on a pointer previously returned by malloc. • When a block is freed, it is marked as available. It is erroneous for the application to ever use the pointer to that block again for any purpose. • The freed block is marked as "free" and ownership returns to the allocator. Under certain conditions, the free block may be coalesced with adjacent free blocks to form larger blocks. • It is good practice to set pointers to NULL after freeing the object they point to. This avoids the "dangling pointer" error, that is, using a pointer to a block of memory which has been freed: accessing a NULL pointer will always trigger a hardware fault instead of silently corrupting memory. • Imagine you have allocated a 100-byte block of storage, freed it, it coalesced with some other block, the space got reallocated, and then you write into the data structure you thought you once had a pointer to. You will corrupt either the control blocks (headers placed before each block in the heap) used by the heap allocator, or data that legitimately is "owned" by some other part of your program. • When freed blocks cannot be coalesced, the heap becomes "fragmented".

  18. Heap Manager: Compaction • If two non-adjacent 1K memory blocks are freed, we obtain two separate free blocks of 1K. If the program then asks to allocate a 2K block, the allocation fails: while there are 2K bytes of "free" storage, they are non-adjacent, and therefore useless for this purpose. • It would be nice to "slide" the used block between the two free blocks down into the position of the second one, making the third block free; the two adjacent 1K free blocks would then coalesce into a single 2K free block, and the allocation would succeed. This is called compaction. • The problem in languages like C or C++ is that there are pointers to the address of the used block, and it would be necessary to find all the pointers to this block in the process memory, change them to point to the new position, move the block down, and then coalesce. In C/C++, there is no way the Heap Manager can find all the pointers, since they could be in local variables (on the stacks), global variables, heap-allocated structures, and even temporary registers. Such transformations also have to be thread-safe, which makes them essentially impossible to achieve. • Java and C# can manage heap compaction in a thread-safe fashion (as part of Garbage Collection).

  19. Heap Manager: Fragmentation • Having "holes" in the heap that are not usable to satisfy a request is called Heap fragmentation. • It is often the consequence of allocating lots of different-size objects that have different lifetimes. • First-fit tends to create high fragmentation. • Best-fit creates lower fragmentation, but requires longer search times to locate a good block, which increases multi-thread contention. • An optimization is to keep the free list sorted in order of ascending block size. • Another optimization is to use different heap segments for different block sizes.

  20. Heap Manager: Expanding the Heap • When the Heap Manager cannot find a block to satisfy an allocation request, it can either: • Try to coalesce more free blocks. • Return NULL to indicate the malloc operation failed. • Ask the OS to expand the Heap Segment. • The Heap Manager can manage multiple Heap Segments, or expand an existing Heap Segment. • This is achieved by invoking a system call: brk (or sbrk) in Unix/Linux and VirtualAlloc in Windows. • brk and VirtualAlloc allocate new pages in the Virtual Memory of the process, and they also commit these pages: physical page frames will actually back them, which can trigger page faults. Committing also reserves space in the page file; if the size of all committed pages is larger than the size of the page file, the system call will fail.

  21. Memory Mapped Files: Definition Memory Mapped Files are a mechanism that allows programmers to intervene in the way content is mapped into virtual memory. A memory-mapped file is a segment of virtual memory which has been assigned a direct mapping with a segment of a file. After it is established, the view on the file through the memory allows user mode applications to view the mapped portion as if it were in RAM. When using Memory Mapped Files, the programmer intervenes in mapping Pages from the Process Page Table to files in the file system instead of the usual location in the Swap File used by the kernel.

  22. Memory Mapped Files: Usage Memory Mapped Files are useful when the program accesses large files especially in Read mode. The content of the file is accessed without going through system calls for each read operation. Useful in accessing files that are larger than physical RAM size – and results in lazy-loading (the parts of the file that need loading are only loaded when needed). Memory Mapped Files are used by the Process Loader: when a process is executed (with system calls such as exec), the kernel loads the program executable and libraries using memory mapped files. Memory Mapped Files can also allow inter-process communication: 2 processes map the same file in their virtual memory.

  23. Memory Mapped Files in Linux 2 system calls: mmap and munmap

#include <sys/mman.h>

void *mmap(void *addr,      // Address where file is mapped
           size_t length,   // Size of the segment mapped in RAM
           int prot,        // Pages can be R/W/X
           int flags,       // SHARED / FIXED …
           int fd,          // File descriptor
           off_t offset);   // Offset within the file

int munmap(void *addr, size_t length);

In general, addr should be passed as NULL (meaning let the kernel decide where in RAM the file segment will be mapped).

  24. Memory Mapped Files in Linux

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define handle_error(msg) \
    do { perror(msg); exit(EXIT_FAILURE); } while (0)

int main(int argc, char *argv[])
{
    char *addr;
    int fd;
    struct stat sb;
    off_t offset, pa_offset;
    size_t length;
    ssize_t s;

    if (argc < 3 || argc > 4) {
        fprintf(stderr, "%s file offset [length]\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    fd = open(argv[1], O_RDONLY);
    if (fd == -1)
        handle_error("open");

    if (fstat(fd, &sb) == -1)      /* To obtain file size */
        handle_error("fstat");

    offset = atoi(argv[2]);
    /* offset for mmap() must be page aligned */
    pa_offset = offset & ~(sysconf(_SC_PAGE_SIZE) - 1);

    if (offset >= sb.st_size) {
        fprintf(stderr, "offset is past end of file\n");
        exit(EXIT_FAILURE);
    }

    if (argc == 4) {
        length = atoi(argv[3]);
        if (offset + length > sb.st_size)
            length = sb.st_size - offset;  /* Cannot display bytes past end of file */
    } else {
        length = sb.st_size - offset;      /* No length arg ==> display to end of file */
    }

    addr = mmap(NULL, length + offset - pa_offset, PROT_READ,
                MAP_PRIVATE, fd, pa_offset);
    if (addr == MAP_FAILED)
        handle_error("mmap");

    s = write(STDOUT_FILENO, addr + offset - pa_offset, length);
    if (s != length) {
        if (s == -1)
            handle_error("write");
        fprintf(stderr, "partial write");
        exit(EXIT_FAILURE);
    }

    exit(EXIT_SUCCESS);
}

  25. Memory Mapped Files and Writing • When the array in RAM mapped to an mmap file is updated by the process, the corresponding Pages are marked as “dirty” by the hardware MMU. • Dirty pages are written back to storage according to the kernel schedule (when the page is swapped out, or pre-emptively). • In all cases, when calling munmap(), all dirty pages are committed back to disk. • Dirty pages can also be explicitly saved to file by invoking the system call msync(): int msync(void *addr, size_t length, int flags);

  26. Memory Mapped Files in Windows http://msdn.microsoft.com/en-us/library/ms810613.aspx System calls: CreateFileMapping OpenFileMapping MapViewOfFile UnmapViewOfFile FlushViewOfFile CloseHandle

  27. Shared Memory Shared Memory is a mechanism to share parts of the virtual memory pages among different processes. Shared Memory is a method of Inter Process Communication (that only applies when the processes run on the same host). A key benefit of Shared Memory is saving memory space: if the same content is shared by multiple processes, it is loaded in RAM only once. Example: Shared Libraries (DLLs) can be loaded once and mapped to shared memory. Closely related to memory mapped files with SHARED mode. Examples: http://www.cs.cf.ac.uk/Dave/C/node27.html

  28. Shared Memory: Steps • To use Shared Memory, processes must: • Agree on a shared name for the Shared Memory segment. • Allocate the Shared Memory segment • The first allocation creates new Virtual Memory Pages • Subsequent allocations with the same key return reference to existing pages. • Attach the Shared Memory segment (by mapping it into the memory of the process). • Use a mutex mechanism (semaphore) to synchronize access to the shared memory to avoid collisions when writing. • Detach the Shared Memory segment. • One process must de-allocate the segment.

  29. Shared Memory: System Calls • Allocate Shared Memory: shmget() int segment_id = shmget(shm_key, getpagesize(), IPC_CREAT | S_IRUSR | S_IWUSR); • Attach Shared Memory: shmat() • Detach Shared Memory: shmdt() • De-allocate Shared Memory: shmctl()

  30. Shared Memory: Linux Example

// Client
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <stdlib.h>

#define SHMSZ 27

int main(void)
{
    int shmid;
    key_t key;
    char *shm, *s;

    /* We need to get the segment named "5678", created by the server. */
    key = 5678;

    /* Locate the segment. */
    if ((shmid = shmget(key, SHMSZ, 0666)) < 0) {
        perror("shmget");
        exit(1);
    }

    /* Attach the segment to our data space. */
    if ((shm = shmat(shmid, NULL, 0)) == (char *) -1) {
        perror("shmat");
        exit(1);
    }

    /* Read what the server put in the memory. */
    for (s = shm; *s != '\0'; s++)
        putchar(*s);
    putchar('\n');

    /* Finally, change the first character of the segment to '*',
       indicating we have read the segment. */
    *shm = '*';

    exit(0);
}

[Would be better to use a semaphore to coordinate client/server]

// Server
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SHMSZ 27

int main(void)
{
    char c;
    int shmid;
    key_t key;
    char *shm, *s;

    /* We'll name our shared memory segment "5678". */
    key = 5678;

    /* Create the segment. */
    if ((shmid = shmget(key, SHMSZ, IPC_CREAT | 0666)) < 0) {
        perror("shmget");
        exit(1);
    }

    /* Attach the segment to our data space. */
    if ((shm = shmat(shmid, NULL, 0)) == (char *) -1) {
        perror("shmat");
        exit(1);
    }

    /* Put some things into the memory for the other process to read. */
    s = shm;
    for (c = 'a'; c <= 'z'; c++)
        *s++ = c;
    *s = '\0';

    /* Wait until the other process changes the first character of our
       memory to '*', indicating that it has read what is there. */
    while (*shm != '*')
        sleep(1);

    exit(0);
}

  31. Locking Memory: Definition Some user mode programs may want to prevent swapping out parts of their memory for performance reasons – to prevent the uncertainty of a page fault. In Linux:

#include <sys/mman.h>

int mlock(const void *addr, size_t len);
int munlock(const void *addr, size_t len);
int mlockall(int flags);
int munlockall(void);

mlock() locks part of the calling process's virtual address space into RAM, preventing that memory from being paged to the swap area. munlock() unlocks part of the calling process's virtual address space, so that pages in the specified virtual address range may once more be swapped out if required by the kernel memory manager. Memory locking and unlocking are performed in units of whole pages.

  32. Locking Memory: Usages (From the mlock man page): Memory locking has 3 main applications: real-time algorithms, high-security data processing, and long-running high-performance servers. Real-time applications require deterministic timing, and, like scheduling, paging is one major cause of unexpected program execution delays. Real-time applications will usually also switch to a real-time scheduler with sched_setscheduler. Cryptographic security software may handle critical data like passwords or secret keys as data structures. As a result of paging, these secrets could be transferred onto disk, where they might be accessible to the enemy long after the security software has erased the secrets in RAM and terminated. Server programs (for example, SQL Servers) often use Memory Locking to obtain predictable high performance.

  33. Memory Locking: Real-time considerations https://writer.zoho.com/public/rreginelli/Chapter-4---Memory-Locking-R21/noband A real-time process should lock down its memory and not grow.

1. Problem: Paging can impede a real-time process from meeting critical deadlines. Solution: A real-time process should lock down its entire address space using mlockall(). Discussion: Locking down memory prevents the operating system from discarding a process's pages from main memory. The process address space of a real-time application should always be resident in main memory because the cost, in time, of retrieving a page from backing store is prohibitively expensive (tens of msec) and unpredictable.

2. Problem: malloc calls during real-time processing induce unpredictable delays from the memory allocator and the paging subsystem. Solution: Pre-allocate and lock down all memory needed by the real-time application before real-time processing begins. Discussion: Dynamic memory allocation can impact timing in two ways:

1. The Linux heap manager (implemented in a user mode library, and accessed through functions such as malloc, new, and free) does not run in constant time, because it uses a complex heap management algorithm whose runtime depends on the current heap fragmentation level and on the thread contention level (the heap manager must be thread-safe and uses mutexes).

2. Dynamic memory allocation can induce a page fault when the heap does not have sufficient memory to fulfill a request. If the heap is full, the process will ask the kernel to increase its address space. The delay due to the page fault occurs at one of two points:

a. If the process is locked in memory with the MCL_FUTURE flag, a page fault is generated immediately. MCL_FUTURE ensures that any growth in the process address space is also locked into memory.

b. If the process is not locked into memory, the page fault is deferred until later. This is known as demand paging. Under demand paging, the memory allocator provisions virtual space (creates an entry in the process page table), but not physical space (no physical page frame is allocated), until the new memory is accessed for the first time. Whether the penalty of the page fault is incurred immediately or left pending, the unexpected delay can jeopardize the timely completion of critical real-time processing.

Risk: memory locking leads to hard commitment. When multiple processes lock their pages in RAM, swapping is impossible, and the OS can only run processes that fit in RAM.

  34. Memory Locking: mlockall() • mlockall() locks all pages mapped to the address space of the calling process into main memory.   • Any shared library utilized by the application is also locked into memory in its entirety.   • This prevents any part of the process from being paged out of main memory.  All pages are guaranteed to be resident in main memory upon successful return from mlockall().  All pages mapped to the calling process remain resident in main memory until the process either exits or calls munlockall(). • mlockall() gets flags as input:  • MCL_CURRENT: lock all pages mapped to the process at the time mlockall() is called.  If the process address space grows later, through dynamic memory allocation, those new pages will not be locked into memory. • MCL_FUTURE: any new pages provisioned and mapped to the process after the call to mlockall() are also locked into RAM.  Therefore, if the process’ address space grows through dynamic memory allocation, all new pages will be automatically locked into main memory as soon as allocated.

  35. Memory Locking: mlockall() with MCL_FUTURE Newly allocated data is automatically locked.

  36. Memory Locking: Example 1

#include <stdio.h>
#include <sys/mman.h>

#define SIZE 500

int main(void)
{
    /* A big record */
    struct record {
        char name[128];
        int count;
        unsigned char dummy_array[1024000];
        struct record *next;
    };

    /* An array of ~ 500MB */
    struct record *a[SIZE];
    int i;

    /*
     * Preallocate memory required by the application before
     * real-time processing begins
     */
    /* Insert code here */

    /*
     * Lock down all pages mapped to the process
     */
    printf("Locking down process\n");
    if (mlockall(MCL_CURRENT | MCL_FUTURE) < 0) {
        perror("mlockall");
        return 1;
    }

    /*
     * Begin real-time processing
     */
    /* Insert RT code here */

    /*
     * Release all memory belonging to this process
     */
    printf("Finished, cleaning up\n");
    munlockall();
    return 0;
}

  37. Monitoring Paging & Scheduling The system call getrusage() collects information about paging and scheduling during program execution.

#include <sys/time.h>
#include <sys/resource.h>

int getrusage(int who, struct rusage *usage);

• getrusage() returns resource usage measures for who, which can be one of the following: • RUSAGE_SELF: resource usage statistics for the calling process, which is the sum of resources used by all threads in the process. • RUSAGE_CHILDREN: resource usage statistics for all children of the calling process that have terminated and been waited for. These statistics include the resources used by grandchildren, and further removed descendants, if all of the intervening descendants waited on their terminated children. • RUSAGE_THREAD (since Linux 2.6.26): resource usage statistics for the calling thread. • The resource usages are returned in the structure pointed to by usage, which has the following form:

struct rusage {
    struct timeval ru_utime; /* user CPU time used */
    struct timeval ru_stime; /* system CPU time used */
    long ru_maxrss;          /* maximum resident set size */
    long ru_ixrss;           /* integral shared memory size */
    long ru_idrss;           /* integral unshared data size */
    long ru_isrss;           /* integral unshared stack size */
    long ru_minflt;          /* page reclaims (soft page faults) */
    long ru_majflt;          /* page faults (hard page faults) */
    long ru_nswap;           /* swaps */
    long ru_inblock;         /* block input operations */
    long ru_oublock;         /* block output operations */
    long ru_msgsnd;          /* IPC messages sent */
    long ru_msgrcv;          /* IPC messages received */
    long ru_nsignals;        /* signals received */
    long ru_nvcsw;           /* voluntary context switches */
    long ru_nivcsw;          /* involuntary context switches */
};

  38. Monitoring Paging: Example 2a

#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/resource.h>

#define SIZE 500

int main(void)
{
    struct record {
        char name[128];
        int count;
        unsigned char dummy_array[1024000];
        struct record *next;
    };
    struct record *a[SIZE];
    int i;
    struct rusage usage1, usage2, usage3, usage4;

    getrusage(RUSAGE_SELF, &usage1);

    /* Part 1: Lock down memory mapped to the process address space */
    if (mlockall(MCL_CURRENT) < 0) {
        perror("mlockall");
        return 1;
    }

    /* Collect stats that show the effect of mlockall on paging */
    getrusage(RUSAGE_SELF, &usage2);

    /* Part 2: Dynamically allocate memory for records */
    for (i = 0; i < SIZE; i++) {
        if ((a[i] = (struct record *)malloc(sizeof(struct record))) == NULL) {
            perror("malloc failed");
            return 1;
        }
    }
    // continued…

  39. Monitoring Paging: Example 2b

    /* Collect stats that show the effect of dynamic memory allocation */
    getrusage(RUSAGE_SELF, &usage3);

    /* Part 3: Initialize allocated records */
    for (i = 0; i < SIZE; i++) {
        (a[i])->count = 0;
        (a[i])->dummy_array[0] = 0;
        (a[i])->next = NULL;
    }

    /* Collect stats that show the effect of initializing newly allocated memory. */
    getrusage(RUSAGE_SELF, &usage4);

    /* Print out paging statistics */
    printf("Before mlockall\n");
    printf("minflt %ld\tmajflt %ld\n\n", usage1.ru_minflt, usage1.ru_majflt);
    printf("After mlockall\n");
    printf("minflt %ld\tmajflt %ld\n\n", usage2.ru_minflt, usage2.ru_majflt);
    printf("After dynamic memory allocation with malloc\n");
    printf("minflt %ld\tmajflt %ld\n\n", usage3.ru_minflt, usage3.ru_majflt);
    printf("After initializing all allocated memory\n");
    printf("minflt %ld\tmajflt %ld\n\n", usage4.ru_minflt, usage4.ru_majflt);

    /* Cleanup. Free all memory allocations */
    for (i = 0; i < SIZE; i++) {
        free(a[i]);
    }

    return 0;
}

  40. Monitoring Paging: Example Run 1 • Program has 3 steps: • Locking down the application’s memory with mlockall() • Dynamically allocating memory • Initializing the dynamically allocated memory • Execution with mlockall(MCL_CURRENT): Before mlockall minflt 144  majflt 0 After mlockall minflt 421  majflt 0 After dynamic memory allocation with malloc minflt 921  majflt 0 After initializing all allocated memory minflt 1421  majflt 0

  41. Monitoring Paging: Example Run 2 • Execution with mlockall(MCL_CURRENT | MCL_FUTURE): Before mlockall minflt 144     majflt 0 After mlockall minflt 421     majflt 0 After dynamic memory allocation with malloc minflt 125921  majflt 0 After initializing all allocated memory minflt 125921  majflt 0

  42. Monitoring Paging: Analyze Results 1 With MCL_CURRENT: • Only the application’s current pages are locked into memory. • getrusage() reports that there were 144 page reclaims after the start of the application, which represents the number of page reclaims required to bring in only the working set of pages for the application. • After the call to mlockall, there are 277 additional page reclaims (for a total of 421 since the program’s start). At this point, all current data and executable pages are locked into memory. This includes any libraries that are referenced by the application. Libraries are locked down in their entirety, and a large number of the page reclaims seen here were used to page in and lock down parts of libraries not already in memory. • After dynamically allocating memory, there are 500 more page reclaims (for a total of 921 since the program’s start). This memory is not locked down because mlockall was called with the MCL_CURRENT flag, which specifies that only memory already allocated at the time of the call be locked down. Therefore, this memory will be demand paged, and physical pages are mapped to the virtual address only when the memory is accessed for the first time. Memory is required, however, to hold the size information fields that are associated with each chunk of memory, and that is responsible for the 500 page reclaims generated in this portion of the program. • Finally, there are 500 more page reclaims (for a total of 1421 since the program’s start) after pieces of the allocated memory are initialized. Physical pages are only allotted to initialized memory. The rest of the dynamically allocated memory remains only in virtual space, with no physical pages mapped to it.

  43. Monitoring Paging: Analyze Results 2 With MCL_CURRENT | MCL_FUTURE: Before mlockall minflt 144     majflt 0 After mlockall minflt 421     majflt 0 After dynamic memory allocation with malloc minflt 125921  majflt 0 After initializing all allocated memory minflt 125921  majflt 0 • Here, the number of page reclaims triggered by the dynamic memory allocation is substantial:  Physical pages are allotted for all the memory that was dynamically allocated, whether it is used or not.  • This example demonstrates the power of mlockall(), but also serves as a strong caution:  Be sure to dynamically allocate only memory that will be used, otherwise valuable physical memory will be needlessly tied up, potentially starving other system processes.

  44. Memory management, part 3: outline • Segmentation • User Mode and Memory Management • Case studies • Pentium • Unix • Linux • Windows

  45. Pentium: Segmentation + paging • Segmentation with or without paging is possible • 16K segments per process, each segment up to 4 GB • Page size 4K • A single global GDT; each process has its own LDT • 6 segment registers may store (16-bit) segment selectors: CS, DS, SS… • When a selector is loaded into a segment register, the corresponding descriptor is stored in microprogram registers (Figure - Pentium segment selector: 13-bit index, 1 bit selecting GDT (0) or LDT (1), 2-bit privilege level (0-3))

  46. Pentium - segment descriptors Pentium code segment descriptor. Data segments differ slightly

  47. Pentium - Forming the linear address • The segment descriptor is in an internal (microcode) register • If the selector is zero (TRAP) or the segment is paged out (TRAP), the access fails • The offset is checked against the limit field of the descriptor • The base field of the descriptor is added to the offset to form the linear address, which is then translated by 4K paging

  48. Intel Pentium address translation The 32-bit linear address is split 10 / 10 / 12: page directory index, page table index, page offset. Each page table can cover up to 4 MB of physical address space.

  49. Memory management, part 3: outline • Segmentation • User Mode and Memory Management • Case studies • Pentium • Unix • Linux • Windows

  50. UNIX process address space (Figure: processes A and B each have Text starting at address 0, initialized data and BSS up to around 8K, and a stack pointer at 20K; physical memory holds the OS together with the resident parts of both processes.)
