
Brown Bag Seminar FALL 2006 MEMORY MANAGEMENT



Presentation Transcript


  1. Brown Bag Seminar FALL 2006 MEMORY MANAGEMENT By Kester Marrain

  2. Introduction to the Memory Manager • The default virtual address space size for a process on 32-bit Windows is 2 GB. • If the image is marked large address space aware, this can increase to 3 GB on 32-bit Windows (with a boot-time switch) and to 4 GB on 64-bit Windows. • The memory manager is part of the Windows executive and resides in the file Ntoskrnl.exe.

  3. Memory Manager Tasks • Translating, or mapping, a process’s virtual address space into physical memory. • Paging some of the contents of memory to disk when it becomes overcommitted; that is, when running threads or system code try to use more physical memory than is currently available.

  4. Memory Manager Additional Services • Memory-mapped files (internally called section objects). • Copy-on-write memory. • Support for applications using large, sparse address spaces. • A way for a process to allocate and use larger amounts of physical memory than can be mapped into the process’s virtual address space.

  5. Memory Manager Components • A set of executive system services for allocating, deallocating, and managing virtual memory, exposed through the Windows API and through kernel-mode device driver interfaces. • A translation-not-valid and access fault trap handler for resolving hardware exceptions and making virtual pages resident when they are referenced.

  6. Memory Manager Components • Several key components that run in the context of six different kernel-mode system threads: • The working set manager (priority 16). • The process/stack swapper (priority 23). • The modified page writer (priority 17). • The mapped page writer (priority 17). • The dereference segment thread (priority 18). • The zero page thread (priority 0).

  7. Working Set Manager • Called by the balance set manager (a system thread created by the kernel) once per second, as well as when free memory falls below a certain threshold; it drives the overall memory management policies.

  8. Internal Synchronization • Memory Manager is fully reentrant. • Spinlocks and Executive resources.

  9. Configuring the Memory Manager • You can add and/or modify registry values under the key HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management to override some of the default performance calculations. • The default computations will be sufficient for the majority of workloads. • Many of the limits and thresholds that control memory manager policy decisions are computed at boot time on the basis of memory size and product type.

  10. Services the Memory Manager Provides • System services to allocate and free virtual memory, share memory between processes, map files into memory, flush virtual pages, and lock virtual pages into memory. • Services exposed through the Windows API: • Page-granularity virtual memory functions (VirtualXxx). • Memory-mapped file functions (CreateFileMapping, MapViewOfFile). • Heap functions (HeapXxx, plus the older interfaces LocalXxx and GlobalXxx). • Allocating and deallocating physical memory and locking pages in memory for direct memory access (DMA): the Mm-prefixed functions.

  11. Large and Small Pages • The virtual address space is divided into units called pages. • The hardware memory management unit translates virtual to physical addresses at the granularity of a page. • The advantage of large pages is translation speed: a single translation entry covers references to any other data within the same large page.

  12. Large and Small Pages • The disadvantage of large pages is that each page must be mapped read/write, so data may be inadvertently overwritten: sensitive data can end up resident in a page that has its write privilege turned on.

  13. Reserving and Committing Pages • Pages in a process address space are free, reserved, or committed. • The reserving and committing services are exposed through the Windows VirtualAlloc and VirtualAllocEx functions. • Committed pages are pages that, when accessed, ultimately translate to valid pages in physical memory.

  14. Reserving and Committing Pages • Committed pages are either private or mapped to a view of a section. • Private committed pages are inaccessible to other processes unless they are accessed using cross-process memory functions such as ReadProcessMemory or WriteProcessMemory.

  15. Reserving and Committing Pages • Private pages are written to disk through normal modified page writing. • Mapped file pages can be written back to disk by calling the FlushViewOfFile function. • Pages can be decommitted, and/or the address space released, with the VirtualFree or VirtualFreeEx function.

  16. Locking Memory • In general it is better to let the memory manager decide which pages remain in physical memory. However, pages can be locked in memory in two ways: • A call to the VirtualLock function to lock pages into the process’s working set. • Device drivers can call the kernel-mode functions MmProbeAndLockPages, MmLockPagableCodeSection, MmLockPagableDataSection, or MmLockPagableSectionByHandle. Pages locked using this method must be explicitly unlocked.

  17. Allocation Granularity • When a region of address space is reserved, Windows ensures that the base of the region is aligned to the system allocation granularity (64 KB on current platforms) and that the size of the region is a multiple of the system page size.

  18. Shared Memory and Mapped Files • Shared memory can be defined as memory that is visible to more than one process, or that is present in more than one process’s virtual address space. [Diagram: pages 1, 2, and 3 of physical memory mapped into two process address spaces; both see the original data.]

  19. Shared Memory and Mapped Files • Code pages in executable images are mapped as execute-only and writable pages are mapped as copy-on-write. • The underlying primitives in the memory manager used to implement shared memory are called section objects, which are called file mapping objects in the Windows API.

  20. Shared Memory and Mapped Files • A section object can be connected to an open file on disk or to committed memory. • To create a section object, call the Windows CreateFileMapping function, specifying the file handle to map it to (or INVALID_HANDLE_VALUE for a page-file-backed section).

  21. Shared Memory and Mapped Files • If the section has a name, other processes can open it with OpenFileMapping. • Access can also be granted to section objects through handle inheritance or handle duplication. • Device drivers can also manipulate section objects.

  22. Protecting Memory • All system wide data structures and memory pools used by kernel-mode system components can be accessed only while in kernel mode. • Each process has a separate private address space. • All processors supported by Windows provide some form of hardware-controlled memory protection.

  23. Protecting Memory • Finally, shared memory section objects have standard Windows access-control lists that are checked when a process attempts to open them.

  24. No Execute Page Protection • Also known as DEP (Data Execution Prevention), means an attempt to transfer control to an instruction in a page marked as “no execute” will generate an access fault.

  25. Copy-On-Write • [Diagram: before copy-on-write, two process address spaces map the same physical pages 1, 2, and 3; both see the original data.]

  26. Copy-On-Write • [Diagram: after copy-on-write, the writing process maps a private copy of page 2; pages 1 and 3 remain shared, and the other process still sees the original data.]

  27. Heap Manager • Manages allocations inside larger memory regions reserved with the page-granularity virtual memory functions. • Exists in two places: Ntdll.dll and Ntoskrnl.exe. • Examples of heap functions: • HeapCreate and HeapDestroy • HeapAlloc • HeapFree • HeapLock • HeapWalk

  28. Types of Heaps • Each process has at least one heap: the default process heap. It is never deleted during the process’s lifetime; its default size is 1 MB, but it can be made bigger. • An array of all heaps is maintained in each process. • Threads can query this array by using GetProcessHeaps.

  29. Heap Manager Structure (layers, top to bottom): Application → Windows heap APIs → Heap Manager [Front-End Heap Layer (optional) → Core Heap Layer] → Memory Manager

  30. Heap Manager • Two types of front-end layers: • Look-aside lists • Low Fragmentation Heap. • Only one front-end layer can be used at a time.

  31. Heap Synchronization • If a process is single threaded or uses an external mechanism for synchronization, it can tell the heap manager to avoid the overhead of synchronization by specifying HEAP_NO_SERIALIZE either at heap creation or on a per-allocation basis. • A process can also lock the entire heap.

  32. Look-Aside Lists • Look-aside lists are singly linked lists that allow a “push to the list” or a “pop from the list” in last-in, first-out order, using non-blocking algorithms. • There are 128 look-aside lists, which handle allocations up to 1 KB on 32-bit platforms. • They improve performance because multiple threads can concurrently perform allocation and deallocation operations without acquiring the heap’s global lock.

  33. Look-Aside Lists • The heap manager creates look-aside lists automatically when a heap is created, as long as no debugging options are enabled. • The difference between pools and look-aside lists is that while general pool allocations can vary in size, a look-aside list contains only fixed-size blocks.

  34. Low Fragmentation Heap • For applications that have relatively small heap memory usage (< 1 MB), the heap manager’s best-fit policy helps keep a low memory footprint. • The LFH is turned on only if an application calls the HeapSetInformation function. • The LFH optimizes usage patterns by efficiently handling same-size blocks.

  35. Heap Debugging Features • Enable Tail Checking • Enable Free Checking • Parameter Checking • Heap Validation • Heap Tagging and stack traces support • Pageheap

  36. Address Windowing Extensions • 32-bit versions of Windows can support up to 128 GB of physical memory, yet each 32-bit user process has by default only a 2-GB virtual address space. • To allow a 32-bit process to allocate and access more physical memory than can be represented in its limited address space, Windows provides a set of functions called Address Windowing Extensions (AWE).

  37. Address Windowing Extensions • Allocating and using memory via the AWE functions is done in three steps: • Allocating the physical memory to be used. • Creating a region of virtual address space to act as a window to map views of the physical memory. • Mapping views of the physical memory into the window. • This is the only way for a 32-bit process to directly use more than 2 GB of memory.

  38. Address Windowing Extensions • AWE memory is never paged out. • This is useful for security because data in AWE memory can never have a copy in the paging file that someone could examine by rebooting into an alternate operating system.

  39. Address Windowing Extensions • Restrictions on AWE: • Pages can’t be shared between processes. • The same physical page can’t be mapped to more than one virtual address in the same process.

  40. System Memory Pools • At system initialization, the memory manager creates two types of dynamically sized memory pools that the kernel-mode components use to allocate system memory: • Non-Paged Pools • Paged Pools

  41. System Memory Pools • Both types of pools are mapped into the system part of the address space and are mapped into every process. • Uniprocessor systems have three paged pools; multiprocessor systems have five. • Having more than one paged pool reduces the frequency of system blocking on simultaneous calls to pool routines.

  42. Non-Paged Pool • Consists of ranges of system virtual addresses that are guaranteed to reside in physical memory at all times, and thus can be accessed at any time without incurring a page fault. This is required because page faults can’t be satisfied at DPC/dispatch level or above.

  43. Paged Pool • A region of virtual memory in system space that can be paged in and out of the system. It is accessible from any process context.

  44. Pools vs. Look-Aside Lists • The general pools are more flexible in terms of what they can supply. • Look-aside lists are faster because they don’t use spinlocks and because the system does not have to search for memory of a given size.

  45. Driver Verifier

  46. Driver Verifier • Driver Verifier is a mechanism used to help find and isolate common bugs in device drivers and kernel-mode system code. • Accessible by clicking Run and typing Verifier. • The special pool option brackets pool allocations with invalid pages, so that references before or after the allocation result in a kernel-mode access violation, crashing the system with the finger pointed at the buggy driver.

  47. Driver Verifier • Pool Tracking – The memory manager checks at driver unload time whether the driver freed all the memory allocations it made. If it didn’t, the system is crashed. • Force IRQL Checking • Enabling Low Resource Simulation

  48. Virtual Address Space Layout • Three main types of data are mapped into the virtual address space in Windows: • Per-process private code and data • Session-wide code and data • System-wide code and data

  49. Session • A session consists of the processes and other system objects that represent a single user’s workstation logon session. • Each session has a session-specific paged pool.

  50. System • System code • System mapped views • Hyperspace – A special region used to map the process working set list and to temporarily map other physical pages for operations such as zeroing a page on the free list, invalidating page table entries in other page tables, and setting up a new process’s address space during process creation.
