
Dynamic memory allocation and fragmentation

Seminar on Network and Operating Systems, Group II


  1. Dynamic memory allocation and fragmentation
  Seminar on Network and Operating Systems, Group II

  2. Schedule
  • Today (Monday):
    • General memory allocation mechanisms
    • The buddy system
  • Thursday:
    • General object caching
    • Slabs

  3. What is an allocator and what must it do?

  4. Memory Allocator
  • Keeps track of memory in use and of free memory
  • Must be fast and waste little memory
  • Services the memory requests it receives
  • Tries to prevent the formation of memory “holes”
  “For any possible allocation algorithm, there will always be a program behavior that forces it into severe fragmentation.”

  5. The three levels of an allocator
  • Strategies
    • Try to find regularities in incoming memory requests
  • Policies
    • Decide where and how to place blocks in memory (selected by the strategy)
  • Mechanisms
    • The algorithms that implement the policy

  6. Policy techniques
  • Use splitting and coalescing to satisfy incoming requests
    • Split large blocks to serve small requests
    • Coalesce small blocks to serve larger requests
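The two techniques above can be sketched over a simple free list. This is an illustrative model only (blocks as `(offset, size)` pairs), not any particular allocator's implementation:

```python
# Sketch of splitting and coalescing over a free list.
# Blocks are (offset, size) tuples; the representation is illustrative.

def split(block, request):
    """Split a free block, returning (allocated, remainder or None)."""
    offset, size = block
    assert size >= request
    if size == request:
        return (offset, size), None
    return (offset, request), (offset + request, size - request)

def coalesce(free_blocks):
    """Merge adjacent free blocks into maximal contiguous runs."""
    merged = []
    for offset, size in sorted(free_blocks):
        if merged and merged[-1][0] + merged[-1][1] == offset:
            merged[-1] = (merged[-1][0], merged[-1][1] + size)
        else:
            merged.append((offset, size))
    return merged
```

A real allocator interleaves these with its placement policy; the sketch only shows the block arithmetic.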

  7. Fragmentation, why is it a problem?

  8. Fragmentation
  • Fragmentation is the inability to reuse memory that is free
  • External fragmentation occurs when enough free memory is available but it isn’t contiguous
    • Many small holes
  • Internal fragmentation arises when a large enough block is allocated but it is bigger than needed
    • Blocks are usually split to prevent internal fragmentation

  9. What causes fragmentation?
  • Isolated deaths
    • When adjacent objects do not die at the same time
  • Time-varying program behavior
    • Memory requests change unexpectedly

  10. Why traditional approaches don’t work
  • Program behavior is not predictable in general
  • The ability to reuse memory depends on the future interaction between the program and the allocator
  • 100 blocks of size 10, or 200 of size 20?

  11. How do we avoid fragmentation?
  “A single death is a tragedy. A million deaths is a statistic.” — Joseph Stalin

  12. Understanding program behavior
  • Common behavioral patterns:
  • Ramps
    • Data structures that are accumulated over time
  • Peaks
    • Memory used in bursty patterns, usually while building up temporary data structures
  • Plateaus
    • Data structures built quickly and used for long periods of time

  13. Memory usage in the GNU C Compiler [plot: KBytes in use vs. allocation time in megabytes]

  14. Memory usage in the Grobner program [plot: KBytes in use vs. allocation time in megabytes]

  15. Memory usage in the Espresso PLA Optimizer [plot: KBytes in use vs. allocation time in megabytes]

  16. Mechanisms
  • Most common mechanisms used:
    • Sequential fits
    • Segregated free lists
    • Buddy systems
    • Bitmap fits
    • Index fits

  17. Sequential fits
  • Based on a single linear list
    • Stores all free memory blocks
    • Usually circularly or doubly linked
    • Most use the boundary tag technique
  • The most common mechanisms use this method

  18. Sequential fits
  • Best fit, first fit, worst fit
  • Next fit
    • Uses a roving pointer for allocation
  • Optimal fit
    • “Samples” the list first to find a good enough fit
  • Half fit
    • Splits blocks of twice the requested size
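The search policies above differ only in how they walk the one linear list. A minimal sketch, with the free list reduced to a list of block sizes (indices stand in for list nodes):

```python
# Sketch of sequential-fit search policies over one linear free list.
# The free list is modeled as a list of block sizes.

def first_fit(free_list, request):
    """Return the index of the first block large enough."""
    for i, size in enumerate(free_list):
        if size >= request:
            return i
    return None

def best_fit(free_list, request):
    """Return the index of the smallest block that still fits."""
    candidates = [(size, i) for i, size in enumerate(free_list) if size >= request]
    return min(candidates)[1] if candidates else None

def next_fit(free_list, request, rover):
    """First fit, but start searching at a roving pointer and wrap around."""
    n = len(free_list)
    for step in range(n):
        i = (rover + step) % n
        if free_list[i] >= request:
            return i
    return None
```

On the list `[8, 32, 16, 64]`, a 16-byte request is served by block 1 under first fit but block 2 under best fit, which is exactly the trade-off between search speed and fit quality.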

  19. Segregated free lists (size classes: 2 4 8 16 32 64 128 …)
  • Use arrays of lists which hold free blocks of particular sizes
  • Use size classes for indexing purposes
    • Usually sizes that are powers of two
  • Requested sizes are rounded up to the nearest available size

  20. Segregated free lists
  • Simple segregated list
    • No splitting of free blocks
    • Subject to severe external fragmentation
  • Segregated fit
    • Splits larger blocks if there is no free block in the appropriate free list
    • Uses first fit or next fit to find a free block
    • Three types: exact lists, strict size classes with rounding, or size classes with range lists
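The rounding-and-indexing step can be sketched as follows (the smallest class and the growth rule are the power-of-two scheme from the slide; the function name is illustrative):

```python
# Sketch of power-of-two size classes for a segregated free list.
# The smallest class (2 bytes) matches the series on the slide.

MIN_CLASS_SIZE = 2  # smallest size class

def size_class(request):
    """Round a request up to the nearest power-of-two class and return
    (free-list index, rounded size)."""
    size = MIN_CLASS_SIZE
    index = 0
    while size < request:
        size *= 2
        index += 1
    return index, size
```

A request of 17 bytes lands in the 32-byte class: the difference between the rounded size and the request is the internal fragmentation this scheme accepts in exchange for constant-time list lookup.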

  21. Buddy system
  • A special case of segregated fit
    • Supports limited splitting and coalescing
  • Separate free list for each allowable size
  • Simple block address computation
  • A free block can only be merged with its unique buddy
    • Only whole, entirely free blocks can be merged
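The “simple block address computation” in a binary buddy system comes from alignment: a block of size 2^k sits at a 2^k-aligned offset, so its buddy's offset differs only in bit k. A minimal sketch:

```python
# Buddy address computation in a binary buddy system: for a block of
# size 2**k at a 2**k-aligned offset, the buddy's offset is found by
# flipping bit k, i.e. offset XOR size.

def buddy_offset(offset, size):
    assert size & (size - 1) == 0 and offset % size == 0, "power-of-two, aligned"
    return offset ^ size

def can_merge(offset_a, offset_b, size):
    """Two free blocks of equal size merge only if they are each
    other's unique buddy."""
    return buddy_offset(offset_a, size) == offset_b
```

This is why the blocks at offsets 4 and 8 (size 4) can never merge even when both are free: they are not buddies, because merging them would produce a misaligned 8-byte block.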

  22. Buddy system: a 3 MB request arrives; the 16 MB heap is one free block

  23. Buddy system: the 16 MB block is split into two free 8 MB buddies

  24. Buddy system: one 8 MB buddy is split into two free 4 MB buddies

  25. Buddy system: a 4 MB block is allocated for the 3 MB request; its buddy and the other 8 MB block stay free

  26. Binary buddies
  • Simplest implementation
    • All buddy sizes are powers of two
    • Each block is divided into two equal parts
  • Internal fragmentation is very high
    • Expected 28%, in practice usually higher
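The expected-waste figure can be made plausible with a small calculation. Under the (simplifying) assumption of uniformly distributed integer request sizes, rounding each request up to the next power of two wastes roughly a quarter of each block on average, the right order of magnitude for the 28% quoted above:

```python
# Rough check on binary-buddy internal fragmentation: every request is
# rounded up to the next power of two, so a uniformly random request in
# (2**(k-1), 2**k] wastes a bit under a quarter of its block on average.
# The uniform-request assumption is illustrative, not from the slides.

def rounded_up(request):
    size = 1
    while size < request:
        size *= 2
    return size

def average_waste_fraction(lo, hi):
    """Mean wasted fraction over all integer requests in [lo, hi]."""
    total = sum((rounded_up(r) - r) / rounded_up(r) for r in range(lo, hi + 1))
    return total / (hi - lo + 1)
```

Real request streams are not uniform (they cluster on a few sizes), which is one reason observed fragmentation is usually higher than the expectation.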

  27. Fibonacci buddies
  • Size classes based on the Fibonacci series
    • A more closely spaced set of size classes
    • Reduces internal fragmentation
  • Blocks can only be split into sizes that are also in the series
  • Uneven block sizes can be a disadvantage
    • When allocating many equal-sized blocks

  28. Fibonacci buddies
  Size series: 2 3 5 8 13 21 34 55 …
  Splitting blocks: 13 → 5 + 8, 21 → 8 + 13
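The splitting rule follows directly from the series: each size is the sum of the two preceding sizes, so a block splits into exactly those two buddies. A sketch using the series from the slide:

```python
# Splitting in a Fibonacci buddy system: a block whose size is in the
# series splits into the two preceding sizes (which sum to it).

SIZES = [2, 3, 5, 8, 13, 21, 34, 55]  # series from the slide

def fib_split(size):
    """Return the two buddy sizes a block splits into, or None for the
    two smallest classes, which cannot be split within the series."""
    i = SIZES.index(size)
    if i < 2:
        return None
    return SIZES[i - 2], SIZES[i - 1]
```

Note that, unlike binary buddies, the two halves have different sizes, which is the “uneven block sizes” disadvantage when a program wants many equal-sized blocks.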

  29. Weighted buddies
  • Size classes are powers of two
    • Between each pair is a size three times a power of two
  • Two different splitting methods
    • 2^x sizes can be split in half
    • 3·2^x sizes can be split in half or unevenly into two sizes

  30. Weighted buddies
  Size series: 2 3 4 6 8 12 16 24 … (2^1, 3·2^0, 2^2, 3·2^1, 2^3, 3·2^2, 2^4, 3·2^3, …)
  Splitting of 3·2^x sizes: 6 → 3 + 3 (in half) or 6 → 2 + 4 (unevenly)

  31. Double buddies
  • Use two different binary buddy series
    • One list uses power-of-two sizes
    • The other also uses power-of-two spacing, offset from the first (sizes 3·2^x)
  • Splitting rules
    • Blocks can only be split in half
    • Split blocks stay in the same series

  32. Double buddies
  Size series: 2 4 8 16 32 64 128 … (2^1, 2^2, 2^3, 2^4, 2^5, 2^6, 2^7, …)
               3 6 12 24 48 96 192 … (3·2^0, 3·2^1, 3·2^2, 3·2^3, 3·2^4, 3·2^5, 3·2^6, …)
  Splitting of 3·2^x sizes: 6 → 3 + 3 (the halves stay in the 3·2^x series)
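The benefit of the second series is that requests round up to a closer size class. A sketch of class selection over the two interleaved series (the function name is illustrative):

```python
# Size-class choice in a double buddy system: two binary buddy series
# (2**k and 3*2**k) interleave, so a request rounds up to a closer
# class than with a single power-of-two series.

def double_buddy_class(request):
    """Smallest size >= request taken from either series
    (2, 4, 8, ... and 3, 6, 12, ...)."""
    best = None
    for base in (2, 3):
        size = base
        while size < request:
            size *= 2
        if best is None or size < best:
            best = size
    return best
```

For example, a 5-byte request rounds up to 6 rather than 8: the gap between adjacent classes is at most a factor of 3/2 instead of 2, roughly halving the worst-case rounding waste.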

  33. Deferred coalescing
  • Blocks are not merged as soon as they are freed
  • Uses quick lists or subpools
    • Arrays of free lists, one for each size class whose coalescing is to be deferred
  • Blocks larger than those defined to be deferred are returned to the general allocator
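A minimal sketch of the quick-list idea (the size classes and cutoff here are illustrative choices, not from the slides):

```python
# Sketch of deferred coalescing with quick lists: freed blocks of small
# size classes are cached for fast exact-size reuse instead of being
# merged immediately; larger blocks go back to the general allocator.

QUICK_SIZES = {16, 32, 64}          # classes served by quick lists (illustrative)
quick_lists = {s: [] for s in QUICK_SIZES}
general_free = []                   # stands in for the general allocator

def deferred_free(offset, size):
    if size in QUICK_SIZES:
        quick_lists[size].append(offset)     # cached, no coalescing yet
    else:
        general_free.append((offset, size))  # general allocator may coalesce

def quick_alloc(size):
    """Reuse a cached block of this exact class, if any is available."""
    if size in QUICK_SIZES and quick_lists[size]:
        return quick_lists[size].pop()
    return None
```

The pay-off is that a free/allocate pair of the same size skips both coalescing and re-splitting; the risk is that blocks parked on quick lists cannot merge with their neighbors in the meantime.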

  34. Deferred reuse
  • Recently freed blocks are not immediately reused
    • Older free blocks are used instead of newly freed ones
  • Compacts long-lived memory blocks
  • Can cause increased fragmentation if only short-lived blocks are requested

  35. Discussion

  36. Questions?
  • Why can deferred reuse cause increased fragmentation if only short-lived blocks are requested?
  • How can the order in which requests arrive affect memory fragmentation?
  • Why is fragmentation at peaks more important than at intervening points?

  37. Questions?
  • When would deferred coalescing be likely to cause more fragmentation?
  • What is a possible disadvantage of splitting blocks in the Fibonacci buddy system?
  • In the double buddy system, why does the added size-class list reduce internal fragmentation by about 50%?
