
Chapter 18 Buffer Cache Tuning




1. Chapter 18: Buffer Cache Tuning

2. Buffer Cache Principles
• The LRU List / LRU Algorithm
  • Keeps information on how recently blocks have been accessed
  • "Warm" blocks have been recently accessed
    • On the "MRU" (most recently used) end of the LRU list
  • "Cold" blocks have not been recently accessed
    • On the "LRU" end of the LRU list
  • Coldest blocks are removed from the cache to make way for new blocks

3. Table Scans and the Buffer Cache
• Blocks from table scans are treated differently
  • Done to prevent flooding the cache
  • Table scan blocks are placed immediately at the LRU end of the list
• Blocks read using direct IO bypass the buffer cache entirely
• Blocks from index lookups are placed at the MRU end of the LRU list
• The CACHE table attribute can modify this behavior
  • Treats scans of that table the same as index lookups
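The CACHE attribute mentioned above is set per table with ALTER TABLE (or at CREATE TABLE time). A minimal sketch, with a hypothetical table name:

```sql
-- Hypothetical table name. CACHE requests that blocks read by full
-- scans of this table be placed at the MRU end of the LRU list,
-- the same treatment index-lookup blocks receive.
ALTER TABLE sales_summary CACHE;

-- Revert to the default table-scan treatment (LRU end of the list):
ALTER TABLE sales_summary NOCACHE;
```

Use this sparingly: caching a large table's scan blocks can flood the cache, which is exactly what the default behavior exists to prevent.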

4. Buffer Cache Configuration
• Monitor the buffer cache using V$BH
  • One row for each block in the buffer cache
  • Shows which segments are in the buffer cache
• Buffer cache hit ratio
  • Percentage of block requests satisfied from the cache without a physical read
  • Historically used as a measure of good performance
  • Calculated from V$BUFFER_POOL_STATISTICS
  • View with caution – it may not tell the whole story
    • Operations using direct IO are not included
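The hit ratio and the per-segment view of the cache can both be pulled from the views named above. A sketch of the two queries (the join to DBA_OBJECTS assumes you have access to that view):

```sql
-- Hit ratio per buffer pool from V$BUFFER_POOL_STATISTICS.
-- Caveat from the slide: direct-path IO bypasses the cache
-- and is not reflected in these counters.
SELECT name,
       ROUND(1 - physical_reads / (db_block_gets + consistent_gets), 4)
         AS hit_ratio
  FROM v$buffer_pool_statistics;

-- Which segments occupy the most cached blocks, via V$BH.
-- V$BH.OBJD matches DBA_OBJECTS.DATA_OBJECT_ID.
SELECT o.object_name, COUNT(*) AS cached_blocks
  FROM v$bh b
  JOIN dba_objects o ON b.objd = o.data_object_id
 GROUP BY o.object_name
 ORDER BY cached_blocks DESC;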

5. Multiple Buffer Caches
• Seven buffer caches in all:
  • Default
  • Keep – for smaller tables that are frequently accessed
  • Recycle – for larger tables accessed via table scans
    • Less relevant in 11g, since serial direct reads bypass the cache anyway
  • Block-size specific (2K through 32K)
    • Configured either for performance, or
    • When the transportable tablespace feature is used between databases with different block sizes
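The KEEP and RECYCLE pools must be sized explicitly before segments can be assigned to them. A sketch, with illustrative sizes and a hypothetical table name:

```sql
-- Size the non-default pools (values are illustrative):
ALTER SYSTEM SET db_keep_cache_size = 256M;
ALTER SYSTEM SET db_recycle_cache_size = 128M;

-- Assign a small, frequently accessed table to the KEEP pool:
ALTER TABLE dim_currency STORAGE (BUFFER_POOL KEEP);

-- Assign a large, scan-heavy table to the RECYCLE pool:
ALTER TABLE audit_history STORAGE (BUFFER_POOL RECYCLE);
```

Note that, per slide 8, ASMM does not automatically manage these non-default pools; their sizes stay fixed at what you set.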

6. Sizing the Buffer Cache
• The relationship between buffer cache size and physical IO is dependent on the application workload
• Use the buffer cache advisory
  • Oracle keeps a second, larger LRU list just for advisory statistics
  • See V$DB_CACHE_ADVICE
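The advisory view can be queried directly to see the estimated effect of growing or shrinking the cache. A sketch against the default pool:

```sql
-- Estimated physical reads at candidate cache sizes for the
-- default pool. A read factor of 1.0 is the current size; values
-- below 1.0 estimate the benefit of a larger cache.
SELECT size_for_estimate       AS cache_mb,
       estd_physical_read_factor,
       estd_physical_reads
  FROM v$db_cache_advice
 WHERE name = 'DEFAULT'
 ORDER BY size_for_estimate;
```

Look for the point where the estimated read factor flattens out; growing the cache beyond that point buys little.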

7. ASMM and AMM
• Automatic Shared Memory Management (ASMM)
  • Available from 10g forward
  • Allows certain SGA components to be automatically sized
  • Controlled by the SGA_TARGET parameter
  • See V$SGA_TARGET_ADVICE
  • Individual parameters can still be set to specify minimum sizes
• Automatic Memory Management (AMM)
  • Introduced in 11g; controlled by the MEMORY_TARGET parameter
  • Simplifies memory management by sizing the SGA and PGA together
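Both features are enabled through a single target parameter. A sketch with illustrative sizes:

```sql
-- Enable ASMM (10g onward): Oracle auto-sizes the major SGA pools
-- within this total.
ALTER SYSTEM SET sga_target = 2G SCOPE = BOTH;

-- Or, from 11g, enable AMM instead. Writing to the SPFILE and
-- restarting is the safe path if MEMORY_MAX_TARGET is not already set.
ALTER SYSTEM SET memory_target = 3G SCOPE = SPFILE;
```

With AMM in effect, SGA_TARGET and PGA_AGGREGATE_TARGET act as minimums rather than fixed sizes.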

8. ASMM Tuning
• Individual parameters can still be set
  • Including DB_CACHE_SIZE, SHARED_POOL_SIZE, LARGE_POOL_SIZE
  • Under ASMM these specify minimum sizes
  • Can prevent ASMM from "over-shrinking" a pool
  • Can stabilize overall memory allocation in the SGA
• ASMM doesn't manage the non-default pools
• Memory thrashing can occur with ASMM
  • Typically with variable workloads
  • Look for the "SGA: allocation forcing component growth" wait
  • Set minimum values for individual pools to avoid thrashing
  • If the problem persists, disable ASMM
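Setting a floor under a pool, and checking for the resize churn that signals thrashing, can both be done as sketched below (sizes illustrative):

```sql
-- Under ASMM (SGA_TARGET set), a non-zero value acts as a floor
-- the component cannot be shrunk below:
ALTER SYSTEM SET db_cache_size = 512M;
ALTER SYSTEM SET shared_pool_size = 256M;

-- Recent automatic resize operations; frequent alternating
-- GROW/SHRINK cycles on the same component suggest thrashing:
SELECT component, oper_type, initial_size, final_size, end_time
  FROM v$sga_resize_ops
 ORDER BY end_time DESC;
```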

9. ASMM Tuning (cont.)
• Buffer cache starvation can occur when the large or shared pool is heavily taxed, for example by:
  • Shared servers
  • Global PL/SQL variables
  • In-memory collections
  • Poor cursor management
• Avoid by:
  • Disabling ASMM, or
  • Setting the buffer cache parameter individually as a minimum
