
Cache Optimization for Mobile Devices Running Multimedia Applications


Presentation Transcript


  1. Cache Optimization for Mobile Devices Running Multimedia Applications Komal Kasat Gaurav Chitroda Nalini Kumar

  2. Outline • Introduction • MPEG-4 • Architecture • Simulation • Results • Conclusion

  3. INTRODUCTION

  4. Introduction Multimedia • Combination of graphics, video, and audio • Operates on data presented visually and aurally • Multimedia compression discards the data that is least significant to the viewer • Common events are represented by fewer bits, rare events by more bits • The transmitter encodes and transmits the stream; the decoder decodes and plays it back

  5. Introduction Caches • The size and complexity of multimedia applications are increasing • Critical applications have time constraints • This demands more computational power and more traffic between the CPU and memory • There is a significant processor/memory speed gap • Caches are used to deal with memory bottlenecks • A cache improves performance by reducing data access time

  6. Introduction Memory Hierarchy [Diagram: the CPU connected directly to main memory over the bus]

  7. Introduction Memory Hierarchy [Diagram: a cache inserted between the CPU and main memory on the bus]

  8. Introduction Memory Hierarchy [Diagram: two cache levels, CL1 and CL2, between the CPU and main memory on the bus]

  9. Introduction Data Transfer among CPU, Cache and Main Memory • Data between the CPU and the cache is transferred as data objects • Data between the cache and main memory is transferred as blocks [Diagram: data-object transfer between CPU and cache; block transfer between cache and main memory]

  10. Introduction Why Cache Optimization? • With improved CPUs, the memory subsystem is the main performance bottleneck • Video data shows sufficient value reuse for caching to reduce the raw memory bandwidth required • The high data rates, large data sizes, and distinctive memory access patterns of MPEG strain caches • Though miss rates are acceptable, misses increase cache-to-memory traffic • Cache inefficiency can cause dropped frames or blocking • Mobile embedded applications have limited power and bandwidth • Cache inefficiency also has an impact on system cost

  11. MPEG-4

  12. MPEG-4 MPEG-4 • Moving Picture Experts Group • Next-generation global multimedia standard • Defines the compression of audio and visual (AV) digital data • Exploits both spatial and temporal redundancy for compression • What is the technique?

  13. MPEG-4 • Break the data into 8 × 8 pixel blocks • Apply the Discrete Cosine Transform (DCT) • Quantize, then apply run-length encoding (RLE) and an entropy coding algorithm • For temporal redundancy: motion compensation • 3 types of frames: • I (intra): contains the complete image, compressed for spatial redundancy only • P (predicted): built from 16 × 16 macroblocks • Macroblock: consists of pixels from the closest previous I or P frame, chosen so that fewer bits are required • B (bidirectional): information not in the reference frames is encoded block by block • There are 2 reference frames, an I or P, one before and one after in temporal order (a DCT sketch follows below)
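To make the transform step concrete, here is a minimal NumPy sketch of the orthonormal 2-D DCT-II applied to one 8 × 8 block; it illustrates the transform only and is not the encoder used in the study.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis: entry [k, x] = c_k * cos(pi*k*(2x+1)/(2n))
    d = np.array([[np.cos(np.pi * k * (2 * x + 1) / (2 * n))
                   for x in range(n)] for k in range(n)])
    d *= np.sqrt(2.0 / n)
    d[0, :] /= np.sqrt(2.0)          # DC row rescaled for orthonormality
    return d

def dct2(block):
    # Separable 2-D DCT of a square pixel block: D @ block @ D.T
    d = dct_matrix(block.shape[0])
    return d @ block @ d.T

# Level-shift 0..255 samples to be roughly zero-centred, then transform;
# most of the energy lands in the low-frequency (top-left) coefficients.
block = np.arange(64, dtype=float).reshape(8, 8) - 128.0
coeffs = dct2(block)
```

Quantization then zeroes most of the high-frequency coefficients, which is what makes the subsequent RLE and entropy coding effective.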

  14. MPEG-4 • Consider a GOP with 7 picture frames • Due to dependencies, frames are processed in non-temporal order • The encoding, transmission, and decoding orders should be the same • Two parameters, M and N, are specified at the encoder • An I frame is decoded every N frames • A P frame is decoded every M frames • The rest are B frames (see the ordering sketch below) • Consider the simplified bit-stream hierarchical structure on slide 16
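As a hedged illustration of how N and M determine the frame pattern and the decode order, here is a small Python sketch; the function names are hypothetical, and the convention (an I frame starts the GOP, a P frame every M frames) is taken from the slides.

```python
def gop_frame_types(n=7, m=3):
    """Display-order frame types: I starts the GOP, P every M frames."""
    return ['I' if i == 1 else 'P' if (i - 1) % m == 0 else 'B'
            for i in range(1, n + 1)]

def decode_order(types):
    """Anchors (I/P) are decoded before the B frames that sit between them."""
    order, pending_b = [], []
    for i, t in enumerate(types, 1):
        if t in ('I', 'P'):
            order.append(f'{t}{i}')
            order.extend(pending_b)
            pending_b = []
        else:
            pending_b.append(f'{t}{i}')
    return order + pending_b

types = gop_frame_types()      # ['I', 'B', 'B', 'P', 'B', 'B', 'P']
print(decode_order(types))     # ['I1', 'P4', 'B2', 'B3', 'P7', 'B5', 'B6']
```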

  15. MPEG-4 [Diagram: GOP with N = 7 and M = 3, display order I1 B2 B3 P4 B5 B6 P7; arrows show forward prediction from the I/P anchors and bidirectional prediction for the B frames]

  16. MPEG-4 Bit-stream hierarchy: Sequence → Sequence Header, GOP, …, GOP; GOP → GOP Header, Picture, …, Picture; Picture → Picture Header, Slice, …, Slice; Slice → Slice Header, Macro-block, …, Macro-block; Macro-block → Macro-block Header, Block, …, Block

  17. MPEG-4 • The decoder reads the data as a stream of bits • Each section is identified by a unique bit pattern (a scanner sketch follows below) • A GOP contains at least one I frame plus its dependent P and B frames • There are dependencies while decoding the encoded video • So selecting the right cache parameters improves cache performance significantly • Hence cache optimization is important
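For illustration only: MPEG video bit streams mark section boundaries with a three-byte 0x000001 start-code prefix followed by a code byte identifying the section. A minimal scanner sketch (not the decoder used in the study):

```python
def find_start_codes(data: bytes):
    """Yield (offset, code_byte) for each 0x000001 start-code prefix."""
    i = data.find(b'\x00\x00\x01')
    while i != -1 and i + 3 < len(data):
        yield i, data[i + 3]
        i = data.find(b'\x00\x00\x01', i + 3)

# The code-byte values below are made up for the example.
stream = b'\x00\x00\x01\xb3header' + b'\x00\x00\x01\x00picture'
for offset, code in find_start_codes(stream):
    print(hex(offset), hex(code))
```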

  18. ARCHITECTURE

  19. Architecture Cache Design Parameters • Cache Size: • The most significant design parameter • Usually increased by factors of two • Increasing the cache size shows improvement • Cost and space constraints make it a critical design decision • Line Size: • Larger line sizes give lower miss rates and better spatial locality • Sub-block placement helps decouple the size of cache lines from the memory bus • But more data must be read and written back on a miss • Memory traffic is minimal with small lines (an address-mapping sketch follows below)
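To show how cache size, line size, and associativity interact, here is a sketch of the usual address decomposition; it assumes power-of-two sizes and is an illustration, not part of the authors' tooling.

```python
def split_address(addr, cache_size, line_size, assoc):
    """Break an address into (tag, set index, byte offset)."""
    num_sets = cache_size // (line_size * assoc)
    offset = addr % line_size
    index = (addr // line_size) % num_sets
    tag = addr // (line_size * num_sets)
    return tag, index, offset

# 8 KB, 16 B lines, 4-way: one of the CL1 configurations explored later
print(split_address(0x1A2B3C, 8 * 1024, 16, 4))   # (837, 51, 12)
```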

  20. Architecture • Associativity: • Better performance by increasing associativity for small caches • Going from direct-mapped to 2-way may reduce memory traffic by 50% for small cache sizes • Associativities greater than 4 show minimal benefit across all cache sizes • Multilevel Caches: • A CL2 cache between CL1 and main memory significantly improves CPU performance • Adding CL2 decreases bus traffic and latency (see the access-time sketch below)
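Why a second level helps can be seen from the standard average-memory-access-time relation; the latencies below are illustrative assumptions, not measurements from the study.

```python
def amat(l1_hit, l1_miss_rate, l2_hit, l2_miss_rate, mem_penalty):
    """Average memory access time (cycles) for a two-level hierarchy."""
    return l1_hit + l1_miss_rate * (l2_hit + l2_miss_rate * mem_penalty)

# Assumed: 1-cycle CL1 hit, 10-cycle CL2 hit, 100-cycle memory penalty
print(amat(1, 0.05, 10, 0.01, 100))   # 1.55 cycles
print(1 + 0.05 * 100)                 # 6.0 cycles with no CL2 at all
```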

  21. Architecture Simulated Architecture

  22. Architecture • The DSP decodes the encoded video stream • CL1 is a split cache with D1 and I1 • CL2 is a unified cache • The DSP and main memory are connected via a shared bus • DMA I/O transfers and buffers data from storage to main memory • The DSP decodes video streams and writes them to main memory • The CPU reads from and writes to main memory through its cache hierarchy

  23. SIMULATION

  24. Simulation Simulation Tools • Cachegrind, from Valgrind • It is a 'cache profiler' simulation package • Performs detailed simulation of the D1, I1, and CL2 caches • Gives the total references, misses, and miss rates • It is usable with programs written in any language • VisualSim • Provides block libraries for the CPU, caches, bus, and main memory • A simulation model is developed by selecting the appropriate blocks and connecting them • Has the functionality to run the model and collect results

  25. Simulation MPEG-4 Workload • The workload defines all possible operating scenarios and environmental conditions • The quality of the workload is important for simulation accuracy and completeness • In the simulation, the D1, I1, and CL2 hit ratios are used to model the system • This data is obtained from Cachegrind and used by the VisualSim simulation model

  26. Simulation • Different combinations of D1, I1, and CL2 are used • About 33% of references are data and 67% are instructions • As cache size and line size increase, the miss rate decreases [Figure: Level 1 data and instruction references]

  27. Simulation • Calculated hit rates for various sizes of the CL1 and CL2 caches • As cache size increases, the hit rate increases [Figure: D1, I1 and CL2 hit ratios]

  28. Simulation • About 67% of references are reads and about 33% of references are writes [Figure: read and write references]

  29. Simulation Input Parameters

  30. Simulation Assumptions • A dedicated bus between CL1 and CL2 introduces negligible delay compared with the bus connecting CL2 and memory • A write-back update policy is implemented, so the CPU is released immediately after CL1 is updated (a toy model follows below) • Task time is divided proportionally among the CPU, main memory, bus, and the L1 and L2 caches
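A toy write-back model, to make the assumption concrete: a write only marks the CL1 line dirty and returns, and memory is touched when a dirty line is evicted. This is a hypothetical sketch, not the VisualSim model.

```python
class WriteBackCache:
    """Minimal direct-mapped write-back cache."""
    def __init__(self, cache_size, line_size):
        self.line_size = line_size
        self.num_lines = cache_size // line_size
        self.tags = [None] * self.num_lines
        self.dirty = [False] * self.num_lines
        self.writebacks = 0

    def access(self, addr, is_write=False):
        line = addr // self.line_size
        idx, tag = line % self.num_lines, line // self.num_lines
        hit = self.tags[idx] == tag
        if not hit:
            if self.dirty[idx]:
                self.writebacks += 1            # dirty victim goes to memory
            self.tags[idx], self.dirty[idx] = tag, False
        if is_write:
            self.dirty[idx] = True              # CPU is released right here
        return hit
```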

  31. Simulation Performance Metrics 2 performance metrics • Utilization • CPU utilization is the ratio of the time the CPU spends computing to the time it spends transferring bits and performing other overhead functions • Transactions • The total number of transactions is the total number of tasks performed by a component during the simulation

  32. RESULTS

  33. Results Miss-rate variation when changing the CL1 size while keeping the CL2 size constant • Not much benefit from a CL1 larger than 8 KB + 8 KB (D1 + I1)

  34. Results • Effect on miss rate of changing the CL2 cache size • From 32 KB to 512 KB the miss rate decreases slowly • From 512 KB to 2 MB the miss rate decreases sharply • From 2 MB to 4 MB the miss rate is almost unchanged • From a cost, space, and complexity standpoint, a larger CL2 does not provide significant benefits

  35. Results • For a smaller cache such as D1, the miss rate decreases (the hit rate increases) as the line size grows • Past a point, called the 'cache pollution point', the miss rate starts increasing again • From 16 B to 64 B, the larger line size gives better spatial locality • From 128 B onward there is no improvement, since more data must be read and written back on each miss

  36. Results • Miss rate decreases significantly when going from 2-way to 4-way • No significant improvement for 8-way and higher

  37. Results • CL1: 8 KB + 8 KB, 16 B line size, 4-way set associativity • CL2 size varied from 32 KB to 4 MB • CPU utilization and transaction counts collected [Figure: total transactions for different CL2 sizes]

  38. Results • Memory requests initiated by the CPU are referred to CL1 • Then to CL2, and finally unsuccessful requests go to main memory • Main-memory transactions decrease as the CL2 size increases • All tasks initiated at the CPU are referred to CL1 • Considering 10,000 tasks: 3,333 data and 6,667 instructions • With a D1 miss ratio of 5% and an I1 miss ratio of 2% • 168 + 135 = 303 requests go to CL2 • For a 32 KB CL2 with a 0.9% miss ratio • Only 3 tasks go to main memory • For a CL2 of 2 MB and above, the miss ratio is 0% • No tasks go to main memory (the arithmetic is reproduced below)
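The slide's task flow can be reproduced to within rounding (the slides' exact 168 + 135 presumably come from unrounded Cachegrind ratios):

```python
tasks = 10_000
data_refs, instr_refs = 3_333, 6_667      # ~33 % data, ~67 % instructions

d1_miss, i1_miss = 0.05, 0.02             # CL1 miss ratios from the slides
to_cl2 = data_refs * d1_miss + instr_refs * i1_miss
print(round(to_cl2))                      # ~300 (slides: 168 + 135 = 303)

cl2_miss_32kb, cl2_miss_2mb = 0.009, 0.0  # CL2 miss ratios
print(round(to_cl2 * cl2_miss_32kb))      # ~3 requests reach main memory
print(round(to_cl2 * cl2_miss_2mb))       # 0 for a CL2 of 2 MB or larger
```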

  39. Results • CPU utilization decreases as the CL2 size increases • Between 512 KB and 2 MB the decrease is significant • For 128 KB and smaller, or 4 MB and larger, the change is not significant

  40. CONCLUSION • Focused on enhancing MPEG-4 decoding on mobile devices through cache optimization • Used the Cachegrind and VisualSim simulation tools • Optimized cache size, line size, associativity, and the number of cache levels • The simulated architecture consists of a CPU with a two-level cache hierarchy, a DSP, a shared bus, and main memory • Collected references from Cachegrind to drive the VisualSim simulation model • Future Scope: improve system performance further with techniques such as selective caching, cache locking, scratch memory, and data recording

  41. QUESTIONS
