
MemTracker: Efficient and Programmable Support for Memory Access Monitoring and Debugging


Presentation Transcript


  1. MemTracker: Efficient and Programmable Support for Memory Access Monitoring and Debugging Guru Venkataramani, Brandyn Roemer, Yan Solihin, Milos Prvulovic

  2. Introduction • Software is increasingly complex • More complexity means more bugs • Memory bugs are most common • Many are security vulnerabilities • How to catch them efficiently? Venkataramani HPCA’07

  3. Debugging and Monitoring • Maintain Information/state about memory • Low performance overhead in “always-on” mode • Hard problem • Flexible/Programmable • Even harder!! Venkataramani HPCA’07

  4. Challenges • Software approach • Flexible • Large (2X to 30X) slowdown • Hardware approaches • Faster • Most are checker-specific • Others need software intervention too often Venkataramani HPCA’07

  5. Related Work • DISE [ISCA’03]: (+) pattern-matches instructions and dynamically injects instrumentation; (-) modifies the front end of the pipeline and adds extra code to the instruction stream • Mondrian [ASPLOS’02]: (+) fine-grain protection, with different permissions for adjacent words; (-) needs software intervention for permission updates and complex hardware (a trie structure) Venkataramani HPCA’07

  6. Objectives • MemTracker * Maintains state for every memory word * No software intervention for most state checks and updates * Efficient checks and updates even when nearby locations have different states * Programmable (can implement different checkers) Venkataramani HPCA’07

  7. What is MemTracker? • A programmable state machine • (State, event) → (State, Exception) • Supports up to 16 states (4 state bits per word) • Not a fundamental limit; can be extended • All memory actions are events • Memory accesses: loads, stores • User events (affect only state) Venkataramani HPCA’07
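
For readers following along, a minimal C sketch of this programmable (state, event) → (state, exception) mapping is shown below; the table dimensions, type names, and event encoding are illustrative assumptions, not MemTracker’s actual hardware structures.

    #include <stdint.h>

    #define NUM_STATES 16   /* 4 state bits per word, the current limit on the slide */
    #define NUM_EVENTS 8    /* loads, stores, and a handful of user events (assumed) */

    /* One entry of the programmable transition table:
     * (current state, event) -> (next state, raise exception?) */
    typedef struct {
        uint8_t next_state;
        uint8_t raise_exc;   /* 1 = deliver an exception to the software handler */
    } ttab_entry;

    static ttab_entry ttab[NUM_STATES][NUM_EVENTS];

    /* Apply one event to one word's state; returns 1 if the checker flags an error. */
    static int apply_event(uint8_t *state, unsigned event)
    {
        ttab_entry e = ttab[*state][event];
        *state = e.next_state;
        return e.raise_exc;
    }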

  8. Example: Heap Checker [state diagram] • States: UNALLOC, ALLOC/UNINIT, INIT, NON-HEAP • Malloc takes a word from UNALLOC to ALLOC/UNINIT; a store then takes it to INIT; Free returns an allocated word to UNALLOC • Errors: load, store, or free of an UNALLOC word; load of an ALLOC/UNINIT word (uninitialized read); malloc of an already-allocated word; malloc or free of a NON-HEAP word Venkataramani HPCA’07

  9-12. MemTracker State Table (Event →, State ↓) for the heap checker of slide 8, built up over four slides:
        State \ Event    Load             Store      Malloc          Free
        UNALLOC          ERROR            ERROR      ALLOC/UNINIT    ERROR
        ALLOC/UNINIT     ERROR (uninit.)  INIT       ERROR           UNALLOC
        INIT             INIT             INIT       ERROR           UNALLOC
        NON-HEAP         NON-HEAP         NON-HEAP   ERROR           ERROR
  Venkataramani HPCA’07
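
Continuing the sketch from slide 7, the heap checker’s rules could be loaded into that transition table as follows; the state and event encodings are again illustrative, not the hardware’s.

    enum { EV_LOAD, EV_STORE, EV_MALLOC, EV_FREE };
    enum { ST_UNALLOC, ST_ALLOC_UNINIT, ST_INIT, ST_NONHEAP };

    static void init_heap_checker(void)
    {
        /* UNALLOC: only malloc is legal; it yields allocated-but-uninitialized. */
        ttab[ST_UNALLOC][EV_MALLOC] = (ttab_entry){ ST_ALLOC_UNINIT, 0 };
        ttab[ST_UNALLOC][EV_LOAD]   = (ttab_entry){ ST_UNALLOC, 1 };
        ttab[ST_UNALLOC][EV_STORE]  = (ttab_entry){ ST_UNALLOC, 1 };
        ttab[ST_UNALLOC][EV_FREE]   = (ttab_entry){ ST_UNALLOC, 1 };

        /* ALLOC/UNINIT: a store initializes the word, a load reads uninitialized
         * data, free returns the word to UNALLOC, a second malloc is an error. */
        ttab[ST_ALLOC_UNINIT][EV_STORE]  = (ttab_entry){ ST_INIT, 0 };
        ttab[ST_ALLOC_UNINIT][EV_LOAD]   = (ttab_entry){ ST_ALLOC_UNINIT, 1 };
        ttab[ST_ALLOC_UNINIT][EV_FREE]   = (ttab_entry){ ST_UNALLOC, 0 };
        ttab[ST_ALLOC_UNINIT][EV_MALLOC] = (ttab_entry){ ST_ALLOC_UNINIT, 1 };

        /* INIT: loads and stores are fine, free deallocates, malloc is an error. */
        ttab[ST_INIT][EV_LOAD]   = (ttab_entry){ ST_INIT, 0 };
        ttab[ST_INIT][EV_STORE]  = (ttab_entry){ ST_INIT, 0 };
        ttab[ST_INIT][EV_FREE]   = (ttab_entry){ ST_UNALLOC, 0 };
        ttab[ST_INIT][EV_MALLOC] = (ttab_entry){ ST_INIT, 1 };

        /* NON-HEAP: loads and stores are fine; malloc/free of a non-heap word is an error. */
        ttab[ST_NONHEAP][EV_LOAD]   = (ttab_entry){ ST_NONHEAP, 0 };
        ttab[ST_NONHEAP][EV_STORE]  = (ttab_entry){ ST_NONHEAP, 0 };
        ttab[ST_NONHEAP][EV_MALLOC] = (ttab_entry){ ST_NONHEAP, 1 };
        ttab[ST_NONHEAP][EV_FREE]   = (ttab_entry){ ST_NONHEAP, 1 };
    }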

  13. State Storage [figure: application’s virtual address space] • Normal virtual memory space holds the code, data, heap, and stack • A protected, reserved region of virtual space holds the state for the application’s data Venkataramani HPCA’07

  14. State Lookup (word accesses only) [figure: lookup datapath] • The state address is formed from the State Base register, the data address, and the number of state bits per word • Example: base 0xF0000000, data address 0xABCD, 2 state bits per word → state address 0xF0000ABC • The cache returns a state byte (11001010 in the example) and a MUX selects this word’s state bits (11) Venkataramani HPCA’07
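
The lookup arithmetic in this example can be written out as a short C sketch; it assumes 4-byte data words, a power-of-two number of state bits per word, and a particular bit numbering inside the state byte, none of which the slide pins down.

    #include <stdint.h>
    #include <stdio.h>

    /* Byte address of the state for the word containing data_addr. */
    static uint32_t state_byte_addr(uint32_t state_base, uint32_t data_addr,
                                    unsigned state_bits_per_word)
    {
        uint32_t word_index = data_addr >> 2;                 /* 4-byte data words     */
        uint32_t bit_offset = word_index * state_bits_per_word;
        return state_base + (bit_offset >> 3);                /* 8 bits per state byte */
    }

    /* Shift the MUX would use to pick this word's bits out of the state byte. */
    static unsigned state_bit_shift(uint32_t data_addr, unsigned state_bits_per_word)
    {
        return ((data_addr >> 2) * state_bits_per_word) & 7;
    }

    int main(void)
    {
        /* Slide's example: base 0xF0000000, data address 0xABCD, 2 state bits per
         * word -> state byte 0xF0000ABC, with a 2-bit state selected inside it.  */
        printf("state byte: 0x%08lX, shift: %u\n",
               (unsigned long) state_byte_addr(0xF0000000u, 0xABCDu, 2),
               state_bit_shift(0xABCDu, 2));
        return 0;
    }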

  15. Caching State Information: Shared Caching • No additional resources for state • Data and state blocks compete for cache lines in the existing caches • Loads/stores already do data lookups; now they also need state lookups • These state lookups double L1 port contention Venkataramani HPCA’07

  16. Caching State Information: Interleaved Caching • Expand each cache line to also store the state for its data • + One lookup finds both data and state • - L1 cache is larger and slower even when no checking is being done Venkataramani HPCA’07

  17. Caching State Information: Split Caching • Dedicated (small) state L1 cache • Provides separate ports for state lookups • Leaves the data L1 cache alone • When NOT checking, the state L1 can be turned off Venkataramani HPCA’07

  18. Caching State Information: Summary [figure compares shared, interleaved, and split caching] • L2 and below use shared caching (no additional space for state) • The L2 is single-ported, but contention there is rarely a problem (the L1 filters out most accesses) • State is smaller than data, so it needs less bandwidth and capacity • The rest of the talk uses a split L1 state cache with shared caching at L2 and below Venkataramani HPCA’07
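
As a compact way to see the design space, the three organizations (and the configuration the rest of the talk uses) could be expressed as a simulator-style configuration; the type and field names below are illustrative.

    /* The three L1 state-caching organizations compared on slides 15-18. */
    typedef enum {
        STATE_CACHE_SHARED,      /* state blocks compete with data in the data L1     */
        STATE_CACHE_INTERLEAVED, /* each data line is widened to carry its state bits */
        STATE_CACHE_SPLIT        /* small dedicated state L1 alongside the data L1    */
    } state_cache_org;

    typedef struct {
        state_cache_org l1_org;   /* L1 choice; L2 and below always share (slide 18) */
        unsigned state_l1_bytes;  /* only meaningful for STATE_CACHE_SPLIT           */
    } mt_cache_cfg;

    /* Configuration used in the rest of the talk: split L1 with a 2 KB state cache. */
    static const mt_cache_cfg talk_cfg = { STATE_CACHE_SPLIT, 2 * 1024 };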

  19. Baseline Pipeline [figure] • Front end: IF, ID, REN • Out-of-order back end: REG, EXE, MEM, WB, CMT • Memory instructions access the data L1 Venkataramani HPCA’07

  20. Pipeline Modifications [figure] • New pre-commit (Pre-CMT) and check (CHK) stages before commit (CMT) • A separate state L1 cache, with state prefetched earlier in the pipeline and state forwarded between in-flight instructions Venkataramani HPCA’07
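
Behaviorally, the added stages amount to running the transition-table lookup just before commit; the C sketch below (reusing apply_event from the slide-7 sketch) shows that ordering and is not the pipeline’s actual implementation.

    #include <stdint.h>

    /* Per-instruction information reaching the pre-commit/check stages. */
    typedef struct {
        uint32_t data_addr;   /* address of the accessed word                    */
        unsigned event;       /* EV_LOAD, EV_STORE, or a user event              */
        uint8_t  state;       /* state bits prefetched earlier from the state L1 */
    } mem_op;

    /* CHK stage: one transition-table lookup; the updated state is written back
     * together with the instruction at commit, keeping data and state atomic.  */
    static int precommit_check(mem_op *op)
    {
        return apply_event(&op->state, op->event);   /* 1 = raise exception before CMT */
    }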

  21. Other Issues • OS issues • Context switches (fast) • Paging (same as for data) • Multiprocessor implementation • Coherence: state information is treated the same as data • Consistency: the key issue is atomicity of state and data • Example: the same instruction accessing new data but old state • More details in the paper! Venkataramani HPCA’07

  22. Evaluation Platform • SESC simulator, out-of-order core, 5 GHz • L1 data cache: 16 KB, 2-way, 2 ports, 32 B blocks • L1 state cache (split caching): 2 KB, 2-way, 2 ports, 32 B blocks • L2 cache: 2 MB, 4-way, 1 port, 32 B blocks Venkataramani HPCA’07
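
The same parameters, written as a small configuration record (field names are illustrative):

    /* Cache parameters from slide 22: size, associativity, ports, block size. */
    typedef struct {
        unsigned size_kb, assoc, ports, block_bytes;
    } cache_cfg;

    static const cache_cfg data_l1  = { 16,   2, 2, 32 };  /* 16 KB data L1           */
    static const cache_cfg state_l1 = {  2,   2, 2, 32 };  /* 2 KB split state L1     */
    static const cache_cfg l2       = { 2048, 4, 1, 32 };  /* 2 MB unified L2, 1 port */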

  23. Checkers Used in Evaluation • Heap Checker (example seen before) • 4 states: NonHeap, UnAlloc, Alloc, Init • Return Address Checker • Detects return address modifications • 3 states: NotRA, GoodRA, BadRA • HeapChunks Checker • Detects sequential heap buffer overflows • 2 states: Delimit, NotDelimit • Combined Checker • Combines all of the above • 7 states, so 4 state bits per word are allocated (although 3 would suffice) • Most demanding; the default in the evaluation Venkataramani HPCA’07

  24. Performance of Checkers on SPEC [chart: run-time overhead; 2.7% on average] Venkataramani HPCA’07

  25. Sensitivity: Prefetching [chart: run-time overhead] Venkataramani HPCA’07

  26. MemTracker vs. Other Schemes [chart: run-time overhead] Venkataramani HPCA’07

  27. Conclusions • MemTracker • Monitors and checks memory accesses • Can be programmed to implement different checkers • Low performance overhead: 2.7% average and 4.7% worst case for the combined checker on SPEC • Tested on injected bugs: it finds them! • More details in the paper Venkataramani HPCA’07

  28. Thank you! Questions? guru@cc.gatech.edu Venkataramani HPCA’07

  29. BACKUP SLIDES Venkataramani HPCA’07

  30. Sensitivity: State Cache Size [chart: run-time overhead] Venkataramani HPCA’07

  31. Caching Configurations on SPEC [chart: run-time overhead] Venkataramani HPCA’07
