Making the Fast Case Common and the Uncommon Case Simple in Unbounded Transactional Memory

Qi Zhu

CSE 340, Spring 2008

University of Connecticut

Paper Source: ISCA’07, San Diego, CA



Motivation

  • Hardware transactional memory has great potential to simplify the creation of correct and efficient multithreaded programs.

  • Unfortunately, supporting the concurrent execution of an unbounded number of unbounded transactions is challenging. Many proposed implementations are very complex.



  • In 1993, Herlihy and Moss proposed an elegant and efficient hardware implementation. Unfortunately, it limits the volume of data a transaction can access and does not allow transactions to survive a context switch.


Recent Development

  • Recently, several proposals have emerged to provide unbounded hardware transactions and support overflowed transactions with the same concurrency as non-overflowed transactions. However, these systems do not have simple implementations.


Methodology Proposed

  • In order to simplify the design, the author explores a different approach.

  • The two key concepts the author proposes are:

    1. The permissions-only cache

    2. OneTM, in two variants: OneTM-Serialized and OneTM-Concurrent

An Example Execution on Three Systems

Making the Fast Case Common: Permissions-Only Cache

  • The permissions-only cache allows a transaction to access more data than can be buffered on-chip without transitioning to a higher-overhead overflowed execution mode.

  • Only when the permissions-only cache itself overflows does the system need to fall back on some other mechanism for detecting conflicts for overflowed blocks.



  • The permissions-only cache is:

  • (1) read by external coherence requests as part of conflict detection

  • (2) updated when a transactional block is replaced from the data cache

  • (3) invalidated on a commit or abort

  • (4) read on transactional store misses
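As an illustrative software model of these four operations (the class and method names here are our own, not the paper's hardware design), the structure behaves roughly as follows:

```python
# A minimal behavioral sketch of the permissions-only cache: it keeps only
# per-block read/write bits (no data), so evicting a transactional block from
# the data cache does not lose conflict-detection state. All names are
# illustrative assumptions.

class PermissionsOnlyCache:
    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.bits = {}          # block address -> {"r": bool, "w": bool}

    def on_data_cache_eviction(self, addr, read, written):
        """(2) Record r/w bits when a transactional block leaves the data cache.
        Returns False if the permissions-only cache itself overflows."""
        if addr not in self.bits and len(self.bits) >= self.capacity:
            return False        # must fall back to overflowed execution mode
        entry = self.bits.setdefault(addr, {"r": False, "w": False})
        entry["r"] |= read
        entry["w"] |= written
        return True

    def conflicts_with(self, addr, remote_is_write):
        """(1)/(4) Checked on external coherence requests: a tracked write
        conflicts with any remote access; a tracked read only with a write."""
        entry = self.bits.get(addr)
        if entry is None:
            return False
        return entry["w"] or (remote_is_write and entry["r"])

    def clear(self):
        """(3) Invalidated wholesale on commit or abort."""
        self.bits.clear()
```

Note that `on_data_cache_eviction` returning `False` is the point at which the system would need to fall back on another conflict-detection mechanism.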


Efficient Encoding

  • Because the permissions-only cache does not contain data, it can more efficiently encode the transactional read/write bits.

  • By using sector cache techniques, the tag overhead can be reduced dramatically. For example, with good page-level spatial locality, a 4KB permissions-only cache allows a transaction to access up to 1MB of data without overflow.



  • By efficiently tracking transactions’ read and write sets, the permissions-only cache increases the size of transactions that can complete successfully without invoking an overflowed execution mode.

  • With a sufficiently large permissions-only cache, overflowed transactions are likely to be rare.


Background on Unbounded Hardware TM

  • To provide a basis for later comparison, we first examine three hardware-based unbounded transactional memory proposals that precisely detect conflicts at cache-block granularity.

  • 1. UTM

  • 2. VTM

  • 3. PTM



  • UTM (Unbounded Transactional Memory) was the first transactional memory proposal to support unbounded transactions. UTM maintains its transactional state in a single, shared, memory-resident data structure called the xstate.



  • VTM (Virtualizing Transactional Memory) tracks overflowed transactional state using a shared data structure mapped into the virtual address space, called the XADT. Entries in the XADT are allocated when blocks overflow the cache.

  • Much like the xstate, the XADT uses linked lists and supports accessing all entries for a specific virtual memory block or all entries for a specific transaction.



  • PTM (Page-Based Transactional Memory) supports unbounded transactional memory by associating state with physical addresses at the block granularity but allocating/reallocating this shadow state on a per-page basis. PTM’s shadow pages behave similarly to UTM’s log pointers, except that PTM’s Transaction Access Vector (TAV) lists track data for an entire page.



  • These three proposals require the hardware to dynamically allocate/deallocate, maintain, and concurrently manipulate complex linked data structures (UTM’s xstate, VTM’s XADT, and PTM’s Transaction Access Vectors) and the corresponding cached versions of these structures.



  • Manipulating and accessing these structures can add overhead to both overflowed transactions and concurrently-executing non-overflowed transactions.

  • More importantly, the hardware for correctly manipulating these structures is not simple.


Making the Uncommon Case Simple

  • The author proposes OneTM, a transactional memory system in which only a single overflowed transaction per process can be active at a time. The principal advantage of this design is its relatively simple implementation. The impact of the concurrency restriction on overall system throughput is small because the permissions-only cache ensures that overflow is the uncommon case.



  • OneTM-Serialized

    The serialized implementation simply stalls all other threads in an application when one thread needs to execute an overflowed transaction. Overflowed transactions still support an explicit abort operation, because they continue to log.

OneTM-Serialized

  • STSW: Shared (per-process) transaction status word

  • PTSW: Private (per-thread) transaction status word

Description of the Transaction Status Words

  • (a) Shared Transaction Status Word (STSW): located at a fixed address in virtual memory known to all threads

  • (b) Private Transaction Status Word (PTSW): a per-thread architected register, saved/restored on context switches
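The two status words and the single-overflow rule can be modeled in software as follows. The field names (STSW, PTSW, OTID) follow the slides, but the exact layout and the lock-based update are our own assumptions, not the paper's hardware:

```python
# Illustrative model of OneTM's status words and the "one overflowed
# transaction per process" rule. Layout and locking are our assumptions.
import threading
from dataclasses import dataclass, field

@dataclass
class STSW:
    """Shared, per-process word at a fixed virtual address known to all threads."""
    overflowed: bool = False   # is some thread currently in overflowed mode?
    otid: int = 0              # identifier of the active overflowed transaction
    lock: threading.Lock = field(default_factory=threading.Lock)

@dataclass
class PTSW:
    """Private, per-thread word saved/restored on context switches."""
    in_transaction: bool = False
    overflowed: bool = False

def begin_overflowed(stsw, ptsw):
    """Enter overflowed mode; only one overflowed transaction per process."""
    with stsw.lock:
        if stsw.overflowed:
            return False       # another overflowed txn is active: stall/retry
        stsw.overflowed = True
        stsw.otid += 1         # new overflowed-transaction identifier
    ptsw.in_transaction = ptsw.overflowed = True
    return True

def end_overflowed(stsw, ptsw):
    """Commit or abort: clear the overflowed bit (metadata is cleared lazily)."""
    with stsw.lock:
        stsw.overflowed = False
    ptsw.in_transaction = ptsw.overflowed = False
```

A thread whose `begin_overflowed` returns `False` corresponds to a stalled thread in OneTM-Serialized.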



  • Although this implementation is simple, the price of that simplicity is the loss of all concurrency whenever a transaction overflows, which can have a significant negative impact on performance.

OneTM-Concurrent


  • In order to allow other code to execute concurrently with a single overflowed transaction, we introduce per-block persistent metadata.

    The system uses this metadata to track the read and write sets of the single overflowed transaction; other threads then check the metadata to detect conflicts.
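The conflict check against this per-block metadata reduces to a small predicate. The following sketch uses our own illustrative names for the metadata bits:

```python
# Conflict check performed by non-overflowed threads against a block's
# persistent overflow metadata: the overflowed transaction's write-set blocks
# conflict with any remote access, while its read-set blocks conflict only
# with remote writes. Bit names are illustrative.

def conflicts_with_overflowed(meta_read, meta_written, access_is_write):
    if meta_written:
        return True                       # overflowed txn wrote this block
    return meta_read and access_is_write  # read-set: only writes conflict
```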


Metadata Operation

  • When a transaction overflows, it transitions to overflowed execution mode. A simple way to accomplish this is to abort the transaction and restart it in overflowed mode after ensuring that no other thread in the application is already executing in overflowed mode.


Metadata Operation

  • Alternatively, OneTM-Concurrent can avoid an abort by transitioning more gracefully to overflowed mode. As before, the processor must first ensure that no other thread in the application is executing in overflowed mode.

  • Next, the processor walks both the data cache and the permissions-only cache to set the overflowed metadata for blocks read or written by transactions; this action ensures that the conflict detection information for these blocks is not lost if the overflowed transaction is context switched.
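The graceful (abort-free) transition described above can be sketched as follows; the caches are modeled as dictionaries mapping block addresses to read/write bits, and all structures and names here are our own illustrative assumptions:

```python
# Sketch of the graceful transition to overflowed mode: caches map
# block address -> (read, write) bits; metadata is a persistent map.

def transition_to_overflowed(stsw, data_cache, perm_only_cache, block_metadata):
    """Returns True on success; False if another overflowed txn is active."""
    # Step 1: ensure no other thread in the application is overflowed.
    if stsw["overflowed"]:
        return False                      # caller must stall and retry
    stsw["overflowed"] = True
    stsw["otid"] += 1                     # identify this overflowed txn

    # Step 2: walk both the data cache and the permissions-only cache,
    # publishing each transactional block's read/write bits into persistent
    # per-block metadata, so a context switch cannot lose conflict state.
    for cache in (data_cache, perm_only_cache):
        for addr, (r, w) in cache.items():
            old_r, old_w = block_metadata.get(addr, (False, False))
            block_metadata[addr] = (old_r or r, old_w or w)
    return True
```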


Lazy Metadata Clearing

  • OTID: Overflowed Transaction Identifier

    The OTID is used to differentiate between stale and current metadata. The OTID of the active overflowed transaction is also stored in the STSW (see the description above).

    When a transaction transitions to overflowed mode, it increments the OTID in the STSW. When it completes, instead of explicitly clearing the metadata bits, the overflowed transaction simply clears the overflowed bit in the STSW as before.
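The staleness test this implies is tiny: metadata bits are honored only if their recorded OTID matches the STSW's current OTID while an overflowed transaction is active. The function and field names below are our own:

```python
# Lazy metadata clearing: stale metadata bits are never scrubbed eagerly.
# A block's metadata counts only if its recorded OTID matches the current
# OTID in the STSW and an overflowed transaction is actually active.

def metadata_is_current(block_otid, stsw_otid, stsw_overflowed):
    return stsw_overflowed and block_otid == stsw_otid
```

Bits left behind by a committed overflowed transaction, or by an older one, are simply ignored by this test.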


Lazily Coherent Metadata

  • To prevent out-of-order writebacks from overwriting more recent metadata with stale metadata, the system allows only the owner of the block (non-exclusive or exclusive) to set the metadata.

  • Once the metadata has been written, it is the owner’s responsibility to ensure the data is eventually written back to memory (or to transfer the ownership, and thus the responsibility, to another processor).


Lazily Coherent Metadata

  • The key to the correctness of this lazy updating of metadata is that the system guarantees that any new request for the block receives the most recent version of the metadata. Once an overflowed transaction has set the read bit (and thus holds the block in the owned state), any other processor that tries to write the block will issue a cache request and receive the most recent version of the metadata, indicating the conflict.


Example Execution

The ‘O’ state indicates a single non-exclusive, read-only dirty owner


Experimental Evaluation

Simulated machine configuration


Experimental Evaluation

Benchmark summary




Scalability analysis using the tree-10% microbenchmark


Results Summary

  • These results show that allowing only one overflowed transaction at a time can result in performance competitive with an ideally concurrent implementation.

  • The addition of even a small permissions-only cache closes the gap for our benchmarks.

  • For larger multiprocessors, the performance of OneTM-Concurrent suffers relative to an ideal unbounded transactional memory.



  • The permissions-only cache eases the memory-footprint restrictions imposed by the processor’s bounded execution mode. This structure permits bounded transactions to access potentially hundreds of megabytes before overflowing. The permissions-only cache can be incorporated into proposed hardware-only and hybrid hardware-software systems to make the fast case common.



  • We also proposed OneTM, an unbounded transactional memory system that assumes overflowed transactions will be rare in order to simplify the implementation. Specifically, we limit the number of overflowed transactions that may exist in an application at a time to one.

  • By bounding concurrency among overflowed transactions, OneTM avoids the complexity of managing and traversing linked data structures in hardware, admitting simple conflict detection and transaction commit.


  • Questions?

Thank You!

  • I wish everyone a great summer!
