Transactional Memory: Hardware Proposals Overview Manu Awasthi Architecture Reading Club Fall 2006
Why do we care? • The rise of multicore architectures, CMPs • (Support for) Lots of cheap threads available • Synchronization will be an issue • Concurrent updates on shared memory • Today’s methodologies (locks) • Are not scalable • Fail to exploit concurrency to the fullest
Why are Locks EVIL? • Locks: objects only one thread can hold at a time • Organization: a lock for each shared structure • Usage: (block) acquire → access → release • Correctness issues • Under-locking → data races • Acquires in different orders → deadlock (sketch below) • Performance issues • Conservative serialization • Overhead of acquiring • Difficult to find the right granularity • Blocking
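A minimal sketch of the lock-ordering deadlock named above, using hypothetical pthread mutexes and worker functions (none of these names come from the slides):

    #include <pthread.h>

    pthread_mutex_t lock_A = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock_B = PTHREAD_MUTEX_INITIALIZER;

    /* Thread 1 acquires A then B ... */
    void *worker1(void *arg) {
        pthread_mutex_lock(&lock_A);
        pthread_mutex_lock(&lock_B);   /* waits forever if worker2 already holds B */
        /* ... touch shared data ... */
        pthread_mutex_unlock(&lock_B);
        pthread_mutex_unlock(&lock_A);
        return 0;
    }

    /* ... while thread 2 acquires B then A: each can end up waiting on the other. */
    void *worker2(void *arg) {
        pthread_mutex_lock(&lock_B);
        pthread_mutex_lock(&lock_A);   /* waits forever if worker1 already holds A */
        /* ... touch shared data ... */
        pthread_mutex_unlock(&lock_A);
        pthread_mutex_unlock(&lock_B);
        return 0;
    }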
Example of evil Locks

    struct Shared_Structure {
        int shared_var1;
        int shared_var2;
        int shared_var3;
        /* : : : more shared fields */
    };
Coarse-Grained Locking Easily made correct … But not scalable.
Fine-Grained Locking • More scalable • High overhead in acquire and release • Increased complexity (see the sketch below contrasting the two approaches)
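A minimal sketch contrasting coarse- and fine-grained locking on the Shared_Structure above, with hypothetical pthread mutexes (the lock names and per-field split are illustrative):

    #include <pthread.h>

    struct Shared_Structure { int shared_var1; int shared_var2; int shared_var3; };

    /* Coarse-grained: one lock serializes every access to the structure. */
    pthread_mutex_t big_lock = PTHREAD_MUTEX_INITIALIZER;

    void coarse_update(struct Shared_Structure *s) {
        pthread_mutex_lock(&big_lock);      /* all threads contend on this one lock */
        s->shared_var1++;
        s->shared_var2++;
        pthread_mutex_unlock(&big_lock);
    }

    /* Fine-grained: one lock per field lets disjoint updates run in parallel,
       but every access pays an acquire/release, and lock order now matters.  */
    pthread_mutex_t lock1 = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t lock2 = PTHREAD_MUTEX_INITIALIZER;

    void fine_update(struct Shared_Structure *s) {
        pthread_mutex_lock(&lock1);
        pthread_mutex_lock(&lock2);         /* must always follow the lock1 -> lock2 order */
        s->shared_var1++;
        s->shared_var2++;
        pthread_mutex_unlock(&lock2);
        pthread_mutex_unlock(&lock1);
    }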
Enter Transactions… • Code segments with three features: • Atomicity • Serialization only on conflicts • Rollback support

    <begin_transaction> {
        statement_1;
        statement_2;
        statement_3;
        .....
    } <end_transaction>

• Generally: critical section = transaction = atomic block of instructions
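For comparison with the locking sketches above, the same update written as a transaction; the begin/end markers are the slide's pseudocode notation, shown here as comments around ordinary C statements:

    /* Hypothetical transactional version of the earlier updates: no locks to
       acquire and no ordering rules; conflicts are detected and rolled back. */
    void txn_update(struct Shared_Structure *s) {
        /* <begin_transaction> */
        s->shared_var1++;
        s->shared_var2++;
        /* <end_transaction> */
    }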
Agenda • Transactions: what all the hoopla’s about • Research Proposals • Usages • Implementations Disclaimer 1: Covering only hardware support Disclaimer 2: Purely an overview
Hardware Overview • Exploit cache coherence protocols • Already do almost what we need • Invalidation • Consistency checking • Exploit speculative execution • Branch prediction = optimistic synchronization!
Execution Strategy • Four main components: • Logging/buffering (Speculative Execution) • Conflict detection • Abort/rollback • Commit • All papers present different methods of doing the above.
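The four components above fit together roughly as in the following C-style skeleton; every function name here is a placeholder, and each proposal implements these steps differently:

    /* Placeholder hooks -- each HTM proposal provides its own versions of these. */
    extern void checkpoint_registers(void);
    extern void start_logging_or_buffering(void);
    extern int  conflict_detected(void);
    extern void commit_speculative_state(void);
    extern void abort_and_rollback(void);

    /* Generic HTM execution skeleton (a sketch, not a real API). */
    void run_transaction(void (*body)(void)) {
        for (;;) {
            checkpoint_registers();          /* save architectural state                */
            start_logging_or_buffering();    /* begin tracking speculative reads/writes */

            body();                          /* execute the transaction speculatively   */

            if (!conflict_detected()) {      /* checked via coherence traffic           */
                commit_speculative_state();  /* make the writes globally visible        */
                return;
            }
            abort_and_rollback();            /* discard writes, restore registers       */
            /* retry, possibly after back-off or falling back to a lock                 */
        }
    }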
[Figure sequence: HW transactional memory over caches, an interconnect, and memory. A transaction reads a line and becomes active; a second transaction can be active concurrently; non-conflicting transactions commit; a later write to a line read by a still-active transaction forces that transaction to abort and rewind.]
Transaction Commit • At commit point • If no cache conflicts, we win. • Mark transactional entries • Read-only: valid • Modified: dirty (eventually written back)
But…. • Limits to • Transactional cache size • Scheduling quantum • Transaction cannot commit if it is • Too big • Too slow • Actual limits platform-dependent
[Rajwar & Goodman, ASPLOS ‘02] TLR/SLE • Transactional execution of critical sections • Locks define the scope of a transaction • Doesn’t change the programming model • H/W identifies and speculatively executes critical sections • Timestamps provide serializability
SLE • Mechanism to identify lock acquires and releases • Enabling mechanism for TLR • Built on the concept of silent stores: the release restores the lock’s pre-acquire value, so the acquire/release store pair can be elided (see the sketch below)
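Roughly what SLE does with ordinary lock code; acquire/release are hypothetical spin-lock helpers, and the program itself is unchanged since the elision happens in hardware:

    extern void acquire(int *lock);   /* hypothetical test-and-set spin-lock helpers */
    extern void release(int *lock);

    int s_lock = 0;
    int shared_var1, shared_var2;     /* the shared data the lock protects           */

    /* Unmodified lock-based code; SLE speculates across it without writing the lock. */
    void locked_update(void) {
        acquire(&s_lock);   /* HW predicts the release will undo this store (a       */
                            /* "silent" pair), elides it, and executes speculatively  */
        shared_var1++;      /* reads/writes tracked in the cache for conflicts        */
        shared_var2++;
        release(&s_lock);   /* matching store also elided; with no conflict, the      */
                            /* speculative updates commit atomically                  */
    }
    /* On a conflict or misprediction, execution restarts and the lock is really taken. */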
[Hammond+, ISCA ‘04 & ASPLOS ‘04] TCC @ Stanford • Again, speculative transaction execution • Identify transaction start and end • Read set, write set • Save architectural state • Check for conflicts on memory references • Snoop the system bus to check for violations • Fold the commit state into a packet • Send it over the system bus, commit • Centralized bus arbiter => scalability limits!!
TCC – Programming Model • Divide the program into transactions • Here, it’s the programmer’s job • However, easier to do than locks • Why? • Specify order • In case relative ordering of transaction commit matters • e.g.? • Assign phase numbers to transactions (sketch below)
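A hedged sketch of the TCC-style programming model: the transaction and phase markers below are illustrative pseudocode in the spirit of the slides, not the actual TCC API:

    /* A loop parallelized by making each iteration a transaction.  Phase numbers
       impose a commit order only where the program needs one (here, the running sum). */
    int total = 0;

    void parallel_loop(int *input, int *result, int n) {
        for (int i = 0; i < n; i++) {
            /* begin_transaction(phase = i);  -- iteration i commits before i+1        */
            result[i] = input[i] * 2;        /* independent work runs speculatively     */
            total += result[i];              /* ordered commits keep the sum correct    */
            /* end_transaction(); */
        }
    }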
Some Results • Small read state (6-12 kB) • Write state (4-8 kB) • Both of above per benchmark, per processor • Significant speedup • Not so modest bandwidth requirements
UTM/LTM @ MIT • Most transactions are small • 99.9% touch 54 cache lines or less • BUT, some go up to 8000 lines (!!!!!) • Thesis: transaction footprint should be unbounded • Added ISA support for the same • Book-keeping: an in-memory transaction log • Helps survive interrupts, process migration
So, What’s New? • ISA support • XBEGIN pc • XEND • Rollback support: rename-table snapshot • Xstate data structure for memory state • Has log records of all active transactions • Log = commit record + log entry vector • Log pointer • RW bit
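A rough sketch of how the proposed XBEGIN pc / XEND instructions might wrap a critical section; the instructions appear as comments inside C, and all names are illustrative:

    int shared_var1;                /* some transactionally updated location        */

    void increment_shared(void) {
    retry:
        /* XBEGIN abort_path -- snapshot the rename tables and start logging to the
           xstate log; on abort, registers are restored and control jumps to abort_path. */
        shared_var1++;              /* old value is recorded in the in-memory log    */
        /* XEND -- commit: discard the log, speculative stores become permanent      */
        return;

    abort_path:
        /* conflict detected: memory and registers have been rolled back;
           back off, then retry (or fall back to a lock).                             */
        goto retry;
    }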
LogTM @ UW-Madison • Motivation: make the common case fast • Commits are more frequent than aborts • Basic strategy: similar to UTM • Store new values in place, old values in a log • Log properties • Per-thread log • Cacheable in virtual memory • i.e. part of the thread’s address space reserved for logging • Log writes mostly cache hits (small transactions) • Low TLB translation overhead (small transactions)
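A minimal sketch of LogTM-style eager versioning, i.e. what conceptually happens on a transactional store; the structures and function names are illustrative, not the hardware's actual bookkeeping:

    #include <stddef.h>
    #include <stdint.h>

    /* The per-thread undo log lives in cacheable virtual memory
       (a region of the thread's own address space).              */
    struct log_entry { uint64_t *addr; uint64_t old_value; };
    struct tx_log    { struct log_entry entries[4096]; size_t n; };

    /* Transactional store: save the old value, then write the new value in place. */
    void tx_store(struct tx_log *log, uint64_t *addr, uint64_t new_value) {
        log->entries[log->n].addr      = addr;
        log->entries[log->n].old_value = *addr;    /* usually a cache hit for small txns */
        log->n++;
        *addr = new_value;                         /* new value goes directly in place   */
    }

    /* Commit is the fast, common case: just forget the log. */
    void tx_commit(struct tx_log *log) { log->n = 0; }

    /* Abort is the slow, rare case: walk the log backwards restoring old values. */
    void tx_abort(struct tx_log *log) {
        while (log->n > 0) {
            log->n--;
            *log->entries[log->n].addr = log->entries[log->n].old_value;
        }
    }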
Conflict Detection • Directory-based protocol • Send request to directory • Directory forwards requests to processors • Each processor checks for conflicts • Ack (no conflict), Nack (conflict) • Resolve conflict based on responses • Extended directory states • For taking care of transactional line overflow
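A schematic of the directory-forwarded conflict check in C-style pseudocode; in_read_set/in_write_set stand in for the per-line read/write tracking kept by each processor's cache, and all names are illustrative:

    enum reply { ACK, NACK };

    /* Hypothetical queries against this processor's transactional read/write sets. */
    extern int in_read_set(unsigned long line);
    extern int in_write_set(unsigned long line);

    /* Invoked when the directory forwards another processor's request for a line. */
    enum reply on_forwarded_request(unsigned long line, int is_write) {
        if (in_write_set(line) || (is_write && in_read_set(line)))
            return NACK;     /* conflict: requester stalls, retries, or a transaction aborts */
        return ACK;          /* no conflict: requester may proceed                           */
    }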
More Work @ UW-Madison • VTM (Rajwar+) • Thread-Level TM (Goodman+) • Goal: persistent transactions with less overhead • Approach: group transactions by process • Implementation: buffer in cache + overflow table in virtual memory + various interesting optimizations
Summary • Transactions: Promising approach to synchronization • Simple interface + efficient implementation • Uses: optimistic lock removal, lock-free data structures, general-purpose synchronization, parallelization, ?? • Challenges • Implementation • Interface • OS involvement • I/O + rollback