
Sparse Distributed Memory (SDM)


Presentation Transcript


  1. Sparse Distributed Memory (SDM) By Uma Ramamurthy Cognitive Science Seminar February 5, 2003

  2. Introduction • How to organize a record … so that it can be retrieved in the right way under the right circumstances? • How to construct, with neuron-like components, a physical memory that enables such storage and retrieval? • “… some links (i.e., associations) are learned, but others are a property of the mathematical space for memory items.”

  3. Theory • Memory items as points of the space {0,1}^n for large n (between 100 and 10,000) • Contents serve as addresses to memory • Random Access • Distributed • Sparse

  4. SDM in detail… • Address space – Boolean space of dimension 1000 – an enormous space of 2^1000 locations • Addresses – bit vectors of length 1000 • Choose a random sample of storage locations, say 2^20, from this address space – hard locations • Median distance from a random location in the address space to the nearest hard location – 424 (98% of the time, between 411 and 430)
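
The sampling and distance statistics above can be checked empirically. Below is a minimal Python sketch (not from the slides), with scaled-down assumed parameters – n = 256 and 2^12 hard locations rather than 1000 and 2^20 – so that it runs quickly:

```python
# Hedged sketch: sample random hard locations from {0,1}^n and measure
# the Hamming distance from a random address to the nearest one.
# Parameters are scaled down from the slides (n=1000, 2^20 locations);
# at full size the median nearest distance is the 424 quoted above.
import numpy as np

rng = np.random.default_rng(0)
n, n_hard = 256, 2**12

# Hard locations: a uniform random sample of points from {0,1}^n
hard = rng.integers(0, 2, size=(n_hard, n), dtype=np.uint8)

# Distance from a random address to every hard location
x = rng.integers(0, 2, size=n, dtype=np.uint8)
dists = np.count_nonzero(hard != x, axis=1)
print("nearest hard location at distance", dists.min())  # well below n/2
```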

  5. SDM in detail… • Store each datum in many hard locations • Many hard locations participate in the retrieval of each datum • Many storage locations participate in a single read or write operation • Each hard location – addressed by a bit vector of length 1000 – stores data in 1000 counters • Range of each counter: -40 to 40
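
As a sketch of how one hard location's counters might be laid out, following the description above (the layout and clipping behavior are assumptions consistent with the slides, not code from them):

```python
# One hard location: N signed counters, one per bit position,
# clipped to the range [-40, 40] given on this slide.
import numpy as np

N = 1000
CTR_MIN, CTR_MAX = -40, 40
counters = np.zeros(N, dtype=np.int8)  # one hard location's counters

def apply_word(counters, word):
    # Slide 7's rule: writing a 1 increments a counter, a 0 decrements it
    counters += np.where(word == 1, 1, -1).astype(counters.dtype)
    np.clip(counters, CTR_MIN, CTR_MAX, out=counters)

word = np.random.default_rng(0).integers(0, 2, N, dtype=np.uint8)
apply_word(counters, word)
print(counters[:8])
```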

  6. SDM in detail… • To access a memory item at address ‘x’, the locations closest to ‘x’ are accessed • All hard locations within a given distance ‘r’ of ‘x’ store/provide data for write/read operations • Access Circle – the set of hard locations in a circle of radius ‘r’ with ‘x’ as the center • A hard location ‘y’ is said to be accessible from ‘x’ if ‘y’ is no farther than ‘r’ bits from ‘x’
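
A sketch of the accessibility test, again with toy assumed parameters (r = 112 is an illustrative radius for n = 256; it is not the radius the slides would use for 1000-bit addresses):

```python
# Hedged sketch: a hard location y is accessible from x when the
# Hamming distance d(x, y) <= r. Toy sizes, not the 1000-d space above.
import numpy as np

rng = np.random.default_rng(1)
n, n_hard, r = 256, 2**12, 112

hard = rng.integers(0, 2, size=(n_hard, n), dtype=np.uint8)
x = rng.integers(0, 2, size=n, dtype=np.uint8)

in_circle = np.count_nonzero(hard != x, axis=1) <= r
print("hard locations in the access circle:", int(in_circle.sum()))
```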

  7. Writing to SDM • Writing a 1 increments the corresponding counter; writing a 0 decrements it • To write (0,1,0,0,1,1,1,…) at location ‘x’, the 1st counter of ‘x’ is decremented, the 2nd counter incremented, the 3rd counter decremented, etc. • Write-operation in SDM – to write at location ‘y’, write to all the hard locations within the access circle of ‘y’
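
A sketch of this write operation under the same toy assumptions; storing a word at its own address (autoassociative use) is an assumption here, one common convention rather than something this slide specifies:

```python
# Hedged sketch of a write: update the counters of every hard location
# inside the access circle of the write address. Toy parameters.
import numpy as np

rng = np.random.default_rng(2)
n, n_hard, r = 256, 2**12, 112

hard = rng.integers(0, 2, size=(n_hard, n), dtype=np.uint8)
counters = np.zeros((n_hard, n), dtype=np.int16)

def write(addr, word):
    in_circle = np.count_nonzero(hard != addr, axis=1) <= r
    counters[in_circle] += np.where(word == 1, 1, -1).astype(np.int16)
    np.clip(counters, -40, 40, out=counters)  # counter range from slide 5

word = rng.integers(0, 2, size=n, dtype=np.uint8)
write(word, word)  # assumed convention: the word is its own address
print("locations updated:", int((counters != 0).any(axis=1).sum()))
```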

  8. Reading from SDM • Contents of a hard location – a multiset of all the words that have ever been written to that location • Reading at a location – by Majority Rule: bit i of the word read at ‘x’ is 1 if the i-th counter at ‘x’ is positive, 0 if it is negative (e.g., counters -3 4 7 8 -1 -2 -5 … yield the bits 0 1 1 1 0 0 0 …) • The word read at location ‘x’ – an archetype of the data written at ‘x’, but possibly not identical to any of them
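
A sketch of the majority-rule read, pooling counters over the access circle as slide 9 describes; mapping a zero sum to the bit 0 is an assumption (ties could also be broken randomly):

```python
# Hedged sketch: read by summing the counters of all hard locations in
# the access circle and thresholding each bit's sum at zero.
import numpy as np

rng = np.random.default_rng(3)
n, n_hard, r = 256, 2**12, 112

hard = rng.integers(0, 2, size=(n_hard, n), dtype=np.uint8)
counters = np.zeros((n_hard, n), dtype=np.int16)

def write(addr, word):
    # Counter clipping from slide 5 omitted for brevity
    in_circle = np.count_nonzero(hard != addr, axis=1) <= r
    counters[in_circle] += np.where(word == 1, 1, -1).astype(np.int16)

def read(addr):
    in_circle = np.count_nonzero(hard != addr, axis=1) <= r
    sums = counters[in_circle].sum(axis=0)  # pooled counters
    return (sums > 0).astype(np.uint8)      # majority rule per bit

word = rng.integers(0, 2, size=n, dtype=np.uint8)
write(word, word)
print("exact recall:", np.array_equal(read(word), word))
```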

  9. Reading from SDM (contd.) • Read-operation in SDM: • To read from location ‘y’, pool the data read from every hard location accessible from ‘y’ – within the access circle of ‘y’ • To read with a noisy or arbitrary cue: Iterated Reading • read at ‘y’ to get ‘y1’ • read at ‘y1’ to get ‘y2’ • read at ‘y2’ to get ‘y3’ … • if the sequence ‘y1’, ‘y2’, ‘y3’, … converges to some y’, then y’ is the result of the iterated reading at location ‘y’
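
A sketch of iterated reading from a noisy cue in the same toy memory, using ten iterations as the cutoff since slide 11 notes convergence typically takes fewer than ten:

```python
# Hedged sketch: iterated reading. Read at the cue, then read again at
# the result, until the sequence settles (converges) or we give up.
import numpy as np

rng = np.random.default_rng(4)
n, n_hard, r = 256, 2**12, 112

hard = rng.integers(0, 2, size=(n_hard, n), dtype=np.uint8)
counters = np.zeros((n_hard, n), dtype=np.int16)

def access(addr):
    return np.count_nonzero(hard != addr, axis=1) <= r

def write(addr, word):
    # Counter clipping from slide 5 omitted for brevity
    counters[access(addr)] += np.where(word == 1, 1, -1).astype(np.int16)

def read(addr):
    return (counters[access(addr)].sum(axis=0) > 0).astype(np.uint8)

def iterated_read(cue, max_iters=10):
    prev = cue
    for _ in range(max_iters):
        nxt = read(prev)
        if np.array_equal(nxt, prev):  # y1, y2, ... settled on y'
            return nxt
        prev = nxt
    return None                        # no convergence within the cutoff

target = rng.integers(0, 2, size=n, dtype=np.uint8)
write(target, target)
noisy = target.copy()
noisy[rng.choice(n, size=40, replace=False)] ^= 1  # corrupt 40 of 256 bits
result = iterated_read(noisy)
print("recovered:", result is not None and np.array_equal(result, target))
```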

  10. Converging/Diverging Sequences in SDM [Slide figure: a read sequence y, y1, y2, y3, y4 converging to y’ within the critical distance of a stored word, and a sequence x, x1, …, x5 diverging beyond it]

  11. Convergence and Divergence • Convergence – successively read words get closer and closer to one another until they are identical • Divergence – successively read words are orthogonal to one another and to the target • Convergence happens only if the initial address is sufficiently close to the target • Critical Distance – the distance beyond which divergence is more likely than convergence • Rapid rate of convergence/divergence – fewer than ten iterations, as a rule

  12. Memory Capacity in SDM • Capacity – the size of the data set at which the critical distance drops to zero • Beyond that, stored words are no longer retrievable (no convergence) – the memory is “full” or “overloaded” • Words written only once cannot be retrieved • For the address space of 2^1000 with 2^20 hard locations, the memory capacity is about 1/10th the number of hard locations – roughly 100,000 words • A hard location can contain up to about 100 words
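
As a back-of-the-envelope check of the figures above:

```python
# Worked numbers for the rule of thumb on this slide: capacity is
# about one tenth of the number of hard locations.
n_hard = 2**20           # hard locations sampled from the 2^1000 space
capacity = n_hard // 10
print(n_hard, capacity)  # 1048576 104857 -- roughly 100,000 words
```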

  13. Learning Sequences in SDM • Learning sequences • The present situation should be recognized as similar to some situation(s) in the past • The consequences of those past situations can then be retrieved • A sequence stored as a pointer chain, accessed by repeated reads from the memory – a way to include ‘time’ in the memory trace
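
A sketch of such a pointer chain in the toy memory: each element of a sequence is written with its predecessor as the address, and repeated reads replay the chain (a heteroassociative use of the same operations sketched earlier; the addressing convention is an assumption consistent with the slide):

```python
# Hedged sketch: store a sequence as a pointer chain, then replay it
# by repeated reads. Toy parameters as in the earlier sketches.
import numpy as np

rng = np.random.default_rng(5)
n, n_hard, r = 256, 2**12, 112

hard = rng.integers(0, 2, size=(n_hard, n), dtype=np.uint8)
counters = np.zeros((n_hard, n), dtype=np.int16)

def access(addr):
    return np.count_nonzero(hard != addr, axis=1) <= r

def write(addr, word):
    # Counter clipping from slide 5 omitted for brevity
    counters[access(addr)] += np.where(word == 1, 1, -1).astype(np.int16)

def read(addr):
    return (counters[access(addr)].sum(axis=0) > 0).astype(np.uint8)

seq = [rng.integers(0, 2, size=n, dtype=np.uint8) for _ in range(4)]
for prev, nxt in zip(seq, seq[1:]):
    write(prev, nxt)  # pointer: address = x_t, stored datum = x_{t+1}

# Replay: start from the first element and follow the chain in time
cur, ok = seq[0], True
for expected in seq[1:]:
    cur = read(cur)
    ok &= np.array_equal(cur, expected)
print("sequence replayed:", ok)
```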

  14. Interpretations • “knowing that one knows” – fast convergence, fewer than 10 iterations • “tip of the tongue” – being at about the critical distance from the nearest stored item… slow convergence • “momentary feelings of familiarity” – a full or overloaded memory • “rehearsal” – an item is written many times, each time to many locations • “forgetting” – increases with time due to other writes

  15. Associative Memory in IDA [Slide diagram: Sparse Distributed Memory – a Boolean space of dimension N (enough to code features) – with bit-vector contents; labels in the diagram: Job List, Outgoing Message, Sailor Data, Working Memory, Perception, Behavior Net, Negotiation, Deliberation, Focus]

  16. Work in progress • Only “conscious content” to be written to the long-term memories • SDM in a ternary space (0, 1, and “don’t care”) • Testing the modified SDM as Transient Episodic Memory (TEM) • Plans for such a TEM in IDA, with consolidation from TEM to LTM (Autobiographical Memory)
