
Presentation Transcript


  1. Hit or Miss Predictor – Final presentation Software Engineering Lab, Spring 07/08 Supervised by: Zvika Guz Introduced by: Akram Baransi, Amir Salameh Hit or Miss ? !!!

  2. Introduction: Cache Memory • Cache RAM is high-speed memory (usually SRAM). • The cache stores frequently requested data. • If the CPU needs data, it checks the high-speed cache memory first before looking in the slower main memory. • Cache memory may be three to five times faster than system DRAM.

  3. Introduction: Cache Memory • Most computers have two separate memory caches: L1 cache, located on the CPU, and L2 cache, located between the CPU and DRAM. • L1 cache is faster than L2 and is the first place the CPU looks for its data. If the data is not found in the L1 cache, the search continues with the L2 cache, and then goes on to DRAM.
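As a rough illustration of this lookup order (not from the slides), a toy Python model might be:

```python
def load(address, l1, l2, dram):
    """l1 and l2 are dicts mapping addresses to data; dram always has the data."""
    if address in l1:                 # fastest: L1, located on the CPU
        return l1[address]
    if address in l2:                 # slower: L2, between the CPU and DRAM
        l1[address] = l2[address]     # fill L1 on the way back
        return l1[address]
    data = dram[address]              # slowest: main memory (DRAM)
    l2[address] = data
    l1[address] = data
    return data
```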

  4. Introduction: Shared Cache • A shared cache is a cache that is shared among several processors. • In a multi-core system, the shared cache is usually overloaded with many accesses from the different cores. • Our goal is to reduce the load on the shared cache. • To achieve this goal we will build a predictor that predicts whether an access to the shared cache will be a hit or a miss.

  5. Predictor Requirements • Small size. • Simple and fast. • Implementable in hardware. • Does not need too much power. • Does not predict a miss when we actually have a hit. • Has a high hit rate, especially on misses. Hit or Miss ? !!!

  6. Simple Predictor: Bloom Filter • A Bloom filter is a method for representing a set A of n elements (a1, …, an) in order to support membership queries. • The idea is to allocate a vector v of m bits, initially all set to 0. • Choose k independent hash functions h1, …, hk, each with range 1…m. • For each element a, the bits at positions h1(a), ..., hk(a) in v are set to 1.

  7. Simple Predictor: Bloom Filter • Given a query for b, we check the bits at positions h1(b), h2(b), ..., hk(b). • If any of them is 0, then b is certainly not in the set A. • Otherwise we conjecture that b is in the set, although there is a certain probability that we are wrong. This is called a “false positive”. • The parameters k and m should be chosen such that the probability of a false positive (and hence a false hit) is acceptable.
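A minimal Python sketch of the filter just described (not the project's code; deriving the k positions from one SHA-256 digest is an assumption, any k independent hash functions with range 0…m-1 would do):

```python
import hashlib

class BloomFilter:
    """m-bit Bloom filter with k hash functions, as described on slides 6-7."""

    def __init__(self, m, k):
        self.m = m
        self.k = k                      # works for k up to 8 with this hashing scheme
        self.bits = [0] * m

    def _positions(self, element):
        # Derive k positions in 0..m-1 from one SHA-256 digest (illustrative choice).
        digest = hashlib.sha256(str(element).encode()).digest()
        return [int.from_bytes(digest[4 * i:4 * i + 4], "big") % self.m
                for i in range(self.k)]

    def insert(self, element):
        for pos in self._positions(element):
            self.bits[pos] = 1

    def query(self, element):
        # False means "certainly not in the set"; True may be a false positive.
        return all(self.bits[pos] for pos in self._positions(element))
```

For n inserted elements, the standard estimate of the false-positive probability is p ≈ (1 - e^(-kn/m))^k, which is the quantity the last bullet asks to keep acceptable by choosing k and m.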

  8. Bloom Predictor: Example • Using a single hash function H(x) = x % 16 over a 16-entry Bloom array (all bits start at 0), we insert the elements of A one by one: Insert(123) sets Bloom[11], Insert(456) sets Bloom[8], Insert(764) sets Bloom[12], Insert(227) sets Bloom[3], so A = {123, 456, 764, 227}. • Is 227 in A ? H(227) = 3 and Bloom[3] = 1, so “I think, yes it is.” Right prediction. • Is 504 in A ? H(504) = 8 and Bloom[8] = 1, so “I think, yes it is.” Oops! False positive. • Is 151 in A ? H(151) = 7 and Bloom[7] = 0, so “Certainly no.” Right prediction.
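The slide's scenario can be reproduced in a few lines of Python (the values and the single hash H(x) = x % 16 are taken directly from the slide):

```python
# One-hash Bloom array with 16 entries, H(x) = x % 16.
bloom = [0] * 16
for x in (123, 456, 764, 227):     # A = {123, 456, 764, 227}
    bloom[x % 16] = 1              # sets positions 11, 8, 12, 3

print(bloom[227 % 16])   # 1 -> "I think, yes it is" (correct: 227 is in A)
print(bloom[504 % 16])   # 1 -> false positive: H(504) = 8 collides with H(456) = 8
print(bloom[151 % 16])   # 0 -> "Certainly no" (correct: 151 is not in A)
```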

  9. Bloom Predictor: In L2 Cache • We used a separate predictor for each set in the L2 cache. (Figure: each L2 set, Set 0 through Set N, is paired with its own Bloom array, Array 0 through Array N.)
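A sketch of this per-set organization (the constants, the tag-based indexing, and the single hash function are illustrative assumptions, not the project's actual design):

```python
NUM_SETS = 1024        # illustrative number of L2 sets
LINE_SIZE = 64         # bytes per cache line
ARRAY_SIZE = 64        # entries in each per-set Bloom array

# One small Bloom array per L2 set.
predictors = [[0] * ARRAY_SIZE for _ in range(NUM_SETS)]

def set_index(address):
    # Same set selection the cache itself uses.
    return (address // LINE_SIZE) % NUM_SETS

def predict_hit(address):
    # Query only the Bloom array belonging to this address's set
    # (simplified here to a single hash over the tag).
    tag = address // (LINE_SIZE * NUM_SETS)
    return predictors[set_index(address)][tag % ARRAY_SIZE] != 0
```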

  10. Bloom Filter: Advantages • Small size. • Simple and fast. • Implementable in hardware. • Does not need too much power. • Does not predict a miss when we actually have a hit.

  11. Bloom Filter: Disadvantages • If A is a dynamic set (and in our case it is), updating the array when removing an element e from A is hard: we cannot simply turn off Bloom[H(e)]; to do so we must first check that there is no other element e1 in A such that H(e) = H(e1), and this takes a lot of time. • If we do not update the array, the hit rate becomes low.
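A tiny demonstration (not from the slides) of why naively clearing a bit on removal is unsafe:

```python
# Two elements that hash to the same position (H(x) = x % 16).
bloom = [0] * 16
for x in (456, 504):          # H(456) == H(504) == 8
    bloom[x % 16] = 1

bloom[456 % 16] = 0           # naive removal of 456 clears the shared bit ...
print(bloom[504 % 16])        # 0 -> 504 now looks "certainly not in the set",
                              # a false negative, which the predictor must never produce
```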

  12. Improvements And Solutions • Use counters instead of binary cells, so that when removing an element we simply decrement the appropriate counters. • The problem with this solution: the size becomes large.
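A minimal counting-filter sketch along these lines (single illustrative hash, unbounded counters; not the project's code):

```python
M = 64
counters = [0] * M            # a counter per entry instead of a single bit
H = lambda x: x % M           # illustrative hash function

def insert(x):
    counters[H(x)] += 1       # unbounded counters: simple, but storage grows

def remove(x):
    counters[H(x)] -= 1       # removal is now just a decrement

def query(x):
    return counters[H(x)] > 0
```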

  13. Improvements And Solutions • Note that the number of elements in each set is usually small (the cache associativity), which allows us to use limited counters, for example 2-bit counters. • In this way we get a small predictor, but we still have a problem when a counter reaches its saturation value, although this happens with low probability.
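A sketch of the limited counters, assuming the 2-bit counters mentioned above:

```python
MAX_COUNTER = 3               # 2-bit counters: values 0..3

def increment(counters, pos):
    # Cap at the maximum; beyond this point the exact count is lost.
    if counters[pos] < MAX_COUNTER:
        counters[pos] += 1

def decrement(counters, pos):
    # Safe only while the counter never saturated: a counter stuck at
    # MAX_COUNTER may stand for any value >= MAX_COUNTER.
    if 0 < counters[pos] < MAX_COUNTER:
        counters[pos] -= 1
        return True
    return False              # the low-probability saturation problem the slide mentions
```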

  14. Improvements And Solutions • Adding an overflow flag to each Bloom array allows us to decrement a saturated counter in some cases. • Overflow flag = 1 if and only if we have tried to increment a saturated counter in the corresponding array. • How does it help? • If the overflow flag is 0, we can decrement a saturated counter, which we could not do before.
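A sketch of the overflow-flag rule described above (class and method names are illustrative):

```python
class BloomArray:
    """Counting Bloom array with saturating counters and one overflow flag."""

    def __init__(self, size, max_counter=3):
        self.counters = [0] * size
        self.max_counter = max_counter
        self.overflow = False            # the per-array overflow flag

    def increment(self, pos):
        if self.counters[pos] == self.max_counter:
            self.overflow = True         # we tried to increase a saturated counter
        else:
            self.counters[pos] += 1

    def decrement(self, pos):
        # A saturated counter can be decremented safely only if the array never
        # overflowed (then even the saturated value is exact).
        if self.counters[pos] == self.max_counter and self.overflow:
            return False                 # failed decrement: true count is unknown
        if self.counters[pos] > 0:
            self.counters[pos] -= 1
        return True
```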

  15. Improvements And Solutions • How can we solve the problem of the arrays that were not updated? • Arrays that need an update are entered into a queue, and every N cycles we update one of them (similar to the way lines in the DRAM are refreshed). • When do we enter an array into the queue? • After K failed attempts to decrement a counter in the array due to overflow.
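A sketch of the queue-based update scheme, with K and N as illustrative constants (the rebuild step is assumed to recompute an array from the tags currently resident in its set):

```python
from collections import deque

K_FAILED_DECREMENTS = 4       # "K" in the slide: failed decrements before queuing
UPDATE_PERIOD = 1             # "N" in the slide: accesses between two updates

update_queue = deque()
failed_decrements = {}        # array_id -> number of failed decrements so far

def note_failed_decrement(array_id):
    failed_decrements[array_id] = failed_decrements.get(array_id, 0) + 1
    if (failed_decrements[array_id] >= K_FAILED_DECREMENTS
            and array_id not in update_queue):
        update_queue.append(array_id)

def periodic_update(access_count, rebuild):
    # rebuild(array_id) recomputes the Bloom array from its set's current contents.
    if access_count % UPDATE_PERIOD == 0 and update_queue:
        array_id = update_queue.popleft()
        rebuild(array_id)
        failed_decrements[array_id] = 0
```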

  16. Improvements And Solutions • We do not have an infinite queue in the hardware, so what can we do if the queue is full and we need to enter an array into it? • We turn on a flag indicating that the array needs an update but has not entered the queue yet, and the next time we access that array we try again to enter it into the queue.
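A sketch of the full-queue fallback (the queue capacity and the flag name are assumptions):

```python
QUEUE_CAPACITY = 8            # illustrative size of the hardware queue
needs_update = {}             # array_id -> update pending but not yet queued

def try_enqueue(array_id, update_queue):
    if len(update_queue) < QUEUE_CAPACITY:
        update_queue.append(array_id)
        needs_update[array_id] = False
        return True
    needs_update[array_id] = True     # queue full: remember and retry later
    return False

def on_access(array_id, update_queue):
    # On the next access to a flagged array, try again to enter it in the queue.
    if needs_update.get(array_id):
        try_enqueue(array_id, update_queue)
```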

  17. Results analysis • We collected all the L2 accesses from Simics for 9 benchmarks. • We implemented a simulator of the cache and the predictor in Perl. • On the command line we can choose the configuration we want by changing the following parameters:

  18. Results analysis • Cache parameters: • Lines number – the number of lines in the cache. • Line size – the size of each line in the cache. • Associativity – the associativity of the cache.

  19. Results analysis • Predictor parameters: • Bloom array size – the number of entries in each Bloom array. • Bloom max counter – the counter limit for each entry. • Number of hashes – the number of hash functions that the algorithm uses.

  20. Results analysis • Predictor parameters: • Bloom max not updated – the number of failed attempts to decrement the Bloom counter in a specific entry, where the failure is due to the counter being saturated. • Enable bloom update – enable array updates. • Bloom update period – the number of L2 accesses between two updates.
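Putting the parameters from slides 18 through 20 together, a command-line front end might look like the following sketch (the original simulator was written in Perl, so the flag names and defaults here are assumptions):

```python
import argparse

parser = argparse.ArgumentParser(description="L2 cache + hit/miss predictor simulator")
# Cache parameters (slide 18)
parser.add_argument("--lines-number", type=int, default=16384,
                    help="number of lines in the cache")
parser.add_argument("--line-size", type=int, default=64,
                    help="size of each cache line in bytes")
parser.add_argument("--associativity", type=int, default=16,
                    help="cache associativity")
# Predictor parameters (slides 19-20)
parser.add_argument("--bloom-array-size", type=int, default=64,
                    help="number of entries in each Bloom array")
parser.add_argument("--bloom-max-counter", type=int, default=3,
                    help="counter limit for each entry")
parser.add_argument("--number-of-hashes", type=int, default=1,
                    help="number of hash functions the algorithm uses")
parser.add_argument("--bloom-max-not-updated", type=int, default=4,
                    help="failed decrements before an array is queued for update")
parser.add_argument("--enable-bloom-update", action="store_true",
                    help="enable periodic array updates")
parser.add_argument("--bloom-update-period", type=int, default=1,
                    help="number of L2 accesses between two updates")
args = parser.parse_args()
```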

  21. Results analysis • In the following graphs we see the hit rate of the predictor versus the cache hit rate. • We configured the predictor and the cache with the following parameters: • Bloom array size = 64 • Bloom max counter = 3 • Associativity = 16 • Line size = 64 • Update period = 1

  22. Results analysis

  23. Results analysis

  24. Results analysis

  25. Conclusions • Project goal achieved: • We saw in the graphs above that we get a high hit rate on misses; for example, the average hit rate on misses with a 16 MB cache is 93.5%. • What's next? • Applying the predictor idea to other units in the computer, for example the DRAM.

  26. References • http://pages.cs.wisc.edu/~cao/papers/summary-cache/node8.html • http://www.simmtester.com/page/memory/show_glossary.asp • http://i284.photobucket.com/albums/ll32/kwashecka/thanks.gif
