
Branch Prediction using Artificial Neurons
John Mixter


Presentation Transcript


  1. Branch Prediction using Artificial Neurons, John Mixter. Based on Dynamic Branch Prediction with Perceptrons, Daniel A. Jiménez and Calvin Lin, Department of Computer Sciences, The University of Texas at Austin, Austin, TX 78712.

  2. Perceptron Prediction. Normally, the output of a perceptron is the sum of each input multiplied by its weight. But because each input to the branch predictor's perceptron can only be -1 or +1 (0s are changed to -1s), we can simply add or subtract each weight according to its input. The output y is then determined by comparing the sum to a threshold:

     if (Di > 0) Sum += Wi; else Sum -= Wi;
     if (Sum > Threshold) y = 1; else y = 0;
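
Below is a minimal C sketch of this prediction step. The array names, the NUM_WEIGHTS depth, and the THRESHOLD value are illustrative assumptions, not taken from the slides.

    #define NUM_WEIGHTS 32   /* history shift register depth (assumed) */
    #define THRESHOLD   0    /* decision threshold (assumed value)     */

    /* history[i] holds the i-th branch outcome: 1 = taken, 0 = not taken.
       Because inputs are restricted to +1/-1, the dot product reduces to
       adding or subtracting each weight. Returns y (1 = predict taken). */
    int predict(const int history[NUM_WEIGHTS],
                const int weights[NUM_WEIGHTS])
    {
        int sum = 0;
        for (int i = 0; i < NUM_WEIGHTS; i++) {
            if (history[i] > 0)
                sum += weights[i];   /* input +1: add the weight      */
            else
                sum -= weights[i];   /* input -1: subtract the weight */
        }
        return (sum > THRESHOLD) ? 1 : 0;
    }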

  3. Perceptron Training. After the prediction is made, it is compared to the actual direction taken. If they do not agree, the perceptron is trained:

     Correction = Actual direction - Predicted direction
     if (Di > 0) Wi += Correction; else Wi -= Correction;
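
A matching C sketch of this training rule, under the same assumptions as the prediction sketch above (direction values are 1 for taken, 0 for not taken):

    /* Adjust the weights only when the prediction disagreed with the
       actual outcome. correction is -1, 0, or +1. */
    void train(int weights[NUM_WEIGHTS],
               const int history[NUM_WEIGHTS],
               int actual, int predicted)
    {
        int correction = actual - predicted;
        if (correction == 0)
            return;                        /* prediction was correct */
        for (int i = 0; i < NUM_WEIGHTS; i++) {
            if (history[i] > 0)
                weights[i] += correction;  /* input +1 */
            else
                weights[i] -= correction;  /* input -1 */
        }
    }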

  4. Results. The Perceptron, GAp, GAg, Gshare, PAp, and PAg predictors were run on 20 benchmark programs, with 100 million instructions executed on each. The Perceptron's neuron count was varied from 1 to 4096 and its shift register size from 1 to 128 bits, for 1,040 simulations per benchmark and 20,800 perceptron simulations in total. The GAp, GAg, Gshare, PAp, and PAg predictors had their table sizes varied from 8 bytes to 4 MB by changing the number of predictors and the shift register sizes, for a total of 11,886 simulations. Altogether, (20,800 + 11,886) × 100 million ≈ 3.27 trillion instructions, taking roughly 91 hours. Only the best hit rate for each benchmark per block size was charted.

  5. Daniel Jiménez reported a best prediction rate 10.1% above Gshare at a hardware cost of 4 KB. My best prediction rate was 9.09% above Gshare, at 32 KB. One possible reason for the difference in hardware budget is the actual hardware calculation: I calculated the existing predictors' costs from their table sizes, and the Perceptron's cost as (# of perceptrons × # of inputs (shift register size) × 8 bits per input), as sketched below. One advantage of the perceptron predictor is that its hardware size scales linearly with the size of the history shift register; that means for the same hardware cost, the perceptron's history register can be much deeper.
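
As a concrete check of that formula, here is a sketch; the function name and the 1024 × 32 configuration are my assumptions, chosen only because they land on the 32 KB budget quoted above.

    /* bytes = perceptrons x inputs x (8 bits per input) / (8 bits per byte),
       i.e. one 8-bit weight per history bit. */
    unsigned long perceptron_bytes(unsigned long num_perceptrons,
                                   unsigned long history_bits)
    {
        return num_perceptrons * history_bits;
    }

    /* e.g. perceptron_bytes(1024, 32) = 32768 bytes = 32 KB. Doubling the
       history depth only doubles the cost (linear scaling), whereas a
       two-level table predictor grows exponentially in history bits. */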

  6. Benchmark Averages
