Bit-True Modeling of Neural Network

SILab presentation

Ali Ahmadi

June 2007


Outline

  • Introduction: structures of Neural Networks

    • Hopfield

    • LAM

    • BAM

  • Bit-True Arithmetic

  • Training modes for NN hardware

  • Bit-True model of networks

  • Simulation results


Hopfield Network

    • Single layer

    • Fully connected

    [1]



    • Weight calculation

      $T_{ji} = T_{ij} = \sum_{p=1}^{P} (2a_{pi} - 1)(2a_{pj} - 1)$ for $i \neq j$; $T_{ii} = 0$

      where $a_{pi}$ is the $i$th element of the $p$th pattern

    • Updating neuron $j$ for input vector $u$

      $S_j = \sum_{i} T_{ji} u_i$, then $u_j = 1$ if $S_j > 0$, $u_j = 0$ if $S_j < 0$, and $u_j$ is unchanged if $S_j = 0$

    [1]
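
    For concreteness, here is a minimal high-precision C++ sketch of these two formulas (the bit-true version follows later in the presentation). The array names, dimensions, and function names are illustrative assumptions, not code from the presentation:

      #include <vector>

      // Weight calculation: T[j][i] = sum over patterns p of
      // (2*a[p][i] - 1) * (2*a[p][j] - 1), with a zero diagonal.
      std::vector<std::vector<int>> trainHopfield(const std::vector<std::vector<int>>& a) {
          const int P = static_cast<int>(a.size());     // number of stored patterns
          const int N = static_cast<int>(a[0].size());  // number of neurons
          std::vector<std::vector<int>> T(N, std::vector<int>(N, 0));
          for (int i = 0; i < N; ++i)
              for (int j = 0; j < N; ++j)
                  if (i != j)
                      for (int p = 0; p < P; ++p)
                          T[j][i] += (2 * a[p][i] - 1) * (2 * a[p][j] - 1);
          return T;
      }

      // One update of neuron j: u[j] follows the sign of S_j and keeps
      // its previous value when S_j == 0.
      void updateNeuron(const std::vector<std::vector<int>>& T, std::vector<int>& u, int j) {
          int S = 0;
          for (std::size_t i = 0; i < u.size(); ++i) S += T[j][i] * u[i];
          if (S > 0) u[j] = 1;
          else if (S < 0) u[j] = 0;  // S == 0: leave u[j] unchanged
      }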


    LAM (Linear Associative Memory) Network

    • Single-layer feed-forward network

    • Recovers the output pattern from full or partial information in the input pattern

    [1]



    • Weight calculation

      $W_{ij} = \sum_{m=1}^{M} (2a_{im} - 1)(2b_{jm} - 1)$

      where $a_{im}$ is the $i$th element of the $m$th pattern

      Threshold values are $T_i = \frac{1}{2}\sum_{j} W_{ij}$

    • Output calculation

      For an input pattern $b$, the output pattern is $a$:

      $U_i = \sum_{j} W_{ij} b_j$, then $a_i = 1$ if $U_i > T_i$, otherwise $a_i = 0$
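
    A minimal recall sketch under the same assumptions (binary {0,1} patterns; all names are illustrative), with the threshold vector precomputed as $T_i = \frac{1}{2}\sum_j W_{ij}$:

      #include <vector>

      // LAM recall: map an input pattern b onto an output pattern a
      // using the weight matrix W and per-neuron thresholds T.
      std::vector<int> recallLAM(const std::vector<std::vector<int>>& W,
                                 const std::vector<double>& T,
                                 const std::vector<int>& b) {
          std::vector<int> a(W.size());
          for (std::size_t i = 0; i < W.size(); ++i) {
              int U = 0;
              for (std::size_t j = 0; j < b.size(); ++j) U += W[i][j] * b[j];
              a[i] = (U > T[i]) ? 1 : 0;  // compare against T[i] = 0.5 * sum_j W[i][j]
          }
          return a;
      }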


    BAM (Bidirectional Associative Memory) Network

    • Bidirectional

    • Two layers with different dimensions

    • Each pattern is a pair (a, b), one vector per layer



    In the X-to-Y pass the weight matrix is $W$; in the Y-to-X pass it is $W^{T}$.

    • Weight calculation

      $W_{ij} = \sum_{m=1}^{M} (2a_{im} - 1)(2b_{jm} - 1)$

    • Output calculation

      In the forward pass, the input of the $j$th neuron in layer Y is:

      $\mathrm{Net}_y(j) = \sum_{i} x_i W_{ij}$, then $y_j = 1$ if $\mathrm{Net}_y(j) > 0$, $y_j = 0$ if $\mathrm{Net}_y(j) < 0$, unchanged if $\mathrm{Net}_y(j) = 0$

      In the backward pass, the input of the $j$th neuron in layer X is:

      $\mathrm{Net}_x(j) = \sum_{i} y_i W_{ji}$, then $x_j$ is updated by the same rule

    [1]
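
    A minimal sketch of one forward/backward cycle, assuming binary {0,1} units, an N-by-P weight matrix W, and illustrative function names:

      #include <vector>

      // Forward pass (X to Y): each y[j] follows the sign of its net input.
      void bamForward(const std::vector<std::vector<int>>& W,
                      const std::vector<int>& x, std::vector<int>& y) {
          for (std::size_t j = 0; j < y.size(); ++j) {
              int net = 0;
              for (std::size_t i = 0; i < x.size(); ++i) net += x[i] * W[i][j];
              if (net > 0) y[j] = 1;
              else if (net < 0) y[j] = 0;  // net == 0: keep previous y[j]
          }
      }

      // Backward pass (Y to X): same rule, but through W transposed.
      void bamBackward(const std::vector<std::vector<int>>& W,
                       const std::vector<int>& y, std::vector<int>& x) {
          for (std::size_t j = 0; j < x.size(); ++j) {
              int net = 0;
              for (std::size_t i = 0; i < y.size(); ++i) net += y[i] * W[j][i];
              if (net > 0) x[j] = 1;
              else if (net < 0) x[j] = 0;  // net == 0: keep previous x[j]
          }
      }

    The two passes alternate until the pair (x, y) stops changing, i.e. the network settles on a stored association.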


    Bit-True Arithmetic

    • SUM

      Inputs are 2's complement with length (WL - 1)

      Output is 2's complement with length WL

      If the inputs do not have the same sign, the sign extension is determined by the carry
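
    A minimal C++ sketch of this bit-true addition, assuming plain integers stand in for the hardware bit patterns. The helper names echo the SumBitTruePrecise and Decimal2TwosComplement calls used in the models below, but the real signatures are assumptions:

      #include <cassert>
      #include <cstdint>

      // Force v into the value range of a wl-bit two's-complement word
      // (models the Decimal2TwosComplement / TwosComplement2Decimal round trip).
      int64_t wrapToWordLength(int64_t v, int wl) {
          const int64_t mask = (int64_t(1) << wl) - 1;
          int64_t u = v & mask;              // keep the low wl bits
          if (u & (int64_t(1) << (wl - 1)))  // sign bit set?
              u -= (int64_t(1) << wl);       // interpret as negative
          return u;
      }

      // Add two wl-bit operands into a (wl+1)-bit result; called with
      // wl = WL-1 this matches the slide (inputs WL-1 bits, output WL bits).
      // The sum of two wl-bit values always fits in wl+1 bits, so only
      // sign extension is needed, never overflow handling.
      int64_t sumBitTrue(int64_t a, int64_t b, int wl) {
          assert(a == wrapToWordLength(a, wl));  // operands must fit in wl bits
          assert(b == wrapToWordLength(b, wl));
          return wrapToWordLength(a + b, wl + 1);
      }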


    Bit-True Arithmetic

    • Multiply

      Inputs are 2's complement with length WL

      Output is 2's complement with length WL
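
    A matching sketch for the bit-true multiply (echoing the MulBitTruePrecise call used below; again, the real signature is an assumption). The exact product of two WL-bit words needs up to 2·WL bits, so wrapping it back to WL bits is precisely where the finite-word-length model loses information:

      // Bit-true multiply: compute the exact product, then discard the
      // high bits by wrapping back into a wl-bit two's-complement word
      // (reuses wrapToWordLength from the addition sketch above).
      int64_t mulBitTrue(int64_t a, int64_t b, int wl) {
          return wrapToWordLength(a * b, wl);
      }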


    Training Modes for Neural Network Hardware

    • Off-chip learning: the training process is performed off-chip with high precision; only the forward-propagation pass of the recall phase is performed on-chip.

    • Chip-in-the-loop learning: the chip is used during training, but only for forward propagation.

    • On-chip learning: training is done entirely on-chip; this mode is sensitive to the use of limited-precision weights.


    Bit-True Model of Hopfield

    • Part of the code that updates a neuron:

      sum += t[j][i] * neuron[i];  // high-precision arithmetic

      // the same step with finite word-length arithmetic
      b1 = t[j][i];
      a1 = neuron[i];
      a  = Decimal2TwosComplement(a1, Wordlength - 1);
      b  = Decimal2TwosComplement(b1, Wordlength - 1);
      c  = MulBitTruePrecise(a, b, Wordlength - 1);
      s  = Decimal2TwosComplement(sum, Wordlength - 1);
      s1 = SumBitTruePrecise(s, c, Wordlength - 1);
      sum = TwosComplement2Decimal(s1, Wordlength);


    Bit-True Model of LAM

    • Part of the code that calculates the value of an output neuron for an input pattern (propagation):

      RawOutVect[i] += W[i][j] * inVect[j];  // high-precision arithmetic

      // the same step with finite word-length arithmetic
      b1 = W[i][j];
      a1 = inVect[j];
      b  = Decimal2TwosComplement(b1, Wordlength - 1);
      a  = Decimal2TwosComplement(a1, Wordlength - 1);
      c  = MulBitTruePrecise(a, b, Wordlength - 1);
      s  = Decimal2TwosComplement(RawOutVect[i], Wordlength - 1);
      s1 = SumBitTruePrecise(s, c, Wordlength - 1);
      RawOutVect[i] = TwosComplement2Decimal(s1, Wordlength);


    Bit-True Model of BAM

    • Part of the code that propagates activation from layer From to layer To:

      Sum += To->Weight[i][j] * From->Output[j];  // high-precision arithmetic

      // the same step with finite word-length arithmetic
      a1 = To->Weight[i][j];
      b1 = From->Output[j];
      a  = Decimal2TwosComplement(a1, Wordlength - 1);
      b  = Decimal2TwosComplement(b1, Wordlength - 1);
      c  = MulBitTruePrecise(a, b, Wordlength - 1);
      s  = Decimal2TwosComplement(Sum, Wordlength - 1);
      s1 = SumBitTruePrecise(s, c, Wordlength - 1);
      Sum = TwosComplement2Decimal(s1, Wordlength);


    Simulation Result of Hopfield Network

    [Figure: training input pattern, test input pattern, and the recalled output patterns for word lengths of 4, 5, 6, 7, 8, and 32 bits]


    Simulation Result of LAM Network

    [Figure: training input pattern, test input patterns, and the output patterns for WL = 5, 6, 7, and 32 bits]


    Simulation of BAM Network: Training and Test Patterns

    Input patterns (layer X, layer Y):

      "TINA "    "6843726"
      "ANTJE"    "8034673"
      " LISA "   "7260915"

    Input test patterns:

      "TANE "
      "ANTJE"
      "RISE "


    Simulation Result of BAM Network

    Output for WL = 32 bits (layer Y):

    TINA -> | TINA -> 6843726

    ANTJE -> | ANTJE -> 8034673

    LISA -> | LISA -> 7260915

    6843726 -> | 6843726 -> TINA

    8034673 -> | 8034673 -> ANTJE

    7260915 -> | 7260915 -> LISA

    TANE -> | TINA -> 6843726

    ANTJE -> | ANTJE -> 8034673

    RISE -> | DIVA -> 6060737



    Simulation Result of BAM Network

    Output for WL = 2 bits (layer Y):

    TINA @ -> | TINA @ -> FENHGKO?

    ANTJE@ -> | &165:? -> _+87&9)@

    LISA @ -> | LISA @ -> FENHGKO?

    6843726@ -> | 6843726@ -> ^L^;GI

    8034673@ -> | 8034673@ -> &165:?

    7260915@ -> | 7260915@ -> &165:?

    TANE @ -> | TANE @ -> FENHGKO?

    ANTJE@ -> | YNIJE@ -> H0(@=^/5

    RISE @ -> | RISE @ -> "#.$6Z7,



    Simulation Result of BAM Network

    Output for WL = 3 bits (layer Y):

    TINA -> | TINA -> 8034673

    ANTJE -> | TINA -> 8034673

    LISA -> | TINA -> 8034673

    6843726 -> | 6060737 -> DIVA

    8034673 -> | 8034673 -> TINA

    7260915 -> | 8034673 -> TINA

    TANE -> | TINA -> 8034673

    ANTJE -> | +61>_? -> GOLKIHL?

    RISE -> | TINA -> 8034673



    Simulation Result of BAM Network

    Output for WL = 8 bits (layer Y):

    TINA -> | TINA -> 6843726

    ANTJE -> | ANTJE -> 8034673

    LISA -> | LISA -> 7260915

    6843726 -> | 6843726 -> TINA

    8034673 -> | 8034673 -> ANTJE

    7260915 -> | 7260915 -> LISA

    TANE -> | TINA -> 6843726

    ANTJE -> | ANTJE -> 8034673

    RISE -> | DIVA -> 6060737



    References

    [1] A. S. Pandya and R. B. Macy, "Pattern Recognition with Neural Networks in C++," CRC Press / IEEE Press.

    [2] P. Moerland and E. Fiesler, "Neural Network Adaptation for Hardware Implementation," in Handbook of Neural Computation, Jan. 1997.