Design Tradeoffs of Approximate Analog Neural Accelerators

Neural-Inspired Accelerators for Computing - January 22, 2013

Renée St. Amant, Hadi Esmaeilzadeh, Adrian Sampson, Arjang Hassibi, Luis Ceze, Doug Burger

Technology Trends
  • Shrinking transistors are less reliable
    • Leakage, variation, noise, faults
  • Precise computation more expensive
    • Motivates research in approximate computing
  • End of Dennard scaling
    • Dark silicon motivates research in acceleration
Opportunity
  • Approximate computing – precise results not required
    • Trade accuracy for energy efficiency
  • Analog circuits trade accuracy for efficiency
  • Emerging applications are error-tolerant
    • Machine learning, gaming, sensor data processing, augmented reality, etc.
Outline
  • Context / Background
    • Translates general-purpose, approximation-tolerant code segments to neural networks
  • Analog Neural Acceleration
    • Opportunity, tradeoffs, and challenges unique to analog!
  • Related Work
  • Conclusion
Context / Background [Esmaeilzadeh et al., MICRO’12]
  • Learning approach to accelerating approximate programs
    • Goal: accelerate error-tolerant portions of general-purpose code
    • Code transformation to a neural network (sketched below)
    • Accelerated execution on Neural Processing Unit (NPU)
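The transformation can be pictured as swapping an error-tolerant hot function for a small neural network that the NPU evaluates in its place; the network is trained offline from input/output samples of the original code. A minimal sketch of the idea, assuming a Sobel-like target function, a 9-8-1 topology, and untrained placeholder weights (all illustrative, not the paper's compiler output):

```python
import numpy as np

def sobel_original(p):
    """Error-tolerant hot function: 3x3 Sobel gradient magnitude on pixels p[0..8]."""
    gx = (p[2] + 2 * p[5] + p[8]) - (p[0] + 2 * p[3] + p[6])
    gy = (p[6] + 2 * p[7] + p[8]) - (p[0] + 2 * p[1] + p[2])
    return min(1.0, float(np.hypot(gx, gy)))

def sobel_approx(p, w1, b1, w2, b2):
    """Stand-in for the NPU evaluating a trained 9->8->1 network in place of
    sobel_original; in the real flow the weights come from offline training."""
    h = np.tanh(w1 @ p + b1)              # hidden layer: weighted sums + sigmoid-like activation
    return float(np.tanh(w2 @ h + b2))    # output neuron

# Untrained placeholder weights, only to make the sketch executable.
rng = np.random.default_rng(0)
w1, b1 = rng.normal(size=(8, 9)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), rng.normal()
p = rng.random(9)
print(sobel_original(p), sobel_approx(p, w1, b1, w2, b2))
```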
Neural Processing Unit (NPU)

[Diagram: NPU organization. A configuration of Processing Elements (PEs), storage, and control computes outputs for various network topologies; the Processing Element (PE) is the basic compute unit.]

Digital NPU (left), Digital PE (right)

[Figure: time-multiplexed digital NPU and PE microarchitectures]
[Esmaeilzadeh et al., MICRO’12]
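
One way to read the time-multiplexed design: a fixed pool of PEs is reused across the neurons of each layer, one batch per scheduling step. A behavioral sketch under that assumption (the loop structure, names, and sigmoid activation are illustrative, not the paper's microarchitecture):

```python
import numpy as np

def npu_evaluate(x, layers, num_pes=8):
    """Behavioral sketch of a time-multiplexed digital NPU: a fixed pool of
    `num_pes` PEs is scheduled over the neurons of each layer in turn."""
    activations = np.asarray(x, dtype=float)
    for weights, biases in layers:                      # weights: (n_neurons, fan_in)
        outputs = np.empty(len(weights))
        for start in range(0, len(weights), num_pes):   # one batch of neurons per step
            for n in range(start, min(start + num_pes, len(weights))):
                acc = weights[n] @ activations + biases[n]   # multiply-accumulate on one PE
                outputs[n] = 1.0 / (1.0 + np.exp(-acc))      # sigmoid activation
        activations = outputs
    return activations

# Toy 2-3-1 network with placeholder weights, just to exercise the sketch.
layers = [(np.ones((3, 2)), np.zeros(3)), (np.ones((1, 3)), np.zeros(1))]
print(npu_evaluate([0.5, -0.5], layers))
```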

Results [Esmaeilzadeh et al., MICRO’12]
  • 2.3x application speedup, 3x energy reduction on average
  • Ideal NPU (potential for analog): 3.4x speedup, 3.7x energy improvement on average
Outline
  • Context / Background
  • Analog neural acceleration
      • Relevant design components
      • Tradeoffs and challenges
      • Preliminary design
      • Preliminary results
  • Related Work
  • Conclusion
Design Space of Neural Processing Units
  • Analog presents opportunity for increased energy savings

[Diagram: the design space spans flexibility, accuracy, and efficiency]

Analog Neural Processing Unit (ANPU)

[Diagram: ANPU organization. A configuration of Analog Processing Elements (APEs), storage, and control efficiently and accurately computes outputs for various network topologies; the Analog Processing Element (APE) is the basic compute unit.]

Analog/Digital Boundary

[Diagram: an APE at the boundary between the digital and analog domains]

  • Analog computation is cheap!
  • Conversions are expensive! (rough cost sketch below)
  • Boundary affects flexibility
  • Robustness to noise
  • Fan-out

[Diagram: alternative placements of the analog/digital boundary, with values crossing between analog and digital domains at each stage]

Opportunity: Analog Storage
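
Those bullets suggest a simple cost model for where to put the boundary: if per-value D/A and A/D conversions dominate the analog computation itself, it pays to keep intermediate values analog between layers, which is what makes analog storage attractive. A rough sketch with placeholder energy numbers (the constants are arbitrary illustrations, not measurements):

```python
def layer_energy(fan_in, n_neurons, e_mac, e_dac, e_adc, convert_at_boundary):
    """Rough per-layer energy: analog multiply-accumulates plus any D/A and A/D
    conversions at the layer boundary. All energy constants are hypothetical."""
    compute = fan_in * n_neurons * e_mac
    conversions = (fan_in * e_dac + n_neurons * e_adc) if convert_at_boundary else 0.0
    return compute + conversions

# Illustrative comparison (energies in arbitrary units):
with_conversions = layer_energy(8, 8, e_mac=0.1, e_dac=1.0, e_adc=3.0, convert_at_boundary=True)
analog_between   = layer_energy(8, 8, e_mac=0.1, e_dac=1.0, e_adc=3.0, convert_at_boundary=False)
print(with_conversions, analog_between)   # conversions dominate when the analog MACs are cheap
```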

APE Configuration

Map various topologies to one substrate

APE

APE

APE

APE

2

3

1

  • Time-multiplexed vs. geometric approach
    • Analog efficiency with simultaneous computation
APE Configuration

Map various topologies to one substrate

[Diagram: a network topology mapped onto two rows of APEs; analog outputs of one row are fed to the next layer]

  • Time-multiplexed vs. geometric layout
    • Analog efficiency with simultaneous computation
  • Fixed computation width
    • Challenge: Range!
    • A larger range decreases circuit accuracy
    • Maximize efficient simultaneous computation while maintaining accuracy
    • Row width (connections) – hardware / software accuracy tradeoff (see the mapping sketch below)
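One way to see the fixed-width challenge: when a layer's fan-in exceeds the APE row width, each neuron's connections must be split across multiple row evaluations and the partial results recombined, which costs either accuracy or extra conversion work. A minimal scheduling sketch, assuming an 8-input row (the chunking scheme is illustrative):

```python
def map_layer_to_rows(fan_in, n_neurons, row_width=8):
    """Sketch: split each neuron's `fan_in` connections into chunks that fit an
    APE row, returning (neuron, input_slice) work items whose partial results
    would have to be recombined."""
    schedule = []
    for neuron in range(n_neurons):
        for start in range(0, fan_in, row_width):
            schedule.append((neuron, slice(start, min(start + row_width, fan_in))))
    return schedule

# A 20-input layer on 8-wide rows needs 3 chunks per neuron: 4 x 3 = 12 row evaluations.
print(len(map_layer_to_rows(fan_in=20, n_neurons=4)))
```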
Value Representation
  • Represent values – inputs, weights, intermediates
    • Current? Voltage? Some combination? One or more wires?
  • Analog computation circuits have their favorites
    • Signal type
    • Signal range
    • Both affect accuracy and efficiency
  • Cost of conversion and scaling vs. computation accuracy and efficiency
Value Representation: Bit Width
  • Number of bits of inputs, weights, outputs
  • Implications for power
  • Hardware / software accuracy
    • More bits, more accuracy? (quantization sketch below)
  • Challenge: Range!
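A quick way to probe the "more bits, more accuracy?" question is to quantize inputs and weights at different bit widths and compare the resulting dot product against the exact one. A sketch assuming a simple uniform quantizer over [-1, 1] (the real DAC transfer characteristics differ):

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantization of values in [-1, 1] to 2**bits levels."""
    levels = 2 ** bits
    return np.round((np.clip(x, -1, 1) + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

rng = np.random.default_rng(1)
x, w = rng.uniform(-1, 1, 8), rng.uniform(-1, 1, 8)
exact = np.dot(w, x)
for in_bits, w_bits in [(3, 2), (5, 4), (8, 6)]:
    approx = np.dot(quantize(w, w_bits), quantize(x, in_bits))
    print(in_bits, w_bits, abs(approx - exact))   # error generally shrinks as bit widths grow
```

Note that even with each product bounded by 1, the 8-term sum can reach ±8, so the accumulated value's range grows with fan-in; that is the range challenge flagged above.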

Outline
  • Context / Background
  • Analog neural acceleration
      • Overview
      • Relevant design components
      • Preliminary APE design
      • Preliminary results
  • Related Work
  • Conclusion
Analog Processing Element Design

[Circuit diagram: the 8-wide APE. Weights 0 through 7 enter through a current-steering DAC biased by Ibias; inputs 0 through 7 enter through a resistor-ladder DAC as differential voltages V+/V-. Each multiplier (MUL) splits the bias current into differential currents I+ = Ibias/2 + ΔI and I- = Ibias/2 - ΔI; the currents are summed (ADD) and a clocked ADC produces the digital output.]
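
A behavioral reading of the diagram (a sketch of the signal chain, not a circuit model; the ideal multiply, the normalization, and the ADC step are assumptions):

```python
import numpy as np

def ape_evaluate(inputs, weights, i_bias=1e-6, adc_bits=8):
    """Behavioral sketch of the 8-wide APE signal chain in the current domain."""
    x = np.clip(np.asarray(inputs, float), -1, 1)    # set by the resistor-ladder DAC (V+/V-)
    w = np.clip(np.asarray(weights, float), -1, 1)   # set by the current-steering DAC
    # MUL: each product steers the bias current, I+/- = Ibias/2 +/- dI.
    d_i = (i_bias / 2) * w * x
    i_plus = np.sum(i_bias / 2 + d_i)                # ADD: currents summed on a shared node
    i_minus = np.sum(i_bias / 2 - d_i)
    # ADC: digitize the normalized differential current.
    diff = (i_plus - i_minus) / (len(w) * i_bias)    # falls in [-1, 1]
    step = 2.0 / (2 ** adc_bits - 1)
    return np.round(diff / step) * step

print(ape_evaluate([0.5] * 8, [0.25] * 8))   # ~0.125, the mean of the products
```

In the real circuit the multiply is only linear over a limited input range, which is exactly the range versus accuracy tradeoff discussed on the following slides.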

Circuit Power, Accuracy, and Delay
  • 8-wide APE, 5-bit inputs, 4-bit weights
  • Power = 23 mW
  • Error below one quantization step at 1.67 GHz (energy estimate below)
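For a rough sense of scale (my arithmetic, assuming the APE completes one 8-wide evaluation per cycle at the quoted rate):

$$
E_{\text{APE}} \approx \frac{P}{f} = \frac{23\ \text{mW}}{1.67\ \text{GHz}} \approx 13.8\ \text{pJ per 8-wide evaluation} \approx 1.7\ \text{pJ per multiply-add}
$$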
Input Bit Width and Energy

[Plot: energy versus input bit width; data points marked * are taken from Galal and Horowitz (reference below)]

* S. Galal and M. Horowitz. Energy-efficient floating-point unit design. IEEE Trans. Comput., 60(7):913–922, 2011.

APE input bit-width has an exponential effect on energy consumption

Bit Width and Potential Accuracy

APE input bit-width and weight bit-width affect achievable accuracy

Range

[Diagram: differential circuit with bias current Ibias, input voltages V+/V-, and output currents I+/I-]

Ibias     Output Bits
1 µA      6 bits
10 µA     6 bits, 7 bits?
100 µA    8 bits

  • Strategy: increase “linear” range
    • Hardware / software accuracy tradeoff
    • Answers come at the application level
  • Exponential increase in computation power for a linear increase in output bits (see the note below)
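Reading the table above against that last bullet, and assuming APE power scales roughly with the bias current:

$$
\frac{100\ \mu\text{A}}{1\ \mu\text{A}} = 100\times \text{ bias current for } 8 - 6 = 2 \text{ extra output bits} \;\Rightarrow\; \text{about } 10\times \text{ power per additional output bit}
$$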
Outline
  • Context / Background
  • Analog neural acceleration
  • Related Work
  • Conclusion
Related Work – Approximate Computing
  • Digital hardware techniques [PCMOS]
    • Limited benefit
  • Analog hardware techniques
    • Lack successful integration with high-performance CPUs
  • Approximate programming models [EnerJ]
    • ANPU is an implementation of approximate computing
Related Work – Hardware Neural Networks
  • Most prior work on analog neural networks
    • Small networks, not designed to be fast, old technology, target very specific applications
  • More recent work
    • SpiNNaker, IBM’s Cognitive Chip, ByMoore, FACETS, neuFlow
    • Goal?
    • Could be considered for NPU implementations
Conclusion
  • Analog designs offer fine-grained knobs to balance flexibility, accuracy, and efficiency
    • Hardware / software accuracy tradeoff
    • Challenge: Watch your range!
    • Work in Progress
      • Circuit-level accuracy → application-level accuracy
      • Digital/analog boundaries and the opportunity for analog storage
    • Open questions: Noise?
    • Important with the rise of error-tolerant applications
Questions?