
Lectures for 3rd edition

Lectures for 3rd Edition

Note: these lectures are often supplemented with other materials and also problems from the text worked out on the blackboard. You’ll want to customize these lectures for your class. The student audience for these lectures has had exposure to logic design and attends a hands-on assembly language programming lab that does not follow a typical lecture format.


Chapter 1

Chapter 1


Introduction

Introduction

  • This course is all about how computers work

  • But what do we mean by a computer?

    • Different types: desktop, servers, embedded devices

    • Different uses: automobiles, graphics, finance, genomics…

    • Different manufacturers: Intel, Apple, IBM, Microsoft, Sun…

    • Different underlying technologies and different costs!

  • Analogy: Consider a course on “automotive vehicles”

    • Many similarities from vehicle to vehicle (e.g., wheels)

    • Huge differences from vehicle to vehicle (e.g., gas vs. electric)

  • Best way to learn:

    • Focus on a specific instance and learn how it works

    • While learning general principles and historical perspectives


Why learn this stuff

Why learn this stuff?

  • You want to call yourself a “computer scientist”

  • You want to build software people use (need performance)

  • You need to make a purchasing decision or offer “expert” advice

  • Both Hardware and Software affect performance:

    • Algorithm determines number of source-level statements

    • Language/Compiler/Architecture determine the machine instructions (Chapters 2 and 3)

    • Processor/Memory determine how fast instructions are executed (Chapters 5, 6, and 7)

  • Assessing and Understanding Performance in Chapter 4


What is a computer

What is a computer?

  • Components:

    • input (mouse, keyboard)

    • output (display, printer)

    • memory (disk drives, DRAM, SRAM, CD)

    • network

  • Our primary focus: the processor (datapath and control)

    • implemented using millions of transistors

    • Impossible to understand by looking at each transistor

    • We need...


Abstraction

Abstraction

  • Delving into the depths reveals more information

  • An abstraction omits unneeded detail and helps us cope with complexity. What are some of the details that appear in these familiar abstractions?


How do computers work

How do computers work?

  • Need to understand abstractions such as:

    • Applications software

    • Systems software

    • Assembly Language

    • Machine Language

    • Architectural Issues: i.e., Caches, Virtual Memory, Pipelining

    • Sequential logic, finite state machines

    • Combinational logic, arithmetic circuits

    • Boolean logic, 1s and 0s

    • Transistors used to build logic gates (CMOS)

    • Semiconductors/Silicon used to build transistors

    • Properties of atoms, electrons, and quantum dynamics

  • So much to learn!


Instruction set architecture

Instruction Set Architecture

  • A very important abstraction

    • interface between hardware and low-level software

    • standardizes instructions, machine language bit patterns, etc.

    • advantage: different implementations of the same architecture

    • disadvantage: sometimes prevents using new innovations

      True or False: Binary compatibility is extraordinarily important?

  • Modern instruction set architectures:

    • IA-32, PowerPC, MIPS, SPARC, ARM, and others


Historical perspective

Historical Perspective

  • ENIAC built in World War II was the first general purpose computer

    • Used for computing artillery firing tables

    • 80 feet long by 8.5 feet high and several feet wide

    • Each of the twenty 10 digit registers was 2 feet long

    • Used 18,000 vacuum tubes

    • Performed 1900 additions per second

  • Since then, Moore’s Law: transistor capacity doubles every 18-24 months


Chapter 2

Chapter 2


Instructions

Instructions:

  • Language of the Machine

  • We’ll be working with the MIPS instruction set architecture

    • similar to other architectures developed since the 1980's

    • Almost 100 million MIPS processors manufactured in 2002

    • used by NEC, Nintendo, Cisco, Silicon Graphics, Sony, …


Mips arithmetic

MIPS arithmetic

  • All instructions have 3 operands

  • Operand order is fixed (destination first)

    Example:

    C code:       a = b + c

    MIPS ‘code’:  add a, b, c    (we’ll talk about registers in a bit)

    “The natural number of operands for an operation like addition is three… requiring every instruction to have exactly three operands, no more and no less, conforms to the philosophy of keeping the hardware simple”


Mips arithmetic1

MIPS arithmetic

  • Design Principle: simplicity favors regularity.

  • Of course this complicates some things...

    C code:     a = b + c + d;

    MIPS code:  add a, b, c
                add a, a, d

  • Operands must be registers, only 32 registers provided

  • Each register contains 32 bits

  • Design Principle: smaller is faster. Why?


Registers vs memory

[Figure: the classic components of a computer: input, output, memory, and the processor (datapath and control), connected by I/O]

Registers vs. Memory

  • Arithmetic instruction operands must be registers — only 32 registers provided

  • Compiler associates variables with registers

  • What about programs with lots of variables


Memory organization

Memory Organization

  • Viewed as a large, single-dimension array, with an address.

  • A memory address is an index into the array

  • "Byte addressing" means that the index points to a byte of memory.

[Figure: byte-addressed memory: addresses 0, 1, 2, 3, ... each holding 8 bits of data]


Memory organization1

Memory Organization

  • Bytes are nice, but most data items use larger "words"

  • For MIPS, a word is 32 bits or 4 bytes.

  • 2^32 bytes with byte addresses from 0 to 2^32 - 1

  • 2^30 words with byte addresses 0, 4, 8, ..., 2^32 - 4

  • Words are aligned, i.e., what are the least 2 significant bits of a word address?

[Figure: word-aligned memory: addresses 0, 4, 8, 12, ... each holding 32 bits of data]

Registers hold 32 bits of data
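
A minimal C sketch (not from the original slides; the values are illustrative) of the byte/word view above: the word index is the byte address divided by 4, and an address is word-aligned exactly when its two least significant bits are zero.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint32_t byte_addr = 12;                 /* a byte address              */
        uint32_t word_index = byte_addr >> 2;    /* which 32-bit word: 12/4 = 3 */
        int aligned = (byte_addr & 0x3) == 0;    /* low two bits must be 00     */
        printf("byte address %u -> word %u, aligned = %d\n",
               (unsigned)byte_addr, (unsigned)word_index, aligned);
        return 0;
    }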


Instructions1

Instructions

  • Load and store instructions

  • Example:

    C code:     A[12] = h + A[8];

    MIPS code:  lw  $t0, 32($s3)
                add $t0, $s2, $t0
                sw  $t0, 48($s3)

  • Can refer to registers by name (e.g., $s2, $t2) instead of number

  • Store word has destination last

  • Remember: arithmetic operands are registers, not memory! Can’t write:  add 48($s3), $s2, 32($s3)


Our first example

Our First Example

  • Can we figure out the code?

swap(int v[], int k)
{
  int temp;
  temp = v[k];
  v[k] = v[k+1];
  v[k+1] = temp;
}

swap:
  muli $2, $5, 4      # $2 = k * 4
  add  $2, $4, $2     # $2 = address of v[k]
  lw   $15, 0($2)     # $15 = v[k]
  lw   $16, 4($2)     # $16 = v[k+1]
  sw   $16, 0($2)     # v[k]   = old v[k+1]
  sw   $15, 4($2)     # v[k+1] = old v[k]
  jr   $31            # return


So far we ve learned

So far we’ve learned:

  • MIPS — loading words but addressing bytes — arithmetic on registers only

  • Instruction              Meaning

    add $s1, $s2, $s3        $s1 = $s2 + $s3
    sub $s1, $s2, $s3        $s1 = $s2 – $s3
    lw  $s1, 100($s2)        $s1 = Memory[$s2 + 100]
    sw  $s1, 100($s2)        Memory[$s2 + 100] = $s1


Machine language

Machine Language

  • Instructions, like registers and words of data, are also 32 bits long

    • Example: add $t1, $s1, $s2

    • registers have numbers, $t1=9, $s1=17, $s2=18

  • Instruction Format:

    000000  10001  10010  01001  00000  100000
    op      rs     rt     rd     shamt  funct

  • Can you guess what the field names stand for?
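
As an illustration (not part of the original deck), this C sketch packs the six R-format fields into one 32-bit word. The register numbers come from the slide above ($t1 = 9, $s1 = 17, $s2 = 18); the funct value 0x20 is the standard add function code.

    #include <stdint.h>
    #include <stdio.h>

    /* Pack the six R-format fields: op | rs | rt | rd | shamt | funct. */
    static uint32_t r_format(uint32_t op, uint32_t rs, uint32_t rt,
                             uint32_t rd, uint32_t shamt, uint32_t funct) {
        return (op << 26) | (rs << 21) | (rt << 16) |
               (rd << 11) | (shamt << 6) | funct;
    }

    int main(void) {
        /* add $t1, $s1, $s2 -> op=0, rs=17, rt=18, rd=9, shamt=0, funct=0x20 */
        printf("0x%08x\n", r_format(0, 17, 18, 9, 0, 0x20));   /* 0x02324820 */
        return 0;
    }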


Machine language1

Machine Language

  • Consider the load-word and store-word instructions,

    • What would the regularity principle have us do?

    • New principle: Good design demands a compromise

  • Introduce a new type of instruction format

    • I-type for data transfer instructions

    • other format was R-type for register

  • Example:  lw $t0, 32($s2)

    35     18     9      32
    op     rs     rt     16-bit number

  • Where's the compromise?


Stored program concept


Stored Program Concept

  • Instructions are bits

  • Programs are stored in memory — to be read or written just like data

  • Fetch & Execute Cycle

    • Instructions are fetched and put into a special register

    • Bits in the register "control" the subsequent actions

    • Fetch the “next” instruction and continue

memory for data, programs, compilers, editors, etc.


Control

Control

  • Decision making instructions

    • alter the control flow,

    • i.e., change the "next" instruction to be executed

  • MIPS conditional branch instructions:

    bne $t0, $t1, Label
    beq $t0, $t1, Label

  • Example:  if (i==j) h = i + j;

        bne $s0, $s1, Label
        add $s3, $s0, $s1
    Label:
        ...


Control1

Control

  • MIPS unconditional branch instructions:j label

  • Example:

    C code:     if (i!=j) h = i + j;
                else      h = i - j;

    MIPS code:        beq $s4, $s5, Lab1
                      add $s3, $s4, $s5
                      j   Lab2
                Lab1: sub $s3, $s4, $s5
                Lab2: ...

  • Can you build a simple for loop?


So far

So far:

  • Instruction              Meaning

    add $s1,$s2,$s3          $s1 = $s2 + $s3
    sub $s1,$s2,$s3          $s1 = $s2 – $s3
    lw  $s1,100($s2)         $s1 = Memory[$s2 + 100]
    sw  $s1,100($s2)         Memory[$s2 + 100] = $s1
    bne $s4,$s5,Label        Next instr. is at Label if $s4 ≠ $s5
    beq $s4,$s5,Label        Next instr. is at Label if $s4 = $s5
    j   Label                Next instr. is at Label

  • Formats:

    R:  op | rs | rt | rd | shamt | funct
    I:  op | rs | rt | 16-bit address
    J:  op | 26-bit address


Control flow

Control Flow

  • We have: beq, bne, what about Branch-if-less-than?

  • New instruction:  slt $t0, $s1, $s2

    if $s1 < $s2 then $t0 = 1 else $t0 = 0

  • Can use this instruction to build "blt $s1, $s2, Label" — can now build general control structures

  • Note that the assembler needs a register to do this,— there are policy of use conventions for registers


Policy of use conventions

Policy of Use Conventions

Register 1 ($at) reserved for assembler, 26-27 for operating system


Constants

Constants

  • Small constants are used quite frequently (50% of operands), e.g.:  A = A + 5;  B = B + 1;  C = C - 18;

  • Solutions? Why not?

    • put 'typical constants' in memory and load them.

    • create hard-wired registers (like $zero) for constants like one.

  • MIPS Instructions:

    addi $29, $29, 4
    slti $8,  $18, 10
    andi $29, $29, 6
    ori  $29, $29, 4

  • Design Principle: Make the common case fast. Which format?


How about larger constants

[Figure: the goal is a 32-bit register holding 1010101010101010 1010101010101010; after lui the upper half is loaded and the lower half is filled with zeros]

How about larger constants?

  • We'd like to be able to load a 32 bit constant into a register

  • Must use two instructions; a new "load upper immediate" instruction:  lui $t0, 1010101010101010

  • Then must get the lower order bits right, i.e.:  ori $t0, $t0, 1010101010101010

[Figure: lui puts 1010101010101010 in the upper 16 bits and zeros the lower 16; ori then fills the lower 16 bits with 1010101010101010]
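
A small C analogue of the lui/ori pair above (illustrative only, not the assembler's mechanism): put the upper 16 bits in place with the low half zeroed, then OR in the lower 16 bits.

    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint16_t upper = 0xAAAA;                 /* 1010101010101010 from the slide      */
        uint16_t lower = 0xAAAA;

        uint32_t reg = (uint32_t)upper << 16;    /* lui: upper half set, lower half zero */
        reg |= lower;                            /* ori: fill in the low 16 bits         */

        printf("0x%08X\n", reg);                 /* 0xAAAAAAAA */
        return 0;
    }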


Assembly language vs machine language

Assembly Language vs. Machine Language

  • Assembly provides convenient symbolic representation

    • much easier than writing down numbers

    • e.g., destination first

  • Machine language is the underlying reality

    • e.g., destination is no longer first

  • Assembly can provide 'pseudoinstructions'

    • e.g., “move $t0, $t1” exists only in Assembly

    • would be implemented using “add $t0,$t1,$zero”

  • When considering performance you should count real instructions


Other issues

Other Issues

  • Discussed in your assembly language programming lab:

    support for procedures
    linkers, loaders, memory layout
    stacks, frames, recursion
    manipulating strings and pointers
    interrupts and exceptions
    system calls and conventions

  • Some of these we'll talk more about later

  • We’ll talk about compiler optimizations when we hit chapter 4.


Overview of mips

Overview of MIPS

  • simple instructions all 32 bits wide

  • very structured, no unnecessary baggage

  • only three instruction formats

  • rely on compiler to achieve performance— what are the compiler's goals?

  • help compiler where we can

    R:  op | rs | rt | rd | shamt | funct
    I:  op | rs | rt | 16-bit address
    J:  op | 26-bit address


Addresses in branches and jumps

Addresses in Branches and Jumps

  • Instructions:

    bne $t4,$t5,Label    Next instruction is at Label if $t4 ≠ $t5

    beq $t4,$t5,Label    Next instruction is at Label if $t4 = $t5

    j Label              Next instruction is at Label

  • Formats:

  • Addresses are not 32 bits — How do we handle this with load and store instructions?

    I:  op | rs | rt | 16-bit address
    J:  op | 26-bit address


Addresses in branches

Addresses in Branches

  • Instructions:

    bne $t4,$t5,Label    Next instruction is at Label if $t4 ≠ $t5

    beq $t4,$t5,Label    Next instruction is at Label if $t4 = $t5

  • Formats:

  • Could specify a register (like lw and sw) and add it to address

    • use Instruction Address Register (PC = program counter)

    • most branches are local (principle of locality)

  • Jump instructions just use high order bits of PC

    • address boundaries of 256 MB

    I:  op | rs | rt | 16-bit address


To summarize

To summarize:


Alternative architectures

Alternative Architectures

  • Design alternative:

    • provide more powerful operations

    • goal is to reduce number of instructions executed

    • danger is a slower cycle time and/or a higher CPI

  • Let’s look (briefly) at IA-32

  • “The path toward operation complexity is thus fraught with peril. To avoid these problems, designers have moved toward simpler instructions”


Ia 32

IA - 32

  • 1978: The Intel 8086 is announced (16 bit architecture)

  • 1980: The 8087 floating point coprocessor is added

  • 1982: The 80286 increases address space to 24 bits, adds instructions

  • 1985: The 80386 extends to 32 bits, new addressing modes

  • 1989-1995: The 80486, Pentium, Pentium Pro add a few instructions (mostly designed for higher performance)

  • 1997: 57 new “MMX” instructions are added, Pentium II

  • 1999: The Pentium III added another 70 instructions (SSE)

  • 2001: Another 144 instructions (SSE2)

  • 2003: AMD extends the architecture to increase address space to 64 bits, widens all registers to 64 bits, and makes other changes (AMD64)

  • 2004: Intel capitulates and embraces AMD64 (calls it EM64T) and adds more media extensions

  • “This history illustrates the impact of the ‘golden handcuffs’ of compatibility” … “adding new features as someone might add clothing to a packed bag” … “an architecture that is difficult to explain and impossible to love”


Ia 32 overview

IA-32 Overview

  • Complexity:

    • Instructions from 1 to 17 bytes long

    • one operand must act as both a source and destination

    • one operand can come from memory

    • complex addressing modes, e.g., “base or scaled index with 8 or 32 bit displacement”

  • Saving grace:

    • the most frequently used instructions are not too difficult to build

    • compilers avoid the portions of the architecture that are slow

      “what the 80x86 lacks in style is made up in quantity, making it beautiful from the right perspective”


Ia 32 registers and data addressing

IA-32 Registers and Data Addressing

  • Registers in the 32-bit subset that originated with 80386


Ia 32 register restrictions

IA-32 Register Restrictions

  • Registers are not “general purpose” – note the restrictions below


Ia 32 typical instructions

IA-32 Typical Instructions

  • Four major types of integer instructions:

    • Data movement including move, push, pop

    • Arithmetic and logical (destination register or memory)

    • Control flow (use of condition codes / flags )

    • String instructions, including string move and string compare


Ia 32 instruction formats

IA-32 instruction Formats

  • Typical formats: (notice the different lengths)


Summary

Summary

  • Instruction complexity is only one variable

    • lower instruction count vs. higher CPI / lower clock rate

  • Design Principles:

    • simplicity favors regularity

    • smaller is faster

    • good design demands compromise

    • make the common case fast

  • Instruction set architecture

    • a very important abstraction indeed!


Chapter three

Chapter Three


Numbers

Numbers

  • Bits are just bits (no inherent meaning)— conventions define relationship between bits and numbers

  • Binary numbers (base 2):  0000 0001 0010 0011 0100 0101 0110 0111 1000 1001 ...  decimal: 0 ... 2^n - 1

  • Of course it gets more complicated:
    numbers are finite (overflow)
    fractions and real numbers
    negative numbers, e.g., no MIPS subi instruction; addi can add a negative number

  • How do we represent negative numbers? I.e., which bit patterns will represent which numbers?


Possible representations

Possible Representations

  • Bit pattern    Sign Magnitude    One's Complement    Two's Complement

    000              +0                +0                  +0
    001              +1                +1                  +1
    010              +2                +2                  +2
    011              +3                +3                  +3
    100              -0                -3                  -4
    101              -1                -2                  -3
    110              -2                -1                  -2
    111              -3                -0                  -1

  • Issues: balance, number of zeros, ease of operations

  • Which one is best? Why?


Lectures for 3rd edition

MIPS

  • 32-bit signed numbers:

    0000 0000 0000 0000 0000 0000 0000 0000two =              0ten
    0000 0000 0000 0000 0000 0000 0000 0001two =            + 1ten
    0000 0000 0000 0000 0000 0000 0000 0010two =            + 2ten
    ...
    0111 1111 1111 1111 1111 1111 1111 1110two = + 2,147,483,646ten
    0111 1111 1111 1111 1111 1111 1111 1111two = + 2,147,483,647ten   (maxint)
    1000 0000 0000 0000 0000 0000 0000 0000two = – 2,147,483,648ten   (minint)
    1000 0000 0000 0000 0000 0000 0000 0001two = – 2,147,483,647ten
    1000 0000 0000 0000 0000 0000 0000 0010two = – 2,147,483,646ten
    ...
    1111 1111 1111 1111 1111 1111 1111 1101two =            – 3ten
    1111 1111 1111 1111 1111 1111 1111 1110two =            – 2ten
    1111 1111 1111 1111 1111 1111 1111 1111two =            – 1ten


Two s complement operations

Two's Complement Operations

  • Negating a two's complement number: invert all bits and add 1

    • remember: “negate” and “invert” are quite different!

  • Converting n bit numbers into numbers with more than n bits:

    • MIPS 16 bit immediate gets converted to 32 bits for arithmetic

    • copy the most significant bit (the sign bit) into the other bits:

      0010 -> 0000 0010
      1010 -> 1111 1010

    • "sign extension" (lbu vs. lb)


Addition subtraction

Addition & Subtraction

  • Just like in grade school (carry/borrow 1s):

      0111      0111      0110
    + 0110    - 0110    - 0101

  • Two's complement operations easy

    • subtraction using addition of negative numbers:

        0111
      + 1010

  • Overflow (result too large for finite computer word):

    • e.g., adding two n-bit numbers does not yield an n-bit number:

        0111
      + 0001
        1000

      note that the term “overflow” is somewhat misleading; it does not mean a carry “overflowed”


Detecting overflow

Detecting Overflow

  • No overflow when adding a positive and a negative number

  • No overflow when signs are the same for subtraction

  • Overflow occurs when the value affects the sign:

    • overflow when adding two positives yields a negative

    • or, adding two negatives gives a positive

    • or, subtract a negative from a positive and get a negative

    • or, subtract a positive from a negative and get a positive

  • Consider the operations A + B, and A – B

    • Can overflow occur if B is 0 ?

    • Can overflow occur if A is 0 ?
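
One way to express the sign rules above in C (a sketch of the rule, not of the adder hardware): overflow on A + B happens exactly when the operands have the same sign and the sum's sign differs, so it can never happen when either operand is 0.

    #include <stdint.h>
    #include <stdio.h>

    static int add_overflows(int32_t a, int32_t b) {
        /* wraparound 32-bit add, then compare signs */
        int32_t sum = (int32_t)((uint32_t)a + (uint32_t)b);
        return ((a >= 0) == (b >= 0)) && ((sum >= 0) != (a >= 0));
    }

    int main(void) {
        printf("%d\n", add_overflows(0x7FFFFFFF, 1));  /* 1: two positives give a negative */
        printf("%d\n", add_overflows(5, -7));          /* 0: mixed signs never overflow    */
        printf("%d\n", add_overflows(0, INT32_MIN));   /* 0: an operand of zero is safe    */
        return 0;
    }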


Effects of overflow

Effects of Overflow

  • An exception (interrupt) occurs

    • Control jumps to predefined address for exception

    • Interrupted address is saved for possible resumption

  • Details based on software system / language

    • example: flight control vs. homework assignment

  • Don’t always want to detect overflow — new MIPS instructions: addu, addiu, subu

    note: addiu still sign-extends!
    note: sltu, sltiu for unsigned comparisons


Multiplication

Multiplication

  • More complicated than addition

    • accomplished via shifting and addition

  • More time and more area

  • Let’s look at 3 versions based on a grade-school algorithm:

      0010   (multiplicand)
    x 1011   (multiplier)

  • Negative numbers: convert and multiply

    • there are better techniques, we won’t look at them
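
A C sketch of the grade-school shift-and-add idea for unsigned operands (negative numbers would be converted first, as noted above); the function name is made up for illustration.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t shift_add_multiply(uint16_t multiplicand, uint16_t multiplier) {
        uint32_t product = 0;
        uint32_t mcand = multiplicand;
        while (multiplier != 0) {
            if (multiplier & 1)        /* if this multiplier bit is 1...  */
                product += mcand;      /* ...add the shifted multiplicand */
            mcand <<= 1;               /* shift multiplicand left         */
            multiplier >>= 1;          /* shift multiplier right          */
        }
        return product;
    }

    int main(void) {
        printf("%u\n", (unsigned)shift_add_multiply(0x2, 0xB));  /* 0010 x 1011 = 22 */
        return 0;
    }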


Multiplication implementation

Multiplication: Implementation

Datapath

Control


Final version

Final Version

  • Multiplier starts in right half of product

What goes here?


Floating point a brief look

Floating Point (a brief look)

  • We need a way to represent

    • numbers with fractions, e.g., 3.1416

    • very small numbers, e.g., .000000001

    • very large numbers, e.g., 3.15576 × 10^9

  • Representation:

    • sign, exponent, significand:  (–1)^sign × significand × 2^exponent

    • more bits for significand gives more accuracy

    • more bits for exponent increases range

  • IEEE 754 floating point standard:

    • single precision: 8 bit exponent, 23 bit significand

    • double precision: 11 bit exponent, 52 bit significand


Ieee 754 floating point standard

IEEE 754 floating-point standard

  • Leading “1” bit of significand is implicit

  • Exponent is “biased” to make sorting easier

    • all 0s is smallest exponent all 1s is largest

    • bias of 127 for single precision and 1023 for double precision

    • summary:  (–1)^sign × (1 + significand) × 2^(exponent – bias)

  • Example:

    • decimal: -.75 = - ( ½ + ¼ )

    • binary: -.11 = -1.1 × 2^-1

    • floating point: exponent = 126 = 01111110

    • IEEE single precision: 10111111010000000000000000000000
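
A quick C check of that encoding (this assumes the host represents float in IEEE 754 single precision, which is the common case): the bit pattern of -0.75 should come out as sign 1, exponent 126, fraction 0x400000.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        float f = -0.75f;
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);            /* reinterpret the float's bits */

        uint32_t sign     = bits >> 31;
        uint32_t exponent = (bits >> 23) & 0xFF;
        uint32_t fraction = bits & 0x7FFFFF;       /* the leading 1 is implicit    */

        printf("0x%08X  sign=%u exp=%u frac=0x%06X\n",
               bits, sign, exponent, fraction);    /* 0xBF400000  1 126 0x400000   */
        return 0;
    }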


Floating point addition

Floating point addition


Floating point complexities

Floating Point Complexities

  • Operations are somewhat more complicated (see text)

  • In addition to overflow we can have “underflow”

  • Accuracy can be a big problem

    • IEEE 754 keeps two extra bits, guard and round

    • four rounding modes

    • positive divided by zero yields “infinity”

    • zero divide by zero yields “not a number”

    • other complexities

  • Implementing the standard can be tricky

  • Not using the standard can be even worse

    • see text for description of 80x86 and Pentium bug!


Chapter three summary

Chapter Three Summary

  • Computer arithmetic is constrained by limited precision

  • Bit patterns have no inherent meaning but standards do exist

    • two’s complement

    • IEEE 754 floating point

  • Computer instructions determine “meaning” of the bit patterns

  • Performance and accuracy are important so there are many complexities in real machines

  • Algorithm choice is important and may lead to hardware optimizations for both space and time (e.g., multiplication)

  • You may want to look back (Section 3.10 is great reading!)


Chapter 4

Chapter 4


Performance

Performance

  • Measure, Report, and Summarize

  • Make intelligent choices

  • See through the marketing hype

  • Key to understanding underlying organizational motivation

    Why is some hardware better than others for different programs?
    What factors of system performance are hardware related? (e.g., do we need a new machine, or a new operating system?)
    How does the machine's instruction set affect performance?


Lectures for 3rd edition

Which of these airplanes has the best performance?

  • How much faster is the Concorde compared to the 747?

  • How much bigger is the 747 than the Douglas DC-8?

    Airplane             Passengers    Range (mi)    Speed (mph)

    Boeing 737-100       101           630           598
    Boeing 747           470           4150          610
    BAC/Sud Concorde     132           4000          1350
    Douglas DC-8-50      146           8720          544


Computer performance time time time

Computer Performance: TIME, TIME, TIME

  • Response Time (latency)— How long does it take for my job to run?— How long does it take to execute a job?— How long must I wait for the database query?

  • Throughput— How many jobs can the machine run at once?— What is the average execution rate?— How much work is getting done?

  • If we upgrade a machine with a new processor what do we increase?

  • If we add a new machine to the lab what do we increase?


Execution time

Execution Time

  • Elapsed Time

    • counts everything (disk and memory accesses, I/O , etc.)

    • a useful number, but often not good for comparison purposes

  • CPU time

    • doesn't count I/O or time spent running other programs

    • can be broken up into system time, and user time

  • Our focus: user CPU time

    • time spent executing the lines of code that are "in" our program


Book s definition of performance

Book's Definition of Performance

  • For some program running on machine X, PerformanceX = 1 / Execution timeX

  • "X is n times faster than Y"PerformanceX / PerformanceY = n

  • Problem:

    • machine A runs a program in 20 seconds

    • machine B runs the same program in 25 seconds


Clock cycles

time

Clock Cycles

  • Instead of reporting execution time in seconds, we often use cycles

  • Clock “ticks” indicate when to start activities (one abstraction):

  • cycle time = time between ticks = seconds per cycle

  • clock rate (frequency) = cycles per second (1 Hz = 1 cycle/sec)

    A 4 GHz clock has a cycle time of 1 / (4 × 10^9) seconds = 250 ps


How to improve performance

How to Improve Performance

So, to improve performance (everything else being equal) you can either (increase or decrease?)________ the # of required cycles for a program, or________ the clock cycle time or, said another way, ________ the clock rate.


How many cycles are required for a program

1st instruction

2nd instruction

3rd instruction

...

4th

5th

6th

How many cycles are required for a program?

  • Could assume that number of cycles equals number of instructions

time

This assumption is incorrect: different instructions take different amounts of time on different machines. Why?

Hint: remember that these are machine instructions, not lines of C code


Different numbers of cycles for different instructions

Different numbers of cycles for different instructions

  • Multiplication takes more time than addition

  • Floating point operations take longer than integer ones

  • Accessing memory takes more time than accessing registers

  • Important point: changing the cycle time often changes the number of cycles required for various instructions (more later)

time


Example

Example

  • "Our favorite program runs in 10 seconds on computer A, which has a 4 GHz clock. We are trying to help a computer designer build a new machine B, that will run this program in 6 seconds. The designer can use new (or perhaps more expensive) technology to substantially increase the clock rate, but has informed us that this increase will affect the rest of the CPU design, causing machine B to require 1.2 times as many clock cycles as machine A for the same program. What clock rate should we tell the designer to target?"

  • Don't Panic, can easily work this out from basic principles
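
One way to work it out from basic principles, sketched in C (all numbers come from the problem statement): machine A executes 10 s × 4 GHz = 40 billion cycles; B needs 1.2 times as many cycles but only 6 seconds.

    #include <stdio.h>

    int main(void) {
        double time_a = 10.0, clock_a = 4e9;   /* machine A: 10 s at 4 GHz */
        double cycles_a = time_a * clock_a;    /* 40e9 cycles              */
        double cycles_b = 1.2 * cycles_a;      /* B needs 1.2x the cycles  */
        double clock_b = cycles_b / 6.0;       /* required clock rate      */
        printf("machine B needs a %.0f GHz clock\n", clock_b / 1e9);  /* 8 GHz */
        return 0;
    }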


Now that we understand cycles

Now that we understand cycles

  • A given program will require

    • some number of instructions (machine instructions)

    • some number of cycles

    • some number of seconds

  • We have a vocabulary that relates these quantities:

    • cycle time (seconds per cycle)

    • clock rate (cycles per second)

    • CPI (cycles per instruction) a floating point intensive application might have a higher CPI

    • MIPS (millions of instructions per second); this would be higher for a program using simple instructions
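
A small C sketch of how these quantities relate (the input values here are made-up examples): execution time = instruction count × CPI × cycle time, and the MIPS rating follows from it.

    #include <stdio.h>

    int main(void) {
        double instructions = 1e9;      /* machine instructions executed  */
        double cpi          = 1.5;      /* average cycles per instruction */
        double clock_rate   = 4e9;      /* cycles per second              */

        double cycles  = instructions * cpi;
        double seconds = cycles / clock_rate;            /* IC x CPI x cycle time */
        double mips    = instructions / seconds / 1e6;   /* millions of instr/sec */

        printf("time = %.3f s, MIPS = %.0f\n", seconds, mips);
        return 0;
    }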


Performance1

Performance

  • Performance is determined by execution time

  • Do any of the other variables equal performance?

    • # of cycles to execute program?

    • # of instructions in program?

    • # of cycles per second?

    • average # of cycles per instruction?

    • average # of instructions per second?

  • Common pitfall: thinking one of the variables is indicative of performance when it really isn’t.


Cpi example

CPI Example

  • Suppose we have two implementations of the same instruction set architecture (ISA). For some program:

    Machine A has a clock cycle time of 250 ps and a CPI of 2.0
    Machine B has a clock cycle time of 500 ps and a CPI of 1.2

    Which machine is faster for this program, and by how much?

  • If two machines have the same ISA which of our quantities (e.g., clock rate, CPI, execution time, # of instructions, MIPS) will always be identical?
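
A sketch of the comparison in C (same ISA and same program, so the instruction count is identical and cancels out of the ratio):

    #include <stdio.h>

    int main(void) {
        double per_instr_a = 250e-12 * 2.0;   /* 500 ps per instruction on A */
        double per_instr_b = 500e-12 * 1.2;   /* 600 ps per instruction on B */
        printf("A is %.1fx faster than B\n", per_instr_b / per_instr_a);  /* 1.2x */
        return 0;
    }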


Of instructions example

# of Instructions Example

  • A compiler designer is trying to decide between two code sequences for a particular machine. Based on the hardware implementation, there are three different classes of instructions: Class A, Class B, and Class C, and they require one, two, and three cycles (respectively).

    The first code sequence has 5 instructions: 2 of A, 1 of B, and 2 of C.
    The second sequence has 6 instructions: 4 of A, 1 of B, and 1 of C.

    Which sequence will be faster? By how much? What is the CPI for each sequence?
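
A short C tally of the two sequences under the stated cycle counts (one, two, and three cycles for classes A, B, and C):

    #include <stdio.h>

    int main(void) {
        int cycles_per_class[3] = {1, 2, 3};           /* classes A, B, C   */
        int seq1[3] = {2, 1, 2}, seq2[3] = {4, 1, 1};  /* instruction mixes */
        int c1 = 0, c2 = 0, n1 = 0, n2 = 0;

        for (int i = 0; i < 3; i++) {
            c1 += seq1[i] * cycles_per_class[i];  n1 += seq1[i];
            c2 += seq2[i] * cycles_per_class[i];  n2 += seq2[i];
        }
        /* sequence 1: 10 cycles / 5 instructions (CPI 2.0);
           sequence 2:  9 cycles / 6 instructions (CPI 1.5)  */
        printf("seq1: %d cycles, CPI %.1f\n", c1, (double)c1 / n1);
        printf("seq2: %d cycles, CPI %.1f\n", c2, (double)c2 / n2);
        return 0;
    }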


Mips example

MIPS example

  • Two different compilers are being tested for a 4 GHz machine with three different classes of instructions: Class A, Class B, and Class C, which require one, two, and three cycles (respectively). Both compilers are used to produce code for a large piece of software.

    The first compiler's code uses 5 million Class A instructions, 1 million Class B instructions, and 1 million Class C instructions.
    The second compiler's code uses 10 million Class A instructions, 1 million Class B instructions, and 1 million Class C instructions.

  • Which sequence will be faster according to MIPS?

  • Which sequence will be faster according to execution time?


Benchmarks

Benchmarks

  • Performance best determined by running a real application

    • Use programs typical of expected workload

    • Or, typical of expected class of applications, e.g., compilers/editors, scientific applications, graphics, etc.

  • Small benchmarks

    • nice for architects and designers

    • easy to standardize

    • can be abused

  • SPEC (System Performance Evaluation Cooperative)

    • companies have agreed on a set of real programs and inputs

    • valuable indicator of performance (and compiler technology)

    • can still be abused


Benchmark games

Benchmark Games

  • An embarrassed Intel Corp. acknowledged Friday that a bug in a software program known as a compiler had led the company to overstate the speed of its microprocessor chips on an industry benchmark by 10 percent. However, industry analysts said the coding error… was a sad commentary on a common industry practice of “cheating” on standardized performance tests… The error was pointed out to Intel two days ago by a competitor, Motorola… came in a test known as SPECint92… Intel acknowledged that it had “optimized” its compiler to improve its test scores. The company also said that it did not like the practice but felt compelled to make the optimizations because its competitors were doing the same thing… At the heart of Intel’s problem is the practice of “tuning” compiler programs to recognize certain computing problems in the test and then substituting special handwritten pieces of code…

    Saturday, January 6, 1996, New York Times


Spec 89

SPEC ‘89

  • Compiler “enhancements” and performance


Spec cpu2000

SPEC CPU2000


Spec 2000

SPEC 2000

Does doubling the clock rate double the performance?

Can a machine with a slower clock rate have better performance?


Experiment

Experiment

  • Phone a major computer retailer and tell them you are having trouble deciding between two different computers; specifically, you are confused about the processors’ strengths and weaknesses (e.g., Pentium 4 at 2 GHz vs. Celeron M at 1.4 GHz)

  • What kind of response are you likely to get?

  • What kind of response could you give a friend with the same question?


Amdahl s law

Amdahl's Law

Execution Time After Improvement = Execution Time Unaffected +( Execution Time Affected / Amount of Improvement )

  • Example: "Suppose a program runs in 100 seconds on a machine, with multiply responsible for 80 seconds of this time. How much do we have to improve the speed of multiplication if we want the program to run 4 times faster?" How about making it 5 times faster?

  • Principle: Make the common case fast
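
A minimal C sketch of the formula applied to the multiply example above: 20 s of the run is unaffected, 80 s is affected, and a 16x faster multiply gives the required 25 s (4x overall); a 5x overall speedup (20 s) is impossible because the unaffected 20 s alone already takes that long.

    #include <stdio.h>

    /* time after improvement = unaffected + affected / improvement factor */
    static double amdahl(double unaffected, double affected, double improvement) {
        return unaffected + affected / improvement;
    }

    int main(void) {
        printf("%.1f s\n", amdahl(20.0, 80.0, 16.0));   /* 25.0 s => 4x overall */
        return 0;
    }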


Example1

Example

  • Suppose we enhance a machine making all floating-point instructions run five times faster. If the execution time of some benchmark before the floating-point enhancement is 10 seconds, what will the speedup be if half of the 10 seconds is spent executing floating-point instructions?

  • We are looking for a benchmark to show off the new floating-point unit described above, and want the overall benchmark to show a speedup of 3. One benchmark we are considering runs for 100 seconds with the old floating-point hardware. How much of the execution time would floating-point instructions have to account for in this program in order to yield our desired speedup on this benchmark?


Remember

Remember

  • Performance is specific to a particular program/s

    • Total execution time is a consistent summary of performance

  • For a given architecture performance increases come from:

    • increases in clock rate (without adverse CPI effects)

    • improvements in processor organization that lower CPI

    • compiler enhancements that lower CPI and/or instruction count

    • Algorithm/Language choices that affect instruction count

  • Pitfall: expecting improvement in one aspect of a machine’s performance to affect the total performance


Lets build a processor

[Figure: a 32-bit ALU with operand inputs a and b (32 bits each), an operation control input, and a 32-bit result]

Let’s Build a Processor

  • Almost ready to move into chapter 5 and start building a processor

  • First, let’s review Boolean Logic and build the ALU we’ll need (material from Appendix B)


Review boolean algebra gates

Review: Boolean Algebra & Gates

  • Problem: Consider a logic function with three inputs: A, B, and C.

    Output D is true if at least one input is true
    Output E is true if exactly two inputs are true
    Output F is true only if all three inputs are true

  • Show the truth table for these three functions.

  • Show the Boolean equations for these three functions.

  • Show an implementation consisting of inverters, AND, and OR gates.
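
A brute-force C sketch that prints the truth table asked for above (D: at least one input true, E: exactly two true, F: all three true):

    #include <stdio.h>

    int main(void) {
        printf("A B C | D E F\n");
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                for (int c = 0; c <= 1; c++) {
                    int d = a | b | c;            /* at least one input true */
                    int e = (a + b + c) == 2;     /* exactly two inputs true */
                    int f = a & b & c;            /* all three inputs true   */
                    printf("%d %d %d | %d %d %d\n", a, b, c, d, e, f);
                }
        return 0;
    }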


An alu arithmetic logic unit

[Figure: a 1-bit ALU block with inputs a and b, an operation select, and output result]

An ALU (arithmetic logic unit)

  • Let's build an ALU to support the andi and ori instructions

    • we'll just build a 1 bit ALU, and use 32 of them

  • Possible Implementation (sum-of-products):



Review the multiplexor

[Figure: a 2-input multiplexor with data inputs A and B, select input S, and output C]

Review: The Multiplexor

  • Selects one of the inputs to be the output, based on a control input

  • Lets build our ALU using a MUX:

note: we call this a 2-input mux even though it has 3 inputs!


Different implementations

Different Implementations

  • Not easy to decide the “best” way to build something

    • Don't want too many inputs to a single gate

    • Don’t want to have to go through too many gates

    • for our purposes, ease of comprehension is important

  • Let's look at a 1-bit ALU for addition:

  • How could we build a 1-bit ALU for add, and, and or?

  • How could we build a 32-bit ALU?

cout = a b + a cin + b cin

sum = a xor b xor cin
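
A C sketch of the two equations above, chained into a 32-bit ripple-carry adder; this is a software stand-in for the hardware, meant only to make the carry chain explicit.

    #include <stdint.h>
    #include <stdio.h>

    /* sum = a XOR b XOR cin;  cout = a·b + a·cin + b·cin */
    static void full_adder(int a, int b, int cin, int *sum, int *cout) {
        *sum  = a ^ b ^ cin;
        *cout = (a & b) | (a & cin) | (b & cin);
    }

    /* Ripple carry: 32 one-bit adders, each carry-out feeding the next carry-in. */
    static uint32_t ripple_add(uint32_t a, uint32_t b) {
        uint32_t result = 0;
        int carry = 0;
        for (int i = 0; i < 32; i++) {
            int s, c;
            full_adder((a >> i) & 1, (b >> i) & 1, carry, &s, &c);
            result |= (uint32_t)s << i;
            carry = c;
        }
        return result;
    }

    int main(void) {
        printf("%u\n", (unsigned)ripple_add(7, 6));   /* 13 */
        return 0;
    }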


Building a 32 bit alu

Building a 32 bit ALU


What about subtraction a b

What about subtraction (a – b) ?

  • Two's complement approach: just negate b and add.

  • How do we negate?

  • A very clever solution:


Adding a nor function

Adding a NOR function

  • Can also choose to invert a. How do we get “a NOR b” ?


Tailoring the alu to the mips

Tailoring the ALU to the MIPS

  • Need to support the set-on-less-than instruction (slt)

    • remember: slt is an arithmetic instruction

    • produces a 1 if rs < rt and 0 otherwise

    • use subtraction: (a-b) < 0 implies a < b

  • Need to support test for equality (beq $t5, $t6, $t7)

    • use subtraction: (a-b) = 0 implies a = b


Supporting slt

Supporting slt

  • Can we figure out the idea?

all other bits

Use this ALU for most significant bit


Supporting slt1

Supporting slt


Test for equality

Test for equality

  • Notice control lines:

    0000 = and
    0001 = or
    0010 = add
    0110 = subtract
    0111 = slt
    1100 = NOR

  • Note: zero is a 1 when the result is zero!


Conclusion

Conclusion

  • We can build an ALU to support the MIPS instruction set

    • key idea: use multiplexor to select the output we want

    • we can efficiently perform subtraction using two’s complement

    • we can replicate a 1-bit ALU to produce a 32-bit ALU

  • Important points about hardware

    • all of the gates are always working

    • the speed of a gate is affected by the number of inputs to the gate

    • the speed of a circuit is affected by the number of gates in series (on the “critical path” or the “deepest level of logic”)

  • Our primary focus: comprehension, however,

    • Clever changes to organization can improve performance (similar to using better algorithms in software)

    • We saw this in multiplication, let’s look at addition now


Problem ripple carry adder is slow

Problem: ripple carry adder is slow

  • Is a 32-bit ALU as fast as a 1-bit ALU?

  • Is there more than one way to do addition?

    • two extremes: ripple carry and sum-of-products

      Can you see the ripple? How could you get rid of it?

      c1 = b0c0 + a0c0 + a0b0

      c2 = b1c1 + a1c1 + a1b1      c2 = ?

      c3 = b2c2 + a2c2 + a2b2      c3 = ?

      c4 = b3c3 + a3c3 + a3b3      c4 = ?

      Not feasible! Why?


Carry lookahead adder

Carry-lookahead adder

  • An approach in-between our two extremes

  • Motivation:

    • If we didn't know the value of carry-in, what could we do?

    • When would we always generate a carry? gi = ai bi

    • When would we propagate the carry? pi = ai + bi

  • Did we get rid of the ripple?

    c1 = g0 + p0c0

    c2 = g1 + p1c1      c2 = ?

    c3 = g2 + p2c2      c3 = ?

    c4 = g3 + p3c3      c4 = ?

    Feasible! Why?
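
A C sketch of the lookahead idea with one possible expansion of the recurrence c_{i+1} = g_i + p_i·c_i (the operand bits are arbitrary examples): every carry is a two-level function of the g's, p's and c0, so nothing ripples.

    #include <stdio.h>

    int main(void) {
        int a[4] = {1, 0, 1, 1}, b[4] = {1, 1, 0, 1};   /* example operand bits */
        int g[4], p[4], c0 = 0;
        for (int i = 0; i < 4; i++) {
            g[i] = a[i] & b[i];     /* generate:  both bits are 1       */
            p[i] = a[i] | b[i];     /* propagate: at least one bit is 1 */
        }

        int c1 = g[0] | (p[0] & c0);
        int c2 = g[1] | (p[1] & g[0]) | (p[1] & p[0] & c0);
        int c3 = g[2] | (p[2] & g[1]) | (p[2] & p[1] & g[0])
               | (p[2] & p[1] & p[0] & c0);
        int c4 = g[3] | (p[3] & g[2]) | (p[3] & p[2] & g[1])
               | (p[3] & p[2] & p[1] & g[0]) | (p[3] & p[2] & p[1] & p[0] & c0);

        printf("c1=%d c2=%d c3=%d c4=%d\n", c1, c2, c3, c4);
        return 0;
    }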


Use principle to build bigger adders

Use principle to build bigger adders

  • Can’t build a 16 bit adder this way... (too big)

  • Could use ripple carry of 4-bit CLA adders

  • Better: use the CLA principle again!


Alu summary

ALU Summary

  • We can build an ALU to support MIPS addition

  • Our focus is on comprehension, not performance

  • Real processors use more sophisticated techniques for arithmetic

  • Where performance is not critical, hardware description languages allow designers to completely automate the creation of hardware!


Chapter five

Chapter Five


The processor datapath control

The Processor: Datapath & Control

  • We're ready to look at an implementation of the MIPS

  • Simplified to contain only:

    • memory-reference instructions: lw, sw

    • arithmetic-logical instructions: add, sub, and, or, slt

    • control flow instructions: beq, j

  • Generic Implementation:

    • use the program counter (PC) to supply instruction address

    • get the instruction from memory

    • read registers

    • use the instruction to decide exactly what to do

  • All instructions use the ALU after reading the registers. Why? memory-reference? arithmetic? control flow?


More implementation details

More Implementation Details

  • Abstract / Simplified View: two types of functional units:

    • elements that operate on data values (combinational)

    • elements that contain state (sequential)


State elements

State Elements

  • Unclocked vs. Clocked

  • Clocks used in synchronous logic

    • when should an element that contains state be updated?

cycle time


An unclocked state element

An unclocked state element

  • The set-reset latch

    • output depends on present inputs and also on past inputs


Latches and flip flops

Latches and Flip-flops

  • Output is equal to the stored value inside the element (don't need to ask for permission to look at the value)

  • Change of state (value) is based on the clock

  • Latches: whenever the inputs change, and the clock is asserted

  • Flip-flop: state changes only on a clock edge (edge-triggered methodology)

"logically true",

— could mean electrically low

A clocking methodology defines when signals can be read and written

— wouldn't want to read a signal at the same time it was being written


D latch

D-latch

  • Two inputs:

    • the data value to be stored (D)

    • the clock signal (C) indicating when to read & store D

  • Two outputs:

    • the value of the internal state (Q) and its complement


D flip flop

D flip-flop

  • Output changes only on the clock edge


Our implementation

Our Implementation

  • An edge triggered methodology

  • Typical execution:

    • read contents of some state elements,

    • send values through some combinational logic

    • write results to one or more state elements


Register file

Register File

  • Built using D flip-flops

Do you understand? What is the “Mux” above?


Abstraction1

Abstraction

  • Make sure you understand the abstractions!

  • Sometimes it is easy to think you do, when you don’t


Register file1

Register File

  • Note: we still use the real clock to determine when to write


Simple implementation

Simple Implementation

  • Include the functional units we need for each instruction

Why do we need this stuff?


Building the datapath

Building the Datapath

  • Use multiplexors to stitch them together


Control2

Control

  • Selecting the operations to perform (ALU, read/write, etc.)

  • Controlling the flow of data (multiplexor inputs)

  • Information comes from the 32 bits of the instruction

  • Example:  add $8, $17, $18

    Instruction Format:

    000000  10001  10010  01000  00000  100000
    op      rs     rt     rd     shamt  funct

  • ALU's operation based on instruction type and function code


Control3

Control

  • e.g., what should the ALU do with this instruction

  • Example:  lw $1, 100($2)

    35     2      1      100
    op     rs     rt     16-bit offset

  • ALU control input:

    0000  AND
    0001  OR
    0010  add
    0110  subtract
    0111  set-on-less-than
    1100  NOR

  • Why is the code for subtract 0110 and not 0011?


Control4

ALUOp (computed from instruction type)

Control

  • Must describe hardware to compute 4-bit ALU control input

    • given instruction type:  00 = lw/sw,  01 = beq,  10 = arithmetic

    • function code for arithmetic

  • Describe it using a truth table (can turn into gates):


Control5

Control

  • Simple combinational logic (truth tables)


Our simple control structure

Our Simple Control Structure

  • All of the logic is combinational

  • We wait for everything to settle down, and the right thing to be done

    • ALU might not produce “right answer” right away

    • we use write signals along with clock to determine when to write

  • Cycle time determined by length of the longest path

We are ignoring some details like setup and hold times


Single cycle implementation

Single Cycle Implementation

  • Calculate cycle time assuming negligible delays except:

    • memory (200ps), ALU and adders (100ps), register file access (50ps)
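
A back-of-the-envelope check under the stated delays, assuming (as is usual for this exercise) that the longest path is a load: instruction memory, register read, ALU, data memory, then register write.

    #include <stdio.h>

    int main(void) {
        int imem = 200, regread = 50, alu = 100, dmem = 200, regwrite = 50; /* ps */
        printf("lw critical path = %d ps\n",
               imem + regread + alu + dmem + regwrite);   /* 600 ps */
        return 0;
    }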


Where we are headed

Where we are headed

  • Single Cycle Problems:

    • what if we had a more complicated instruction like floating point?

    • wasteful of area

  • One Solution:

    • use a “smaller” cycle time

    • have different instructions take different numbers of cycles

    • a “multicycle” datapath:


Multicycle approach

Multicycle Approach

  • We will be reusing functional units

    • ALU used to compute address and to increment PC

    • Memory used for instruction and data

  • Our control signals will not be determined directly by instruction

    • e.g., what should the ALU do for a “subtract” instruction?

  • We’ll use a finite state machine for control


Multicycle approach1

Multicycle Approach

  • Break up the instructions into steps, each step takes a cycle

    • balance the amount of work to be done

    • restrict each cycle to use only one major functional unit

  • At the end of a cycle

    • store values for use in later cycles (easiest thing to do)

    • introduce additional “internal” registers


Instructions from isa perspective

Instructions from ISA perspective

  • Consider each instruction from perspective of ISA.

  • Example:

    • The add instruction changes a register.

    • Register specified by bits 15:11 of instruction.

    • Instruction specified by the PC.

    • New value is the sum (“op”) of two registers.

    • Registers specified by bits 25:21 and 20:16 of the instruction:

      Reg[Memory[PC][15:11]] <= Reg[Memory[PC][25:21]] op Reg[Memory[PC][20:16]]

    • In order to accomplish this we must break up the instruction (kind of like introducing variables when programming).


Breaking down an instruction

Breaking down an instruction

  • ISA definition of arithmetic:

    Reg[Memory[PC][15:11]] <= Reg[Memory[PC][25:21]] op Reg[Memory[PC][20:16]]

  • Could break down to:

    • IR <= Memory[PC]

    • A <= Reg[IR[25:21]]

    • B <= Reg[IR[20:16]]

    • ALUOut <= A op B

    • Reg[IR[15:11]] <= ALUOut

  • We forgot an important part of the definition of arithmetic!

    • PC <= PC + 4
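
A toy C sketch of those register transfers for an add (the tiny memory and register-file arrays are stand-ins for illustration, not the book's datapath); the internal names IR, A, B, ALUOut mirror the registers introduced above.

    #include <stdint.h>
    #include <stdio.h>

    static uint32_t memory[16];    /* instruction memory, indexed by word */
    static int32_t  reg[32];       /* register file                       */

    int main(void) {
        uint32_t pc = 0;
        reg[17] = 40;  reg[18] = 2;                                  /* $s1, $s2        */
        memory[0] = (17u << 21) | (18u << 16) | (9u << 11) | 0x20;   /* add $t1,$s1,$s2 */

        uint32_t IR = memory[pc >> 2];              /* IR <= Memory[PC]         */
        pc += 4;                                    /* PC <= PC + 4             */
        int32_t A = reg[(IR >> 21) & 0x1F];         /* A <= Reg[IR[25:21]]      */
        int32_t B = reg[(IR >> 16) & 0x1F];         /* B <= Reg[IR[20:16]]      */
        int32_t ALUOut = A + B;                     /* ALUOut <= A op B         */
        reg[(IR >> 11) & 0x1F] = ALUOut;            /* Reg[IR[15:11]] <= ALUOut */

        printf("$t1 = %d, PC = %u\n", reg[9], (unsigned)pc);   /* 42, 4 */
        return 0;
    }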


Idea behind multicycle approach

Idea behind multicycle approach

  • We define each instruction from the ISA perspective (do this!)

  • Break it down into steps following our rule that data flows through at most one major functional unit (e.g., balance work across steps)

  • Introduce new registers as needed (e.g, A, B, ALUOut, MDR, etc.)

  • Finally try and pack as much work into each step (avoid unnecessary cycles)while also trying to share steps where possible(minimizes control, helps to simplify solution)

  • Result: Our book’s multicycle Implementation!


Five execution steps

Five Execution Steps

  • Instruction Fetch

  • Instruction Decode and Register Fetch

  • Execution, Memory Address Computation, or Branch Completion

  • Memory Access or R-type instruction completion

  • Write-back step

    INSTRUCTIONS TAKE FROM 3 - 5 CYCLES!


Step 1 instruction fetch

Step 1: Instruction Fetch

  • Use PC to get instruction and put it in the Instruction Register.

  • Increment the PC by 4 and put the result back in the PC.

  • Can be described succinctly using RTL ("Register-Transfer Language"):

    IR <= Memory[PC];
    PC <= PC + 4;

    Can we figure out the values of the control signals?
    What is the advantage of updating the PC now?


Step 2 instruction decode and register fetch

Step 2: Instruction Decode and Register Fetch

  • Read registers rs and rt in case we need them

  • Compute the branch address in case the instruction is a branch

  • RTL:

    A <= Reg[IR[25:21]];
    B <= Reg[IR[20:16]];
    ALUOut <= PC + (sign-extend(IR[15:0]) << 2);

  • We aren't setting any control lines based on the instruction type (we are busy "decoding" it in our control logic)


Step 3 instruction dependent

Step 3 (instruction dependent)

  • ALU is performing one of three functions, based on instruction type

  • Memory Reference:  ALUOut <= A + sign-extend(IR[15:0]);

  • R-type:  ALUOut <= A op B;

  • Branch:  if (A==B) PC <= ALUOut;


Step 4 r type or memory access

Step 4 (R-type or memory-access)

  • Loads and stores access memory:

    MDR <= Memory[ALUOut];
    or
    Memory[ALUOut] <= B;

  • R-type instructions finish:

    Reg[IR[15:11]] <= ALUOut;

    The write actually takes place at the end of the cycle on the edge


Write back step

Write-back step

  • Reg[IR[20:16]] <= MDR;

    Which instruction needs this?


Summary1

Summary:


Simple questions

Simple Questions

  • How many cycles will it take to execute this code?

        lw  $t2, 0($t3)
        lw  $t3, 4($t3)
        beq $t2, $t3, Label    # assume branch is not taken
        add $t5, $t2, $t3
        sw  $t5, 8($t3)
    Label:
        ...

  • What is going on during the 8th cycle of execution?

  • In what cycle does the actual addition of $t2 and $t3 take place?


Review finite state machines

Review: finite state machines

  • Finite state machines:

    • a set of states and

    • next state function (determined by current state and the input)

    • output function (determined by current state and possibly input)

    • We’ll use a Moore machine (output based only on current state)


Review finite state machines1

Review: finite state machines

  • Example: B. 37 A friend would like you to build an “electronic eye” for use as a fake security device. The device consists of three lights lined up in a row, controlled by the outputs Left, Middle, and Right, which, if asserted, indicate that a light should be on. Only one light is on at a time, and the light “moves” from left to right and then from right to left, thus scaring away thieves who believe that the device is monitoring their activity. Draw the graphical representation for the finite state machine used to specify the electronic eye. Note that the rate of the eye’s movement will be controlled by the clock speed (which should not be too great) and that there are essentially no inputs.


Implementing the control

Implementing the Control

  • Value of control signals is dependent upon:

    • what instruction is being executed

    • which step is being performed

  • Use the information we’ve accumulated to specify a finite state machine

    • specify the finite state machine graphically, or

    • use microprogramming

  • Implementation can be derived from specification


Graphical specification of fsm

Graphical Specification of FSM

  • Note:

    • don’t care if not mentioned

    • asserted if name only

    • otherwise exact value

  • How many state bits will we need?


Finite state machine for control

Finite State Machine for Control

  • Implementation:


Pla implementation

PLA Implementation

  • If I picked a horizontal or vertical line could you explain it?


Rom implementation

m

n

ROM Implementation

  • ROM = "Read Only Memory"

    • values of memory locations are fixed ahead of time

  • A ROM can be used to implement a truth table

    • if the address is m bits, we can address 2^m entries in the ROM

    • our outputs are the bits of data that the address points to

      m is the "height", and n is the "width"

0 0 0 0 0 1 1

0 0 1 1 1 0 0

0 1 0 1 1 0 0

0 1 1 1 0 0 0

1 0 0 0 0 0 0

1 0 1 0 0 0 1

1 1 0 0 1 1 0

1 1 1 0 1 1 1


Rom implementation1

ROM Implementation

  • How many inputs are there? 6 bits for opcode + 4 bits for state = 10 address lines (i.e., 2^10 = 1024 different addresses)

  • How many outputs are there? 16 datapath-control outputs + 4 state bits = 20 outputs

  • ROM is 2^10 x 20 = 20K bits (and a rather unusual size)

  • Rather wasteful, since for lots of the entries, the outputs are the same— i.e., opcode is often ignored


Rom vs pla

ROM vs PLA

  • Break up the table into two parts:
    — 4 state bits tell you the 16 outputs: 2^4 x 16 bits of ROM
    — 10 bits tell you the 4 next-state bits: 2^10 x 4 bits of ROM
    — Total: 4.3K bits of ROM

  • PLA is much smaller:
    — can share product terms
    — only need entries that produce an active output
    — can take into account don't cares

  • Size is (#inputs × #product-terms) + (#outputs × #product-terms). For this example: (10 x 17) + (20 x 17) = 510 PLA cells

  • PLA cells usually about the size of a ROM cell (slightly bigger)


Another implementation style

Another Implementation Style

  • Complex instructions: the "next state" is often current state + 1


Details

Details


Microprogramming

Microprogramming

  • What are the “microinstructions” ?


Microprogramming1

Microprogramming

  • A specification methodology

    • appropriate if hundreds of opcodes, modes, cycles, etc.

    • signals specified symbolically using microinstructions

  • Will two implementations of the same architecture have the same microcode?

  • What would a microassembler do?


Microinstruction format

Microinstruction format


Maximally vs minimally encoded

Maximally vs. Minimally Encoded

  • No encoding:

    • 1 bit for each datapath operation

    • faster, requires more memory (logic)

    • used for Vax 780 — an astonishing 400K of memory!

  • Lots of encoding:

    • send the microinstructions through logic to get control signals

    • uses less memory, slower

  • Historical context of CISC:

    • Too much logic to put on a single chip with everything else

    • Use a ROM (or even RAM) to hold the microcode

    • It’s easy to add new instructions


Microcode trade offs

Microcode: Trade-offs

  • Distinction between specification and implementation is sometimes blurred

  • Specification Advantages:

    • Easy to design and write

    • Design architecture and microcode in parallel

  • Implementation (off-chip ROM) Advantages

    • Easy to change since values are in memory

    • Can emulate other architectures

    • Can make use of internal registers

  • Implementation Disadvantages, SLOWER now that:

    • Control is implemented on same chip as processor

    • ROM is no longer faster than RAM

    • No need to go back and make changes


Historical perspective1

Historical Perspective

  • In the ‘60s and ‘70s microprogramming was very important for implementing machines

  • This led to more sophisticated ISAs and the VAX

  • In the ‘80s RISC processors based on pipelining became popular

  • Pipelining the microinstructions is also possible!

  • Implementations of IA-32 architecture processors since 486 use:

    • “hardwired control” for simpler instructions (few cycles, FSM control implemented using PLA or random logic)

    • “microcoded control” for more complex instructions(large numbers of cycles, central control store)

  • The IA-64 architecture uses a RISC-style ISA and can be implemented without a large central control store


Pentium 4

Pentium 4

  • Pipelining is important (last IA-32 without it was 80386 in 1985)

  • Pipelining is used for the simple instructions favored by compilers“Simply put, a high performance implementation needs to ensure that the simple instructions execute quickly, and that the burden of the complexities of the instruction set penalize the complex, less frequently used, instructions”



Pentium 41

Pentium 4

  • Somewhere in all that “control” we must handle complex instructions

  • Processor executes simple microinstructions, 70 bits wide (hardwired)

  • 120 control lines for integer datapath (400 for floating point)

  • If an instruction requires more than 4 microinstructions to implement, control from microcode ROM (8000 microinstructions)

  • It’s complicated!


Chapter 5 summary

Chapter 5 Summary

  • If we understand the instructions…We can build a simple processor!

  • If instructions take different amounts of time, multi-cycle is better

  • Datapath implemented using:

    • Combinational logic for arithmetic

    • State holding elements to remember bits

  • Control implemented using:

    • Combinational logic for single-cycle implementation

    • Finite state machine for multi-cycle implementation


Chapter six

Chapter Six


Pipelining

Pipelining

  • Improve performance by increasing instruction throughput

    Ideal speedup is number of stages in the pipeline. Do we achieve this?

Note: timing assumptions changed for this example


Pipelining1

Pipelining

  • What makes it easy

    • all instructions are the same length

    • just a few instruction formats

    • memory operands appear only in loads and stores

  • What makes it hard?

    • structural hazards: suppose we had only one memory

    • control hazards: need to worry about branch instructions

    • data hazards: an instruction depends on a previous instruction

  • We’ll build a simple pipeline and look at these issues

  • We’ll talk about modern processors and what really makes it hard:

    • exception handling

    • trying to improve performance with out-of-order execution, etc.


Basic idea

Basic Idea

  • What do we need to add to actually split the datapath into stages?


Pipelined datapath

Pipelined Datapath

Can you find a problem even if there are no dependencies? What instructions can we execute to manifest the problem?


Corrected datapath

Corrected Datapath


Graphically representing pipelines

Graphically Representing Pipelines

  • Can help with answering questions like:

    • how many cycles does it take to execute this code?

    • what is the ALU doing during cycle 4?

    • use this representation to help understand datapaths


Pipeline control

Pipeline Control


Pipeline control1

Pipeline control

  • We have 5 stages. What needs to be controlled in each stage?

    • Instruction Fetch and PC Increment

    • Instruction Decode / Register Fetch

    • Execution

    • Memory Stage

    • Write Back

  • How would control be handled in an automobile plant?

    • a fancy control center telling everyone what to do?

    • should we use a finite state machine?


Pipeline control2

Pipeline Control

  • Pass control signals along just like the data


Datapath with control

Datapath with Control



Dependencies

  • Problem with starting next instruction before first is finished

    • dependencies that “go backward in time” are data hazards



Software Solution

  • Have compiler guarantee no hazards

  • Where do we insert the “nops”?
    sub $2, $1, $3
    and $12, $2, $5
    or  $13, $6, $2
    add $14, $2, $2
    sw  $15, 100($2)

  • Problem: this really slows us down!



(Figure annotation: what if this $2 were $13?)

Forwarding

  • Use temporary results, don’t wait for them to be written

    • register file forwarding to handle read/write to same register

    • ALU forwarding
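Roughly the conditions the forwarding unit checks for one ALU input, written as a C sketch (register numbers and write-enable bits are assumed to come from the EX/MEM and MEM/WB pipeline registers; the symmetric check for the other ALU input is omitted):

    /* Returns which value the ALU should use for its first operand:
       0 = register file, 1 = forward from MEM/WB, 2 = forward from EX/MEM. */
    int forward_a(int idex_rs,
                  int exmem_regwrite, int exmem_rd,
                  int memwb_regwrite, int memwb_rd)
    {
        if (exmem_regwrite && exmem_rd != 0 && exmem_rd == idex_rs)
            return 2;   /* EX hazard: use the newest result  */
        if (memwb_regwrite && memwb_rd != 0 && memwb_rd == idex_rs)
            return 1;   /* MEM hazard: use the older result  */
        return 0;       /* no hazard: the register file value is fine */
    }

Checking the EX/MEM register first matters: if both older instructions write the same register, the most recent value must win.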



Forwarding

  • The main idea (some details not shown)



Can't always forward

  • Load word can still cause a hazard:

    • an instruction tries to read a register following a load instruction that writes to the same register.

  • Thus, we need a hazard detection unit to “stall” the load instruction
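The condition the hazard detection unit tests can be sketched in C (names are illustrative; it compares the instruction in EX with the one being decoded in ID):

    /* Stall if the instruction in EX is a load whose destination register
       is a source register of the instruction currently in ID. */
    int must_stall(int idex_memread, int idex_rt, int ifid_rs, int ifid_rt)
    {
        return idex_memread && (idex_rt == ifid_rs || idex_rt == ifid_rt);
    }

    /* On a stall: hold PC and IF/ID unchanged, and zero the control signals
       entering ID/EX so a bubble (a nop) flows down the pipeline for one cycle. */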



Stalling

  • We can stall the pipeline by keeping an instruction in the same stage



Hazard Detection Unit

  • Stall by letting an instruction that won’t write anything go forward



Branch Hazards

  • When we decide to branch, other instructions are in the pipeline!

  • We are predicting “branch not taken”

    • need to add hardware for flushing instructions if we are wrong



Flushing Instructions

Note: we’ve also moved the branch decision to the ID stage



Branches

  • If the branch is taken, we have a penalty of one cycle

  • For our simple design, this is reasonable

  • With deeper pipelines, penalty increases and static branch prediction drastically hurts performance

  • Solution: dynamic branch prediction

A 2-bit prediction scheme
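One entry of a 2-bit prediction scheme is just a saturating counter; a small C sketch (the prediction table and indexing by branch address are omitted):

    /* States 0,1 predict "not taken"; states 2,3 predict "taken".
       Two wrong guesses in a row are needed to flip the prediction. */
    typedef struct { unsigned state; /* 0..3 */ } TwoBitPredictor;

    int predict_taken(const TwoBitPredictor *p) { return p->state >= 2; }

    void train(TwoBitPredictor *p, int was_taken)
    {
        if (was_taken) { if (p->state < 3) p->state++; }
        else           { if (p->state > 0) p->state--; }
    }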



Branch Prediction

  • Sophisticated Techniques:

    • A “branch target buffer” to help us look up the destination

    • Correlating predictors that base prediction on global behaviorand recently executed branches (e.g., prediction for a specificbranch instruction based on what happened in previous branches)

    • Tournament predictors that use different types of prediction strategies and keep track of which one is performing best.

    • A “branch delay slot” which the compiler tries to fill with a useful instruction (make the one cycle delay part of the ISA)

  • Branch prediction is especially important because it enables other more advanced pipelining techniques to be effective!

  • Modern processors predict correctly 95% of the time!



Improving Performance

  • Try and avoid stalls! E.g., reorder these instructions:

    lw $t0, 0($t1)

    lw $t2, 4($t1)

    sw $t2, 0($t1)

    sw $t0, 4($t1)

  • Dynamic Pipeline Scheduling

    • Hardware chooses which instructions to execute next

    • Will execute instructions out of order (e.g., doesn’t wait for a dependency to be resolved, but rather keeps going!)

    • Speculates on branches and keeps the pipeline full (may need to rollback if prediction incorrect)

  • Trying to exploit instruction-level parallelism



Advanced Pipelining

  • Increase the depth of the pipeline

  • Start more than one instruction each cycle (multiple issue)

  • Loop unrolling to expose more ILP (better scheduling)

  • “Superscalar” processors

    • DEC Alpha 21264: 9 stage pipeline, 6 instruction issue

  • All modern processors are superscalar and issue multiple instructions usually with some limitations (e.g., different “pipes”)

  • VLIW: very long instruction word, static multiple issue (relies more on compiler technology)

  • This class has given you the background you need to learn more!



Chapter 6 Summary

  • Pipelining does not improve latency, but does improve throughput



Chapter Seven



Memories: Review

  • SRAM:

    • value is stored on a pair of inverting gates

    • very fast but takes up more space than DRAM (4 to 6 transistors)

  • DRAM:

    • value is stored as a charge on a capacitor (must be refreshed)

    • very small but slower than SRAM (factor of 5 to 10)



Exploiting Memory Hierarchy

  • Users want large and fast memories!
    SRAM access times are 0.5 to 5 ns, at a cost of $4,000 to $10,000 per GB.
    DRAM access times are 50 to 70 ns, at a cost of $100 to $200 per GB.
    Disk access times are 5 to 20 million ns, at a cost of $0.50 to $2 per GB.

  • Try and give it to them anyway

    • build a memory hierarchy

(figures as of 2004)



Locality

  • A principle that makes having a memory hierarchy a good idea

  • If an item is referenced:
    temporal locality: it will tend to be referenced again soon

    spatial locality: nearby items will tend to be referenced soon.

    Why does code have locality?

  • Our initial focus: two levels (upper, lower)

    • block: minimum unit of data

    • hit: data requested is in the upper level

    • miss: data requested is not in the upper level
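A small C example (mine, not the book’s) of why ordinary code has both kinds of locality:

    /* sum and i are reused on every iteration: temporal locality.
       a[0], a[1], a[2], ... are touched in address order: spatial locality,
       so every word of a fetched cache block gets used. */
    long sum_array(const int *a, int n)
    {
        long sum = 0;
        for (int i = 0; i < n; i++)
            sum += a[i];
        return sum;
    }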



Cache

  • Two issues:

    • How do we know if a data item is in the cache?

    • If it is, how do we find it?

  • Our first example:

    • block size is one word of data

    • "direct mapped"

For each item of data at the lower level,

there is exactly one location in the cache where it might be.

e.g., lots of items at the lower level share locations in the upper level



Direct Mapped Cache

  • Mapping: address is modulo the number of blocks in the cache
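For one-word blocks the mapping is easy to write down in C (the cache size here is made up; the two dropped low bits are the byte offset within a word):

    #define NUM_BLOCKS 1024                 /* must be a power of two */

    unsigned cache_index(unsigned addr)     /* which cache block to check        */
    {
        return (addr >> 2) % NUM_BLOCKS;    /* block address modulo # of blocks  */
    }

    unsigned cache_tag(unsigned addr)       /* compared against the stored tag   */
    {
        return addr >> (2 + 10);            /* 10 index bits, since 2^10 = 1024  */
    }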



Direct Mapped Cache

  • For MIPS:

    What kind of locality are we taking advantage of?



Direct Mapped Cache

  • Taking advantage of spatial locality:



Hits vs. Misses

  • Read hits

    • this is what we want!

  • Read misses

    • stall the CPU, fetch block from memory, deliver to cache, restart

  • Write hits:

    • can replace data in cache and memory (write-through)

    • write the data only into the cache (write-back the cache later)

  • Write misses:

    • read the entire block into the cache, then write the word



Hardware Issues

  • Make reading multiple words easier by using banks of memory

  • It can get a lot more complicated...



Performance

  • Increasing the block size tends to decrease miss rate:

  • Use split caches because there is more spatial locality in code:



Performance

  • Simplified model (see the worked example below):
    execution time = (execution cycles + stall cycles) × cycle time
    stall cycles = # of instructions × miss ratio × miss penalty

  • Two ways of improving performance:

    • decreasing the miss ratio

    • decreasing the miss penalty

      What happens if we increase block size?
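Plugging made-up numbers into the simplified model above: for 1,000,000 instructions with a base CPI of 1, a per-instruction miss ratio of 2%, and a miss penalty of 40 cycles,

    stall cycles = 1,000,000 × 0.02 × 40 = 800,000
    total cycles = 1,000,000 + 800,000 = 1,800,000

so the effective CPI grows from 1.0 to 1.8; halving either the miss ratio or the miss penalty recovers half of that loss.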



Decreasing miss ratio with associativity

Compared to direct mapped, give a series of references that:

  • results in a lower miss ratio using a 2-way set associative cache

  • results in a higher miss ratio using a 2-way set associative cache

    assuming we use the “least recently used” replacement strategy



An implementation



Performance



Decreasing miss penalty with multilevel caches

  • Add a second level cache:

    • often primary cache is on the same chip as the processor

    • use SRAMs to add another cache above primary memory (DRAM)

    • miss penalty goes down if data is in 2nd level cache

  • Example:

    • CPI of 1.0 on a 5 GHz machine with a 5% miss rate and 100 ns DRAM access

    • Adding a 2nd level cache with 5 ns access time decreases the miss rate to 0.5% (worked out below)

  • Using multilevel caches:

    • try and optimize the hit time on the 1st level cache

    • try and optimize the miss rate on the 2nd level cache
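Working through the slide’s numbers (treating the miss rates as misses per instruction and assuming every 1st-level miss is tried in the 2nd-level cache):

    cycle time = 1 / 5 GHz = 0.2 ns, so 100 ns of DRAM ≈ 500 cycles and 5 ns of L2 ≈ 25 cycles
    without L2:  CPI = 1.0 + 5% × 500 = 26
    with L2:     CPI = 1.0 + 5% × 25 + 0.5% × 500 = 4.75
    speedup from adding the 2nd level ≈ 26 / 4.75 ≈ 5.5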



Cache Complexities

  • Not always easy to understand implications of caches:

Theoretical behavior of Radix sort vs. Quicksort

Observed behavior of Radix sort vs. Quicksort



Cache Complexities

  • Here is why:

  • Memory system performance is often critical factor

    • multilevel caches and pipelined processors make it harder to predict outcomes

    • Compiler optimizations to increase locality sometimes hurt ILP

  • Difficult to predict best algorithm: need experimental data



Virtual Memory

  • Main memory can act as a cache for the secondary storage (disk)

  • Advantages:

    • illusion of having more physical memory

    • program relocation

    • protection



Pages: virtual memory blocks

  • Page faults: the data is not in memory, retrieve it from disk

    • huge miss penalty, thus pages should be fairly large (e.g., 4KB)

    • reducing page faults is important (LRU is worth the price)

    • can handle the faults in software instead of hardware

    • using write-through is too expensive, so we use write-back
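A minimal sketch in C of translation through a single-level page table (4 KB pages; field widths and the fault handling are simplified, and protection/dirty bits are left out):

    #include <stdint.h>

    #define PAGE_BITS 12                       /* 4 KB pages */

    typedef struct { unsigned valid : 1, ppn : 20; } PTE;

    uint32_t translate(const PTE *page_table, uint32_t vaddr)
    {
        uint32_t vpn    = vaddr >> PAGE_BITS;              /* virtual page number */
        uint32_t offset = vaddr & ((1u << PAGE_BITS) - 1); /* unchanged by translation */

        PTE pte = page_table[vpn];
        if (!pte.valid)
            return 0xFFFFFFFFu;  /* page fault: the OS would bring the page in from disk and retry */

        return ((uint32_t)pte.ppn << PAGE_BITS) | offset;  /* physical address */
    }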



Page Tables



Page Tables



Making Address Translation Fast

  • A cache for address translations: translation lookaside buffer

Typical values: 16 to 512 entries, miss rate of 0.01% to 1%, miss penalty of 10 to 100 cycles



TLBs and caches



TLBs and Caches



Modern Systems



Modern Systems

  • Things are getting complicated!



Some Issues

  • Processor speeds continue to increase very fast— much faster than either DRAM or disk access times

  • Design challenge: dealing with this growing disparity

    • Prefetching? 3rd level caches and more? Memory design?



Chapters 8 & 9

(partial coverage)



Interfacing Processors and Peripherals

  • I/O Design affected by many factors (expandability, resilience)

  • Performance:
    — access latency
    — throughput
    — connection between devices and the system
    — the memory hierarchy
    — the operating system

  • A variety of different users (e.g., banks, supercomputers, engineers)



I/O

  • Important but neglected
    “The difficulties in assessing and designing I/O systems have often relegated I/O to second class status”
    “courses in every aspect of computing, from programming to computer architecture often ignore I/O or give it scanty coverage”
    “textbooks leave the subject to near the end, making it easier for students and instructors to skip it!”

  • GUILTY!
    — we won’t be looking at I/O in much detail
    — be sure and read Chapter 8 in its entirety
    — you should probably take a networking class!



I/O Devices

  • Very diverse devices
    — behavior (i.e., input vs. output)
    — partner (who is at the other end?)
    — data rate



I/O Example: Disk Drives

  • To access data:
    — seek: position the head over the proper track (3 to 14 ms avg.)
    — rotational latency: wait for the desired sector (half a rotation on average: 0.5 / RPM)
    — transfer: grab the data (one or more sectors), 30 to 80 MB/sec
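With illustrative numbers (not taken from the figure): an 8 ms average seek, a 10,000 RPM disk, and a 64 KB transfer at 64 MB/sec give

    rotational latency ≈ 0.5 rotation / (10,000 / 60 rotations per second) ≈ 3 ms
    transfer time = 64 KB / 64 MB/sec = 1 ms
    total access time ≈ 8 + 3 + 1 = 12 ms

so the mechanical delays (seek and rotation) dominate, which is why disk accesses are measured in milliseconds while memory accesses are measured in nanoseconds.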



I/O Example: Buses

  • Shared communication link (one or more wires)

  • Difficult design:
    — may be a bottleneck
    — length of the bus
    — number of devices
    — tradeoffs (buffers for higher bandwidth increase latency)
    — support for many different devices
    — cost

  • Types of buses:
    — processor-memory (short high speed, custom design)
    — backplane (high speed, often standardized, e.g., PCI)
    — I/O (lengthy, different devices, e.g., USB, Firewire)

  • Synchronous vs. Asynchronous
    — use a clock and a synchronous protocol: fast and small, but every device must operate at the same rate, and clock skew requires the bus to be short
    — don’t use a clock and instead use handshaking



I/O Bus Standards

  • Today we have two dominant bus standards:



Other important issues

  • Bus Arbitration:
    — daisy chain arbitration (not very fair)
    — centralized arbitration (requires an arbiter), e.g., PCI
    — collision detection, e.g., Ethernet

  • Operating system:
    — polling
    — interrupts
    — direct memory access (DMA)

  • Performance Analysis techniques:
    — queuing theory
    — simulation
    — analysis, i.e., find the weakest link (see “I/O System Design”)

  • Many new developments



Pentium 4

  • I/O Options



Fallacies and Pitfalls

  • Fallacy: the rated mean time to failure of disks is 1,200,000 hours, so disks practically never fail (see the arithmetic after this list).

  • Fallacy: magnetic disk storage is on its last legs, will be replaced.

  • Fallacy: A 100 MB/sec bus can transfer 100 MB/sec.

  • Pitfall: Moving functions from the CPU to the I/O processor, expecting to improve performance without analysis.
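A quick calculation behind the first fallacy (the number of disks is made up): a site with 1,000 disks, each rated at 1,200,000 hours MTTF, sees an expected 1,000 / 1,200,000 failures per hour; over a year of 8,760 hours that is about 8,760 × 1,000 / 1,200,000 ≈ 7 failed disks, so failures are routine even though any single disk “practically never fails.”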



Multiprocessors

  • Idea: create powerful computers by connecting many smaller ones
    good news: works for timesharing (better than a supercomputer)
    bad news: it’s really hard to write good concurrent programs; many commercial failures



Questions

  • How do parallel processors share data?
    — single address space (SMP vs. NUMA)
    — message passing

  • How do parallel processors coordinate?
    — synchronization (locks, semaphores)
    — built into send / receive primitives
    — operating system protocols

  • How are they implemented?
    — connected by a single bus
    — connected by a network



Supercomputers

Plot of top 500 supercomputer sites over a decade:



Using multiple processors an old idea

  • Some SIMD designs:

  • Costs for the Illiac IV escalated from $8 million in 1966 to $32 million in 1972, despite completion of only ¼ of the machine. It took three more years before it was operational!
    “For better or worse, computer architects are not easily discouraged”
    Lots of interesting designs and ideas, lots of failures, few successes



Topologies



Clusters

  • Constructed from whole computers

  • Independent, scalable networks

  • Strengths:

    • Many applications amenable to loosely coupled machines

    • Exploit local area networks

    • Cost effective / Easy to expand

  • Weaknesses:

    • Administration costs not necessarily lower

    • Connected using I/O bus

  • Highly available due to separation of memories

  • In theory, we should be able to do better



Google

  • Serve an average of 1000 queries per second

  • Google uses 6,000 processors and 12,000 disks

  • Two sites in Silicon Valley, two in Virginia

  • Each site connected to internet using OC48 (2488 Mbit/sec)

  • Reliability:

    • On an average day, 20 machines need to be rebooted (software errors)

    • 2% of the machines replaced each year

      In some sense, simple ideas well executed. Better (and cheaper) than other approaches involving increased complexity



Concluding Remarks

  • Evolution vs. Revolution
    “More often the expense of innovation comes from being too disruptive to computer users”
    “Acceptance of hardware ideas requires acceptance by software people; therefore hardware people should learn about software. And if software people want good machines, they must learn more about hardware to be able to communicate with and thereby influence hardware engineers.”

