Computer Architecture

Presentation Transcript

  1. Computer Architecture Chapter 2 CSC 180 Dr. Adam Anthony

  2. Overview • The Stored-Program Computer Model • Machine instructions • Program execution • External Devices • Other Computer Architecture Concepts

  3. Schematic of a Processor • Numbers, letters, colors, etc., are input values for the Arithmetic Logic Unit (ALU) • The Control Unit tells the ALU to Add/Multiply/Compare register values • The Control Unit can Load/Save data in registers from memory via the bus • A 32- or 64-bit ‘command’ is given to the Control Unit from outside
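
To make the division of labor concrete, here is a minimal sketch in Python (not from the slides; the operation names and register values are illustrative assumptions) of an ALU performing whichever operation the Control Unit selects:

    # Minimal ALU model: the Control Unit picks an operation, the ALU
    # applies it to values read from two registers. Illustrative only.
    def alu(op, a, b):
        if op == "ADD":
            return a + b
        if op == "MULT":
            return a * b
        if op == "COMPARE":
            return a == b
        raise ValueError(f"unknown op: {op}")

    registers = [5, 4, 0, 0]  # a tiny register file
    registers[2] = alu("ADD", registers[0], registers[1])  # r2 = r0 + r1
    print(registers)  # [5, 4, 9, 0]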

  4. Registers • Temporary storage for data • Like ‘scratch’ paper • Brings data to a more convenient location for further processing • The processor only computes using input from registers • The processor only writes results to a register • Modern processors can perform a handful of computations per cycle, but there are thousands of registers • Why so many extra?

  5. Caching • Another analogy for registers: a backpack • Instead of going back to your room for every book you need, you carry a few around for convenience • L1/L2/L3 cache on a processor = amount of data that can be held close at hand • L1 is faster than L2, L2 faster than L3 • Up to a point, more cache = better performance, but • It’s more about how you use it: • Backpack analogy: which books do you pack? (you can always go back for more books, but it takes time) • Backpack is full, but you need a new book: which book do you leave behind? • Some books need to go back to the library (RAM): when should you stop what you are doing and do that? • Maybe you have a friend do it? (L2, L3 cache!) • Caching strategies are an ongoing and profitable research area
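
The “which book do you leave behind?” question is an eviction policy. One common answer, least-recently-used (LRU), can be sketched in a few lines of Python (not from the slides; the books and capacity are invented for illustration):

    from collections import OrderedDict

    class LRUCache:
        """Tiny LRU cache: when the 'backpack' is full, evict the item
        that has gone unused the longest."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.items = OrderedDict()

        def get(self, key):
            if key not in self.items:
                return None  # miss: go back to 'the library' (RAM)
            self.items.move_to_end(key)  # mark as recently used
            return self.items[key]

        def put(self, key, value):
            if key in self.items:
                self.items.move_to_end(key)
            self.items[key] = value
            if len(self.items) > self.capacity:
                self.items.popitem(last=False)  # evict least-recently-used

    bag = LRUCache(2)
    bag.put("math", "textbook")
    bag.put("csc180", "notes")
    bag.get("math")  # touch 'math' so it stays recently used
    bag.put("physics", "lab manual")  # evicts 'csc180', not 'math'
    print(list(bag.items))  # ['math', 'physics']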

  6. The Stored-Program Computer • Traditionally called the “von Neumann architecture” • He wrote the first draft with only his name on it, and it was unintentionally distributed worldwide! • Early days of computers: • “Writing” a program involved rewiring the processor • von Neumann (and friends J. Presper Eckert and John Mauchly) had the following insights: • Most computations involve a small number of unique tasks (add, multiply, compare, load, store…) • A command, such as ‘add’, can be encoded as a binary number • If the processor can switch functionality based on which command is given, then • Programs can be stored in memory just like data!

  7. Instruction Encoding • A numeric ‘command’ for a processor is called an instruction • Some processors are reduced instruction set computers (RISC) • Only the most basic instruction types are permitted (add, subtract, load, store) • IBM makes RISC chips (the G5 in old Macs and the Xbox 360, the Cell in the PS3) • Others are called complex instruction set computers (CISC) • They use commands like ADD-LOAD-STORE(Register, Laddress, Saddress), a single instruction that loads, adds, and stores in one step • Programs take less memory • Intel uses a (technically) CISC architecture • In reality, the chip is RISC, but CISC is provided to the programmer using abstraction!

  8. Dissecting an Instruction • Observations: • 3 5 A 7: hexadecimal! (ever see a memory dump?) • The op-code will always be 4 bits (how many different instructions, then?) • Operand: supplementary information • 0011 0101 10100111 = STORE Reg5 ADDRESS (op-code 3, register 5, address A7) • How the operand is split depends on the op-code • 16-bit instructions: a teaching example only • New processors use 64-bit instructions • Allows more freedom in expressing instructions
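
As a sketch of how those fields come apart (assuming the 4-bit op-code / 4-bit register / 8-bit address split above), the 16-bit value 0x35A7 can be decoded with shifts and masks:

    instruction = 0x35A7  # 0011 0101 10100111

    opcode = (instruction >> 12) & 0xF  # top 4 bits  -> 0x3 (STORE, per the slide)
    register = (instruction >> 8) & 0xF  # next 4 bits -> 0x5 (Reg5)
    address = instruction & 0xFF  # low 8 bits  -> 0xA7

    print(f"op={opcode:X} reg={register:X} addr={address:02X}")  # op=3 reg=5 addr=A7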

  9. Understanding OpCodes • The entire instruction enters the Control Unit • Let’s use an even smaller processor with 4 registers (numbered 00 01 10 11) and 8-bit instructions: • ADD has the op-code 00 and MULT has op-code 11 • ADD r0 r1 r2 ~ 00 00 01 10

  10. Understanding OpCodes • ADD r0 r1 r2 ~ 00 00 01 10 • [Figure: the instruction enters the Control Unit, which selects ADD rather than MULT and routes the register values through the ALU, updating the register file]
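
A minimal sketch of that decode-and-execute step in Python, assuming the convention that ADD r0 r1 r2 means “r0 = r1 + r2” (2-bit op-code followed by three 2-bit register numbers; the starting register values are invented):

    registers = [0, 4, 1, 0]  # r0..r3; example values are illustrative

    def execute(instr):  # instr is an 8-bit integer
        opcode = (instr >> 6) & 0b11
        dest = (instr >> 4) & 0b11
        src1 = (instr >> 2) & 0b11
        src2 = instr & 0b11
        if opcode == 0b00:  # ADD
            registers[dest] = registers[src1] + registers[src2]
        elif opcode == 0b11:  # MULT
            registers[dest] = registers[src1] * registers[src2]

    execute(0b00000110)  # ADD r0 r1 r2 -> r0 = 4 + 1 = 5
    print(registers)  # [5, 4, 1, 0]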

  11. Schematic of an 8-bit, 40-year-old processor

  12. Summary (So Far) • Processors can: • Add/Subtract numbers in registers • Multiply/Divide numbers in registers • Load/Store data to/from registers • Read Ch. 2.4! • Perform multi-bit logic operations • Shift bit patterns • Processors do NOTHING until they receive an instruction • Instructions come from programs, which are stored in memory and delivered over the bus • Next: how instructions are delivered to the processor

  13. Special-Purpose Registers • Most registers are there to speed up calculations • Some registers are reserved for a special purpose and only hold specific pieces of information • Instruction register: holds the instruction currently being executed (part of the Control Unit) • Program counter: a ‘bookmark’ that records where the next instruction is located

  14-16. Executing a Program • [Figure sequence: the program counter locates the next instruction in memory, the instruction travels over the bus into the instruction register, and the Control Unit carries it out; the counter then advances and the cycle repeats]
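
That cycle can be sketched in Python (a simplification, not the slides’ notation: instructions are stored as tuples instead of binary, and a HALT stand-in ends the demo):

    memory = [
        ("LOAD", 0, 40),  # r0 = data at address 40   (illustrative program)
        ("LOAD", 1, 41),  # r1 = data at address 41
        ("ADD", 2, 0, 1),  # r2 = r0 + r1
        ("HALT",),
    ]
    registers = [0] * 4
    data = {40: 5, 41: 4}
    pc = 0  # program counter: where the next instruction lives

    while True:
        ir = memory[pc]  # fetch into the instruction register
        pc += 1  # assume the next instruction is next in line
        op = ir[0]
        if op == "LOAD":
            registers[ir[1]] = data[ir[2]]
        elif op == "ADD":
            registers[ir[1]] = registers[ir[2]] + registers[ir[3]]
        elif op == "HALT":
            break

    print(registers)  # [5, 4, 9, 0]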

  17. Two Special Instructions • Normally, the processor assumes that the next instruction will be right next in line after the current instruction • Hence the name program ‘counter’ • JUMP instruction: • Based on a logical test (AND, OR, EQUAL, etc.), go to a non-consecutive instruction • In games, something different happens depending on which button you press (a fork in the road) • NO-OP or IDLE instruction: • Processors never stop as long as the computer is on • If there is nothing else to do, execute the NO-OP instruction, and set the program counter to go back to the NO-OP instruction until further notice (i.e., the program counter stays the same!)
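
Both special cases are just assignments to the program counter. A runnable sketch extending the loop above (the JUMPEQ mnemonic and the spin counter are invented for the demo; the idle loop is cut short so it terminates):

    memory = [
        ("JUMPEQ", 0, 3),  # if r0 == 0, jump to instruction 3
        ("ADD", 1, 1, 1),  # (skipped when the jump is taken)
        ("HALT",),
        ("NOOP",),  # instruction 3: spin here until further notice
    ]
    registers = [0, 7, 0, 0]
    pc, spins = 0, 0

    while spins < 3:  # cut the idle loop short for the demo
        ir = memory[pc]
        pc += 1
        op = ir[0]
        if op == "JUMPEQ" and registers[ir[1]] == 0:
            pc = ir[2]  # a non-consecutive next instruction
        elif op == "ADD":
            registers[ir[1]] = registers[ir[2]] + registers[ir[3]]
        elif op == "HALT":
            break
        elif op == "NOOP":
            pc -= 1  # the program counter stays the same
            spins += 1

    print(pc, spins)  # 3 3: still parked on the NO-OP, as intended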

  18. Interacting With External Devices • Data goes to output devices (screen, printer) in much the same way it goes to memory: • STOREPRNT R1 • Likewise for input devices (mouse, keyboard): • READKBD R1 • All devices sit on the memory bus • Different binary ‘signals’ tell the keyboard, screen, RAM, etc. that the incoming data is for them
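
One common way those ‘signals’ work is memory-mapped I/O: hardware on the bus decodes the address and routes the transfer to RAM or to a device. A Python sketch under that assumption (the port addresses and behavior are invented):

    RAM = {}
    PRINTER_PORT, KEYBOARD_PORT = 0xFF00, 0xFF01  # invented device addresses

    def bus_write(address, value):
        """Route a store to whichever device 'claims' the address."""
        if address == PRINTER_PORT:
            print(f"printer <- {value}")  # a STOREPRNT-style store lands here
        else:
            RAM[address] = value  # ordinary memory write

    def bus_read(address):
        if address == KEYBOARD_PORT:
            return ord("a")  # pretend a key was pressed (READKBD-style read)
        return RAM.get(address, 0)

    bus_write(0x0040, 99)  # goes to RAM
    bus_write(PRINTER_PORT, 42)  # goes to the 'printer'
    print(bus_read(0x0040))  # 99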

  19. A Completed Computer Architecture • [Figure: a motherboard with lots and lots of tiny wires and digital logic chips]

  20. A Lesson in Clock Speed • Newegg “customer choice” ASUS P6X58D Premium motherboard: • Supports Intel’s new 4-core i7 processor (2.6 GHz each = 10.4 GHz in parallel) • Front Side Bus (FSB): 6.4 GT/s (gigatransfers per second) • About half of what the processor(s) can handle! • RAM support: (expensive) 3-channel 1600 MHz DDR3 memory (1.6 GHz * 3 channels = 4.8 GHz) • Can’t even keep up with the bus! • Hard drives are even slower • Computers are only as fast as their slowest part • Processors have hit the “memory bottleneck”
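
The slide’s back-of-the-envelope arithmetic (which loosely treats GHz and GT/s as comparable rates; these are the slide’s numbers, not measurements) reproduced in Python:

    cores, core_clock_ghz = 4, 2.6
    cpu_demand = cores * core_clock_ghz  # 10.4 'GHz in parallel'

    bus_rate = 6.4  # front-side bus, GT/s
    ram_channels, ram_clock_ghz = 3, 1.6
    ram_rate = ram_channels * ram_clock_ghz  # 4.8 'GHz' of memory bandwidth

    # The slowest link bounds the whole system: the 'memory bottleneck'.
    print(f"CPU ~{cpu_demand:.1f}, bus ~{bus_rate}, RAM ~{ram_rate:.1f}")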

  21. Breaking the Bottleneck • Pipelining • Start fetching the next instruction before the current one has finished running • But what if it is a JUMP? • Increase bandwidth (allow multiple simultaneous transfers) • Multiple bus paths: adds design complexity, particularly when a processor has parallel cores • Parallelism (next slide) • Overclocking • Make the bus/RAM run faster than they are rated for • Generates enough heat to melt the computer • Requires special cooling devices
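
A quick way to see the pipelining win, and the JUMP problem, is to count cycles. A sketch assuming an idealized 3-stage pipeline (fetch, decode, execute) with no other hazards:

    stages, instructions = 3, 10

    unpipelined = stages * instructions  # finish each one before starting the next
    pipelined = stages + (instructions - 1)  # overlap: one new instruction per cycle

    print(unpipelined, pipelined)  # 30 vs 12 cycles

    # The JUMP problem: if a branch is resolved in stage 3, the (stages - 1)
    # instructions fetched behind it may be wrong and must be discarded.
    mispredicted_jumps = 2
    print(pipelined + mispredicted_jumps * (stages - 1))  # 16 cycles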

  22. A Note on Parallelism • A processor’s speed is limited by the speed of light • Smaller processor = faster processor • Smaller logic gates = smaller processor • Intel’s primary goal: make logic gates smaller • Intel’s problem: “Increasing the speed of processors is a waste of time/money/effort because RAM can’t keep up, but we have all this extra room on a chip now since we made the gates smaller!” • Intel’s “solution”: put 2 (or 4 or 8 or more) processors side by side on the same chip • Two processors can work on data simultaneously • Technically doubles performance • But writing parallel computer programs is tricky, sometimes impossible
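
A sketch of two workers processing data simultaneously, using only Python’s standard library (the task and data are invented; coordinating real parallel programs is the tricky part the slide warns about):

    from concurrent.futures import ProcessPoolExecutor

    def partial_sum(chunk):
        return sum(chunk)

    if __name__ == "__main__":
        data = list(range(1_000_000))
        mid = len(data) // 2
        # Two cores each sum half of the data at the same time...
        with ProcessPoolExecutor(max_workers=2) as pool:
            halves = pool.map(partial_sum, [data[:mid], data[mid:]])
        # ...then one serial step combines the two partial results.
        print(sum(halves) == sum(data))  # True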

  23. Conclusion • Everything we do on a computer gets reduced to a bunch of 1s and 0s • But we never see them, thanks to abstraction! • Processors do different tasks based on the instructions they are sent • Programs and data both live in RAM • The speed of a computer depends on much more than just the processor
