
Chapter 5: Computer Systems Organization


Presentation Transcript


  1. Chapter 5: Computer Systems Organization Invitation to Computer Science

  2. Objectives In this chapter, you will learn about: • The components of a computer system • Putting all the pieces together – the Von Neumann architecture • The future: non-Von Neumann architectures Invitation to Computer Science

  3. Introduction • Remember that computer science is the study of algorithms, including: • Their formal and mathematical properties (Chapters 1-3) • Their hardware realization (Chapters 4-5) • Their linguistic realization • Their applications • Computer organization examines the computer as a collection of interacting “functional units” • Functional units may be built out of the circuits already studied • A higher level of abstraction assists understanding by reducing complexity Invitation to Computer Science

  4. Figure 5.1 The Concept of Abstraction Invitation to Computer Science

  5. The Components of a Computer System • The Von Neumann architecture has four functional units: • Memory • The unit that stores and retrieves instructions and data • Input/Output • Handles communication with the outside world • Arithmetic/Logic unit • Performs mathematical and logical operations • Control unit • Repeats the following three tasks over and over: • Fetches an instruction from memory • Decodes the instruction • Executes the instruction • Two further hallmarks of the architecture: sequential execution of instructions and the stored program concept Invitation to Computer Science

  6. Figure 5.2 Components of the Von Neumann Architecture Invitation to Computer Science

  7. Memory and Cache • Information stored and fetched from memory subsystem • Random Access Memory (RAM) maps addresses to memory locations • Cache memory keeps values currently in use in faster memory to speed access times Invitation to Computer Science

  8. Memory and Cache (continued) • RAM (Random Access Memory) • Memory made of addressable “cells” • Current standard cell size is 1 byte = 8 bits • All memory cells are accessed in equal time • Memory address • The address is an unsigned binary number N bits long, so the address space contains 2^N cells Invitation to Computer Science

  9. Memory and Cache (continued) • Memory size • Memory sizes are expressed in powers of 2 • 2^10 = 1K (1 kilobyte) • 2^20 = 1M (1 megabyte) • 2^30 = 1G (1 gigabyte) • 2^40 = 1T (1 terabyte) • If the MAR is N bits long, the largest address is ...? • The maximum memory size for a MAR with • N = 16 is ...? _______________ • N = 20 is ...? _______________ • N = 31 is ...? ________________ Invitation to Computer Science
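One way to check the blanks above is to compute 2^N directly. The short Python sketch below is not from the textbook; it simply evaluates the address-space size for the three MAR widths asked about.

```python
# With an N-bit MAR, addresses run from 0 to 2**N - 1, so there are 2**N cells.
for n in (16, 20, 31):
    cells = 2 ** n
    print(f"N = {n:2d}: {cells:>13,} addressable cells (2^{n})")
# N = 16:        65,536  -> 64K
# N = 20:     1,048,576  -> 1M
# N = 31: 2,147,483,648  -> 2G
```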

  10. Figure 5.3 Structure of Random Access Memory Invitation to Computer Science

  11. Memory and Cache (continued) • Parts of the memory subsystem • Fetch/store controller • Fetch: retrieve a value from memory • Store: store a value into memory • Memory address register (MAR) • Memory data register (MDR) • Memory cells, with decoder(s) to select individual cells Invitation to Computer Science

  12. Decoder Circuit • A decoder is a control circuit with N inputs and 2^N outputs • Exactly one output line carries a 1: the line whose number equals the binary value on the inputs • A 3-to-2^3 (3-to-8) decoder example, with inputs a, b, c and output lines o0 through o7:

a b c | o0 o1 o2 o3 o4 o5 o6 o7
0 0 0 |  1  0  0  0  0  0  0  0
0 0 1 |  0  1  0  0  0  0  0  0
0 1 0 |  0  0  1  0  0  0  0  0
0 1 1 |  0  0  0  1  0  0  0  0
1 0 0 |  0  0  0  0  1  0  0  0
1 0 1 |  0  0  0  0  0  1  0  0
1 1 0 |  0  0  0  0  0  0  1  0
1 1 1 |  0  0  0  0  0  0  0  1

Invitation to Computer Science
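The table can also be read as a tiny program. The Python sketch below models only the decoder's selection behavior, not the gate-level circuit; the function name and bit ordering are illustrative choices, not the textbook's.

```python
def decode(a, b, c):
    """3-to-8 decoder: exactly one of the 8 output lines is 1,
    namely the line whose number equals the binary value abc."""
    index = (a << 2) | (b << 1) | c      # interpret a, b, c as a 3-bit number
    return [1 if i == index else 0 for i in range(8)]

print(decode(0, 0, 0))   # [1, 0, 0, 0, 0, 0, 0, 0]  -> line 0 selected
print(decode(1, 0, 1))   # [0, 0, 0, 0, 0, 1, 0, 0]  -> line 5 selected
```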

  13. Memory and Cache (continued) • Fetch operation • Load the address into the MAR • The address of the desired memory cell is moved into the MAR • Decode the address in the MAR • The memory unit must translate the N-bit address stored in the MAR into the set of signals needed to access that one specific memory cell • A decoder circuit is used for this purpose • Copy the contents of the memory location into the MDR • The fetch/store controller signals a “fetch,” and the value at the MAR’s location flows into the MDR Invitation to Computer Science

  14. Memory and Cache (continued) • Store operation • Load the address into the MAR • The address of the cell where the value should go is placed in the MAR • Load the value into the MDR • The new value is placed in the MDR • Decode the address in the MAR and store the content of the MDR into that memory location • Fetch/store controller signals a “store,” copying the MDR’s value into the desired cell Invitation to Computer Science
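To tie the fetch and store steps together, here is a rough Python model of the memory subsystem. The register names (MAR, MDR) follow the slides, but the class, its method interface, and the memory size are illustrative assumptions.

```python
class MemorySubsystem:
    """Toy model of RAM with a MAR, an MDR, and a fetch/store controller."""
    def __init__(self, size):
        self.cells = [0] * size   # one value per addressable cell
        self.MAR = 0              # memory address register
        self.MDR = 0              # memory data register

    def fetch(self, address):
        self.MAR = address                 # 1. load the address into the MAR
        self.MDR = self.cells[self.MAR]    # 2-3. decode the address, copy cell -> MDR
        return self.MDR

    def store(self, address, value):
        self.MAR = address                 # 1. load the address into the MAR
        self.MDR = value                   # 2. load the value into the MDR
        self.cells[self.MAR] = self.MDR    # 3. decode the address, copy MDR -> cell

memory = MemorySubsystem(1024)
memory.store(100, 42)
print(memory.fetch(100))   # 42
```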

  15. Example: The Units Of A Computer Invitation to Computer Science

  16. Fetch: LOAD X (figure showing a fetch of the value stored at address X) Invitation to Computer Science

  17. Store: STORE X (figure showing a store of a value into address X) Invitation to Computer Science

  18. Memory and Cache (continued) • Memory register • Very fast memory location • Given a name, not an address • Serves some special purpose • Modern computers have dozens or hundreds of registers Invitation to Computer Science

  19. Figure 5.7 Overall RAM Organization Invitation to Computer Science

  20. Cache Memory • Memory access is much slower than processing • Faster memory is too expensive to use for all memory cells • Locality principle • Once a value is used, it is likely to be used again • A small amount of fast memory holding just the values currently in use speeds up computation Invitation to Computer Science
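A small simulation can make the locality principle concrete. The sketch below assumes a least-recently-used replacement policy, which the slides do not specify; it is only meant to show that repeated use of the same addresses turns into cache hits.

```python
from collections import OrderedDict

class Cache:
    """Tiny cache of recently used memory addresses (LRU replacement)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = OrderedDict()          # address -> value, in LRU order

    def lookup(self, address, ram):
        if address in self.lines:           # cache hit: served from fast memory
            self.lines.move_to_end(address)
            return self.lines[address], "hit"
        value = ram[address]                # cache miss: go to slower RAM
        self.lines[address] = value
        if len(self.lines) > self.capacity: # evict the least recently used line
            self.lines.popitem(last=False)
        return value, "miss"

ram = list(range(1000))
cache = Cache(capacity=4)
for addr in [7, 8, 7, 7, 8, 9, 7]:          # repeated use of a few addresses
    print(addr, cache.lookup(addr, ram)[1]) # mostly hits after the first accesses
```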

  21. Input/Output and Mass Storage • Communication with outside world and external data storage • Human interfaces: monitor, keyboard, mouse • Archival storage: not dependent on constant power • External devices vary tremendously from each other Invitation to Computer Science

  22. Input/Output and Mass Storage (continued) • Volatile storage • Information disappears when the power is turned off • Example: RAM • Nonvolatile storage • Information does not disappear when the power is turned off • Example: mass storage devices such as disks and tapes Invitation to Computer Science

  23. Input/Output and Mass Storage (continued) • Mass storage devices • Direct access storage device • Hard drive, CD-ROM, DVD, etc. • Uses its own addressing scheme to access data • Sequential access storage device • Tape drive, etc. • Stores data sequentially • Used for backup storage these days Invitation to Computer Science

  24. Input/Output and Mass Storage (continued) • Direct access storage devices • Data stored on a spinning disk • Disk divided into concentric rings (tracks), each of which is divided into sectors • Read/write head moves from one track to another while the disk spins • Access time depends on: • Time to move the head to the correct track (seek time) • Time for the desired sector to spin under the head (rotational delay) Invitation to Computer Science
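A back-of-the-envelope calculation shows how these delays add up. The numbers below (seek time, rotational speed, transfer time) are illustrative assumptions, not values from the slides.

```python
seek_time_ms = 8.0                          # average time to move the head to the right track
rpm = 7200                                  # rotational speed of the disk
rotational_delay_ms = (60_000 / rpm) / 2    # on average, wait half a revolution
transfer_ms = 0.05                          # time for the sector itself to pass under the head

total_ms = seek_time_ms + rotational_delay_ms + transfer_ms
print(f"average access time: {total_ms:.2f} ms")   # about 12.22 ms
```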

  25. Figure 5.8 Overall Organization of a Typical Disk Invitation to Computer Science

  26. Input/Output and Mass Storage (continued) • I/O controller • Intermediary between central processor and I/O devices • Processor sends request and data, then goes on with its work • I/O controller interrupts processor when request is complete Invitation to Computer Science

  27. Figure 5.9 Organization of an I/O Controller Invitation to Computer Science

  28. The Arithmetic/Logic Unit • The subsystem where the actual computations are performed • Primitive operation circuits • Arithmetic (ADD, etc.) • Comparison (CE, etc.) • Logic (AND, etc.) • Data inputs and results are stored in registers • A multiplexor selects the desired output Invitation to Computer Science

  29. Multiplexor • A multiplexor is a control circuit that selects one of its input lines and passes that line’s value to the output. • To do so, it uses its selector lines to indicate which input line to select. • A multiplexor has 2^N input lines, N selector lines, and one output. • The N selector lines are set to 0s and 1s. When the values of the N selector lines are interpreted as a binary number, they represent the number of the input line that must be selected. • With N selector lines you can represent numbers between 0 and 2^N - 1. • The single output is the value on the input line whose number is formed by the selector lines. Invitation to Computer Science
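Behaviorally, the multiplexor can be sketched in a few lines of Python. This models only the selection logic, not the gate-level AND-OR-NOT construction mentioned on the following slides.

```python
def multiplexor(inputs, selector_bits):
    """Select one of 2**N input lines using N selector lines.
    The selector bits, read as a binary number, give the index of
    the input line whose value appears on the single output."""
    index = 0
    for bit in selector_bits:          # build the binary number bit by bit
        index = index * 2 + bit
    return inputs[index]

# 4 input lines (N = 2 selector lines); selector 10 (binary) = 2 picks input line 2
print(multiplexor([7, 3, 9, 1], [1, 0]))   # 9
```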

  30. Multiplexor Circuit (figure: 2^N input lines, numbered 0 through 2^N - 1, enter the multiplexor circuit; N selector lines control it; a single output line leaves) • Interpret the selector lines as a binary number • Suppose the selector lines hold the number 00...01, which equals 1; then the output is the value on input line 1 Invitation to Computer Science

  31. Multiplexor Circuit (continued) • A multiplexor can be built with AND-OR-NOT gates (figure: a multiplexor with N = 1, i.e., two input lines, 0 and 1) Invitation to Computer Science

  32. Recall: The Arithmetic/Logic Unit Uses a Multiplexor (figure: register R and the other registers feed the ALU inputs AL1 and AL2; the ALU circuits set the GT, EQ, and LT bits of the condition code register; selector lines on a multiplexor pick which circuit’s result appears on the output) (In the lab you will see where this goes) Invitation to Computer Science

  33. The Arithmetic/Logic Unit (continued) • ALU process • Values for operations copied into ALU’s input register locations • All circuits compute results for those inputs • Multiplexor selects the one desired result from all values • Result value copied to desired result register Invitation to Computer Science
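A toy version of this process in Python: every "circuit" computes its answer for the inputs, and a selector picks the one result that is wanted. The particular operations and their numbering are illustrative, not the textbook's ALU.

```python
def alu(a, b, op_selector):
    """Toy ALU: all circuits compute their results for inputs a and b,
    and a multiplexor-like selection keeps only the result asked for."""
    results = [
        a + b,        # 0: ADD
        a - b,        # 1: SUBTRACT
        int(a == b),  # 2: COMPARE EQUAL
        a & b,        # 3: bitwise AND
    ]
    return results[op_selector]   # the selector lines pick one output

print(alu(6, 2, 0))   # 8  (ADD selected)
print(alu(6, 2, 1))   # 4  (SUBTRACT selected)
```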

  34. Figure 5.12 Using a Multiplexor Circuit to Select the Proper ALU Result Invitation to Computer Science

  35. The Control Unit • Manages stored program execution • Task • Fetch from memory the next instruction to be executed • Decode it: determine what is to be done • Execute it: issue appropriate command to ALU, memory, and I/O controllers Invitation to Computer Science

  36. Machine Language Instructions • Can be decoded and executed by control unit • Parts of instructions • Operation code (op code) • Unique unsigned-integer code assigned to each machine language operation • Address field(s) • Memory addresses of the values on which operation will work Invitation to Computer Science

  37. Figure 5.14 Typical Machine Language Instruction Format • Example: ADD X Y, encoded as op code 00001001 followed by the address fields 0000000001100011 (X) and 0000000001100100 (Y) Invitation to Computer Science
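Reading the figure's bit strings back as numbers confirms the format; the field widths (an 8-bit op code and two 16-bit address fields) are taken from the figure itself.

```python
# Decode the instruction from Figure 5.14.
instruction = "00001001" + "0000000001100011" + "0000000001100100"

op_code   = int(instruction[0:8], 2)     # 9   -> the ADD operation
address_x = int(instruction[8:24], 2)    # 99  -> memory location of X
address_y = int(instruction[24:40], 2)   # 100 -> memory location of Y
print(op_code, address_x, address_y)     # 9 99 100
```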

  38. Operations Available • Arithmetic Operations • load • store • clear • add • increment • subtract • decrement • Logic/Control Operations • compare • jump • jumpgt • jumpeq • jumplt • jumpneq • halt • I/O Operations • in • out • There is a different operation code (op code) for each operation Invitation to Computer Science

  39. Machine Language Instructions (continued) • Operations of machine language • Data transfer • Move values to and from memory and registers • Arithmetic/logic • Perform ALU operations that produce numeric values Invitation to Computer Science

  40. Machine Language Instructions (continued) • Operations of machine language (continued) • Compares • Set bits of compare register to hold result • Branches • Jump to a new memory address to continue processing Invitation to Computer Science

  41. Control Unit Registers And Circuits • Parts of control unit • Links to other subsystems • Instruction decoder circuit • Two special registers: • Program Counter (PC) • Stores the memory address of the next instruction to be executed • Instruction Register (IR) • Stores the code for the current instruction Invitation to Computer Science

  42. Figure 5.16 Organization of the Control Unit Registers and Circuits Invitation to Computer Science

  43. Putting All the Pieces Together—the Von Neumann Architecture • Subsystems connected by a bus • Bus: wires that permit data transfer among them • At this level, ignore the details of circuits that perform these tasks: Abstraction! • Computer repeats fetch-decode-execute cycle indefinitely Invitation to Computer Science

  44. Figure 5.18 The Organization of a Von Neumann Computer Invitation to Computer Science

  45. The Control Unit 1) After a fetch, the PC is incremented by 1. 2) The fetch places the instruction in the IR (in the figure, the IR holds op code 1001 plus an address field). 3) The op code is sent to the instruction decoder; op code 1001 = 9 activates decoder line 9. 4) The decoder sends out the signals needed for execution (here, enable add). 5) Since every instruction except halt uses the address field, the address is sent to either the MAR or the PC, depending on the instruction. Invitation to Computer Science
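Putting the pieces together, a minimal fetch-decode-execute loop might look like the following sketch. The instruction set, its tuple representation, and the single accumulator are simplified illustrations, not the textbook's machine language.

```python
def run(memory):
    pc, accumulator = 0, 0
    while True:
        instruction = memory[pc]            # fetch: copy memory[PC] into the IR
        pc += 1                             # increment the PC
        op_code, address = instruction      # decode: split op code and address field
        if op_code == "LOAD":               # execute: signal memory, ALU, or I/O
            accumulator = memory[address]
        elif op_code == "ADD":
            accumulator += memory[address]
        elif op_code == "STORE":
            memory[address] = accumulator
        elif op_code == "HALT":
            return memory

# Program: X = 3 in cell 6, Y = 4 in cell 7; compute X + Y into cell 8.
program = [("LOAD", 6), ("ADD", 7), ("STORE", 8), ("HALT", 0), 0, 0, 3, 4, 0]
print(run(program)[8])   # 7
```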

  46. The Future: Non-Von Neumann Architectures • Physical limitations on speed of Von Neumann computers • Non-Von Neumann architectures explored to bypass these limitations • Parallel computing architectures can provide improvements: multiple operations occur at the same time Invitation to Computer Science

  47. What are the fastest computers in the world? Visit the site http://www.top500.org/list/2005/06/ to find out! Invitation to Computer Science

  48. The Future: Non-Von Neumann Architectures (continued) • SIMD architecture • Single instruction/Multiple data • Multiple processors running in parallel • All processors execute same operation at one time • Each processor operates on its own data • Suitable for “vector” operations Invitation to Computer Science
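The SIMD idea can be illustrated (though not its parallel speed) with a single operation applied across a whole vector of data.

```python
# One instruction ("add 10") is applied to every element of a vector, as if each
# element had its own processor. Real SIMD hardware does this in parallel; this
# loop only models the effect, one element at a time.
vector = [1, 2, 3, 4, 5, 6, 7, 8]
result = [x + 10 for x in vector]    # same operation, each "processor" has its own data
print(result)                        # [11, 12, 13, 14, 15, 16, 17, 18]
```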

  49. Figure 5.21 A SIMD Parallel Processing System Invitation to Computer Science

  50. The Future: Non-Von Neumann Architectures (continued) • MIMD architecture • Multiple instruction/Multiple data • Multiple processors running in parallel • Each processor performs its own operations on its own data • Processors communicate with each other Invitation to Computer Science
