
CENG 450 Computer Systems & Architecture Lecture 1



  1. CENG 450 Computer Systems & Architecture Lecture 1 Amirali Baniasadi amirali@ece.uvic.ca

  2. CENG 450: Computer Architecture Instructor: Amirali Baniasadi EOW 441, only by appointment. Call or email with your schedule. Email: amirali@ece.uvic.ca Office Tel: 721-8613 The web page for this class will be at http://www.ece.uvic.ca/~amirali/courses/ceng450.html Text: Computer Architecture: A Quantitative Approach, third edition, by Hennessy and Patterson, Morgan Kaufmann Publishers. Lecture notes will be posted on the course web page in advance.

  3. Course Structure • Lectures: • 1 week on Overview and Introduction (Chap 1) • 2 weeks on ISA Design (Chap 2) • 6 weeks on Processor Design (Chaps 3, 4) • 4 weeks on Memory and I/O (Chap 5) • Reading assignments posted on the web for each week. • NO Homework: Problems will be posted on the web site so you can prepare for exams/quizzes. • Quizzes: 4 in-class quizzes. Dates will be announced in advance. • Note that the above schedule is approximate.

  4. Course Philosophy • The book is to be used as a supplement for the lectures (if a topic or detail is not covered in class, I expect you to read on your own to learn it) • One Project (25%) • Four Quizzes (25%) - will be announced in advance • Final Exam (50%) • IMPORTANT NOTE: You must get a passing grade in all components to pass the course. Failing any of the three components will result in failing the course.

  5. Project • Labs start the week of Jan 21st. • Processor design.

  6. Topics • Computer Architecture? • History • Technology • Moore’s law & Virtuous circle • Language evolution • Components of a computer • Instruction set architecture (ISA)

  7. How many “computers” do you have? • Three different computing markets: 1. Desktop Computing: low-end systems to high-performance workstations. Price: $500 to $5000 2. Servers: e.g., web servers. Should be available and reliable. Availability: keep running even when components fail. Scalability: the ability to grow 3. Embedded computers: hidden computers, e.g., cell phones, washing machines, palmtops, watches… Minimize memory and power. Often not programmable by the end user.

  8. What is “Computer Architecture”? Computer Architecture: Behind the doors! Computer Architecture = Instruction Set Architecture + Machine Organization + Hardware Instruction Set Architecture: Visible to the compiler. RISC vs. CISC. Machine Organization: Importance of the von Neumann design.

  9. ISA • 1950s: Hardwired control, easy to implement, limited resources • 1960s: Microprogramming, more flexibility • 1970s: CISC: • Compilers in their infancy, so the ISA was designed for programmers • Expensive & small memory: highly encoded, multiple-size instructions (e.g., x86 instructions range from 1 to 17 bytes), ISA approximates high-level languages • 1980s: RISC: • Better compilers, cheaper memory, “elemental instructions” • 2000s: More resources, post-RISC? CISC: “walk across the room without stepping on the dog”; RISC: “walk, walk, walk, step over dog, walk, walk” (see the sketch below).
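To make the CISC/RISC contrast concrete, here is a rough sketch (my own illustration, not taken from the slides): a RISC compiler breaks a single C statement into "elemental" load/operate/store steps, while a CISC ISA could encode the same work as one instruction that operates directly on memory. The function and variable names below are made up for the example.

    #include <stdio.h>

    /* Illustration only: v[k] = v[k] + 1 written as RISC-style elemental steps.
       A CISC ISA could express the same work as a single read-modify-write
       instruction that takes a memory operand directly. */
    void increment_element(int v[], int k)
    {
        int *addr = &v[k];    /* compute the effective address of v[k]          */
        int  tmp  = *addr;    /* load the operand from memory into a "register" */
        tmp = tmp + 1;        /* operate only on register values                */
        *addr = tmp;          /* store the result back to memory                */
    }

    int main(void)
    {
        int v[4] = {10, 20, 30, 40};
        increment_element(v, 2);
        printf("v[2] = %d\n", v[2]);   /* prints v[2] = 31 */
        return 0;
    }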

  10. History 1. “Big Iron” Computers: Used vacuum tubes, electromechanical relays and bulk magnetic storage devices. No microprocessors, no stored-program memory. Examples: ENIAC (1945), IBM/Harvard Mark I (1944)

  11. History The von Neumann (stored-program) design: EDSAC (1949), built at Cambridge, was the first practical stored-program computer; program and data share the same memory. Importance: we are still using the same basic design.

  12. Computer Components [Figure: basic computer organization — Input (keyboard, mouse, disk, …) and Output (screen, printer, disk, …) connected to the Processor (CPU: control + datapath) and Memory]

  13. Computer Components • Datapath of a von Neumann machine [Figure: general-purpose registers send two operands (Op1, Op2) over the bus to the ALU input registers; the ALU computes Op1 + Op2 into the ALU output register, which is written back to the registers]
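As a side note (not on the slide), the round trip in that datapath figure can be mimicked in a few lines of C; the struct layout and names below are illustrative assumptions, not the processor design studied later in the course.

    #include <stdio.h>

    /* Toy model of the von Neumann datapath pictured above: two values travel
       from the general-purpose registers over the bus into the ALU input
       registers, the ALU adds them, and the ALU output register is written
       back to the register file. */
    typedef struct {
        int regs[8];              /* general-purpose register file */
        int alu_in1, alu_in2;     /* ALU input registers           */
        int alu_out;              /* ALU output register           */
    } Datapath;

    void datapath_add(Datapath *dp, int dst, int src1, int src2)
    {
        dp->alu_in1 = dp->regs[src1];              /* operands move over the bus */
        dp->alu_in2 = dp->regs[src2];
        dp->alu_out = dp->alu_in1 + dp->alu_in2;   /* ALU computes Op1 + Op2     */
        dp->regs[dst] = dp->alu_out;               /* result written back        */
    }

    int main(void)
    {
        Datapath dp = { .regs = {0, 5, 7} };
        datapath_add(&dp, 3, 1, 2);                /* reg3 = reg1 + reg2 */
        printf("reg3 = %d\n", dp.regs[3]);         /* prints reg3 = 12   */
        return 0;
    }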

  14. Computer Components • Processor (CPU): • Active part of the motherboard • Performs calculations & activates devices • Gets instructions & data from memory • Components are connected via buses • Bus: • Collection of parallel wires • Transmits data, instructions, or control signals • Motherboard: • Holds the physical chips for I/O connections, memory, & the CPU

  15. Computer Components • CPU consists of • Datapath (ALU+ Registers): • Performs arithmetic & logical operations • Control (CU): • Controls the data path, memory, & I/O devices • Sends signals that determine operations of datapath, memory, input & output

  16. Technology Change • Technology changes rapidly • HW • Vacuum tubes: electron-emitting devices • Transistors: on-off switches controlled by electricity • Integrated Circuits (ICs / chips): combine thousands of transistors • Very Large-Scale Integration (VLSI): combines millions of transistors • What next? • SW • Machine language: zeros and ones • Assembly language: mnemonics • High-level languages: English-like • Artificial intelligence languages: functions & logic predicates • Object-oriented programming: objects & operations on objects

  17. Moore’s Prediction

  18. Moore’s Law: • A new generation of memory chips is introduced every 3 years • Each new generation has 4 times as much memory as its predecessor • Computer technology doubles every 1.5 years • Example: DRAM capacity [Figure: DRAM capacity per chip vs. year of introduction, growing from 16 Kbit in 1976 to 64 Mbit by 1996]
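A quick check of those two rules against the chart (my arithmetic, not on the slide): starting from 16 Kbit in 1976, six 3-year generations at 4x each give 16 Kbit x 4^6 = 16 Kbit x 4096 = 64 Mbit around 1994, which matches the end of the plotted range. Note also that 4x every 3 years is exactly the same growth rate as 2x every 1.5 years.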

  19. Technology => dramatic change • Processor • Logic capacity: about 30% per year • Clock rate: about 20% per year • Memory • DRAM capacity: about 60% per year (4x every 3 years) • Memory speed: about 10% per year • Cost per bit: improves about 25% per year • Disk • Capacity: about 60% per year Question: Does everything look OK? (Hint: compare the processor rates with the memory-speed rate.)
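One way to see what is not OK (my arithmetic, using the slide's own rates): over ten years, clock rate grows by about 1.20^10 ≈ 6.2x while memory speed grows by only about 1.10^10 ≈ 2.6x, so the processor-memory speed gap widens by roughly 2.4x per decade. Closing that gap is what the later material on caches is about.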

  20. Software Evolution • Machine language • Assembly language • High-level languages • Subroutine libraries • There is a large gap between what is convenient for computers & what is convenient for humans • Translation/interpretation is needed to bridge the two

  21. Language Evolution

  High-level language program (in C):

      swap (int v[], int k)
      {
          int temp;
          temp = v[k];
          v[k] = v[k+1];
          v[k+1] = temp;
      }

  Assembly language program (for MIPS):

      swap: muli $2, $5, 4
            add  $2, $4, $2
            lw   $15, 0($2)
            lw   $18, 4($2)
            sw   $18, 0($2)
            sw   $15, 4($2)
            jr   $31

  Binary machine language program (for MIPS): the figure shows the same program encoded as rows of 32-bit words of 0s and 1s.

  22. HW - SW Components • Hardware • Memory components: registers, register file, memory, disks • Functional components: adders, multipliers, dividers, comparators, … • Control signals • Software • Data • Simple: characters, integers, floating-point, pointers • Structured: arrays, structures (records) • Instructions: data transfer, arithmetic, shift, control flow, comparison, …

  23. Things You Will Learn • Assembly language introduction/review • How to analyze program performance • How to design processor components • How to enhance processor performance (caches, pipelines, parallel processors, multiprocessors)
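As a preview of the performance-analysis topic (standard material from the text, not spelled out on this slide), the basic equation is: CPU time = instruction count x cycles per instruction (CPI) x clock cycle time. For example, a program that executes 10^9 instructions at an average CPI of 2 on a 1 GHz clock (1 ns cycle) takes 10^9 x 2 x 1 ns = 2 seconds.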

  24. The Processor Chip

  25. Processor Chip Major Blocks • Example: Intel Pentium • Area: 91 mm² • ~3.3 million transistors (~1 million for cache memory) [Figure: die photo labeled with the major blocks — branch control, instruction cache, data cache, integer datapath, floating-point datapath, bus]

  26. Memory • Categories • Volatile memory: loses information when power is switched off • Non-volatile memory: keeps information when power is switched off • Types • Cache: • Volatile • Fast but expensive • Smaller capacity • Placed closer to the processor • Main memory • Volatile • Less expensive • More capacity • Secondary memory • Nonvolatile • Low cost • Very slow • Very large (effectively unlimited) capacity

  27. Input-Output (I/O) • I/O devices are the hardest part of the system to organize • Wide range of speeds • Graphics vs. keyboard • Wide range of requirements • Speed • Standards • Cost . . . • Least amount of research has been done in this area

  28. Our Primary Focus • The processor (datapath and control) • Implemented using millions of transistors • Impossible to understand by looking at each transistor • We need abstraction • Hides lower-level details to offer a simple model at a higher level • Advantages • Allows each level to be studied intensively & thoroughly • Reveals the information that matters at each level • Omits unneeded details • Helps us cope with complexity • Examples of abstraction: • Language hierarchy • Instruction set architecture (ISA)

  29. Instruction Set Architecture (ISA) • Instruction set: • Complete set of instructions used by a machine • ISA: • Abstract interface between the HW and lowest-level SW. It encompasses information needed to write machine-language programs including • Instructions • Memory size • Registers used • . . .

  30. Instruction Set Architecture (ISA) • The ISA is considered part of the SW • Several implementations of the same ISA can exist • Modern ISAs: • 80x86/Pentium/K6, PowerPC, DEC Alpha, MIPS, SPARC, HP • We are going to study MIPS • Advantages: • Different implementations of the same architecture • Easier to change than HW • Standardizes instructions, machine-language bit patterns, etc. • Disadvantage: • Sometimes prevents the use of new innovations

  31. Instruction Set Architecture (ISA) • Instruction Execution Cycle: 1. Fetch instruction from memory 2. Decode instruction (determine its size & action) 3. Fetch operand data 4. Execute instruction & compute results or status 5. Store result in memory 6. Determine next instruction’s address (a toy sketch of this loop follows below)
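A toy sketch of that cycle as a loop in C (my own example with an invented three-field instruction encoding, not the MIPS implementation covered later):

    #include <stdio.h>

    /* Toy illustration of the instruction execution cycle:
       fetch -> decode -> fetch operands -> execute -> store result -> next PC.
       The opcodes and (op, dst, src) encoding are invented for this sketch. */
    enum { OP_LOADI, OP_ADD, OP_HALT };

    typedef struct { int op, dst, src; } Instr;

    int main(void)
    {
        Instr program[] = {          /* "memory" holding the program */
            { OP_LOADI, 0, 5 },      /* r0 = 5       */
            { OP_LOADI, 1, 7 },      /* r1 = 7       */
            { OP_ADD,   0, 1 },      /* r0 = r0 + r1 */
            { OP_HALT,  0, 0 },
        };
        int regs[4] = {0};
        int pc = 0;                  /* address of the next instruction */

        for (;;) {
            Instr ir = program[pc];  /* 1. fetch instruction from memory */
            pc = pc + 1;             /* 6. determine next instruction's address (no branches here) */
            switch (ir.op) {         /* 2. decode: determine the action */
            case OP_LOADI:
                regs[ir.dst] = ir.src;                    /* 4./5. execute & store result */
                break;
            case OP_ADD: {
                int a = regs[ir.dst], b = regs[ir.src];   /* 3. fetch operand data */
                regs[ir.dst] = a + b;                     /* 4./5. execute & store result */
                break;
            }
            case OP_HALT:
                printf("r0 = %d\n", regs[0]);             /* prints r0 = 12 */
                return 0;
            }
        }
    }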

  32. What Should we Learn? • A specific ISA (MIPS) • Performance issues - vocabulary and motivation • Instruction-Level Parallelism • How to Use Pipelining to improve performance • Exploiting Instruction-Level Parallelism w/ Software Approach • Memory: caches and virtual memory • I/O

  33. What is Expected From You? • Read textbook & readings! • Be up-to-date! • Come back with your input & questions for discussion! • Appreciate and participate in teamwork!
