
Presentation Transcript


  1. Advanced Computer Architecture, Lecture 6
  By Rohit Khokher, Department of Computer Science, Sharda University, Greater Noida, India
  C SINGH, JUNE 7-8, 2010, Advanced Computer Architecture, UNIT 1, IWW 2010, ISTANBUL, TURKEY

  2. High Performance Architectures
  • Who needs high-performance systems?
  • How do you achieve high performance?
  • How do you analyze or evaluate performance?

  3. Outline of my lecture
  • Classification
  • ILP Architectures
  • Data Parallel Architectures
  • Process-level Parallel Architectures
  • Issues in parallel architectures
  • Cache coherence problem
  • Interconnection networks

  4. Performance of Computer
  Computer performance is characterized by the amount of useful work accomplished by a computer system compared to the time and resources used. Depending on the context, good computer performance may involve one or more of the following:
  • Short response time for a given piece of work
  • High throughput (rate of processing work)
  • Low utilization of computing resource(s)
  • High availability of the computing system or application
  • Fast (or highly compact) data compression and decompression
  • High bandwidth / short data transmission time

  5. Performance of Parallel Computers
  • Speedup
  • Amdahl's Law
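
The slide names the two metrics but not their formulas. As a brief sketch (the function names and the 90%-parallel example are illustrative, not from the slides): speedup is the ratio of serial to parallel execution time, and Amdahl's Law bounds the speedup achievable when only a fraction p of the work can be parallelized across n processors.

#include <stdio.h>

/* Speedup: ratio of serial execution time to parallel execution time. */
double speedup(double t_serial, double t_parallel) {
    return t_serial / t_parallel;
}

/* Amdahl's Law: upper bound on speedup when a fraction p of the
 * program is parallelizable and runs on n processors. */
double amdahl_speedup(double p, int n) {
    return 1.0 / ((1.0 - p) + p / n);
}

int main(void) {
    /* Illustrative case: 90% of the work parallelizes; even with many
     * processors the speedup is capped by the 10% serial part (limit 10x). */
    for (int n = 1; n <= 1024; n *= 4)
        printf("n = %4d  speedup <= %.2f\n", n, amdahl_speedup(0.90, n));
    return 0;
}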

  6. Performance Metrics
  Computer performance metrics include:
  • availability
  • response time
  • channel capacity
  • latency
  • completion time
  • service time
  • bandwidth
  • throughput
  • relative efficiency
  • scalability
  • performance per watt
  • compression ratio
  • instruction path length
  • speedup

  7. Thread-Level Parallelization
  In the threads model of parallel programming, a single process can have multiple, concurrent execution paths. Perhaps the simplest analogy for threads is a single program that includes a number of subroutines:

  8. The main program a.out is scheduled to run by the native operating system. a.out loads and acquires all of the necessary system and user resources to run.
  • a.out performs some serial work and then creates a number of tasks (threads) that can be scheduled and run by the operating system concurrently.
  • Each thread has local data but also shares all of the resources of a.out. This saves the overhead of replicating a program's resources for each thread. Each thread also benefits from a global memory view because it shares the memory space of a.out.
  • A thread's work may best be described as a subroutine within the main program. Any thread can execute any subroutine at the same time as other threads.
  • Threads communicate with each other through global memory (updating address locations). This requires synchronization constructs to ensure that no more than one thread updates the same global address at any time (see the sketch after this list).
  • Threads can come and go, but a.out remains present to provide the necessary shared resources until the application has completed.
  • Threads are commonly associated with shared-memory architectures and operating systems.
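
As a minimal, illustrative sketch of this model (the worker function, the shared counter, and the thread count are assumptions for the example, not from the slides), the following Pthreads program creates several threads that share a global variable and use a mutex so that no two threads update the same global address at once. Compile with -pthread.

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define INCREMENTS  100000

/* Shared (global) data: visible to every thread of the process,
 * just as each thread shares the resources of a.out. */
static long counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

/* The "subroutine" that each thread executes concurrently. */
static void *worker(void *arg) {
    long id = (long)arg;                 /* thread-local data */
    for (int i = 0; i < INCREMENTS; i++) {
        /* Synchronization construct: only one thread may update
         * the shared address at a time. */
        pthread_mutex_lock(&counter_lock);
        counter++;
        pthread_mutex_unlock(&counter_lock);
    }
    printf("thread %ld done\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];

    /* The main program creates tasks (threads) that the OS schedules. */
    for (long t = 0; t < NUM_THREADS; t++)
        pthread_create(&threads[t], NULL, worker, (void *)t);

    /* The main program remains present until all threads complete. */
    for (int t = 0; t < NUM_THREADS; t++)
        pthread_join(threads[t], NULL);

    printf("final counter = %ld (expected %d)\n",
           counter, NUM_THREADS * INCREMENTS);
    return 0;
}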

  9. Implementation
  From a programming perspective, threads implementations commonly comprise:
  • A library of subroutines that are called from within parallel source code
  • A set of compiler directives embedded in either serial or parallel source code
  In both cases, the programmer is responsible for determining all parallelism.
  Unrelated standardization efforts have resulted in two very different implementations of threads: POSIX Threads and OpenMP (a minimal OpenMP sketch follows this slide).
  POSIX Threads
  • Library based; requires parallel coding
  • Specified by the IEEE POSIX 1003.1c standard (1995)
  • C language only
  • Commonly referred to as Pthreads
  • Most hardware vendors now offer Pthreads in addition to their proprietary threads implementations.
  ASSIGNMENT 1: A case study of POSIX threads
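
For contrast with the library-based Pthreads approach above, here is a minimal OpenMP sketch in C (the loop bound and variable names are illustrative assumptions): a single compiler directive parallelizes the loop, and the reduction clause handles synchronization of the shared sum. Compile with -fopenmp.

#include <omp.h>
#include <stdio.h>

int main(void) {
    long counter = 0;

    /* Compiler directive: the enclosed loop is split across threads.
     * The reduction clause gives each thread a private partial sum and
     * combines them safely at the end of the parallel region. */
    #pragma omp parallel for reduction(+:counter)
    for (int i = 0; i < 400000; i++)
        counter++;

    printf("threads available: %d, counter = %ld\n",
           omp_get_max_threads(), counter);
    return 0;
}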
