Review Quiz (csci4203/ece4363)
1) Why can a multicycle implementation be better than a single-cycle implementation?
   1) It is less expensive
   2) It is usually faster
   3) Its average CPI is smaller
   4) It allows a faster clock rate
   5) It has a simpler design
   Answer: 1, 2, 4
2) A pipelined implementation can
   1) Increase throughput
   2) Decrease the latency of each operation
   3) Decrease cache misses
   4) Increase TLB misses
   5) Decrease clock cycle time
   Answer: 1, 5
3) Which of the following techniques can reduce the penalty of data dependencies?
   1) Data forwarding
   2) Instruction scheduling
   3) Out-of-order superscalar implementation
   4) Deeper pipelined implementation
   Answer: 1, 2, 3
4) What are the advantages of loop unrolling?
   1) It increases ILP for more effective scheduling
   2) It reduces branch instructions
   3) It decreases instruction cache misses
   4) It reduces memory operations
   5) It reduces compile time
   Answer: 1, 2
5) What are the advantages of a split cache?
   1) It decreases cache miss rate
   2) It doubles cache access bandwidth
   3) It is less expensive
   4) It is easier to design
   Answer: 2
6) Which of the following micro-architectural features are likely to boost the performance of the following loop?

   while (p != NULL) {
       m = p->data;
       p = p->next;
   }

   1) Deep pipelining
   2) Accurate branch prediction
   3) An efficient cache hierarchy
   4) Speculative execution support
   Answer: 2, 3, 4
7) Using the 2-bit branch prediction scheme, what will the branch misprediction rate be for the following loop?

   for (i = 1; i < 10000; i++) {
       for (j = 1; j < 3; j++)
           statements;
   }

   1) About 10%
   2) About 50%
   3) About 66%
   4) About 33%
   5) About 1%
   Answer: 4
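The 33% figure can be checked with a small simulation. The sketch below is not from the slides; it assumes both loops compile to bottom-tested loop-back branches, each branch has its own 2-bit saturating counter, and the counters start in the strongly-taken state. Per outer iteration the hardware then sees three branch executions (inner taken, inner not taken, outer taken) and mispredicts only the inner not-taken one, which is roughly one misprediction in three.

    /* Sketch (not from the slides): 2-bit saturating counters on the
     * loop-back branches of question 7.  Assumptions: bottom-tested
     * branches, one counter per branch, counters start strongly taken. */
    #include <stdio.h>

    static int predict(int c)       { return c >= 2; }   /* 2..3 => predict taken */
    static int update(int c, int t) { return t ? (c < 3 ? c + 1 : 3)
                                               : (c > 0 ? c - 1 : 0); }

    int main(void) {
        int inner = 3, outer = 3;          /* per-branch 2-bit counters */
        long branches = 0, miss = 0;

        for (int i = 1; i < 10000; i++) {
            /* inner loop-back branch: taken after j = 1, not taken after j = 2 */
            int outcomes[2] = { 1, 0 };
            for (int k = 0; k < 2; k++) {
                branches++;
                if (predict(inner) != outcomes[k]) miss++;
                inner = update(inner, outcomes[k]);
            }
            /* outer loop-back branch: taken on every iteration except the last */
            int taken = (i + 1 < 10000);
            branches++;
            if (predict(outer) != taken) miss++;
            outer = update(outer, taken);
        }
        printf("misprediction rate = %.1f%%\n", 100.0 * miss / branches);
        return 0;
    }

Run as written, it reports a rate close to 33.3%, matching answer 4.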
8) You have decided to design a cache hierarchy with two levels of caching, L1 and L2. Which of the following configurations are likely to be used?
   1) L1 write-through and L2 write-back
   2) L1 is unified and L2 is split
   3) L1 has a larger line size than L2
   4) L1 is private and L2 is shared in a CMP
   Answer: 1, 4
9) It takes a long time (millions of cycles) to get the first byte from a disk. What can be done to reduce this cost?
   1) Use larger pages
   2) Increase the size of the TLB
   3) Use two levels of TLB
   4) Disk caching: keep frequently used files in memory
   Answer: 1, 4
10) The page table is large and often space-inefficient. What techniques can be used to deal with this?
   1) Two-level page tables
   2) Hash tables
   3) Linked lists
   4) Sequential search tables
   Answer: 1, 2
11) Which of the following techniques may reduce the cache miss penalty?
   1) Requested word first (or critical word first)
   2) Multi-level caches
   3) Increase the size of main memory without interleaving
   4) Have a faster memory bus
   Answer: 1, 2, 4
12) Which of the following techniques can usually help reduce the total penalty of capacity misses?
   1) Split the cache into two
   2) Increase associativity
   3) Increase line size
   4) Cache prefetching
   Answer: 3, 4
13) Cache performance is more important under which of the following conditions?
   1) When the bus bandwidth is insufficient
   2) When CPI_perfect (the CPI with a perfect cache) is low and the clock rate is high
   3) When CPI_perfect is high and the clock rate is low
   4) When the main memory is not large enough
   Answer: 1, 2
14) Which of the following statements are true for the TLB?
   1) The TLB caches frequently used virtual-to-physical translations
   2) The TLB is usually smaller than the caches
   3) The TLB uses a write-through policy
   4) TLB misses can be handled by either software or hardware
   Answer: 1, 2, 4
15) Which of the following designs will see a greater impact when we move from the 32-bit MIPS architecture to 64-bit MIPS?
   1) Virtual memory support
   2) Data path design
   3) Control path design
   4) Floating point functional unit
   5) Cache design
   Answer: 1, 2, 5
16) Which of the following statements are true for microprogramming?
   1) Microprogramming can be used to implement structured control design
   2) Microprogramming simplifies control design and allows for a faster, more reliable design process
   3) Microprogrammed control yields a faster processor
   4) Microprogramming is used in recent Intel Pentium processors
   Answer: 1, 2, 4
17) In a pipelined implementation, what hazards may often occur?
   1) Control hazards
   2) Data hazards
   3) Floating point exceptions
   4) Structural hazards
   Answer: 1, 2, 4
18) My program has a very high cache miss rate (> 50%). I traced it down to the following function. What type of cache miss is it? Assume a typical cache with a 32 B line size.

   void functionA(float *a, float *b, int n) {
       for (int i = 1; i < n; i++)
           *a++ += *b++;
   }

   1) Capacity miss
   2) Compulsory miss
   3) Conflict miss
   4) Cold miss
   Answer: 3
19) What are the major motivations for virtual memory?
   1) To support multiprogramming
   2) To increase system throughput
   3) To allow efficient and safe sharing
   4) To remove the programming burden of a small physical memory
   Answer: 3, 4
20) Among page faults, TLB misses, branch mispredictions and cache misses, which of the following statements are true?
   1) A page fault is the most expensive
   2) An L1 cache miss may cost less than a branch misprediction
   3) TLB misses usually cost more than L1 misses
   4) A TLB miss will always cause a corresponding L1 miss
   Answer: 1, 2, 3
21) Assume we have a four-line fully associative instruction cache. Which replacement algorithm works best for the following loop?

   for (i = 1; i < n; i++) {
       line1; line2; line3; line4; line5;
   }

   1) LRU
   2) Random
   3) MRU
   4) NRU
   5) FIFO
   Answer: 3
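To see why MRU wins here (a sketch, not from the slides): the loop touches five cache lines but the cache holds only four, so LRU always evicts exactly the line that will be needed next and misses on every access, while MRU keeps most of the working set resident. The simulation below assumes a simple timestamp-based recency order.

    /* Sketch (not from the slides): replay the 5-line instruction stream of
     * question 21 against a 4-entry fully associative cache and compare
     * LRU with MRU replacement.  age[] holds a recency timestamp per way. */
    #include <stdio.h>

    #define WAYS 4

    /* evict_mru = 0 selects LRU replacement, 1 selects MRU replacement */
    static int count_misses(int evict_mru, int passes) {
        int line[WAYS], age[WAYS], used = 0, misses = 0, now = 0;

        for (int p = 0; p < passes; p++) {
            for (int ref = 1; ref <= 5; ref++) {     /* line1 .. line5 */
                now++;
                int hit = -1;
                for (int w = 0; w < used; w++)
                    if (line[w] == ref) hit = w;

                if (hit < 0) {                       /* miss: choose a victim */
                    misses++;
                    if (used < WAYS) {
                        hit = used++;                /* fill an empty way first */
                    } else {
                        hit = 0;
                        for (int w = 1; w < WAYS; w++)
                            if (evict_mru ? age[w] > age[hit]
                                          : age[w] < age[hit])
                                hit = w;
                    }
                    line[hit] = ref;
                }
                age[hit] = now;                      /* mark most recently used */
            }
        }
        return misses;
    }

    int main(void) {
        int passes = 1000;
        printf("LRU misses: %d of %d accesses\n", count_misses(0, passes), 5 * passes);
        printf("MRU misses: %d of %d accesses\n", count_misses(1, passes), 5 * passes);
        return 0;
    }

Running it shows LRU missing on all five accesses of every pass, while MRU settles into only one or two misses per pass.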
22) Which of the following techniques can reduce control hazards in a pipelined processor?
   1) Branch prediction
   2) Loop unrolling
   3) Procedure inlining
   4) Predicated execution
   Answer: 1, 2, 3, 4
23) Which techniques can be used to reduce conflict misses?
   1) Increase associativity
   2) Add a victim cache
   3) Use larger lines
   4) Use shared caches
   Answer: 1, 2
24) Which of the following statements are true for RAID (Redundant Array of Inexpensive Disks)?
   1) RAID0 has no redundancy
   2) RAID1 is the most expensive
   3) RAID5 has the best reliability
   4) RAID4 is better than RAID3 because it supports efficient small reads and writes
   Answer: 1, 2, 4
25) Adding new features to a machine may require changes to the ISA. Which of the following features can be added without changing the ISA?
   1) Predicated instructions
   2) Software-controlled data speculation
   3) Static branch prediction
   4) Software-controlled cache prefetching
   Answer: 3, 4
26) A branch predictor is similar to a cache in many respects. Which of the following cache parameters can be avoided in a simple branch predictor?
   1) Associativity
   2) Line size
   3) Replacement algorithms
   4) Write policy
   5) Tag size
   Answer: 1, 2, 3, 4, 5
27) Assume a cache size of 32 KB and a line size of 32 B. How many bits are used for the index and the tag in a 4-way set associative cache? Assume 16 GB of physical memory.
   1) Tag = 19, Index = 8
   2) Tag = 20, Index = 10
   3) Tag = 21, Index = 8
   4) Tag = 17, Index = 10
   Answer: 3
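As a cross-check of answer 3 (a sketch, using only the numbers given in the question): 32 KB / 32 B lines / 4 ways = 256 sets, so 8 index bits; a 32 B line needs 5 offset bits; 16 GB of physical memory means 34-bit physical addresses, leaving 34 - 8 - 5 = 21 tag bits.

    /* Sketch: compute the index and tag widths for question 27 from the
     * given parameters (32 KB 4-way cache, 32 B lines, 16 GB physical memory). */
    #include <stdio.h>

    /* integer log2 for exact powers of two */
    static int ilog2(long long x) { int b = 0; while (x > 1) { x >>= 1; b++; } return b; }

    int main(void) {
        long long cache_bytes = 32 * 1024;   /* 32 KB cache            */
        long long line_bytes  = 32;          /* 32 B lines             */
        long long ways        = 4;           /* 4-way set associative  */
        long long phys_bytes  = 16LL << 30;  /* 16 GB physical memory  */

        int offset_bits = ilog2(line_bytes);                              /* 5  */
        int index_bits  = ilog2(cache_bytes / (line_bytes * ways));       /* 8  */
        int tag_bits    = ilog2(phys_bytes) - index_bits - offset_bits;   /* 21 */

        printf("offset = %d, index = %d, tag = %d bits\n",
               offset_bits, index_bits, tag_bits);
        return 0;
    }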