
CGS 3863 Operating Systems Spring 2013

Presentation Transcript


  1. CGS 3863 Operating Systems Spring 2013 Dan C. Marinescu Office: HEC 439 B Office hours: M-Wd 11:30 AM – 1 PM

  2. Last time: discussion of the paper “Architecture of Complexity” by H. A. Simon • Modularity, abstractions, layering, hierarchy. Today: • Complexity of computer systems • Resource sharing and complexity • Bandwidth and latency • Iteration • Names and the basic abstractions • Memory; critical properties of memory systems: read/write coherence, before-or-after atomicity • RAID. Next time: discussion of the paper “Hints for Computer System Design” by Butler Lampson.

  3. Names and fundamental abstractions • The fundamental abstractions: • Storage → memory, disk, data structures, file systems, disk arrays • Interpreters → CPU, programming languages (e.g., the Java VM) • Communication → wire, Ethernet • All of them rely on names. • Naming can be: • Flat • Hierarchical

  4. Computers: a distinct species of complex systems • The complexity of computer systems is not limited by the laws of physics → only distant bounds on composition • Digital systems are noise-free • The hardware is controlled by software • The rate of change is unprecedented • The cost of digital hardware has dropped on average 30% per year for the past 35 years (a quick check of the cumulative effect follows below)
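
A back-of-the-envelope check of that cumulative effect; only the 30%-per-year and 35-year figures come from the slide, the rest is arithmetic:

```c
#include <stdio.h>
#include <math.h>

/* Compound a 30% annual price drop over 35 years: each year the
   hardware retains 70% of the previous year's cost. */
int main(void) {
    double factor = pow(0.70, 35);
    printf("relative cost after 35 years: %.2e\n", factor);      /* ~3.8e-6 */
    printf("i.e., roughly 1/%.0f of the original cost\n", 1.0 / factor);
    return 0;
}
```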

  5. Analog, digital, and hybrid systems • Analog systems: • the noise from individual components accumulates, and • the number of components is limited • Digital systems: • are noise-free • the number of components is not limited • regeneration → restoration of digital signal levels • static discipline → the range of analog values a device accepts for each input digital value must be wider than the range of its analog output values (numeric illustration below) • digital components can still fail, but big mistakes are easier to detect than small ones! • Hybrid systems → e.g., quantum computers and quantum communication systems
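
A minimal numeric illustration of the static discipline, assuming hypothetical TTL-like voltage thresholds (the specific numbers are not from the slide):

```c
#include <stdio.h>

/* The output ranges a device produces must sit strictly inside the
   input ranges the next device accepts; the difference between the
   two thresholds is the noise margin that regeneration provides. */
int main(void) {
    double V_OL = 0.4, V_IL = 0.8;   /* output-low max, input-low max   */
    double V_OH = 2.4, V_IH = 2.0;   /* output-high min, input-high min */
    printf("low  noise margin: %.1f V\n", V_IL - V_OL);   /* 0.4 V */
    printf("high noise margin: %.1f V\n", V_OH - V_IH);   /* 0.4 V */
    return 0;
}
```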

  6. Computers are controlled by software • Composition of hardware is limited by the laws of physics. • Composition of software is not physically constrained; • software packages of 10^7 lines of code exist • Abstractions hide the implementation beneath module interfaces and allow • the creation of complex software • the modification of individual modules • Abstractions can be leaky; for example, the representation of integers and floating-point numbers (see the sketch below).
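
A minimal sketch of the two leaks mentioned above; the specific values are illustrative:

```c
#include <stdio.h>
#include <limits.h>

/* Two classic leaks in the "number" abstraction: fixed-width integers
   wrap around, and binary floating point cannot represent 0.1 exactly. */
int main(void) {
    int n = INT_MAX;
    printf("INT_MAX + 1 = %d\n", n + 1);    /* signed overflow: undefined
                                               behavior in C, typically wraps
                                               to INT_MIN on real hardware */
    double sum = 0.1 + 0.2;
    printf("0.1 + 0.2 == 0.3 ? %s (sum = %.17f)\n",
           sum == 0.3 ? "yes" : "no", sum); /* prints "no" */
    return 0;
}
```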

  7. Exponential growth of computers • Unprecedented: • when a system is ready to be released it may already be obsolete. • when one of the parameters of a system changes by a factor of 2, other components must be drastically altered due to incommensurate scaling; when it changes by a factor of 10, the system must be redesigned (e.g., to re-balance CPU, memory, and I/O bandwidth). • it does not give developers pause • to learn lessons from existing systems • to find and correct all errors • it negatively affects “human engineering” → the ability to build reliable and user-friendly systems • the legal and social frameworks are not ready

  8. Coping with the complexity of computer systems • Modularity, abstraction, layering, and hierarchy are necessary but not sufficient. • An additional technique → iteration: • design increasingly complex functionality into the system • test the system at each stage of the iteration to convince yourself that the design is sound • it is easier to make changes during the design process

  9. Iteration – design principles • Make it easy to change: • the simplest version must accommodate all changes required by successive versions • do not deviate from the original design rationale • think carefully about modularity → it is very hard to change later. • Take small steps; rebuild the system every day to discover design flaws and errors. Ask others to test it. • Don’t rush to implementation. Think hard before starting to program. • Use feedback judiciously: • use alpha and beta versions • do not grow overconfident from an early success • Study failures → understand that complex systems fail for complex reasons.

  10. Curbing complexity • In the absence of limiting physical laws, complexity must be curbed by good judgment. This is easier said than done because: • designers are tempted to add features the previous generation lacked • competitors have already incorporated the new features • the features seem easy to implement • the technology has improved • human behavior: arrogance, pride, overconfidence…

  11. Critical elements of the information revolution! [figure slide]

  12. [figure slide]

  13. The relation between homo sapiens and computers • The feelings of homo sapiens: • hate • frustration • lack of understanding • The operating system: • a program to “domesticate” the computer • transforms a “bare machine” into a “user machine” • controls and facilitates access to computing resources; optimizes the use of resources • The relation went through several stages: • many-to-one • one-to-one • many-to-many • peer-to-peer

  14. Resource sharing and complexity • A main function of the OS is resource sharing. • Sharing computer resources went through several stages with different levels of complexity: • many-to-one • one-to-one • many-to-many • peer-to-peer

  15. [figure slide]

  16. [figure slide]

  17. Names and the three basic abstractions • Memory → stores named objects • WRITE(name, value); value ← READ(name) • file system example: /dcm/classes/Fall09/Lectures/Lecture5.ppt • Interpreters → manipulate named objects • machine instructions: ADD R1,R2 • modules and variables: call sort(table) • Communication links → connect named objects • the HTTP protocol used by the Web and file systems: Host: boticelli.cs.ucf.edu; put /dcm/classes/Fall09/Lectures/Lecture5.ppt; get /dcm/classes/Fall09/Lectures/Lecture5.ppt • (a sketch of the memory abstraction follows below)
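
A minimal sketch of the memory abstraction named on the slide, WRITE(name, value) and value ← READ(name); the fixed-size table and linear search are illustrative choices, not the textbook's design:

```c
#include <stdio.h>
#include <string.h>

#define TABLE_SIZE 16

/* A memory is a set of bindings from names to values. */
struct cell { char name[32]; int value; int used; };
static struct cell table[TABLE_SIZE];

void WRITE(const char *name, int value) {
    for (int i = 0; i < TABLE_SIZE; i++)          /* overwrite existing name */
        if (table[i].used && strcmp(table[i].name, name) == 0) {
            table[i].value = value;
            return;
        }
    for (int i = 0; i < TABLE_SIZE; i++)          /* else bind a new name */
        if (!table[i].used) {
            strncpy(table[i].name, name, sizeof table[i].name - 1);
            table[i].value = value;
            table[i].used = 1;
            return;
        }
}

int READ(const char *name) {
    for (int i = 0; i < TABLE_SIZE; i++)
        if (table[i].used && strcmp(table[i].name, name) == 0)
            return table[i].value;
    return -1;                                    /* name not bound */
}

int main(void) {
    WRITE("x", 42);
    printf("READ(x) = %d\n", READ("x"));  /* read/write coherence: returns 42 */
    return 0;
}
```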

  18. Latency and bandwidth • Important concepts for physical characterization. • They apply to all three abstractions. • Informally: • bandwidth → the number of operations per second • latency → the time it takes to get there • The bandwidths of the CPU, memory, and I/O subsystems must be balanced (see the sketch below).
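
A minimal sketch of why both numbers matter: total transfer time = latency + size / bandwidth. The link parameters below are assumptions, not from the slide:

```c
#include <stdio.h>

int main(void) {
    double latency_s = 50e-6;            /* 50 us to "get there"        */
    double bandwidth = 1e9;              /* 1 GB/s sustained rate       */
    double small = 512, large = 100e6;   /* transfer sizes in bytes     */

    printf("512 B  transfer: %.1f us (latency dominates)\n",
           (latency_s + small / bandwidth) * 1e6);
    printf("100 MB transfer: %.1f ms (bandwidth dominates)\n",
           (latency_s + large / bandwidth) * 1e3);
    return 0;
}
```

For small transfers latency dominates; for large ones bandwidth does, which is why a balanced system must respect both.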

  19. [figure slide]

  20. Memory • Hardware memory: • devices • RAM (Random Access Memory) chips • flash memory → non-volatile memory that can be erased and reprogrammed • magnetic tape • magnetic disk • CD and DVD • systems • RAID • file systems • DBMS (Database Management Systems)

  21. Attributes of the storage medium/system • Durability → how long it remembers the data • Stability → whether or not the data changes while it is stored • Persistence → the property of a data storage system that it keeps trying to preserve the data

  22. Critical properties of a storage medium/system • Read/write coherence → the result of a READ of a memory cell should be the same as the most recent WRITE to that cell. • Before-or-after atomicity → the result of every READ or WRITE is as if that READ or WRITE occurred either completely before or completely after any other READ or WRITE (see the sketch below).
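
A minimal sketch of before-or-after atomicity using a POSIX mutex; the shared counter and iteration counts are illustrative:

```c
#include <stdio.h>
#include <pthread.h>

/* With the lock, each read-modify-write of the cell behaves as if it
   happened entirely before or entirely after any other. */
static long cell = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++) {
        pthread_mutex_lock(&lock);
        cell = cell + 1;              /* READ then WRITE, as one atomic action */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("cell = %ld (always 2000000 with the lock)\n", cell);
    return 0;
}
```

Without the lock, the two read-modify-write sequences can interleave and updates are lost, which is exactly the violation the slide's definition rules out.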

  23. [figure slide]

  24. Why is it hard to guarantee the critical properties? • Concurrency → multiple threads could READ/WRITE the same cell • Remote storage → the delay to reach the physical storage may not guarantee FIFO operation • Optimizations → data may be buffered to increase I/O efficiency • The cell size may differ from the data size → data may be written to multiple cells • Replicated storage → it is difficult to maintain consistency.

  25. Access type; access time • Sequential-access devices: • tapes • CD/DVD • Random-access devices: • disk; its access time is the sum of • seek time • search time (rotational latency) • read/write (transfer) time • RAM • (a worked access-time estimate follows below)
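
A rough worked estimate of a random disk access as seek + rotational delay + transfer; the drive parameters are assumptions, not from the slide:

```c
#include <stdio.h>

int main(void) {
    double seek_ms = 9.0;                      /* average seek time           */
    double rpm     = 7200.0;
    double rot_ms  = 0.5 * 60000.0 / rpm;      /* average search: half a
                                                  rotation, ~4.17 ms          */
    double xfer_ms = 4096.0 / 150e6 * 1000.0;  /* 4 KB block at 150 MB/s      */

    printf("average access time: %.2f ms (transfer alone: %.3f ms)\n",
           seek_ms + rot_ms + xfer_ms, xfer_ms);
    return 0;
}
```

The mechanical components dominate, which is why random access to disk is orders of magnitude slower than to RAM.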

  26. Physical memory organization • RAM → a two-dimensional array; to select a flip-flop, provide its x and y coordinates (see the sketch below). • Tapes → blocks of a given length separated by gaps (a special combination of bits). • Disk: • multiple platters • a cylinder corresponds to a particular position of the moving arm • track → a circular pattern of bits on a given platter and cylinder • record → multiple records per track
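
A minimal sketch of the x/y selection the slide describes: a linear address split into a row and a column; the 1024-column geometry is an assumption:

```c
#include <stdio.h>

#define COLS 1024

int main(void) {
    unsigned addr = 1234567;
    unsigned x = addr / COLS;      /* row (x) select    */
    unsigned y = addr % COLS;      /* column (y) select */
    printf("address %u -> row %u, column %u\n", addr, x, y);
    return 0;
}
```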

  27. Names and physical addresses (Figure 2.2 from the textbook) • Location-addressed memory → the hardware maps the physical coordinates to consecutive integers, the addresses. • Associative memory → unrestricted mapping; the hardware does not impose any constraints on the mapping of the physical coordinates.

  28. RAID – Redundant Array of Inexpensive Disks • The abstraction put to work to increase performance and durability. • RAID 0 → allows concurrent reading and writing; increases performance but does not improve reliability (see the striping sketch below). • RAID 1 → increases durability by replicating each block of data on multiple disks (mirroring). • RAID 2 → disks are synchronized and striped in very small stripes, often single bytes/words; error correction is calculated across corresponding bits and stored on multiple parity disks (Hamming codes).
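
A minimal sketch of RAID 0 striping: logical block b maps to disk b mod N at offset b / N, so N consecutive blocks can be read or written in parallel; the 4-disk array is an assumption:

```c
#include <stdio.h>

#define NDISKS 4

int main(void) {
    for (unsigned block = 0; block < 8; block++)
        printf("logical block %u -> disk %u, offset %u\n",
               block, block % NDISKS, block / NDISKS);
    return 0;
}
```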

  29. [figure slide]

  30. RAID (cont’d) • RAID 3 → striped set with dedicated parity, also called bit-interleaved or byte-level parity. • RAID 4 → improves reliability by adding error correction (parity).

  31. RAID (cont’d) • RAID 5 → striped disks with parity; combines three or more disks to protect data against the loss of any one disk. • The storage capacity of the array is reduced by one disk. • RAID 6 → striped disks with dual parity; combines four or more disks to protect against the loss of any two disks. • Makes larger RAID groups more practical. • Large-capacity drives lengthen the time needed to recover from the failure of a single drive. Single-parity RAID levels are vulnerable to data loss until the failed drive is rebuilt: the larger the drive, the longer the rebuild takes. Dual parity gives time to rebuild the array without the data being at risk if a (single) additional drive fails before the rebuild is complete. • (A parity-reconstruction sketch follows below.)
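
A minimal sketch of single-parity reconstruction as used by RAID 5: the parity block is the XOR of the data blocks, so any one lost block is the XOR of the survivors. The byte-sized blocks are an illustrative simplification:

```c
#include <stdio.h>

int main(void) {
    unsigned char d0 = 0x5A, d1 = 0xC3, d2 = 0x0F;
    unsigned char parity = d0 ^ d1 ^ d2;          /* stored on the parity block */

    /* Suppose the disk holding d1 fails: XOR the survivors to rebuild it. */
    unsigned char rebuilt_d1 = d0 ^ d2 ^ parity;
    printf("d1 = 0x%02X, rebuilt = 0x%02X\n", d1, rebuilt_d1);
    return 0;
}
```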

  32. Figure 2.3 from the textbook [figure slide]
