
Networking and the Internet (2)



Presentation Transcript


  1. Networking and the Internet (2) Useful textbook: Coope, S. et al. (2002) Computer Systems, McGraw-Hill • Last Week: • Why does Networking matter? • Some thoughts about e-Business • Hardware foundation – Computer architecture • Week 2 Focus • Computer architecture – working through operations • Software – Operating Systems, File storage • Operating system foundations • Interrupts, concurrency and multi-programming • Scheduling and Dispatching • Time-sharing and Online systems • Graphical Operating Systems, including Windows

  2. Computer Architecture • Processor executes instructions from memory, and works on data in memory • Other data flows through the bus [Diagram: Processor and Memory joined by a Bus, with keyboard Input (Data), Output (Information), Disk Storage and other long-term storage attached]

  3. Computer Processor & Machine code [Diagram: the Central Processor Unit contains the Instruction Control Unit (starts at 310), an Instruction Register, Registers 1 and 2, a Clock and the Arithmetic and Logic Unit, connected to Memory] • Opcodes: 1 = Load, 2 = Store, 3 = Add, 4 = Subtract, 5 = Multiply • Program in memory: 310: 1 1 600, 314: 1 2 602, 318: 5 1 2, 31A: 2 1 604, ... • Data in memory: 600: 4, 602: 6, 604: 4660, 606: 4608, 608: 4660, 60A: -14
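The program on this slide loads the values at 600 and 602 into registers, multiplies them, and stores the result back at 604. A minimal Python sketch of that fetch-decode-execute cycle is shown below; the opcode numbers and addresses come from the slide, while the dictionary-based memory and register layout is purely an illustrative assumption.

```python
# A minimal sketch of the slide's toy machine: an instruction counter that
# starts at 310 (hex), two registers, and a memory holding both the program
# and its data. Opcode meanings follow the slide's table.
memory = {
    0x310: (1, 1, 0x600),   # Load register 1 from address 600
    0x314: (1, 2, 0x602),   # Load register 2 from address 602
    0x318: (5, 1, 2),       # Multiply register 1 by register 2
    0x31A: (2, 1, 0x604),   # Store register 1 to address 604
    0x600: 4,               # data
    0x602: 6,
    0x604: 0,               # result will be written here by the Store
}
registers = {1: 0, 2: 0}

for instruction_counter in (0x310, 0x314, 0x318, 0x31A):
    opcode, a, b = memory[instruction_counter]     # fetch and decode
    if opcode == 1:                                # 1 = Load
        registers[a] = memory[b]
    elif opcode == 2:                              # 2 = Store
        memory[b] = registers[a]
    elif opcode == 5:                              # 5 = Multiply
        registers[a] *= registers[b]
    # (3 = Add and 4 = Subtract would follow the same pattern)

print(memory[0x604])                               # 24: the product of 4 and 6
```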

  4. Operating Systems – How we control this hardware

  5. Operating Systems • Though the processor is simple and serial, we want to do more complex things, often several at once • An operating system is a program that provides the building blocks of complex systems • Some simply encapsulate function to save every application from having to include a copy • Others handle specific hardware, presenting a generic interface that hides behaviour unique to that hardware • Sometimes the interface is so generic that it has little to do with the hardware – file structures are the best example • Modern operating systems make it look as if the computer is doing several things at the same time • Our operating system is Windows XP

  6. Concurrent Operations • To give the appearance of doing several things at once: • OS must stay ready to accept work: • keystrokes, mouse clicks, signals from modem, printer ready to receive another buffer of data • These can interrupt a computation already being run • It then does a bit of the required work, • then goes back to an interrupted task, and so on. • We say the machine is doing things “concurrently” – they’re not simultaneous, but they look it! • The key is switching the CPU between logical processes • In theory, you could go round “polling” – high overhead • In practice, concurrency depends on hardware interrupts
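As a rough illustration of what "concurrent, not simultaneous" means, the sketch below interleaves two logical tasks written as Python generators. Real concurrency relies on hardware interrupts rather than voluntary yields, so treat each yield point purely as a stand-in for the moment an interrupt would take the CPU away.

```python
# Two logical tasks; each yield marks a point where, on real hardware,
# an interrupt would let the operating system take the CPU away.
def long_computation():
    total = 0
    for i in range(3):
        total += i
        yield f"computation step {i} (running total {total})"

def keyboard_handler():
    for key in "abc":
        yield f"handled keystroke {key!r}"

tasks = [long_computation(), keyboard_handler()]
while tasks:
    task = tasks.pop(0)          # pick the next runnable task
    try:
        print(next(task))        # let it run until it gives up the CPU
        tasks.append(task)       # then put it to the back of the queue
    except StopIteration:
        pass                     # task has finished; drop it
# The output interleaves the two tasks: concurrent, but never simultaneous.
```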

  7. Essentials of an operating system • Controls the hardware • Lets applications be less hardware-specific by abstracting operations (who cares how big a track is!) • Reduces havoc that can be done by rogue programs by restricting use of risky instructions (such as those giving direct access to hardware) • Allows processes to update files with integrity • Encapsulates commonly-used functions • Manages resources, including storage • Supports concurrent operations • Success judged by performance, in terms of Availability, Reliability, Response-time, Throughput

  8. In the beginning... • 21 June 1948: First stored-program computer (Manchester University “Baby”) ran its first program • Program keyed directly into memory • Results displayed as dots on a CRT • When program finished, it stopped • Next machines used tape or card for I/O • Monitors developed in 1950s • encapsulate standard functions (for example, Input/Output) • automate running of programs one after another • still one program at a time • then added SPOOLING to overlap input and output • (Too much investment to let it sit around waiting for humans to press buttons)

  9. True Operating Systems • Introduced with Ferranti Atlas and IBM System/360 • Applied concurrency to user work as well as to SPOOL • Potential to run complementary jobs alongside each other • OS became a resource manager • Sharing processor resource between jobs • Providing separate memory for use by each job • Controlling allocation of tapes and other hardware • Scheduling jobs to fit resources available • Used interrupts to switch control between processes • Need to be sure we understand how they work • Foundation for on-line systems with terminals

  10. On-line Computing • Terminal attached to mainframe computer • Operating system “time-shared” processor among users • Developed initially with slow lines and typewriter-like terminals or teleprinters, which sent a character at a time • It was expensive to read every keystroke • so switched to using “Block mode”: • User types into terminal buffer, presses Enter to transmit • Most transaction processing is done that way, even today • Weaknesses of mainframe + terminal are • poor bandwidth: can’t track mouse, write graphics • can’t take shortcuts based on every keystroke

  11. Basic Concepts Covered So Far • Faking concurrency • multiprogramming: appearing to do several things at once • processes and threads • For multiple users – or one user with several balls in the air • Interrupts • There’s more to come on: • I/O buffering • Spooling (offline, online) • Multiprocessing • Using multiple processors to get more power • symmetrical, clustering or master-slave

  12. What Operating Systems Do • OS/360 and Batch Scheduling • How Online Computing is different

  13. OS/360 • “Betting the Business” for IBM in late 1960s • Environment: • Batch processing • Real memory only (Ferranti Atlas had paging, but the idea hadn’t yet crossed the Atlantic) • Physical cards used for job entry • Line-printers used for output (up to 1000 lines/min) • Tapes and disks expensive but heavily used • Concepts • Job Control Language (JCL) to describe each job • Each program ran in a Job Step

  14. Structure of a Job • Must run steps in sequence • Don’t run step if previous one failed • Must have input available at start of step • Need somewhere to write results • Usually generate spool output • Normally expects the Program run in each step to be on disk
  Whole Job, step by step: Step 1, e.g. compile program – needs card input + workfile; Step 2, e.g. link-edit program – needs object deck + workfile; Step 3, e.g. run program – needs module + input data; Step 4, e.g. sort output – needs output data; Step 5, e.g. run utility on result – needs sort output + target file
  Example JCL:
  //ERIC JOB
  //COMP EXEC PGM=PLIOPT
  //SYSIN DD *
  //SYSPRINT DD *
  //SYSOUT DD DSN="ERIC.PLI.OBJ"
  //LINK EXEC PGM=LKEDIT
  /* and so on

  15. Scheduling Jobs • Back in the days of monitors, scheduling was easy • When a job finishes, load the next one and run it • If I/O is spooled, next one will be loaded from spool file that contains images of cards read in earlier.. • ..and CPU time needs to be shared between the running job and spool I/O (which uses predictably little CPU) • Gets harder when more than one real job can run • Have to match resource requirements with availability • Need to be concerned with sequence of jobs • Optimization needs awareness of job type (processor-heavy, I/O-heavy, etc.) • Specified with complex Job Control Language

  16. OS/360 Batch Scheduling [Diagram: memory split between the Operating System, card spool, print spool and job regions (Job 1 to Job 5), with a wasted fragment of storage; a job queue feeds the job initiators] • Memory was critical resource (>$1000 per kilobyte) • Divided into partitions or regions, each with a job initiator • OS reads each job in and places it on a (prioritized) queue • When job completes in region, initiator looks on queue for more work • Must fit in available memory • Necessary resources must be available • Otherwise keep looking down the queue
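A small sketch of the initiator's job-picking logic described above: when a region frees up, walk the prioritized queue and start the first job that fits the available memory. The job names and sizes are invented for illustration; real OS/360 initiators were of course far more involved.

```python
from collections import namedtuple

Job = namedtuple("Job", "name memory_kb priority")

# Prioritised job queue (lowest number = highest priority).
job_queue = sorted(
    [Job("PAYROLL", 200, 1), Job("REPORT", 600, 2), Job("BACKUP", 100, 3)],
    key=lambda job: job.priority,
)

def pick_job(free_region_kb):
    """When a region frees up, take the first queued job that fits it."""
    for job in job_queue:
        if job.memory_kb <= free_region_kb:
            job_queue.remove(job)
            return job
    return None        # nothing fits: the region sits idle for now

print(pick_job(300))   # PAYROLL (200 KB) fits a 300 KB region
print(pick_job(300))   # REPORT (600 KB) is skipped; BACKUP (100 KB) runs
```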

  17. Batch Scheduling Problems • Storage • With fixed-size partitions, you can have long queues for a big partition while one or more small ones are free • With variable sizes (“regions”), you get fragmentation • Various steps may have different space requirements • Resource allocation • Need to collect together all resources needed by a step: such as files and dedicated hardware • Can still waste run time (e.g. if you get a tape drive but haven’t mounted the right tape on it) • Addressed by extensions to Job Control Language (even more chances to make mistakes with it!)

  18. Risks of Deadlock (OS/360 enforced pre-allocation to avoid deadlock) • Consider two jobs, A & B • Job A • is holding tape drive 1 • can’t complete until it can update file X.Y.Z • Job B • is writing file X.Y.Z • can’t close the file until it’s written a log to tape • BUT there are no tape drives available • Therefore both jobs will stall, tying up both regions • But you can’t pre-allocate with on-line users
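The hold-and-wait cycle on this slide can be written out directly. The resource names below are taken from the slide; everything else is an illustrative assumption.

```python
# Job A holds the tape drive and wants the file; Job B holds the file and
# wants a tape drive. Each is waiting for something the other holds.
holds = {"Job A": {"tape drive 1"}, "Job B": {"file X.Y.Z"}}
wants = {"Job A": "file X.Y.Z",     "Job B": "tape drive 1"}

def blocked(job):
    """A job is blocked if what it wants is currently held by another job."""
    return any(wants[job] in held
               for other, held in holds.items() if other != job)

print({job: blocked(job) for job in holds})   # both True: deadlock
# OS/360's pre-allocation breaks the cycle: a job lists every resource in its
# JCL and is only started once all of them can be granted together, so this
# situation cannot arise. On-line users' future requests aren't known in
# advance, which is why the same trick doesn't work for them.
```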

  19. Scheduling Today • We’re not usually concerned with resources on a home PC • Schedule an anti-virus upgrade and it’ll run quickly (though running the virus checker might be much slower) • We may be concerned about sequence • Better to be sure the upgrade is complete before scanning • In business systems: • Duration can be longer (that’s why ITS constrains disk space; otherwise the back-ups would take too long) • Sequence can be critical – don’t pay out before taking in • Every transaction must run once and once only

  20. Scheduling and Dispatching • Once we’ve scheduled jobs to start, we have to divide machine cycles between the concurrent processes • With a small degree of multiprogramming (few regions), that can be fairly simple • Whenever a region waits, we dispatch another one • Dispatcher is like a micro-level scheduler • Often have different dispatch and initiator priorities • Storage limitations addressed with Virtual Memory • Allowed increase in number of initiators • Increasing degree of multiprogramming .. • .. and need for more complex dispatching – there’s no point in dispatching a process without (real) memory

  21. Online Systems • Expands scale of allocation and dispatching problems • “Jobs” become very long-running • so we can’t pre-assign all resources and hold until job end • Total number of jobs grows greatly • Jobs need to sleep when user isn’t interacting • Each interaction is like a mini job step • Two basic approaches to managing this • Treat each user as a process and tune the dispatcher • Unix, VM/370, MVS/TSO and VMS work this way • Treat each user as a thread on a “timesharing” process • CICS and IMS work this way • High-performance web servers too
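A compact sketch of the two approaches, using Python's standard library. This only shows the structural difference between a process per user and a thread per user inside one long-running process; it is not how Unix, TSO, CICS or IMS are actually implemented.

```python
import multiprocessing
import threading

def handle_interaction(user):
    print(f"serving an interaction for {user}")

users = ["alice", "bob"]

if __name__ == "__main__":
    # Approach 1: one operating-system process per user (Unix / TSO style).
    procs = [multiprocessing.Process(target=handle_interaction, args=(u,))
             for u in users]
    # Approach 2: one long-running "timesharing" process in which each user
    # is just a thread (CICS / IMS style) - cheaper to create and to switch.
    threads = [threading.Thread(target=handle_interaction, args=(u,))
               for u in users]
    for worker in procs + threads:
        worker.start()
    for worker in procs + threads:
        worker.join()
```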

  22. Time-sharing • Consider a system with 100 on-line users • Average think time 10 seconds (so overall arrival rate is 10 interactions per second) • Average CPU demand of 50 msec per interaction • Therefore load should be 50% plus overhead • If load is homogeneous, “round-robin” dispatching is OK • But what if a few users need 1 sec & the rest 10 msec? • Queue will build up if the long task is allowed to complete • OK, let’s preempt task on expiry of a time-slice; suspend task (take the CPU away) if it takes too long, and make it wait until next time round
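A quick check of the arithmetic on this slide (the figures are the slide's own):

```python
users = 100
think_time_s = 10.0             # average think time per user
cpu_per_interaction_s = 0.050   # 50 msec of CPU demand per interaction

arrival_rate = users / think_time_s              # 10 interactions per second
cpu_load = arrival_rate * cpu_per_interaction_s  # 0.5, i.e. 50% plus overhead
print(arrival_rate, cpu_load)
```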

  23. Life Cycle of an Interaction [State diagram] User hits ENTER → Task Created → Executable (on Ready Queue) → Task Dispatched → Running (task runs until interrupted for some reason) → then either Finishes (Terminated), or Time expires (back onto the Ready Queue), or Enters WAIT (Blocked, until Requeued on Event back onto the Ready Queue)

  24. Dispatching Tasks • Let’s assume a Ready Queue of tasks waiting to run • OS adds tasks to queue according to a priority pattern (cheapest is FIFO*, but we may want to improve on that) • Dispatch the task at the head of the queue • When it gives up control, dispatch new head of queue • May want to maintain queue of blocked tasks too • What if task doesn’t give up control? • Need to interrupt it at end of time slice to let others run (Windows 3.x didn’t do this except for DOS tasks) • Can return it directly to Ready Queue (with risk that it’ll consume too much CPU time) • Maybe we should favour short interactions over long *First In First Out
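A minimal dispatcher sketch along the lines described above: a FIFO Ready Queue, a fixed time slice, and preempted tasks returned directly to the back of the queue. The task names and times are invented for illustration.

```python
from collections import deque

TIME_SLICE_MS = 10
# Each entry is (task name, CPU milliseconds still needed).
ready_queue = deque([("short", 8), ("long", 25), ("medium", 12)])

while ready_queue:
    name, remaining = ready_queue.popleft()   # dispatch the head of the queue
    ran = min(remaining, TIME_SLICE_MS)       # run until it finishes or its
    remaining -= ran                          # time slice expires
    if remaining == 0:
        print(f"{name}: finished after running {ran} ms")
    else:
        print(f"{name}: time slice expired, requeued ({remaining} ms left)")
        ready_queue.append((name, remaining)) # back onto the Ready Queue
```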

  25. Graphical Operating Systems • Development of Windows

  26. Graphical Operating Systems • WIMP concepts • Windows, Icons, Menus, Pointer • invented late 70s at Xerox Palo Alto Research Center • First marketed in Lisa machine – very expensive • Later in Xerox “Daisy” – still too costly to succeed • Apple made success of the idea in Macintosh – 1984 • Massive advertising campaign • Succeeded first with designers and Desk Top Publishing • IBM & Microsoft followed with 16-bit OS/2 V.1 (1987) • Full WIMP approach in 1988 • IBM Went it alone for OS/2 V2 – incompatible interface

  27. Early Flavours of Windows • Windows 1 and 2 • Never really made it – Lisa ideas done less well than Mac • OS/2 Version 1 (Microsoft/IBM collaboration) • Windows 386 • Exploited the 80386’s virtual 8086 (“virtual machine”) mode; Win2 interface • Windows 3.0 (1990) – The breakthrough • Picked up OS/2 V1 user interface, with simpler API • Added Windows 386 process mgt (multiple DOS boxes) • Still no pre-emption of ill-behaved Windows processes • Windows 3.1 • Enhanced Win3.0 without major architecture changes • Added some new GUI controls

  28. Flavours of Windows since 1993 • Windows for Workgroups (3.1 and 3.11) • Added networking capabilities • Introduced (some) 32-bit code, such as file I/O • You could bolt-on IP network support – free but not trivial • Windows 95 – New user interface • Integrated IP networking • Much more 32-bit code • Long file-names (and file-types, unlike OS/2) • Still not fully pre-emptive multitasking, but improved capability to detect and abort ill-behaved processes • Windows 98 – Minor changes from W95 • Adds Internet-like interface to Windows 95 • Meant to be “last of the line of Windows 3.x successors”

  29. Windows NT • NT V3.x • Microsoft’s OS/2 successor, with full Windows GUI • Kernel based on experiences with VMS (at Digital) • Data permissions RACF style, by user and group • 32-bit, with some limitations on use of 16-bit applications • NT V4.0 • Enhanced from NT V3.51 • Added Windows 95 user interface • Improved tolerance of 16-bit applications… • …providing they don’t try to access the hardware directly • Full pre-emptive multitasking – Windows processes get time-sliced, so ill-behaved ones can’t hog the processor

  30. Current Windows Flavours (slide annotation: increasing price) • Windows 2000 – Intended to unify NT and 9x families • To avoid duplicate development effort • Replaced NT for professionals and large & small Servers • But Domestic version didn’t run all W98 software, so… • Millennium Edition (of Windows 98) • Stopgap because 98/NT integration wasn’t complete • Windows XP – finally did unite NT and 9x families • Came in versions for different purposes • XP Home edition • Professional edition for corporate clients • Servers • Similar approach with Windows Vista (2007) and W7

  31. Summary of Week 2 • Computer does simple things, in sequence • Instruction counter contains address of next instruction to run • Operating systems package these into useful facilities • Including ability to run programs concurrently • Processor is a resource that the Operating System uses • OS treats each program as a task • Gives it a bit of time on the processor • Then passes the processor to another task • OS gets control back through interrupts • Voluntary (supervisor call by user program) • External (such as disk I/O completion, timer interrupt)

  32. Checkpoint Questions • Describe differences between B2C & B2B e-Commerce • 1. • 2. • What are the main functions of an Operating System? • 1. 2. • 3. 4. • What does the “Instruction Counter” point to?
