
Introduction to Operating Systems: Definition, Responsibilities, and Computer Systems

This presentation provides an overview of operating systems, including their definition, responsibilities, and the components found in computer systems. It covers topics such as instruction execution, kernel versus user mode, and the purpose of a computer. It also explores how operating systems manage hardware and resources, ensure correct operation, and provide a user-friendly environment. Additionally, it discusses the distinction between application and system software and the role of the operating system in managing security, disks, networks, and I/O devices.


Presentation Transcript


  1. CS 346 – Chapter 1 • Operating system – definition • Responsibilities • What we find in computer systems • Review of • Instruction execution • Compile – link – load – execute • Kernel versus user mode

  2. Questions • What is the purpose of a computer? • What if all computers became fried or infected? • How did Furman function before 1967 (the year we bought our first computer)? • Why do people not like computers?

  3. Definition • How do you define something? Possible approaches: • What it consists of • What it does (a functional definition) – purpose • What if we didn’t have it • What else it’s similar to • OS = set of software between user and HW • Provides “environment” for user to work • Convenience and efficiency • Manage the HW / resources • Ensure correct and appropriate operation of machine • 2 kinds of software: application and system • Distinction is blurry; no universal definition for “system”

  4. Some responsibilities • Can glean from table of contents • Book compares an OS to a government • Don’t worry about details for now • Security: logins • Manage resources • Correct and efficient use of CPU • Disk: “memory management” • Network access • File management • I/O, terminal, devices • Kernel vs. shell

  5. Big picture • Computer system has: CPU, main memory, disk, I/O devices • Turn on computer: • Bootstrap program already in ROM comes to life • Tells where to find the OS on disk. Load the OS. • Transfer control to OS once loaded. • From time to time, control is “interrupted” • Examples? • Memory hierarchy • Several levels of memory in use from registers to tape • Closer to CPU: smaller, faster, more expensive • OS must decide who belongs where

  6. Big picture (2) • von Neumann program execution • Fetch, decode, execute, data access, write result • OS usually not involved unless problem • Compiling • 1 source file → 1 object file • 1 entire program → 1 executable file • “link” object files to produce executable • Code may be optimized to please the OS • When you invoke a program, OS calls a “loader” program that precedes execution • I/O • Each device has a controller, a circuit containing registers and a memory buffer • Each controller is managed by a device driver (software)

  7. 2 modes • When the CPU is executing instructions, it is nice to know if the instruction is on behalf of the OS • OS should have the highest privileges → kernel mode • Some operations only available to OS • Examples? • Users should have some restriction → user mode • A hardware bit can be set if program is running in kernel mode • Sometimes, the user needs OS to help out, so we perform a system call

  8. Management topics • What did we ask the OS to do during lab? • File system • Program vs. process • “job” and “task” are synonyms of process • Starting, destroying processes • Process communication • Make sure 2 processes don’t interfere with each other • Multiprogramming • CPU should never be idle • Multitasking: give each job a short quantum of time to take turns • If a job needs I/O, give CPU to another job

  9. More topics • Scheduling: deciding the order to do the jobs • Detect system “load” • In a real-time system, jobs have deadlines. OS should know worst-case execution time of jobs • Memory hierarchy • Higher levels “bank” the lower levels • OS manages RAM/disk decision • Virtual memory: actual size of RAM is invisible to user. Allow programmer to think memory is huge • Allocate and deallocate heap objects • Schedule disk ops and backups of data

  10. CS 346 – Chapter 2 • OS services • OS user interface • System calls • System programs • How to make an OS • Implementation • Structure • Virtual machines • Commitment • For next day, please finish chapter 2.

  11. OS services: 2 types • For the user’s convenience • Shell • Running user programs • Doing I/O • File system • Detecting problems • Internal/support • Allocating resources • System security • Accounting • Infamous KGB spy ring uncovered due to discrepancy in billing of computer time at Berkeley lab

  12. User interface • Command line = shell program • Parses commands from user • Supports redirection of I/O (stdin, stdout, stderr) • GUI • Pioneered by Xerox PARC, made famous by Mac • Utilizes additional input devices such as mouse • Icons or hotspots on screen • Hybrid approach • GUI allowing several terminal windows • Window manager

  13. System calls • “an interface for accessing an OS service within a computer program” • A little lower level than an API, but similar • Looks like a function call • Examples • Performing any I/O request, because these are not defined by the programming language itself, e.g. read(file_ptr, str_buf_ptr, 80); • Assembly languages typically have a “syscall” instruction. When is it used? How? • If many parameters, they may be put on the runtime stack
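
A minimal C sketch of the read() call named in the slide, assuming a hypothetical file name; it shows how a system call “looks like a function call” while actually trapping into the kernel:

    /* Sketch of the POSIX read() system call, matching the
     * read(file_ptr, str_buf_ptr, 80) example above.
     * The file name "notes.txt" is hypothetical. */
    #include <fcntl.h>      /* open()          */
    #include <unistd.h>     /* read(), close() */
    #include <stdio.h>

    int main(void)
    {
        char buf[81];
        int fd = open("notes.txt", O_RDONLY);   /* hypothetical file */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* The call traps into the kernel; the return value is the
         * number of bytes actually read (0 at end of file, -1 on error). */
        ssize_t n = read(fd, buf, 80);
        if (n >= 0) {
            buf[n] = '\0';
            printf("read %zd bytes: %s\n", n, buf);
        }
        close(fd);
        return 0;
    }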

  14. Types of system calls • Controlling a process • File management • Device management • Information • Communication between processes • What are some specific examples you’d expect to find?

  15. System programs • Also called system utilities • Distinction between “system call” and “system program” • Examples • Shell commands like ls, lp, ps, top • Text editors, compilers • Communication: e-mail, talk, ftp • Miscellaneous: cal, fortune • What are your favorites? • Higher level software includes: • Spreadsheets, text formatters, etc. • But, boundary between “application” and “utility” software is blurry. A text formatter is a type of compiler!

  16. OS design ideas • An OS is a big program, so we should consider principles of systems analysis and software engineering • In design phase, need to consider policies and mechanisms • Policy = What should we do; should we do X • Mechanism = how to do X • Example:  a way to schedule jobs (policy) versus: what input needed to produce schedule, how schedule decision is specified (mechanism)

  17. Implementation • Originally in assembly • Now usually in C (C++ if object-oriented) • Still, some code needs to be in assembly • Some specific device driver routines • Saving/restoring registers • We’d like to use HLL as much as possible – why? • Today’s compilers produce very efficient code – what does this tell us? • How to improve performance of OS: • More efficient data structure, algorithm • Exploit HW and memory hierarchy • Pay attention to CPU scheduling and memory management

  18. Kernel structure • Possible to implement minimal OS with a few thousand lines of code → monolithic kernel • Modularize like any other large program • After about 10k lines of code, difficult to prove correctness • Layered approach to managing the complexity • Layer 0 is the HW • Layer n is the user interface • Each layer makes use of routines and data structures defined at lower levels • # layers difficult to predict: many subtle dependencies • Many layers → lots of internal system call overhead

  19. Kernel structure (2) • Microkernel • Kernel = minimal support for processes and memory management • (The rest of the OS is at user level) • Adding OS services doesn’t require changing kernel, so easier to modify OS • The kernel must manage communication between user program and appropriate OS services (e.g. file system) • Microsoft gave up on the microkernel idea for Windows XP • OO module approach • Components isolated (OO information hiding) • Used by Linux, Solaris • Like a layered approach with just 2 layers, a core and everything else

  20. Virtual machine • How to make 1 machine behave like many • Give users the illusion they have access to real HW, distinct from other users • Figure 2.17 levels of abstraction: • Processes / kernels / VMs / VM implementations / host HW • As opposed to: • Processes / kernels / different machines • Why do it? • To test multiple OS’s on the same HW platform • Host machine’s real HW protected from virus in a VM bubble

  21. VM implementation • It’s hard! • Need to painstakingly replicate every HW detail, to avoid giving away the illusion • Need to keep track of what each guest OS is doing (whether it’s in kernel or user mode) • Each VM must interpret its assembly code – why? Is this a problem? • Very similar concept: simulation • Often, all we are interested in is changing the HW, not the OS; for example, adding/eliminating the data cache • Write a program that simulates every HW feature, providing the OS with the expected behavior

  22. CS 346 – Chapter 3 • What is a process • Scheduling and life cycle • Creation • Termination • Interprocess communication: purpose, how to do it • Client-server: sockets, remote procedure call • Commitment • Please read through section 3.4 by Wednesday and 3.6 by Friday.

  23. Process • Goal: to be able to run > 1 program concurrently • We don’t have to finish one before starting another • Concurrent doesn’t mean parallel • CPU often switches from one job to another • Process = a program that has started but hasn’t yet finished • States: • New, Ready, Running, Waiting, Terminated • What transitions exist between these states?

  24. Contents • A process consists of: • Code (“text” section) • Program Counter • Data section • Run-time stack • Heap allocated memory • A process is represented in kernel by a Process Control Block, containing: • State • Program counter • Register values • Scheduling info (e.g. priority) • Memory info (e.g. bounds) • Accounting (e.g. time) • I/O info (e.g. which files open) • What is not stored here?
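
For illustration only, here is a hypothetical C struct sketching the kind of fields a PCB holds; the field names are invented, and a real kernel structure (e.g. Linux’s task_struct) is far larger:

    /* Hypothetical sketch of a Process Control Block's contents. */
    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    struct pcb {
        int              pid;              /* unique process ID              */
        enum proc_state  state;            /* New / Ready / Running / ...    */
        unsigned long    program_counter;  /* where to resume execution      */
        unsigned long    registers[16];    /* saved CPU register values      */
        int              priority;         /* scheduling info                */
        unsigned long    mem_base, mem_limit;  /* memory bounds              */
        unsigned long    cpu_time_used;    /* accounting                     */
        int              open_files[20];   /* I/O info: open file descriptors */
    };

As for the slide’s question of what is not stored here: the usual answer is the process’s own code, data, stack, and heap, which live in the process’s memory rather than in the PCB.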

  25. Scheduling • Typically many processes are ready, but only 1 can run at a time. • Need to choose who’s next from ready queue • Can’t stay running for too long! • At some point, process needs to be switched out temporarily back to the ready queue (Fig. 3.4) • What happens to a process? (Fig 3.7) • New process enters ready queue. At some point it can run. • After running awhile, a few possibilities: • Time quantum expires. Go back to ready queue. • Need I/O. Go to I/O queue, do I/O, re-enter ready queue! • Interrupted. Handle interrupt, and go to ready queue. • Context switch overhead

  26. Creation • Processes can spawn other processes. • Parent / child relationship • Tree • Book shows Solaris example: In the beginning, there was sched, which spawned init (the ancestor of all user processes), the memory manager, and the file manager. • Process IDs are unique integers (up to some max, e.g. 2^15) • What should happen when process created? • OS policy on what resources for baby: system default, or copy parent’s capabilities, or specify at its creation • What program does child run? Same as parent, or new one? • Does parent continue to execute, or does it wait (i.e. block)?

  27. How to create • Unix procedure is typical… • Parent calls fork( ) • This creates a duplicate process. • fork( ) returns 0 for child; positive number for parent; negative number if error. (How could we have an error?) • Next, we call exec( ) to tell child what program to run. • Do this immediately after fork • Do it inside the if clause that corresponds to the case that we are inside the child! • Parent can call wait( ) to go to sleep. • Not executing, not in ready queue
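
A minimal sketch of the fork()/exec()/wait() pattern just described; the program the child runs (/bin/ls) is only an example choice:

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>     /* fork(), execlp() */
    #include <sys/wait.h>   /* wait()           */

    int main(void)
    {
        pid_t pid = fork();            /* duplicate the calling process   */

        if (pid < 0) {                 /* negative: fork failed           */
            perror("fork");
            exit(1);
        } else if (pid == 0) {         /* zero: we are inside the child   */
            execlp("/bin/ls", "ls", (char *)NULL); /* replace child's program */
            perror("execlp");          /* only reached if exec failed     */
            exit(1);
        } else {                       /* positive: parent gets child pid */
            wait(NULL);                /* block until the child finishes  */
            printf("child %d finished\n", (int)pid);
        }
        return 0;
    }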

  28. Termination • Assembly programs end with a system call to exit( ). • An int value is returned to parent’s wait( ) function. This lets parent know which child has just finished. • Or, process can be killed prematurely • Why? • Only the parent (or ancestor) can kill another process – why this restriction? • When a process dies, 2 possible policies: • OS can kill all descendants (rare) • Allow descendants to continue, but set parent of dead process to init

  29. IPC Examples • Allowing concurrent access to information • Producer / consumer is a common paradigm • Distributing work, as long as spare resources (e.g. CPU) are around • A program may need result of another program • IPC more efficient than running serially and redirecting I/O • A compiler may need result of timing analysis in order to know which optimizations to perform • Note: ease of programming is based on what OS and programming language allow

  30. 2 techniques • Shared memory • 2 processes have access to an overlapping area of memory • Conceptually easier to learn, but be careful! • OS overhead only at the beginning: get kernel permission to set up shared region • Message passing • Uses system calls, with kernel as middle man – easier to code correctly • System call overhead for every message → we’d want the amount of data to be small • Definitely better when processes on different machines • Often, both approaches are possible on the system
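
As a sketch of the “get kernel permission to set up shared region” step on a POSIX system, shm_open() plus mmap() might look like this; the region name /demo_buf and its size are assumptions:

    #include <fcntl.h>      /* O_CREAT, O_RDWR    */
    #include <sys/mman.h>   /* shm_open(), mmap() */
    #include <unistd.h>     /* ftruncate()        */
    #include <stdio.h>

    #define SHM_NAME "/demo_buf"
    #define SHM_SIZE 4096

    int main(void)
    {
        /* One system call asks the kernel for a named shared object...     */
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        if (fd < 0) { perror("shm_open"); return 1; }

        ftruncate(fd, SHM_SIZE);            /* give the region a size       */

        /* ...and mmap() places it in this process's address space.
         * A second process mapping the same name sees the same bytes.      */
        char *buf = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, 0);
        if (buf == MAP_FAILED) { perror("mmap"); return 1; }

        buf[0] = 'A';       /* from here on, ordinary memory accesses       */
        printf("shared region mapped at %p\n", (void *)buf);
        return 0;
    }

After the mapping, reads and writes are plain memory accesses, which is why the kernel overhead is only at the beginning; on Linux this may need to be linked with -lrt.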

  31. Shared memory • Usually forbidden to touch another process’ memory area • Each program must be written so that the shared memory request is explicit (via system call) • An overlapping “buffer” can be set up. Range of addresses. But there is no need for the buffer to be contiguous in memory with the existing processes. • Then, the buffer can be treated like an array (of char) • Making use of the buffer (p. 122) • Insert( ) function • Remove( ) function • Circular array… does the code make sense to you?
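
Below is a sketch in the spirit of the book’s insert()/remove() bounded buffer on p. 122 (not the book’s exact code): a circular array with in and out indices. Here the buffer is an ordinary global array; in the real example it, along with in and out, would live in the shared region. remove is renamed remove_item to avoid clashing with the C library’s remove():

    #define BUFFER_SIZE 10

    typedef char item;                  /* whatever the processes exchange */

    item buffer[BUFFER_SIZE];
    int  in  = 0;                       /* next free slot (producer)       */
    int  out = 0;                       /* next full slot (consumer)       */

    /* Producer: returns 0 on success, -1 if the buffer is full. */
    int insert(item x)
    {
        if ((in + 1) % BUFFER_SIZE == out)
            return -1;                  /* full: one slot is kept empty    */
        buffer[in] = x;
        in = (in + 1) % BUFFER_SIZE;
        return 0;
    }

    /* Consumer: returns 0 on success, -1 if the buffer is empty. */
    int remove_item(item *x)
    {
        if (in == out)
            return -1;                  /* empty                           */
        *x = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return 0;
    }

Note that one slot is deliberately kept empty to distinguish “full” from “empty,” and that nothing here blocks or protects against two processes inserting at once, which is exactly what the next slide asks about.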

  32. Shared memory (2) • What could go wrong?... How to fix? • Trying to insert into full buffer • Trying to remove from empty buffer • Sound familiar? • Also: both trying to insert. Is this a problem?

  33. Message passing • Make continual use of system calls: • Send( ) • Receive( ) • Direct or indirect communication? • Direct: send (process_num, the_message) Hard coding the process we’re talking to • Indirect: send (mailbox_num, the_message) Assuming we’ve set up a “mailbox” inside the kernel • Flexibility: can have a communication link with more than 2 processes. e.g. 2 producers and 1 consumer • Design issues in case we have multiple consumers • We could forbid it • Could be first-come-first-serve
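
A hedged sketch of indirect (mailbox-style) message passing using a POSIX message queue; the queue name /demo_mq and the sizes are assumptions, and each mq_send()/mq_receive() is a system call through the kernel (on Linux, link with -lrt):

    #include <mqueue.h>     /* mq_open(), mq_send(), mq_receive() */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

        /* The "mailbox" lives inside the kernel and has a name; it is
         * not tied to a particular sending or receiving process. */
        mqd_t mq = mq_open("/demo_mq", O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *msg = "hello";
        mq_send(mq, msg, strlen(msg) + 1, 0);        /* send to mailbox    */

        char buf[128];
        ssize_t n = mq_receive(mq, buf, sizeof buf, NULL); /* blocks if empty */
        if (n >= 0)
            printf("received: %s\n", buf);

        mq_close(mq);
        return 0;
    }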

  34. Synchronization • What should we do when we send/receive a message? • Block (or “wait”): • Go to sleep until counterpart acts. • If you send, sleep until received by process or mailbox. • If you receive, block until a message available. How do we know? • Don’t block • Just keep executing. If they drop the baton it’s their fault. • In case of receive( ), return null if there is no message (where do we look?) • We may need some queue of messages (set up in kernel) so we don’t lose messages!

  35. Buffer messages • The message passing may be direct (to another specific process) or indirect (to a mailbox – no process explicitly stated in the call). • But either way, we don’t want to lose messages. • Zero capacity: sender blocks until recipient gets message • Bounded capacity (common choice): Sender blocks if the buffer is full. • Unbounded capacity: Assume buffer is infinite. Never block when you send.

  36. Socket • Can be used as an “endpoint of communication” • Attach to a (software) port on a “host” computer connected to the Internet • 156.143.143.132:1625 means port # 1625 on the machine whose IP number is 156.143.143.132 • Port numbers < 1024 are pre-assigned for “well known” tasks. For example, port 80 is for a Web server. • With a pair of sockets, you can communicate between them. • Generally used for remote I/O

  37. Implementation • Syntax depends on language. • Server • Create socket object on some local port. • Wait for client to call. Accept connection. • Set up output stream for client. • Write data to client. • Close client connection. • Go back to wait • Client • Create socket object to connect to server • Read input analogous to file input or stdin • Close connection to server
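
A minimal C (BSD sockets) sketch of the server steps listed above, reusing port 1625 from the earlier slide; error checking is mostly omitted, and a Java or Python version would look different but follow the same steps:

    #include <string.h>
    #include <unistd.h>
    #include <netinet/in.h>     /* sockaddr_in, htons(), INADDR_ANY */
    #include <sys/socket.h>

    int main(void)
    {
        /* 1. Create a socket object on some local port. */
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {0};
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(1625);
        bind(s, (struct sockaddr *)&addr, sizeof addr);
        listen(s, 5);

        for (;;) {
            /* 2. Wait for a client to call; accept the connection. */
            int client = accept(s, NULL, NULL);
            if (client < 0) break;

            /* 3-4. The connected socket is the "output stream": write data. */
            const char *reply = "hello from server\n";
            write(client, reply, strlen(reply));

            /* 5-6. Close the client connection and go back to waiting. */
            close(client);
        }
        close(s);
        return 0;
    }

The client side would create its own socket, connect() to the server’s address and port, then read() the data much like file input or stdin.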

  38. Remote procedure call • Useful application of inter-process communication (the message-passing version) • Systematic way to make procedure call between processes on the network • Reduce implementation details for user • Client wants to call foreign function with some parameters • Tell kernel server’s IP number and function name • 1st message: ask server which port corresponds with function • 2nd message: sending function call with “marshalled” parameters • Server daemon listens for function call request, and processes • Client receives return value • OS should ensure function call successful (once)

  39. CS 346 – Chapter 4 • Threads • How they differ from processes • Definition, purpose • Threads of the same process share: code, data, open files • Types • Support by kernel and programming language • Issues such as signals • User thread implementation: C and Java • Commitment • For next day, please read chapter 4

  40. Thread intro • Also called “lightweight process” • One process may have multiple threads of execution • Allows a process to do 2+ things concurrently • Games • Simulations • Even better: if you have 2+ CPU’s, you can execute in parallel • Multicore architecture → demand for multithreaded applications for speedup • More efficient than using several concurrent processes

  41. Threads • A process contains: • Code, data, open files, registers, memory usage (stack + heap), program counter • Threads of the same process share • Code, data, open files • What is unique to each thread? • Can you think of example of a computational algorithm where threads would be a great idea? • Splitting up the code • Splitting up the data • Any disadvantages?

  42. 2 types of threads • User threads • Can be managed / controlled by user • Need existing programming language API support: POSIX threads in C, Java threads • Kernel threads • Management done by the kernel • Possible scenarios • OS doesn’t support threading • OS supports threads, but only at kernel level – you have no direct control, except possibly by system call • User can create thread objects and manipulate them. These objects map to “real” kernel threads.

  43. Multithreading models • Many-to-one: User can create several thread objects, but in reality the kernel only gives you one. Multithreading is an illusion • One-to-one: Each user thread maps to 1 real kernel thread. Great but costly to OS. There may be a hard limit to # of live threads. • Many-to-many: A happy compromise. We have multithreading, but the number of true threads may be less than # of thread objects we created. • A variant of this model, “two-level”, allows user to designate a thread as being bound to one kernel thread.

  44. Thread issues • What should OS do if a thread calls fork( )? • Can duplicate just the calling thread • Can duplicate all threads in the process • exec ( ) is designed to replace entire current process • Cancellation • kill thread before it’s finished • “Asynchronous cancellation” = kill now. But it may be in the middle of an update, or it may have acquired resources. You may have noticed that Windows sometimes won’t let you delete a file because it thinks it’s still open. • “Deferred cancellation”. Thread periodically checks to see if it’s time to quit. Graceful exit.
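
A small sketch of deferred cancellation with Pthreads: the worker polls pthread_testcancel() at safe points instead of being killed mid-update. The “work” here is just a placeholder sleep; compile with -pthread:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    void *worker(void *arg)
    {
        (void)arg;
        for (;;) {
            /* ...do one safe unit of work... */
            usleep(100000);

            /* Safe point: if a cancel is pending, exit cleanly here. */
            pthread_testcancel();
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);

        sleep(1);
        pthread_cancel(tid);        /* a request, not an asynchronous kill */
        pthread_join(tid, NULL);    /* returns once the worker notices     */
        printf("worker cancelled cleanly\n");
        return 0;
    }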

  45. Signals • Reminiscent of exception in Java • Occurs when OS needs to send message to a process • Some defined event generates a signal • OS delivers signal • Recipient must handle the signal. Kernel defines a default handler – e.g. kill the process. Or, user can write specific handler. • Types of signals • Synchronous: something in this program caused the event • Asynchronous: event was external to my program
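
A minimal sketch of installing a user-written handler in place of the kernel’s default action, here for SIGINT (the signal generated by the ctrl-C example on the next slide):

    #include <signal.h>
    #include <stdio.h>
    #include <unistd.h>

    volatile sig_atomic_t got_sigint = 0;

    void handler(int signum)
    {
        (void)signum;
        got_sigint = 1;             /* keep the handler minimal */
    }

    int main(void)
    {
        struct sigaction sa = {0};
        sa.sa_handler = handler;    /* user handler replaces the default,
                                       which would kill the process       */
        sigemptyset(&sa.sa_mask);
        sigaction(SIGINT, &sa, NULL);

        while (!got_sigint)
            pause();                /* sleep until a signal is delivered   */

        printf("caught SIGINT, exiting gracefully\n");
        return 0;
    }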

  46. Signals (2) • But what if process has multiple threads? Who gets the signal? For a given signal, choose among 4 possibilities: • Deliver signal to the 1 appropriate thread • Deliver signal to all threads • Have the signal indicate which threads to contact • Designate a thread to receive all signals • Rules of thumb… • Synchronous event → just deliver to 1 thread • User hit ctrl-C → kill all threads

  47. Thread pool • Like a motor pool • When process starts, can create a set of threads that sit around and wait for work • Motivation • overhead in creating/destroying • We can set a bound for total number of threads, and avoid overloading system later • How many threads? • User can specify • Kernel can base on available resources (memory and # CPU’s) • Can dynamically change if necessary

  48. POSIX threads • aka “Pthreads” • C language • Commonly seen in UNIX-style environments: • Mac OS, Linux, Solaris • POSIX is a set of standards for OS system calls • Thread support is just one aspect • POSIX provides an API for thread creation and synchronization • API specifies behavior of thread functionality, but not the low-level implementation

  49. Pthread functions • pthread_attr_init • Initialize thread attributes, such as • Schedule priority • Stack size • State • pthread_create • Start new thread inside the process. • We specify what function to call when thread starts, along with the necessary parameter • The thread is due to terminate when its function returns • pthread_join • Allows us to wait for a child thread to finish

  50. Example code (corrected so it compiles: pthread_attr_t, pthread_join, argc/argv declared, headers added, and a fun() that computes the sum and stores it in the global variable; compile with -pthread)

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    int sum;                              /* shared by both threads      */

    void *fun(void *param)                /* thread start routine        */
    {
        int upper = atoi((char *)param);  /* compute a sum and store it  */
        for (int i = 0; i <= upper; i++)  /* in the global variable      */
            sum += i;
        return NULL;
    }

    int main(int argc, char *argv[])
    {
        pthread_t tid;
        pthread_attr_t attr;              /* was: pthread_attr           */

        pthread_attr_init(&attr);
        pthread_create(&tid, &attr, fun, argv[1]);
        pthread_join(tid, NULL);          /* was: pthread join           */
        printf("%d\n", sum);
        return 0;
    }
