
Advanced Operating Systems

  • Main Text:

    • “Advanced Concepts in Operating Systems: Distributed, Database, & Multiprocessor Operating Systems” by Mukesh Singhal and Niranjan G. Shivaratri. McGraw Hill publishers.

  • Project Reference Text:

    • “Unix Network Programming” by Richard Stevens, Prentice Hall.

  • Reference Papers:

    • Reading the relevant reference papers given at the end of each chapter is recommended.

B. Prabhakaran



Contact Information

B. Prabhakaran

Department of Computer Science

University of Texas at Dallas

Mail Station EC 31, PO Box 830688

Richardson, TX 75083

Email: [email protected]

Fax: 972 883 2349

URL: http://www.utdallas.edu/~praba/cs6378.html

Phone: 972 883 4680

Office: ES 3.706

Office Hours: 12:15 – 2 pm, Fridays

Other times by appointment through email

Announcements: Made in class and on course web page.

TA: TBA.


Course Outline

Proposed outline; it might be modified based on time availability:

  • Introduction to operating systems, inter-process communication. (Chapters 1 & 2).

  • Distributed Operating Systems

    • Architecture (Chapter 4)

    • Clock Synchronization, Ordering (Chapter 5)

    • Distributed Mutual Exclusion (Chapter 6)

    • Distributed Deadlock Detection (Chapter 7)

    • Agreement Protocols (Chapter 8)

  • Distributed Resource Management

    • Distributed File Systems (Chapter 9)

    • Distributed Shared Memory (Chapter 10)

    • Distributed Scheduling (Chapter 11)


Course Outline ...

  • Recovery & Fault Tolerance

    • Chapters 12 and 13

  • Concurrency Control/ Security

    • Depending on time availability

Discussions will generally follow the main text. However, additional or modified topics might be introduced from other texts and/or papers; references to those materials will be given at the appropriate time.


Evaluation

  • 1 Mid-term: in class. 75 minutes. Mix of MCQs (Multiple Choice Questions) & Short Questions.

  • 1 Final Exam: 75 minutes or 2 hours (Mix of MCQs and Short Questions).

  • 3 Quizzes: in class. 5-6 MCQs or very short questions. 15-20 minutes each.

  • Homeworks/assignments: 3 or 4 spread over the semester.

  • Programming Projects:

    • 1 preparatory (to warm up on thread and socket programming)

    • 1 Long project with an intermediate and final submission.


Grading

  • Home works: 5%

  • Quizzes: 15%

  • Mid-term: 25%

  • Final: 25%

  • Preparatory Project: 5%

  • Intermediate & Final Projects (combined): 25%


Schedule

  • Quizzes: Dates announced in class & web, a week ahead. Mostly just before midterm and final.

  • Mid-term: October 5, 2007

  • Final Exam: November 30, 2007 (As per UTD schedule)

  • Subject to minor changes

  • Quiz and homework schedules will be announced in class and course web page, giving sufficient time for submission.

  • Likely project deadlines:

    • Preparatory project: September 8, 2007

    • Intermediate project: October 14, 2007

    • Final Project: November 26, 2007


Programming Projects

  • No copying/sharing of code/results will be tolerated. Any instance of cheating in projects/homeworks/exams will be reported to the University.

  • No copying code from the Internet.

  • Two students who independently copy the same code from the Internet are still considered to have copied in the project!

  • Individual projects.

  • Projects might involve Unix, C/C++/Java programming, network programming.

  • Deadlines will be strictly enforced for project and homework submissions.

  • Project submissions are through Web CT.

  • A demo will be required.


Two-track Course

  • Programming Project Discussions:

    • Announcements in class

    • Minimal Discussions in class

    • Design discussions during office hours based on individual needs

  • Theory (Algorithms) Discussions:

    • Full discussion in class

    • Office hours for clarification if needed


Web CT

  • Go to: http://webct6.utdallas.edu

  • Web CT has a discussion group that can be used for project and other course discussions.


Cheating

  • Academic dishonesty will be taken seriously.

  • Students caught cheating will be referred to the Department Head/Dean for further action.

  • Remember: homeworks/projects (exams too!) are to be done individually.

  • Any kind of cheating in home works/ projects/ exams will be dealt with as per UTD guidelines.

  • Cheating in any stage of projects will result in 0 for the entire set of projects.


Projects

  • Involves exercises such as ordering, deadlock detection, load balancing, message passing, and implementing distributed algorithms (e.g., for scheduling, etc.).

  • Platform: Linux/Windows, C/C++/Java. Network programming will be needed. Multiple systems will be used.

  • Specific details and deadlines will be announced in class and course webpage.

  • Suggestion: Learn network socket programming and threads if you do not already know them. Try simple programs for file transfer, talk, etc. (a minimal warm-up sketch follows below).

  • Sample programs and tutorials available at:

    • http://www.utdallas.edu/~praba/projects.html
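A minimal warm-up sketch in C along these lines, assuming a POSIX system; the address 127.0.0.1 and port 5000 are placeholders rather than course-provided values:

/* A minimal "talk"-style warm-up: open a TCP connection and send one line. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);           /* TCP socket */
    struct sockaddr_in srv = {0};
    srv.sin_family = AF_INET;
    srv.sin_port = htons(5000);                          /* assumed port */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);      /* assumed server */
    if (connect(fd, (struct sockaddr *)&srv, sizeof srv) < 0) {
        perror("connect");
        return 1;
    }
    const char *msg = "hello from the warm-up client\n";
    write(fd, msg, strlen(msg));                          /* send one message */
    close(fd);
    return 0;
}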


Homeworks

  • 3 – 4 homeworks, announced in class and on the course web page.

  • Homeworks Submission:

    • Submit on paper to TA/Instructor.


Basic Computer Organization

  • Input Unit

  • Output Unit

  • CPU

  • Memory

  • ALU (Arithmetic & Logic Unit)

  • Secondary Storage

[Figure: block diagram of a computer: the CPU (with ALU), memory, and an I/O unit connected to the disk, display, and keyboard.]

Simplified View of OS

[Figure: the OS consists of a kernel plus OS tools in physical memory; user processes i, j, ... each have their own code and data in virtual memory, which is mapped onto the physical memory space.]


Distributed View of the System

[Figure: multiple hardware nodes interconnected, with processes running on top of them.]

Inter-Process Communication

  • Need for exchanging data/messages among processes belonging to the same or different group.

  • IPC Mechanisms:

    • Shared Memory: Designate and use some data/memory as shared. Use the shared memory to exchange data.

      • Requires facilities to control access to shared data.

    • Message Passing: Use “higher” level primitives to “send” and “receive” data.

      • Requires system support for sending and receiving messages.

    • Operation oriented language constructs

      • Request-response action

      • Similar to message passing with mandatory response

      • Can be implemented using shared memory too.
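A minimal sketch in C contrasting the two mechanisms on one machine (POSIX calls assumed; purely illustrative, not a required interface): the same integer reaches a child process once through shared memory and once through a message.

/* Shared memory vs. message passing for one integer. */
#include <stdio.h>
#include <unistd.h>
#include <sys/mman.h>
#include <sys/wait.h>

int main(void) {
    /* Shared memory: both processes see the same page; access must be
     * coordinated (here the pipe read below provides the ordering). */
    int *shared = mmap(NULL, sizeof(int), PROT_READ | PROT_WRITE,
                       MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    /* Message passing: data is copied through a kernel-managed channel. */
    int chan[2];
    pipe(chan);

    if (fork() == 0) {                   /* child: the receiver */
        int m;
        read(chan[0], &m, sizeof m);     /* blocks until the parent sends */
        printf("child sees shared=%d, message=%d\n", *shared, m);
        _exit(0);
    }
    *shared = 42;                        /* visible to the child, no copy */
    int m = 42;
    write(chan[1], &m, sizeof m);        /* explicit send */
    wait(NULL);
    return 0;
}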


IPC Examples

  • Parallel/distributed computation such as sorting: shared memory is more apt.

    • Using message passing/RPC might need an array/data manager of some sort.

  • Client-server type: message passing or RPC may suit better.

    • Shared memory may be useful, but the program is clearer with the other types of IPC.

  • RPC vs. message passing: if a response is not a must, at least not immediately, simple message passing should suffice.


Shared Memory

[Figure: writers/producers and readers/consumers around a shared memory region. Only one process can write at any point in time, with no reader access during a write; multiple readers can access concurrently, with no writer access during reads.]
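A minimal thread-level sketch of this access rule in C, using a POSIX read-write lock (the names are illustrative; in the projects the shared data may live between processes, but the locking idea is the same):

/* One writer excludes everyone; multiple readers may overlap. */
#include <pthread.h>

static pthread_rwlock_t rw = PTHREAD_RWLOCK_INITIALIZER;
static int shared_value;                 /* the shared data */

void *writer(void *arg) {
    pthread_rwlock_wrlock(&rw);          /* exclusive: no readers, no writers */
    shared_value++;
    pthread_rwlock_unlock(&rw);
    return NULL;
}

void *reader(void *arg) {
    pthread_rwlock_rdlock(&rw);          /* shared: many readers, no writer */
    int v = shared_value;
    pthread_rwlock_unlock(&rw);
    (void)v;
    return NULL;
}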


Shared Memory: Possibilities

  • Locks (unlocks)

  • Semaphores

  • Monitors

  • Serializers

  • Path expressions


Message Passing

  • Blocked Send/Receive: both the sending and the receiving process are blocked until the message is completely received. Synchronous.

  • Unblocked Send/Receive: neither the sender nor the receiver is blocked. Asynchronous.

  • Unblocked Send/Blocked Receive: the sender is not blocked; the receiver waits until the message is received.

  • Blocked Send/Unblocked Receive: useful?

  • Message passing can be implemented using shared memory; it is essentially a language paradigm for the programmer's convenience. A sketch of the blocked vs. unblocked receive follows below.
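A minimal sketch in C of the two receive styles, assuming an already-connected socket descriptor fd and the Linux/BSD MSG_DONTWAIT flag (both are assumptions for illustration):

/* Blocked vs. unblocked receive on a socket. */
#include <errno.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/socket.h>

void blocked_receive(int fd) {
    char buf[128];
    ssize_t n = recv(fd, buf, sizeof buf, 0);             /* sleeps until data arrives */
    printf("got %zd bytes\n", n);
}

void unblocked_receive(int fd) {
    char buf[128];
    ssize_t n = recv(fd, buf, sizeof buf, MSG_DONTWAIT);  /* returns at once */
    if (n < 0 && (errno == EAGAIN || errno == EWOULDBLOCK))
        printf("no message yet; do other work\n");
    else
        printf("got %zd bytes\n", n);
}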


Un/blocked

  • Blocked message exchange

    • Easy to: understand, implement, verify correctness

    • Less powerful, may be inefficient as sender/receiver might waste time waiting

  • Unblocked message exchange

    • More efficient; no time wasted waiting

    • Needs queues, i.e., memory to store messages

    • Difficult to verify correctness of programs


Message Passing: Possibilities

[Figure: one sender delivering messages to multiple receivers (receiver i, receiver j, receiver k).]


Message Passing: Possibilities...

[Figure: multiple senders (sender i, sender j, sender k) delivering messages to a single receiver.]


Naming

  • Direct Naming

    • Specify explicitly the receiver process-id.

    • Simple but less powerful as it needs the sender/receiver to know the actual process-id to/from which a message is to be sent/received.

    • Not suitable for generic client-server models

  • Port Naming

    • The receiver uses a single port for getting all messages; good for client-server.

    • More complex in terms of language structure and verification.

  • Global Naming (mailbox)

    • Suitable for client-server; difficult to implement on a distributed network.

    • Complex for language structure and verification.
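A small C sketch of port naming with sockets (port 5000 is an arbitrary, illustrative choice): the service is named by a port, so the server never needs its clients' process ids.

/* The server is named by a port number, not by a process id. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int open_service_port(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);   /* accept from any sender */
    addr.sin_port = htons(5000);                /* the "port name" of the service */
    bind(fd, (struct sockaddr *)&addr, sizeof addr);
    listen(fd, 8);
    return fd;
}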


Communicating Sequential Processes (CSP)

Example: a reader-writer coordinator process (in CSP, "?" receives a message, "!" sends one, and the alternatives of the repetitive command are separated by []):

process reader-writer
  OKtoread, OKtowrite: integer (initially = value);
  busy: boolean (initially = 0);
  *[ busy = 0; writer?request() ->
       busy := 1; writer!OKtowrite;
  [] busy = 0; reader?request() ->
       busy := 1; reader!OKtoread;
  [] busy = 1; reader?readfin()  -> busy := 0;
  [] busy = 1; writer?writefin() -> busy := 0;
  ]


CSP: Drawbacks

  • Requires explicit naming of processes in I/O commands.

  • No message buffering: an input/output command blocks (or its guard becomes false), which can introduce delay and inefficiency.


Operation oriented constructs

Remote Procedure Call (RPC):

[Figure: Task A and Task B, each with service declarations. Task A invokes RPC abc(xyz, ijk): it sends xyz and waits for the result; Task B executes the call and returns ijk.]

  • Service declaration: describes in and out parameters

  • Can be implemented using message passing

  • Caller: gets blocked when RPC is invoked.

  • Callee implementation possibilities:

    • Can loop “accepting” calls

    • Can get “interrupted” on getting a call

    • Can fork a process/thread for calls
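A small C sketch of the caller side, assuming the RPC is carried over an already-connected socket and that the hypothetical call abc packs a single 32-bit parameter; it shows where the caller blocks:

/* Client stub for a hypothetical call abc(xyz) -> result. */
#include <stdint.h>
#include <unistd.h>
#include <arpa/inet.h>

int32_t rpc_abc(int fd, int32_t xyz) {
    uint32_t req = htonl((uint32_t)xyz);
    write(fd, &req, sizeof req);         /* send the packed request */
    uint32_t rep;
    read(fd, &rep, sizeof rep);          /* the caller blocks here for the result */
    return (int32_t)ntohl(rep);
}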


RPC: Issues

  • Passing pointers and global variables can be difficult.

  • If processes on different machines, data size (number of bits for a data type) variations need to be addressed.

    • Abstract Data Types (ADTs) are generally used to take care of these variations.

    • ADTs are language-like structures that specify how many bits are used for an integer, etc.

    • What does this imply?

  • Multiple processes may provide the same service, so naming needs to be resolved.

  • Synchronous/blocked message passing is equivalent to RPC.
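A small C sketch of the packing step that hides such data-size and byte-order variations (function names are illustrative, not from the text):

/* The caller's stub packs parameters into fixed-width, network-byte-order
 * fields; the callee's stub unpacks them into its own machine format. */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

size_t pack_int32(uint8_t *buf, int32_t value) {
    uint32_t wire = htonl((uint32_t)value);   /* agreed size and byte order */
    memcpy(buf, &wire, sizeof wire);
    return sizeof wire;
}

int32_t unpack_int32(const uint8_t *buf) {
    uint32_t wire;
    memcpy(&wire, buf, sizeof wire);
    return (int32_t)ntohl(wire);              /* back to the local format */
}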


Ada

General form of a task:

task [type] <name> is
   entry specifications
end;

task body <name> is
   declarations of local variables
begin
   list of statements
   ...
   accept <entry id> (<formal parameters>) do
      body of the accept statement
   end <entry id>;
exception
   exception handlers
end;

Example: a one-slot buffer task:

task proc-buffer is
   entry store(x: buffer);
   entry remove(y: buffer);
end;

task body proc-buffer is
   temp: buffer;
begin
   loop
      when flag accept store(x: buffer) do
         temp := x; flag := 0;
      end store;
      when not flag accept remove(y: buffer) do
         y := temp; flag := 1;
      end remove;
   end loop;
end proc-buffer;


Ada Message Passing

[Figure: Task B calls Store(xyz); xyz is sent from Task B to Task A, which has "entry store(...)" and executes "accept Store(.)"; the result, if any, flows back to Task B.]

This is somewhat similar to executing a procedure call: the parameter value for the entry procedure is supplied by the calling task, and the value of the result, if any, is returned to the caller.


RPC Design

  • Structure

    • Caller: local call + stub

    • Callee: stub + actual procedure

  • Binding

    • Where to execute? Name/address of the server that offers a service

    • Name server with inputs from service specifications of a task.

  • Parameter & results

    • Packing: Convert to remote machine format

    • Unpacking: Convert to local machine format


RPC Execution

[Figure: RPC execution flow. The callee registers its services with a binding server. On a call, the caller's local procedure makes a local call into its stub; the stub queries the binding server, which returns the server's address, then packs the parameters, sends the request, and waits. The callee's stub unpacks the parameters, makes a local call to execute the remote procedure, packs the results, and returns them. The caller's stub unpacks the result and returns it to the local procedure.]


RPC Semantics

  • At least once

    • A successful RPC results in one or more invocations.

    • A partial, i.e., unsuccessful, call may leave zero, partial, or one or more executions.

  • Exactly once

    • Only one call maximum

    • Unsuccessful? : zero, partial, or one execution

  • At most once

    • Zero or one. No partial executions.
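A small C sketch of one way a server can approximate at-most-once semantics, by remembering the last request id it executed for each client (all names are illustrative):

/* Retransmitted requests carry the same id; a duplicate is never re-executed,
 * so each request runs zero times (lost) or once. */
#include <stdbool.h>
#include <stdint.h>

#define MAX_CLIENTS 64

static uint64_t last_executed[MAX_CLIENTS];     /* highest id executed so far */

bool should_execute(int client, uint64_t request_id) {
    if (request_id <= last_executed[client])
        return false;                           /* duplicate: do not run again */
    last_executed[client] = request_id;
    return true;                                /* first arrival: run once */
}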


RPC Implementation

  • Sending/receiving parameters:

    • Use reliable communication, or unreliable datagrams?

    • The choice implies the semantics: how many times an RPC may be invoked.


RPC Disadvantage

  • Incremental communication of results is not possible: e.g., a database cannot return the first few matches immediately; the caller has to wait until the complete response is ready.


Distributed Operating Systems

Issues :

  • Global Knowledge

  • Naming

  • Scalability

  • Compatibility

  • Process Synchronization

  • Resource Management

  • Security

  • Structuring

  • Client-Server Model


DOS: Issues ..

  • Global Knowledge

    • Lack of global shared memory, global clock, unpredictable message delays

    • These lead to an unpredictable global state and make it difficult to order events (A sends to B, C sends to D: the events may be related).

  • Naming

    • Need for a name service: to identify objects (files, databases), users, services (RPCs).

    • Replicated directories? : Updates may be a problem.

    • Need for name to (IP) address resolution.

    • Distributed directory: algorithms for update, search, ...


DOS: Issues ..

  • Scalability

    • System requirements should (ideally) increase linearly with the number of computer systems

    • Includes: overheads for message exchange in algorithms used for file system updates, directory management...

  • Compatibility

    • Binary level: Processor instruction level compatibility

    • Execution level: same source code can be compiled and executed

    • Protocol level: Mechanisms for exchanging messages, information (e.g., directories) understandable.


DOS: Issues ..

  • Process Synchronization

    • Distributed shared memory: difficult.

  • Resource Management

    • Data/object management: Handling migration of files, memory values. To achieve a transparent view of the distributed system.

    • Main issues: consistency, minimization of delays, ..

  • Security

    • Authentication and authorization


DOS: Issues ..

  • Structuring

    • Monolithic kernel: not everything is needed everywhere, e.g., full file management is not needed on diskless workstations.

    • Collective kernel: distributed functionality on all systems.

      • Micro kernel + set of OS processes

      • Micro kernel: functionality for task, memory, processor management. Runs on all systems.

      • OS processes: set of tools. Executed as needed.

    • Object-oriented system: services as objects.

      • Object types: process, directory, file, …

      • Operations on the objects: encapsulated data can be manipulated.


DOS: Communication

[Figure: computers interconnected through switches forming the communication network.]

ISO-OSI Reference Model

[Figure: the ISO-OSI layers (Application, Presentation, Session, Transport, Network, Data Link, Physical) on the two communicating end systems, with the intermediate nodes of the communication network implementing only the Network, Data Link, and Physical layers.]


Un/reliable Communication

  • Reliable Communication

    • Virtual circuit: one path between sender & receiver. All packets sent through the path.

    • Data received in the same order as it is sent.

    • TCP (Transmission Control Protocol) provides reliable communication.

  • Unreliable communication

    • Datagrams: different packets may be sent through different paths.

    • Data might be lost or arrive out of sequence.

    • UDP (User Datagram Protocol) provides unreliable communication.
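At the socket API level the two options look like this (a C sketch, POSIX assumed):

/* SOCK_STREAM selects the reliable, ordered TCP virtual circuit;
 * SOCK_DGRAM selects unreliable UDP datagrams. */
#include <sys/socket.h>
#include <netinet/in.h>

int open_reliable(void)   { return socket(AF_INET, SOCK_STREAM, 0); }  /* TCP */
int open_unreliable(void) { return socket(AF_INET, SOCK_DGRAM,  0); }  /* UDP */

An RPC layer built on the reliable option gets ordering and retransmission for free; built on datagrams, it must cope with loss and duplication itself, which is exactly the choice of RPC semantics discussed earlier.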
