COT 4600 Operating Systems Spring 2011

Dan C. Marinescu

Office: HEC 304

Office hours: Tu-Th 5:00 – 6:00 PM

Lecture 14 – Tuesday, March 15, 2011

  • Last time: Midterm
  • Today: Solution of the midterm problems and midterm discussion
  • Next time: Virtualization, Locks

Problem 1 Naming
  • Conceptual/abstract model: a description which retains the most important characteristics of the process/object in a given context.
    • The model of an airplane wing
    • The abstraction of an interpreter

Naming MODEL – (see Lecture 8)
  • Naming allows objects
    • to refer to each other
    • to locate other objects
    • to determine the properties of other objects
  • Naming model → a general scheme for resolving a name that can be applied to a spectrum of objects and implementations; examples:
    • Searching for an address in a phone book
    • Compiling a program which uses a library
    • Identifying the registers used during execution of a compiled program
  • Four components:
      • Name space (the universe of names)
      • Mapping function
      • The universe of values
      • Context
    • A function which takes as input a name and a context and produces a value (a minimal sketch follows below)
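
Below is a minimal sketch of the name-resolution function just described: a context is simply a table of name-to-value bindings, and RESOLVE looks the name up in it. The struct names and the phone-book data are made up for illustration.

    #include <stdio.h>
    #include <string.h>

    /* A context: a small table binding names to values (here, strings). */
    struct binding { const char *name; const char *value; };
    struct context { struct binding bindings[8]; int count; };

    /* RESOLVE(name, context) -> value, or NULL if the name is not bound. */
    const char *resolve(const char *name, const struct context *ctx) {
        for (int i = 0; i < ctx->count; i++)
            if (strcmp(ctx->bindings[i].name, name) == 0)
                return ctx->bindings[i].value;
        return NULL;            /* name not found in this context */
    }

    int main(void) {
        struct context phonebook = { { {"Alice", "555-1234"}, {"Bob", "555-9876"} }, 2 };
        printf("Alice -> %s\n", resolve("Alice", &phonebook));
        return 0;
    }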

The role of the context for modular sharing

Modular sharing: allows one module to use another module developed independently, without the danger of name collision (page 117 of the textbook).

All names encountered during name resolution for module A are resolved using the context of A; in other words, the context of module A points to all modules used by A (including A itself).

The context is what allows modular sharing: if modules A and B both use a function called W, but the two functions are different (call them WA and WB for convenience), then the context of A will point to WA and the context of B will point to WB, as the sketch below illustrates.
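
A small, hypothetical C illustration of this point: modules A and B each supply their own W, and each module's context (here a struct of function pointers) resolves the name W to its own implementation, so there is no collision. All names in the sketch are invented.

    #include <stdio.h>

    /* Two independently developed functions that happen to share the name W. */
    static void W_A(void) { printf("W as implemented for module A\n"); }
    static void W_B(void) { printf("W as implemented for module B\n"); }

    /* Each module's context binds the name W to its own implementation. */
    struct module_context { void (*W)(void); };

    static const struct module_context context_A = { W_A };
    static const struct module_context context_B = { W_B };

    int main(void) {
        context_A.W();   /* module A's use of W resolves to W_A */
        context_B.W();   /* module B's use of W resolves to W_B -- no collision */
        return 0;
    }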

Problem 2 – UNIX file system
  • Lecture 8
    • Basic concepts related to file handling
    • How UNIX users access a file – API
    • Internal implementation of file operations for a generic system
  • How to map logical to physical organization
    • UNIX

B. The software layer: the file abstraction
  • File: memory abstraction used by the application and OS layers
    • linear array of bits/bytes
    • properties:
      • durable → the information is preserved over time
      • has a name
      • allows access to individual bits/bytes → has a cursor that defines the current position in the file.
  • The OS provides an API (Application Programming Interface) supporting a range of file manipulation operations.
  • A user must first OPEN a file before accessing it and CLOSE it when finished with it (sketched below). This strategy:
    • allows different access rights (READ, WRITE, READ-WRITE)
    • coordinates concurrent access to the file
  • Some file systems
    • use OPEN and CLOSE to enforce before-or-after atomicity
    • support all-or-nothing atomicity, e.g., ensure that if the system crashes before a CLOSE either all or none of the WRITEs are carried out
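
A minimal sketch of the OPEN-before-access discipline, written against the standard POSIX calls open/read/close rather than the generic API of these slides; the file name is made up and error handling is abbreviated.

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        /* OPEN first, stating the access right we want (read-only here). */
        int fd = open("/tmp/example.txt", O_RDONLY);   /* hypothetical file */
        if (fd < 0) { perror("open"); return 1; }

        char buf[64];
        ssize_t n = read(fd, buf, sizeof buf);         /* access via the cursor */
        if (n >= 0) printf("read %zd bytes\n", n);

        close(fd);                                     /* CLOSE when finished */
        return 0;
    }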

API for the Unix File System

OPEN(name, flags, mode) → connect to a file

Open an existing file called name, or

Create a new file with permissions set to mode if the CREATE flag is set.

Set the file pointer (cursor) to 0.

Return the file descriptor (fd).

CLOSE(fd) → disconnect from a file

Delete the file descriptor fd.

READ(fd, buf, n) → read from a file

Read n bytes from file fd into buf; start at the current cursor position and update the file cursor (cursor = cursor + n).

WRITE(fd, buf, n) → write to a file

Write n bytes to file fd from buf; start at the current cursor position and update the file cursor (cursor = cursor + n).

SEEK(fd, offset, whence) → move the file cursor

Set the cursor position of file fd to offset relative to the position specified by whence (beginning, end, or current position); a short usage sketch follows below.
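
A short sketch of the cursor behavior described above, using the POSIX counterparts open/write/lseek/read; the file name is illustrative only.

    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void) {
        int fd = open("/tmp/cursor_demo", O_RDWR | O_CREAT | O_TRUNC, 0644);
        if (fd < 0) { perror("open"); return 1; }

        write(fd, "hello world", 11);   /* cursor advances from 0 to 11 */

        lseek(fd, 6, SEEK_SET);         /* SEEK: move the cursor to offset 6 from the beginning */

        char buf[6] = {0};
        read(fd, buf, 5);               /* reads "world"; cursor advances back to 11 */
        printf("%s\n", buf);

        close(fd);
        return 0;
    }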

API for the Unix File System (cont’d)

FSYNC(fd) → make all changes to the file fd durable.

STAT(name) → read the file metadata (illustrated below)

CHMOD, CHOWN → change access mode/ownership

RENAME(from_name, to_name) → change the file name

LINK(name, link_name) → create a hard link

UNLINK(name) → remove name from its directory

SYMLINK(name, link_name) → create a symbolic link

MKDIR(name) → create the directory name

RMDIR(name) → delete the directory name

CHDIR(name) → change the current directory to name

CHROOT → change the default root directory

MOUNT(name, device) → mount the file system on device at the point name in the name space

UNMOUNT(name) → unmount the file system mounted at name
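
As one illustration of the metadata calls above, a small sketch that reads a file's metadata with the standard stat(2) call; the path is just an example.

    #include <sys/stat.h>
    #include <stdio.h>

    int main(void) {
        struct stat st;
        if (stat("/etc/passwd", &st) != 0) { perror("stat"); return 1; }

        /* A few of the fields kept in the file's inode. */
        printf("size:  %lld bytes\n", (long long) st.st_size);
        printf("owner: uid %u\n", (unsigned) st.st_uid);
        printf("mode:  %o\n", (unsigned) (st.st_mode & 0777));
        printf("links: %u\n", (unsigned) st.st_nlink);
        return 0;
    }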

Logical and physical organization
  • Logical/physical records:
    • a file consists of a set of logical records
    • a persistent storage medium (e.g., a disk) is organized as a sequence of blocks/physical records.
  • Hierarchical logical organization
    • Records
    • File
    • Directory
    • File system

How to map logical to physical organization
  • Define a control structure which holds information about a file, a directory, or a file system. For a file, this information includes:
    • Creation date
    • Owner
    • Size
    • Access rights (read, write, execute)
    • The physical location of the file
  • Store this metadata on persistent storage (the disk).
  • In UNIX
    • such a control structure is called an inode
    • an inode describes a file, a directory, or a file system
  • The inode table stores the block where the inodes for directories are located (a simplified sketch follows below)
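
A simplified, hypothetical sketch of such a control structure; the field names and sizes are illustrative and do not reproduce the actual UNIX inode layout.

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    #define N_BLOCK_PTRS 12                /* illustrative number of direct block pointers */

    /* Hypothetical, simplified inode: the on-disk control structure holding
       a file's metadata and the physical location of its blocks. */
    struct inode {
        uint16_t mode;                     /* access rights: read/write/execute bits */
        uint16_t type;                     /* file, directory, or file system */
        uint32_t owner_uid;                /* owner */
        uint64_t size;                     /* size in bytes */
        time_t   created;                  /* creation date */
        uint32_t block[N_BLOCK_PTRS];      /* physical location: disk block numbers */
    };

    int main(void) {
        printf("size of this toy inode: %zu bytes\n", sizeof(struct inode));
        return 0;
    }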

How to access the inode for a file in UNIX
  • Given a path name, the following steps are taken (a toy sketch follows below):
    • Go to Block 0 to locate the physical block where the inode table is stored
    • Go to the inode table to locate the inode for the root file system
    • Search the inode of the root file system for the inode of the next directory in the path to the file.
    • Recursively search until you locate the inode of the directory containing the file.
    • Identify the inode for the file.
  • Note: this process is carried out when an OPEN on the file name is issued; the control information from the inode is then transferred to a control block in main memory.
  • A READ operation does not need to access the inode table; it uses the control block already in main memory (see the next slide).
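
A toy sketch of the path walk above, using an in-memory stand-in for the inode table and the directory entries; the sample tree, inode numbers, and helper names are all made up.

    #include <stdio.h>
    #include <string.h>

    /* Toy directory entry: within directory dir_inode, name is bound to inode. */
    struct dirent { int dir_inode; const char *name; int inode; };

    /* A tiny made-up tree:  / (inode 2)  ->  /usr (5)  ->  /usr/notes.txt (9) */
    static const struct dirent table[] = {
        { 2, "usr",       5 },
        { 5, "notes.txt", 9 },
    };

    /* Look up one path component in the directory with inode `dir`. */
    static int lookup(int dir, const char *name) {
        for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
            if (table[i].dir_inode == dir && strcmp(table[i].name, name) == 0)
                return table[i].inode;
        return -1;                       /* not found */
    }

    /* Resolve an absolute path one component at a time, starting at the root inode. */
    static int path_to_inode(const char *path) {
        char copy[256];
        strncpy(copy, path, sizeof copy - 1);
        copy[sizeof copy - 1] = '\0';

        int inode = 2;                   /* start at the root's inode */
        for (char *c = strtok(copy, "/"); c != NULL; c = strtok(NULL, "/")) {
            inode = lookup(inode, c);    /* search the current directory */
            if (inode < 0) return -1;
        }
        return inode;                    /* the file's inode */
    }

    int main(void) {
        printf("inode of /usr/notes.txt = %d\n", path_to_inode("/usr/notes.txt"));
        return 0;
    }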

Unix OPEN and READ
  • If the file exists then the OPEN consists of the following steps
    • Locate the inode of the file using the procedure described earlier
    • Check if the operation is allowed
    • If the operation is allowed then transfer the information from the inode to the open file table for the process
  • If the file does not exist then the OPEN creates the inode and adds the file to the table of open files.
  • The READ only accesses the table of open files in main memory
    • Uses the file descriptor to locate the file entry in the table of open files
    • Determines the position of the cursor
    • Maps the cursor to a disk block (the arithmetic is sketched below)
    • Transfers the data from the block the cursor points to
    • Continues transferring blocks from the disk to the memory buffer until the entire length of the data is covered.
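
The cursor-to-block mapping in this READ path boils down to a division and a remainder; the sketch below assumes a 4 KB block size purely for illustration.

    #include <stdio.h>

    #define BLOCK_SIZE 4096   /* assumed block size, for illustration only */

    int main(void) {
        long cursor = 10000;                       /* current cursor position in the file */
        long block_index  = cursor / BLOCK_SIZE;   /* which block of the file holds the cursor */
        long block_offset = cursor % BLOCK_SIZE;   /* where within that block the data starts */

        printf("cursor %ld -> block %ld, offset %ld\n", cursor, block_index, block_offset);
        /* The file system then transfers block 2, block 3, ... until n bytes are covered. */
        return 0;
    }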

Problem 3 – NFS (see Lecture 13)
  • Developed at Sun Microsystems in the early 1980s.
  • Application of the client-server paradigm.
  • Objectives:
    • Design a shared file system to support collaborative work
    • Simplify the management of a set of workstations
      • Facilitate backups
      • Uniform administrative policies
  • Main design goals
    • Compatibility with existing applications → NFS should provide the same semantics as a local UNIX file system
    • Ease of deployment → the NFS implementation should be easily ported to existing systems
    • Broad scope → NFS clients should be able to run under a variety of operating systems
    • Efficiency → users should not notice a substantial performance degradation when accessing a remote file system relative to a local file system

NFS clients and servers
  • The NFS client should provide transparent access to remote file systems.
  • It mounts a remote file system in the local name space → it performs a function analogous to the UNIX MOUNT call.
  • The remote file system is specified as Host/Path
    • Host → the name of the host where the remote file system is located
    • Path → the local path name on the remote host.
  • The NFS client sends the NFS server an RPC with the Path information and gets back from the server a file handle
    • A 32-byte name that uniquely identifies the remote object.
  • The server encodes in the file handle (sketched below):
    • A file system identifier
    • An inode number
    • A generation number
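
A sketch of the three fields the server packs into a file handle, as listed above; the struct layout and field widths are assumptions, not the actual NFS wire format.

    #include <stdint.h>
    #include <stdio.h>

    /* Illustrative file handle: opaque to the client, decoded only by the server. */
    struct nfs_file_handle {
        uint32_t fs_id;        /* which file system on the server */
        uint32_t inode;        /* inode number of the object within that file system */
        uint32_t generation;   /* incremented when the inode is reused for a new file */
    };

    int main(void) {
        struct nfs_file_handle fh = { 1, 4242, 7 };
        printf("fs=%u inode=%u gen=%u\n", (unsigned) fh.fs_id, (unsigned) fh.inode,
               (unsigned) fh.generation);
        return 0;
    }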

Implementation
  • Vnode – a structure in volatile memory which abstracts whether a file or directory is local or remote. A file system call (Open, Read, Write, Close, etc.) is done through the vnode layer (a simplified dispatch sketch follows after this list). Example:
    • To Open a file a client calls PATHNAME_TO_VNODE
    • The file name is parsed and a LOOKUP is generated
      • if the directory is local and the file is found, the local file system creates a vnode for the file
      • else
        • the LOOKUP procedure implemented by the NFS client is invoked. The file handle of the directory and the path name are passed as arguments
        • The NFS client invokes the LOOKUP remote procedure on the server via an RPC
        • The NFS server extracts the file system id and the inode number and then calls a LOOKUP in the vnode layer.
        • The vnode layer on the server side does LOOKUP on the local file system passing the path name as an argument.
        • If the local file system on the server locates the file it creates a vnode for it and returns the vnode to the NFS server.
        • The NFS server sends a reply to the RPC containing the file handle of the vnode and some metadata
        • The NFS client creates a vnode containing the file handle
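
A highly simplified sketch of the dispatch described above: a vnode records whether the object is local or remote, and the vnode layer routes LOOKUP accordingly. All types and functions here are invented stand-ins for the real vnode interface.

    #include <stdio.h>

    /* Simplified vnode: the in-memory abstraction that hides local vs. remote. */
    enum vnode_kind { VNODE_LOCAL, VNODE_REMOTE };

    struct vnode {
        enum vnode_kind kind;
        int inode;         /* used if local */
        int file_handle;   /* used if remote (stands in for the NFS file handle) */
    };

    /* Stand-ins for the two LOOKUP implementations behind the vnode layer. */
    static struct vnode local_lookup(const char *name) {
        printf("local file system LOOKUP of %s\n", name);
        return (struct vnode){ VNODE_LOCAL, 42, 0 };
    }
    static struct vnode nfs_client_lookup(const char *name) {
        printf("NFS client LOOKUP RPC for %s\n", name);
        return (struct vnode){ VNODE_REMOTE, 0, 7 };
    }

    /* The vnode layer dispatches on where the containing directory lives. */
    static struct vnode vnode_lookup(const struct vnode *dir, const char *name) {
        return (dir->kind == VNODE_LOCAL) ? local_lookup(name) : nfs_client_lookup(name);
    }

    int main(void) {
        struct vnode local_dir  = { VNODE_LOCAL,  2, 0 };
        struct vnode remote_dir = { VNODE_REMOTE, 0, 3 };
        vnode_lookup(&local_dir,  "a.txt");
        vnode_lookup(&remote_dir, "b.txt");
        return 0;
    }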

Why file handles and not path names

--------------------------------- Example 1 ------------------------------------------------

Program 1 runs on client 1, program 2 on client 2; the operations are listed in time order:

Client 1: CHDIR('dir1')

Client 1: fd ← OPEN("f", READONLY)

Client 2: RENAME('dir1', 'dir2')

Client 2: RENAME('dir3', 'dir1')

Client 1: READ(fd, buf, n)

To follow the UNIX specification, if both programs ran on the same system, program 1 would read from dir2/f (the file it opened). Because the file handle is built from the inode number rather than the path name, client 1 gets the same semantics and reads the file it opened, rather than the new dir1/f.

----------------------------------- Example 2 -----------------------------------------------

Again the operations are listed in time order:

Client 2: fd ← OPEN("f", READONLY)

Client 1: UNLINK("f")

Client 1: fd ← OPEN("f", CREATE)

Client 2: READ(fd, buf, n)

If the NFS server reuses the inode of the old file then the RPC from client 2 would read from the new file created by client 1. The generation number allows the NFS server to distinguish between the old file opened by client 2 and the new one created by client 1.

Read/Write coherence
  • Enforcing Read/Write coherence is non-trivial even for local operations.
    • For performance reasons a device driver may delay a Write operation issued by Client 1; caching could cause problems when Client 2 tries to Read the file.
  • Possible solutions
    • Close-to-Open consistency:
      • If Client 1 executes the sequence Open → Write → Close before Client 2 executes the sequence Open → Read → Close, then Read/Write coherence is provided.
      • If Client 1 executes the sequence Open → Write before Client 2 executes the sequence Open → Read, then Read/Write coherence may or may not be provided.
    • Consistency for every operation: no caching

NFS Close-to-Open semantics

A client stores with each block in its cache the timestamp of the block’s vnode at the time the client got the block from the NFS server.

When a user program opens a file, the client sends a GETATTR request to get the timestamp of the latest modification of the file.

The client fetches a new copy only if the file has been modified since it last accessed it; otherwise it uses the local copy.

To implement a WRITE, the client updates only its cached copy, without an RPC WRITE to the server.

At CLOSE time the client sends the cached copy to the server (the OPEN-time check is sketched below).
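
A sketch of that OPEN-time validity check: the client compares the modification timestamp it cached with the one returned by GETATTR and refetches only if the file has changed on the server. The structures and the GETATTR stand-in are illustrative.

    #include <stdio.h>
    #include <time.h>

    /* What the client remembers about a cached file. */
    struct cached_file {
        time_t cached_mtime;   /* vnode timestamp recorded when the blocks were fetched */
    };

    /* Stand-in for the GETATTR RPC: returns the server's latest modification time. */
    static time_t getattr_mtime_from_server(void) {
        return 1300000000;     /* made-up value for illustration */
    }

    /* Decision made at OPEN time under close-to-open semantics. */
    static void open_cached(struct cached_file *f) {
        time_t server_mtime = getattr_mtime_from_server();
        if (server_mtime > f->cached_mtime) {
            printf("file changed on server: fetch a fresh copy\n");
            f->cached_mtime = server_mtime;
        } else {
            printf("cache still valid: use the local copy\n");
        }
    }

    int main(void) {
        struct cached_file f = { 1299999999 };
        open_cached(&f);   /* server copy is newer -> refetch */
        open_cached(&f);   /* now up to date       -> use cache */
        return 0;
    }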
