Chapter 11: I/O management and disk scheduling

Presentation Transcript

1. Chapter 11: I/O management and disk scheduling
CS 472 Operating Systems
Indiana University – Purdue University Fort Wayne

2. I/O management and disk scheduling
- We have already discussed I/O techniques: programmed I/O, interrupt-driven I/O, and direct memory access (DMA)
- We also discussed the value of logical I/O, where the OS hides most of the details of device I/O in system service routines
- Processes then see devices in general terms such as read, write, open, close, lock, unlock

3. Other issues
- I/O buffering
- Physical disk organization
- Need for efficient disk access
- Disk scheduling policies
- RAID

4. I/O buffering
- Buffering can be block-oriented or stream-oriented
- Block-oriented buffering: information is stored in fixed-size blocks, and transfers are made a block at a time
- Used for disks and tapes

5. I/O buffering
- Stream-oriented buffering: information is transferred as a stream of bytes
- Used for terminals, printers, communication ports, mice and other pointing devices, and most other devices that are not secondary storage
- User input from a terminal is one line at a time, with a carriage return signaling the end of the line
- Output to the terminal is one line at a time

6. I/O buffering
- I/O is a problem under paged virtual memory: the target page of any I/O operation must be present in a page frame until the transfer is complete
- Otherwise, there can be single-process deadlock
- Example: suppose process P is blocked waiting for an I/O event to complete, and the target page for the I/O is swapped out to disk
- The I/O operation is then blocked waiting for the target page to be swapped in, which won't happen until P runs and causes a page fault

7. I/O buffering
- Resulting OS complications: the target page of any I/O operation must be locked in memory until the transfer is complete, and a process with pending I/O on any page may not be suspended
- Solution: do I/O through a system I/O buffer in main memory, assigned to the I/O request
- The system buffer is locked into a memory frame; the input transfer is made to the buffer, and the block is moved to user space when needed
- This decouples the I/O transfer from the address space of the application

8. I/O buffering for throughput
- With I/O buffers, an application can process the data from one I/O request while awaiting another
- Time needed to process a block of data:
  without buffering = C + T
  with buffering = M + max{ C, T }
- where C = computation time, T = I/O (disk-to-memory) transfer time, and M = memory-to-memory transfer time (buffer to user space)
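The timing comparison above is easy to check numerically. A minimal sketch in Python, with purely illustrative values for C, T, and M (none of these numbers come from the slides):

```python
# Per-block processing time with and without a single system buffer.
# C = computation time per block, T = disk-to-memory transfer time,
# M = memory-to-memory copy time (system buffer to user space).

def time_unbuffered(C, T):
    return C + T               # transfer completes, then the block is processed

def time_buffered(C, T, M):
    return M + max(C, T)       # the next transfer overlaps the current computation

C, T, M = 4.0, 6.0, 0.5        # illustrative values in ms, not from the slides
print(time_unbuffered(C, T))   # 10.0
print(time_buffered(C, T, M))  # 6.5
```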

9. Double buffering
- Use two system buffers instead of one
- A process can transfer data to or from one buffer while the operating system empties or fills the other buffer
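A minimal double-buffering sketch, assuming hypothetical read_block() and process() stand-ins for real device I/O and computation: while the current buffer is processed, a background thread fills the other one.

```python
import threading

def read_block(block_no):
    return f"data-{block_no}"            # placeholder for a real device read

def process(data):
    pass                                 # placeholder for real computation

def run(num_blocks):
    buffers = [read_block(0), None]      # prime the first buffer synchronously
    for i in range(num_blocks):
        current, other = i % 2, (i + 1) % 2
        reader = None
        if i + 1 < num_blocks:
            def fill(slot=other, blk=i + 1):
                buffers[slot] = read_block(blk)
            reader = threading.Thread(target=fill)   # fill the other buffer
            reader.start()                            # while this one is processed
        process(buffers[current])
        if reader is not None:
            reader.join()                # make sure the next buffer is ready

run(5)
```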

10. Circular buffering
- More than two buffers are used
- The system buffers are arranged in a circular list
- Appropriate in applications where there are bursts of I/O requests
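A circular buffer can be sketched as a fixed array of slots with wrap-around head and tail indices. This is a minimal, single-threaded illustration; a real implementation would add locking and block the producer or consumer when the buffer is full or empty.

```python
class CircularBuffer:
    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.head = 0        # next slot to read
        self.tail = 0        # next slot to write
        self.count = 0

    def put(self, block):
        if self.count == len(self.slots):
            return False                  # full: the producer would have to wait
        self.slots[self.tail] = block
        self.tail = (self.tail + 1) % len(self.slots)
        self.count += 1
        return True

    def get(self):
        if self.count == 0:
            return None                   # empty: the consumer would have to wait
        block = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.count -= 1
        return block

buf = CircularBuffer(4)
for i in range(3):                        # a burst of I/O completions
    buf.put(f"block-{i}")
print(buf.get())                          # drained in FIFO order: block-0
```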

11. Physical disk organization

12. Physical disk organization

13. Physical disk organization
- To read or write, the disk head must be positioned on the desired track and at the beginning of the desired sector
- Seek time is the time it takes to position the head on the desired track
- Rotational delay (rotational latency) is the additional time it takes for the beginning of the sector to reach the head once the head is in position
- Transfer time is the time for the sector to pass under the head

14. Physical disk organization
- Access time = seek time + rotational latency + transfer time
- The efficiency of a sequence of disk accesses depends strongly on the order of the requests
- Adjacent requests on the same track avoid additional seek and rotational latency times
- Loading a file as a unit is efficient when the file has been stored on consecutive sectors on the same cylinder of the disk

15. Example: two single-sector disk requests
- Assume: average seek time = 10 ms, average rotational latency = 3 ms, transfer time for 1 sector = 0.01875 ms
- Adjacent sectors on the same track: access time = 10 + 3 + 2 × 0.01875 ms = 13.0375 ms
- Random sectors: access time = 2 × (10 + 3 + 0.01875) ms = 26.0375 ms
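The arithmetic in this example can be reproduced directly from the access-time formula on the previous slide:

```python
seek, latency, transfer = 10.0, 3.0, 0.01875   # ms, values from the example

# Adjacent sectors on the same track: one seek and one rotational delay,
# then both sectors pass under the head back to back.
adjacent = seek + latency + 2 * transfer

# Random sectors: each request pays its own seek, latency, and transfer.
random_sectors = 2 * (seek + latency + transfer)

print(adjacent)          # 13.0375 ms
print(random_sectors)    # 26.0375 ms
```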

16. Disk scheduling policies
- Each policy assumes that a queue of waiting disk requests exists and that requests enter the queue in random order
- Policies we will consider: random, FIFO, PRI, SSTF, SCAN, C-SCAN, N-step-SCAN, FSCAN

17. Disk scheduling policies
- Random: just a benchmark for comparison
- FIFO: the next request serviced is the one that has been in the queue the longest
- FIFO behaves the same as random when requests are queued in random order (true when many processes generate requests)
- Fair to all processes

18. Disk scheduling policies
- PRI: priority is given to requests based on process class (interactive, batch, etc.)
- Scheduling is largely outside of disk management control
- The goal is not to optimize disk use but to meet other objectives
- Short batch jobs may have higher priority, which provides good interactive response time

19. Disk scheduling policies
- SSTF (Shortest Service Time First): from the requests currently in the queue, choose the one that minimizes movement of the arm (read/write head)
- Always chooses the minimum seek time; ties are resolved in a fair manner (both inward and outward)
- Does not guarantee minimum total arm movement
- Starvation is possible
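A minimal SSTF sketch: on each step, service the pending request whose track is closest to the current head position. The request queue below is purely illustrative, and ties are broken arbitrarily rather than with the fair inward/outward rule mentioned above.

```python
def sstf(start_track, requests):
    pending = list(requests)
    head = start_track
    order = []
    while pending:
        nearest = min(pending, key=lambda t: abs(t - head))  # minimum seek distance
        pending.remove(nearest)
        order.append(nearest)
        head = nearest
    return order

print(sstf(100, [55, 58, 39, 18, 90, 160, 150, 38, 184]))
# [90, 58, 55, 39, 38, 18, 150, 160, 184]
```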

20. Disk scheduling policies
- SCAN: the arm moves in one direction only until it reaches the last request in that direction, then the arm reverses and repeats
- SCAN avoids starvation
- C-SCAN: like SCAN, but requests are serviced in one direction only; the arm then returns to the opposite side of the disk and repeats
- C-SCAN reduces the maximum wait for requests near the disk edge
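A minimal sketch of SCAN and C-SCAN on a static queue of track numbers (the queue is illustrative and dynamic arrivals are ignored):

```python
def scan(start_track, requests, direction="up"):
    above = sorted(t for t in requests if t >= start_track)
    below = sorted((t for t in requests if t < start_track), reverse=True)
    # Sweep in the current direction to the last request, then reverse.
    return above + below if direction == "up" else below + above

def c_scan(start_track, requests):
    above = sorted(t for t in requests if t >= start_track)
    below = sorted(t for t in requests if t < start_track)
    # Sweep upward only; then return to the lowest pending track and continue upward.
    return above + below

queue = [55, 58, 39, 18, 90, 160, 150, 38, 184]
print(scan(100, queue))    # [150, 160, 184, 90, 58, 55, 39, 38, 18]
print(c_scan(100, queue))  # [150, 160, 184, 18, 38, 39, 55, 58, 90]
```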

21. Disk scheduling policies
- N-step-SCAN: divide the request queue into segments of N requests and use SCAN on each segment
- New requests are added to the rear of the queue, forming a new N-request segment
- Reduces the maximum waiting time in a high-volume situation
- Causes the head to move more frequently from one cylinder to the next
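Reusing the scan() sketch above, N-step-SCAN can be illustrated by slicing a static queue into segments of at most N requests and sweeping each segment in turn. This is a simplification: arrival order stands in for new requests joining later segments, and each segment's sweep starts in the upward direction.

```python
def n_step_scan(start_track, requests, n):
    order = []
    head = start_track
    for i in range(0, len(requests), n):
        segment = requests[i:i + n]          # at most n requests per segment
        serviced = scan(head, segment)       # each segment is handled with SCAN
        order.extend(serviced)
        head = serviced[-1]                  # arm position after the segment
    return order

print(n_step_scan(100, [55, 58, 39, 18, 90, 160, 150, 38, 184], 3))
# [58, 55, 39, 90, 160, 18, 38, 150, 184]
```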

22. Disk scheduling policies
- FSCAN: like N-step-SCAN, but with two queues
- One queue fills while the other is processed using SCAN

23. Disk Scheduling Algorithms

24. RAID (Redundant Array of Independent Disks)
- The OS views N physical disk drives as a single logical drive
- Data are distributed across the physical drives of the array
- Redundant disk capacity can be used to store parity information

25. RAID Level 0 (non-redundant)
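RAID 0 simply stripes logical blocks across the member disks with no redundancy. A minimal sketch of the logical-to-physical mapping, assuming round-robin striping with a strip size of one block:

```python
def block_to_physical(logical_block, num_disks):
    disk = logical_block % num_disks      # which member disk holds the block
    offset = logical_block // num_disks   # block position on that disk
    return disk, offset

# With 4 disks, logical blocks 0..7 land on disks 0, 1, 2, 3, 0, 1, 2, 3.
for b in range(8):
    print(b, block_to_physical(b, 4))
```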

26. RAID Level 1 (mirrored)
- Two copies of the Level 0 disks, mirroring each other
- A single read request is served by the disk with the minimum access time
- A write is done in parallel to both disks; the write access time is the maximum of the two write times
- Data redundancy, but twice the cost
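A minimal sketch of how mirroring affects timing, assuming a hypothetical per-copy positioning cost for each mirror:

```python
def mirrored_read_time(cost_a, cost_b):
    return min(cost_a, cost_b)    # a read is served by whichever copy is faster to reach

def mirrored_write_time(cost_a, cost_b):
    return max(cost_a, cost_b)    # a write completes only when both copies are updated

print(mirrored_read_time(8.0, 12.0))   # 8.0 ms
print(mirrored_write_time(8.0, 12.0))  # 12.0 ms
```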

27. RAID
- Levels 2 through 6 also exist
- Read if interested: pp. 520-523
