
Eliminating Receive Livelock in an Interrupt-driven Kernel



Presentation Transcript


  1. Eliminating Receive Livelock in an Interrupt-driven Kernel. Jeffrey C. Mogul, mogul@wrl.dec.com; K. K. Ramakrishnan, AT&T Bell Laboratories, kkrama@research.att.com. 1996 USENIX.

  2. Problem: receive livelock • the system spends all its time processing interrupts, to the exclusion of other necessary tasks • Solution: load-dependent scheduling • The OS lowers the priority of network interrupt handling as load increases

  3. OS Scheduling Techniques • Interrupts • When a task requires service, it generates an interrupt. The interrupt handler provides some service immediately. • Polling • When a task requires service, it turns on a flag. The OS polls the flags periodically, and provides some service to the tasks with flags turned on.
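
To make the contrast concrete, here is a minimal user-space sketch in C; it is not code from the paper, and the names (device_ready, service_packet, interrupt_handler, poll_once) are hypothetical. The interrupt path services the event immediately, while the polling path checks a ready flag only when the OS chooses to:

    #include <stdbool.h>

    static volatile bool device_ready;      /* set by the "device" when work arrives */

    static void service_packet(void) { /* placeholder for packet processing */ }

    /* Interrupt style: the handler runs as soon as the event occurs,
     * giving low latency but preempting whatever else was running. */
    static void interrupt_handler(void)
    {
        device_ready = false;
        service_packet();
    }

    /* Polling style: the OS checks the flag at times of its choosing,
     * so a flood of events cannot starve other tasks. */
    static void poll_once(void)
    {
        if (device_ready) {
            device_ready = false;
            service_packet();
        }
        /* ...other OS work runs here before the next poll... */
    }

    int main(void)
    {
        device_ready = true;
        interrupt_handler();                /* event-driven path */
        device_ready = true;
        poll_once();                        /* polled path */
        return 0;
    }
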

  4. Benefits of Scheduling Techniques • Interrupts • Much lower overhead when events are infrequent, since no time is spent checking idle devices • Rapid response to each event • Polling • Performs better under heavy load, because the OS services events at its own pace rather than at the arrival rate

  5. How interrupt-driven scheduling causes excess latency under overload • Timeline of Ethernet packet processing • link-level processing at device IPL, which includes copying the packet into kernel buffers • further processing following a software interrupt, which includes locating the appropriate user process and queueing the packet for delivery to that process • finally, awakening the user process, which then (in kernel mode) copies the received packet into its own buffer
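
A compilable sketch of that traditional three-stage path, assuming BSD-flavoured names (an IP input queue, a software network interrupt) but not reproducing the authors' actual kernel code:

    #include <stddef.h>

    struct mbuf { struct mbuf *next; /* packet data elided */ };

    static struct mbuf *ipintrq_head, *ipintrq_tail;   /* IP input queue */

    /* Stage 1: device interrupt at device IPL: copy the packet into a
     * kernel buffer, append it to the IP input queue, and request a
     * software network interrupt. */
    static void rx_device_interrupt(struct mbuf *m)
    {
        m->next = NULL;
        if (ipintrq_tail != NULL)
            ipintrq_tail->next = m;
        else
            ipintrq_head = m;
        ipintrq_tail = m;
        /* schednetisr-style request for the software interrupt goes here */
    }

    /* Stage 2: software interrupt at a lower IPL: drain the queue, locate
     * the destination process, queue the packet for it, and wake it up. */
    static void ip_soft_interrupt(void)
    {
        while (ipintrq_head != NULL) {
            struct mbuf *m = ipintrq_head;
            ipintrq_head = m->next;
            if (ipintrq_head == NULL)
                ipintrq_tail = NULL;
            (void)m;   /* protocol processing and process wakeup elided */
        }
    }

    /* Stage 3: the awakened user process, running in kernel mode, copies
     * the packet into its own buffer (not shown). */

    int main(void)
    {
        struct mbuf pkt = { NULL };
        rx_device_interrupt(&pkt);   /* stage 1 */
        ip_soft_interrupt();         /* stage 2 */
        return 0;
    }
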

  6. Under Load, Avoid Livelock • Use interrupts only to initiate polling. • Use round-robin polling to fairly allocate resources among event sources. • Temporarily disable input when feedback from a full queue, or a limit on CPU usage, indicates that other important tasks are pending. • Drop packets early, rather than late, to avoid wasted work; try to process each accepted packet to completion. • Unloaded, Maintain High Performance • Re-enable interrupts when no work is pending, to avoid polling overhead and to keep latency low. • Let the receiving interface buffer bursts, to avoid dropping packets. • Eliminate the IP input queue and its associated overhead.
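
A sketch of how these rules might fit together; the function names, the per-round quota, and the source count are assumptions rather than the paper's driver code:

    #include <stdbool.h>

    #define NSOURCES   2        /* number of receive interfaces (assumed) */
    #define POLL_QUOTA 5        /* max packets per source per round (assumed) */

    /* Hypothetical driver hooks, stubbed so the sketch compiles. */
    static int  pending[NSOURCES];                    /* packets waiting per source */
    static void disable_rx_interrupts(void) {}
    static void enable_rx_interrupts(void)  {}
    static void process_one_packet(int src) { pending[src]--; }   /* to completion */

    /* Hardware interrupt: do no packet work here; just switch the system
     * into polling mode and wake the kernel polling thread. */
    void rx_interrupt(int src)
    {
        pending[src]++;
        disable_rx_interrupts();
        /* wake the polling thread here */
    }

    /* Polling thread: round-robin over sources, bounded work per round,
     * and a return to interrupt mode once all queues are drained. */
    void poll_round(void)
    {
        bool more;
        do {
            more = false;
            for (int src = 0; src < NSOURCES; src++) {
                int done = 0;
                while (done < POLL_QUOTA && pending[src] > 0) {
                    process_one_packet(src);          /* no IP input queue */
                    done++;
                }
                if (pending[src] > 0)
                    more = true;                      /* come back next round */
            }
        } while (more);
        enable_rx_interrupts();   /* idle again: interrupts keep latency low */
    }

    int main(void)
    {
        rx_interrupt(0);          /* a packet arrives on interface 0 */
        poll_round();             /* drain it and re-enable interrupts */
        return 0;
    }
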

  7. Livelock in a Unix router • filled circles: kernel-based forwarding performance • open squares: forwarding performance with the screend program, a firewall application, in use

  8. Why livelock occurs in the 4.2BSD Router

  9. Fixing the livelock problem • Do as much work as possible in a kernel thread • Eliminate the IP input queue and its associated queue manipulations and software interrupt
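
A before/after sketch of that hand-off; the names echo BSD conventions (IP input queue, schednetisr, ip_input), but the stubs below are illustrative only, not the authors' change:

    struct mbuf;   /* packet buffer, details elided */

    static void enqueue_ipintrq(struct mbuf *m) { (void)m; }   /* stub */
    static void schednetisr_ip(void)            {}             /* stub */
    static void ip_input(struct mbuf *m)        { (void)m; }   /* stub */

    /* Old path: queue the packet on the IP input queue and defer protocol
     * work to a software interrupt. */
    void driver_receive_old(struct mbuf *m)
    {
        enqueue_ipintrq(m);
        schednetisr_ip();
    }

    /* New path: the kernel polling thread hands the packet straight to IP
     * and processes it to completion; no queue, no software interrupt. */
    void driver_poll_new(struct mbuf *m)
    {
        ip_input(m);
    }

    int main(void)
    {
        driver_receive_old((struct mbuf *)0);   /* old hand-off */
        driver_poll_new((struct mbuf *)0);      /* new hand-off */
        return 0;
    }
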

  10. Guaranteeing progress for user-level processes
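
The paper's mechanism here is a feedback limit on CPU usage: the kernel meters the time spent in packet processing and temporarily inhibits input handling when that share exceeds a configured threshold, so user-level processes always get cycles. A minimal sketch, with the percentage, the tick counters, and the function name as assumptions:

    #include <stdbool.h>

    #define NET_CPU_SHARE_PCT 50          /* assumed configurable limit */

    static unsigned long net_ticks;       /* ticks spent in packet processing */
    static unsigned long total_ticks;     /* ticks in the measurement interval */

    /* Called by the polling thread before taking more input: if packet
     * processing has exceeded its share of recent CPU time, back off so
     * user-level processes are guaranteed to make progress. */
    static bool input_allowed(void)
    {
        if (total_ticks == 0)
            return true;
        return (net_ticks * 100) / total_ticks < NET_CPU_SHARE_PCT;
    }

    int main(void)
    {
        /* toy interval: 60 of the last 100 ticks went to packet processing */
        net_ticks = 60;
        total_ticks = 100;
        return input_allowed() ? 0 : 1;   /* over quota here, so inhibit input */
    }
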
