  1. 4061 Session 17 (3/19)

  2. Today • Time in UNIX

  3. Today’s Objectives • Define what is meant when a system is called “interrupt-driven” • Describe some trade-offs in setting the timer interrupt frequency • Write programs that format time values, measure time intervals, and utilize high-precision or interval timers

  4. Admin • Remember - Homework 3 is due a week from Wed. • Quiz 3 Results, overview, questions • Grading questions: Qiang #2/4, Riea #1/3

  5. Time in Perspective • 1 second (s) == 1,000 milliseconds (ms) == 1,000,000 microseconds (μs) • Million Instructions Per Second (MIPS) • A measure of *peak* CPU speed (not comparable across architectures...) • 1992: Intel 486DX (66 MHz) - 54 MIPS • 2006: Intel Core 2 Extreme QX6700 (2.66 GHz) - 57,063 MIPS • Thus, 57,063,000,000 instructions per second, or 57,063 instructions per microsecond

  6. Interrupt-Driven • Systems have a “heartbeat” • Every so often, a timer interrupts whatever is happening, and the OS decides what to do next (e.g. scheduling) • The period between beats is known as a “jiffy” • A kernel-internal counter (jiffies) is incremented on every timer interrupt • The timer interrupt period is the base scheduling quantum • All other timers are based on jiffies • The timer interrupt rate (and jiffy increment rate) is defined by a compile-time constant called HZ
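  As an aside, user programs can query the tick rate the kernel uses when reporting CPU times. A minimal sketch; note that sysconf(_SC_CLK_TCK) reports USER_HZ (typically 100), which is not necessarily the kernel's compile-time HZ:

      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          /* USER_HZ: the tick rate used when reporting CPU times to user
             space. Fixed (typically 100); it may differ from the kernel's
             internal scheduling HZ. */
          long ticks = sysconf(_SC_CLK_TCK);
          printf("clock ticks per second (USER_HZ): %ld\n", ticks);
          return 0;
      }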

  7. Tuning • Older Linux kernels on Intel: 10 ms (1/100 s, 100 Hz) • 10 ms == 10,000 μs • up to 570,630,000 instructions per jiffy on really fast hardware • Newer Linux kernels on Intel: 4 ms (1/250 s, 250 Hz) • For a few revisions, kernels were set to 1 ms. Deemed too short. Why?

  8. Timer Interrupt Frequency • Higher frequency leads to: • (+) better latency • (+) more timer granularity • (+) better performance for poll()/select() • (-) overhead (lower throughput) • “jiffy wraparound” bug: the system uptime clock • 32 bits, 100 jiffies per second = 497 days • (fixed by moving to 64-bit storage)

  9. Notes • Note: timeslice != jiffy • Scheduling is implemented by checking against the jiffy counter, but the length of a quantum can be longer than one jiffy • vmstat 1: look at interrupts per second • On an idle system, you can get a feel for the timer HZ • in: interrupts in the last second • cs: context switches in the last second

  10. Timers and Jiffies • We’re going to be talking about timers today • e.g. how sleep(10) is implemented • Keep in mind that without special kernel modifications, timers can be no more accurate than the length of the jiffy • These modifications (high-resolution timers) have been written, and will likely be merged into the mainline Linux kernel soon

  11. Time and Timers • An alarm is a signal that is delivered to a process after a time interval. • The resolution of a clock is the minimum interval that can be represented. • The accuracy of a clock is its difference from some generally-accepted standard. • Time interval measurements are available for • wall-clock time, • user-mode runtime, • kernel-mode runtime, and • user-mode and kernel-mode runtime for children

  12. Getting the Time • From the command line with date, in a variety of formats, in any timezone. • time_t time(time_t *tm) • This gives you the "time since epoch" in seconds. • Returns the result and (optionally) stores it in *tm; pass NULL if you don't need it stored. • Accurate computation of long time intervals is not possible, because Unix time ignores leap seconds.
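  A minimal sketch of calling time(); the cast to long long avoids assuming the width of time_t:

      #include <stdio.h>
      #include <time.h>

      int main(void) {
          time_t now = time(NULL);   /* NULL: don't also store the result */
          if (now == (time_t)-1) {
              perror("time");
              return 1;
          }
          printf("seconds since the epoch: %lld\n", (long long)now);
          return 0;
      }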

  13. Y2K, Unix-style • time_t datatype used to store seconds since Jan 1, 1970 • time_t was originally a 32-bit signed long • This number wraps (the highest-order bit gets set) on January 19, 2038, indicating a time *before* 1970 • ...this is only a problem for 32-bit systems (changing these systems to use 64-bit longs is hard) • Will there be any 32-bit systems left in 2038?
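  A quick check of which case a given platform falls into (a sketch; the width of time_t is implementation-defined):

      #include <stdio.h>
      #include <time.h>

      int main(void) {
          /* 4 bytes: the counter wraps in 2038; 8 bytes: safe for eons. */
          printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));
          return 0;
      }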

  14. From Wikipedia  • Using a (signed) 64-bit value introduces a new wraparound date in about 290 billion years, on Sunday, December 4, 292,277,026,596. • The 128-bit Unix time resets later, on December 31, 17,014,118,346,046,923,173,168,730,371,588,410. • However, these are not widely regarded as pressing issues.

  15. Higher-precision Times • int gettimeofday(struct timeval *tv, struct timezone *tz); • struct timeval { time_t tv_sec; /* seconds */ suseconds_t tv_usec; /* microseconds */ }; • This gives you the time with microsecond resolution, but not microsecond accuracy. • The tz value is ignored. Just use NULL.
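  A sketch of the typical use: sample gettimeofday() before and after some work and subtract. The sleep(1) stands in for the code being timed:

      #include <stdio.h>
      #include <sys/time.h>
      #include <unistd.h>

      int main(void) {
          struct timeval start, end;

          gettimeofday(&start, NULL);   /* tz argument: just use NULL */
          sleep(1);                     /* stand-in for the work being timed */
          gettimeofday(&end, NULL);

          long long usec = (end.tv_sec - start.tv_sec) * 1000000LL
                         + (end.tv_usec - start.tv_usec);
          printf("elapsed: %lld microseconds\n", usec);
          return 0;
      }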

  16. NTP • Many network algorithms require (reasonably) accurate comparison of times across a network. • How do I know that both machines are set correctly? • Network-connected machines can achieve millisecond accuracy with the network time protocol NTP.

  17. Time Formatting • Given a time value, there are two steps to getting a human-readable string representing the time. • Break the time down into parts: seconds, minutes, hours, month, year. See man ctime. • Convert the broken-down time into a string. • Time zone info is set in the globals tzname and timezone (via tzset()). • Many of these functions use static data. • If you call ctime() or gmtime() a second time, the results from the first call may be gone. • There are "thread-safe" versions of these, e.g. ctime_r(), but they still use static locations for timezone info.
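  A sketch of the two steps using the thread-safe variant localtime_r(); strftime() writes into a caller-supplied buffer, so no static result data is involved:

      #include <stdio.h>
      #include <time.h>

      int main(void) {
          time_t now = time(NULL);

          /* Step 1: break the time down into parts (struct tm). */
          struct tm parts;
          localtime_r(&now, &parts);    /* thread-safe variant of localtime() */

          /* Step 2: convert the broken-down time into a string. */
          char buf[64];
          strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S %Z", &parts);
          printf("%s\n", buf);
          return 0;
      }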

  18. Sleep and Alarm • The alarm(m) call causes SIGALRM to be sent to your process after m seconds. The sleep(m) call uses SIGALRM to pause your process. • A higher-precision way to get your process to pause: • int nanosleep(const struct timespec *req, struct timespec *rem); • req is the desired pause time. • If interrupted, rem is the remaining unused time.
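  A sketch of pausing with nanosleep(); if a signal interrupts the call, the loop resumes with the remaining time:

      #include <stdio.h>
      #include <time.h>
      #include <errno.h>

      int main(void) {
          struct timespec req = { 0, 500000000 };   /* 0.5 s */
          struct timespec rem;

          /* EINTR: a signal cut the pause short; retry with what's left. */
          while (nanosleep(&req, &rem) == -1 && errno == EINTR)
              req = rem;

          puts("done sleeping");
          return 0;
      }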

  19. Process Pauses • nanosleep() has different behaviors • Delay <= 2 ms: busy loop (no signals!) • Delay > 2 ms: implemented like sleep() • The BSD version is called usleep(usec) • This is considered obsolete in Linux. • From the standpoint of simplicity, portability, and efficiency, nanosleep() is almost always the best choice.

  20. Interval Timers • Each process has a set of interval timers available. You can set them to send a periodic signal, based on passage of real time, or runtime. • Use getitimer(...) and setitimer(...)

  21. Interval Timers (2) • The three interval timers: • ITIMER_REAL • decrements in real time • delivers SIGALRM • ITIMER_VIRTUAL • decrements only when the process is executing • delivers SIGVTALRM • ITIMER_PROF • decrements both when the process executes and when the system is executing on behalf of the process. • delivers SIGPROF • Each has a current value, and an “interval” • If the interval > 0, then the timer will restart
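  A sketch of a periodic ITIMER_REAL: because it_interval is nonzero, the timer restarts itself and SIGALRM arrives once per second:

      #include <stdio.h>
      #include <string.h>
      #include <signal.h>
      #include <sys/time.h>
      #include <unistd.h>

      static volatile sig_atomic_t ticks = 0;

      static void on_alarm(int sig) {
          (void)sig;
          ticks++;                      /* async-signal-safe: just count */
      }

      int main(void) {
          struct sigaction sa;
          memset(&sa, 0, sizeof(sa));
          sa.sa_handler = on_alarm;
          sigaction(SIGALRM, &sa, NULL);

          /* Fire after 1 s, then every 1 s (interval > 0 => restart). */
          struct itimerval it = {
              .it_interval = { 1, 0 },
              .it_value    = { 1, 0 },
          };
          setitimer(ITIMER_REAL, &it, NULL);

          while (ticks < 3)
              pause();                  /* wait for the next signal */
          printf("caught %d SIGALRMs\n", (int)ticks);
          return 0;
      }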

  22. Measuring Runtime • The system keeps track of time spent in user and kernel mode, for a process and its children. • clock_t times(struct tms *buffer); • Returns elapsed real time in clock ticks; buffer receives the accumulated user- and kernel-mode CPU times.
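  A sketch of reading accumulated CPU time with times(); the tick counts are converted to seconds with sysconf(_SC_CLK_TCK). The busy loop is only there to accumulate some user time:

      #include <stdio.h>
      #include <sys/times.h>
      #include <unistd.h>

      int main(void) {
          long tps = sysconf(_SC_CLK_TCK);   /* clock ticks per second */

          /* Burn some user-mode CPU time. */
          volatile double x = 0;
          for (long i = 0; i < 50000000L; i++)
              x += i * 0.5;

          struct tms t;
          times(&t);
          printf("user: %.2f s, system: %.2f s\n",
                 (double)t.tms_utime / tps,
                 (double)t.tms_stime / tps);
          return 0;
      }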
