
Server Resources


  1. INF 5070 – Media Storage and Distribution Systems: Server Resources 15/9 - 2003

  2. Overview • Resources, real-time, … • “Continuous” media streams • (CPU) Scheduling • Memory management for streaming

  3. Resources and Real-Time

  4. Resources • Resource: “A resource is a system entity required by a task for manipulating data” [Steinmetz & Nahrstedt 95] • Characteristics: • active: provides a service, e.g., CPU, disk or network adapter • passive: system capabilities required by active resources, e.g., memory or bandwidth • exclusive: only one process at a time can use it, e.g., CPU • shared: can be used by several concurrent processes, e.g., memory • single: exists only once in the system, e.g., loudspeaker • multiple: several exist within a system, e.g., CPUs in a multi-processor system

  5. Real-Time – I • Real-time process:“A process which delivers the results of the processing in a given time-span” • Real-time system:“A system in which the correctness of a computation depends not only on obtaining the result, but also upon providing the result on time” • Many real-time applications, e.g.: • temperature control in a nuclear/chemical plant • driven by interrupts from an external device • these interrupts occur irregularly • defense system on a navy boat • driven by interrupts from an external device • these interrupts occur irregularly • control of a flight simulator • execution at periodic intervals • scheduled by timer-services which the application requests from the OS • ...

  6. Real-Time – II • Deadline: “A deadline represents the latest acceptable time for the presentation of the processing result” • Hard deadlines: • must never be violated → system failure • too-late results either have no value, e.g., processing weather forecasts, or mean severe (catastrophic) system failure, e.g., processing of an incoming torpedo signal in a navy boat scenario • Soft deadlines: • in some cases, the deadline might be missed • not too frequently • not by much time • the result may still have some (but decreasing) value, e.g., a late I-frame in MPEG

  7. Real-Time and Multimedia • Multimedia systems • typically have soft deadlines (may miss a frame) • are non-critical (the user may be annoyed, but …) • have periodic processing requirements (e.g., every 33 ms in a 30 fps video) • require large bandwidths (e.g., an average of 3.5 Mbps for DVD video) • need predictability (guarantees) • adapt real-time mechanisms to continuous media • exploit resource-specific properties (like real-time resource allocation schemes, preemption, ...) • priority-based schemes are of special importance

  8. Admission and Reservation • To prevent overload, admission may be performed: • schedulability test: • “are there enough resources available for a new stream?” • “can we find a schedule for the new task without disturbing the existing workload?” • a task is allowed if the utilization remains < 1 • yes – allow new task, allocate/reserve resources • no – reject • Resource reservation is analogous to booking (asking for resources) • pessimistic • avoid resource conflicts by making worst-case reservations • potentially under-utilized resources • guaranteed QoS • optimistic • reserve according to average load • high utilization • overload may occur • perfect • must have detailed knowledge about the resource requirements of all processes • too expensive to determine / takes much time
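The utilization-based admission test described on this slide can be sketched in a few lines of Python. This is an illustrative sketch only: the task model (processing time, period) and the function name `admit` are my own choices, not from the slides.

```python
def admit(tasks, new_task):
    """Admission control by schedulability test: allow the new task only
    if total utilization (sum of e/p over all tasks) stays below 1."""
    utilization = sum(e / p for (e, p) in tasks + [new_task])
    return utilization < 1.0

# existing streams as (processing time, period) in ms
existing = [(10, 40), (5, 50)]          # utilization 0.25 + 0.10 = 0.35
print(admit(existing, (20, 40)))        # 0.35 + 0.50 = 0.85 < 1 -> True
print(admit(existing, (30, 40)))        # 0.35 + 0.75 = 1.10     -> False
```

A pessimistic scheme would use worst-case processing times for `e`; an optimistic one would use averages and accept occasional overload.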

  9. Real-Time and Operating Systems • The operating system manages local resources (CPU, memory, disk, network card, busses, ...) • In a real-time, multimedia scenario, support is needed for: • real-time processing • efficient memory management • This also means support for: • proper scheduling – high priorities for time-restrictive multimedia tasks • timer support – clock with fine granularity and event scheduling with high accuracy • kernel preemption – avoid long periods where low priority processes cannot be interrupted • memory replacement – prevent code for real-time programs from being paged out • fast switching – both interrupts and context switching should be fast • ...

  10. Continuous Media Streams

  11. Streaming Data – I [figure: cumulative data offset vs. time, showing the read, send, arrive, and consume functions; playback starts at t1] • Start playback at t1 • Consumed bytes (offset) • variable rate • constant rate • Must start retrieving data earlier • Data must arrive before consumption time • Data must be sent before arrival time • Data must be read from disk before sending time

  12. Streaming Data – II [figure: arrive and consume functions over time, with arrival starting at t0 and playback at t1] • Need buffers to hold data between the functions, e.g., at the client: B(t) = A(t) – C(t), i.e., ∀t: A(t) ≥ C(t) • The latest start of data arrival is the latest t0 for which ∀t: B(t, t0, t1) ≥ 0, i.e., the buffer must at all times t have enough data left to consume
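The buffer condition B(t) = A(t) − C(t) ≥ 0 can be used to search for the latest arrival start numerically. A small sketch, assuming a discrete time model and a constant arrival rate (both are my simplifications for illustration):

```python
def latest_start(consume, arrive_rate):
    """Find the latest arrival start t0 <= 0 (playback starts at t1 = 0)
    such that B(t) = A(t) - C(t) >= 0 for all t, where A(t) grows at a
    constant arrive_rate from t0 and consume[t] is cumulative consumption."""
    t0 = 0
    while True:
        if all(arrive_rate * (t - t0) >= c for t, c in enumerate(consume)):
            return t0        # buffer never underruns -> t0 is late enough
        t0 -= 1              # otherwise start the arrival one step earlier

# cumulative bytes consumed per time step after playback start
consume = [0, 2, 4, 9, 9, 12]
print(latest_start(consume, 3))  # fast link: arrival can start at t0 = 0
print(latest_start(consume, 2))  # slower link: must start 2 steps early (-2)
```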

  13. Streaming Data – III [figure: data path through application, file system, and communication system] • “Continuous Media” and “Streaming” are ILLUSIONS • retrieve data in blocks from disk • transfer blocks from file system to application • send packets to communication system • split packets into appropriate MTUs • ... (intermediate nodes) • ... (client) • different optimal sizes • pseudo-parallel processes (run in time slices) • need for scheduling (to have timing and appropriate resource allocation)

  14. (CPU) Scheduling

  15. Scheduling – I • A task is a schedulable entity (a process/thread executing a job, e.g., a packet through the communication system or a disk request through the file system) • In a multi-tasking system, several tasks may wish to use a resource simultaneously • A scheduler decides which task may use the resource, i.e., determines the order in which requests are serviced, using a scheduling algorithm • Each active resource (CPU, disk, NIC) needs a scheduler (passive resources are also “scheduled”, but in a slightly different way) [figure: requests → scheduler → resource]

  16. Scheduling – II • Scheduling algorithm classification: • dynamic • make scheduling decisions at run-time • flexible to adapt • considers only actual task requests and execution time parameters • large run-time overhead finding a schedule • static • make scheduling decisions off-line (also called pre-run-time) • generates a dispatching table for the run-time dispatcher at compile time • needs complete knowledge of the tasks before compiling • small run-time overhead • preemptive • the currently executing task may be interrupted (preempted) by higher priority processes • a preempted process continues later at the same state • overhead of context switching • (almost!?) useless for disk and network cards • non-preemptive • a running task is allowed to finish its time-slot (higher priority processes must wait) • reasonable for short tasks like sending a packet (used by disk and network cards) • less frequent switches

  17. Scheduling – III • Preemption: • tasks wait for processing • the scheduler assigns priorities • the task with the highest priority is scheduled first • preempt the current execution if a higher-priority (more urgent) task arrives • real-time and best-effort priorities • real-time processes have higher priority (if they exist, they will run) • two kinds of preemption: • preemption points (typical in general OSes) • predictable overhead • simplified scheduler accounting • immediate preemption • needed for hard real-time systems • needs special timers and fast interrupt and context switch handling [figure: requests → scheduler → resource, with preemption]

  18. Scheduling – IV • Scheduling is difficult and takes time (both to find a schedule and to switch between threads/processes – not shown): [figure: timelines for an arriving real-time (RT) process competing with processes 1 … N] • round-robin: the RT process waits its turn behind processes 1 … N → long delay • priority, non-preemptive: the RT process waits only until the running process finishes → shorter delay • priority, preemptive: the running process is interrupted at once → only the delay of switching and interrupts • NOTE: preemption may also be limited to preemption points (fixed points where the scheduler is allowed to interrupt a running process) → giving larger delays

  19. Priorities and Multimedia • Multimedia streams need predictable access to resources – high priorities: • 1. multimedia traffic with guaranteed QoS (may not exist) • 2. multimedia traffic with predictive QoS • 3. other requests – must not starve • Within each class one could have a second-level scheduler • classes 1 and 2: real-time scheduling and fine-grained priorities • class 3: may use traditional approaches such as round-robin

  20. Scheduling in Windows 2000 • Preemptive kernel • 32 priority levels – Round Robin (RR) within each • Schedules threads individually • Default time slices (3 quantums = 10 ms): • 120 ms – Win2000 server • 20 ms – Win2000 professional/workstation • may vary between threads • Interactive and throughput-oriented: • “Real time” (system threads) – 16 system levels • fixed priority • may run forever • Variable (user threads) – 15 user levels • priority may change – thread priority = process priority ± 2 • uses much CPU → priority drops • user interactions, I/O completions → priority increases • Idle/zero-page thread (system thread) – 1 system level • runs whenever there are no other processes to run • clears memory pages for the memory manager

  21. Scheduling in Linux • Preemptive kernel • Threads and processes used to be equal, but Linux uses (in 2.6) thread scheduling • SCHED_FIFO • may run forever, no timeslices • may use its own scheduling algorithm • SCHED_RR • each priority in RR • timeslices of 10 ms (quantums) • SCHED_OTHER • ordinary user processes • uses “nice” values: 1 ≤ priority ≤ 40 • timeslices of 10 ms (quantums) • Threads with the highest goodness are selected first: • realtime (FIFO and RR): goodness = 1000 + priority • timesharing (OTHER): goodness = (quantum > 0 ? quantum + priority : 0) • Quantums are reset when no ready process has quantums left: quantum = (quantum / 2) + priority

  22. Scheduling in AIX • Similar to Linux, but has always used only thread scheduling • SCHED_FIFO • SCHED_RR • SCHED_OTHER • BUT, SCHED_OTHER may change “nice” values • running long (whole timeslices) gives a penalty – nice increases • being interrupted (e.g., by I/O) gives the initial “nice” value back

  23. Real-Time Scheduling – I [figure: one period of a task on a timeline, marking s, e, d, and p] • Multimedia streams are usually periodic (fixed frame rates and audio sample frequencies) • Time constraints for a periodic task: • s – starting point (first time the task requires processing) • e – processing time • d – deadline • p – period • r – rate (r = 1/p) • 0 ≤ e ≤ d (often d ≤ p; we’ll use d = p – end of period, but Σd ≤ Σp is enough) • the kth processing of the task • is ready at time s + (k – 1) p • must be finished by time s + (k – 1) p + d • the scheduling algorithm must account for these properties
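The timing rules for the kth processing can be written down directly as a tiny helper (the function name and parameter order are mine; the formulas are from the slide):

```python
def release_and_deadline(s, p, d, k):
    """k-th processing (k = 1, 2, ...) of a periodic task:
    ready at s + (k - 1) * p, must finish by s + (k - 1) * p + d."""
    release = s + (k - 1) * p
    return release, release + d

# a 30 fps video task: start s = 0, period p = 33 ms, deadline d = p
print(release_and_deadline(0, 33, 33, 1))  # (0, 33)
print(release_and_deadline(0, 33, 33, 3))  # (66, 99)
```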

  24. Real-Time Scheduling – II • Resource reservation • QoS can be guaranteed • relies on knowledge of tasks • no fairness • origin: time sharing operating systems • e.g., earliest deadline first (EDF) and rate monotonic (RM)(AQUA, HeiTS, RT Upcalls, ...) • Proportional share resource allocation • no guarantees • requirements are specified by a relative share • allocation in proportion to competing shares • size of a share depends on system state and time • origin: packet switched networks • e.g., Scheduler for Multimedia And Real-Time (SMART)(Lottery, Stride, Move-to-Rear List, ...)

  25. Earliest Deadline First (EDF) – I • Preemptive scheduling based on dynamic task priorities • The task with the closest deadline has the highest priority → stream priorities vary with time • The dispatcher selects the highest-priority task • Assumptions: • requests for all tasks with deadlines are periodic • the deadline of a task is equal to the end of its period (start of the next) • independent tasks (no precedence) • run-time for each task is known and constant • context switches can be ignored

  26. Earliest Deadline First (EDF) – II • Example: [figure: two periodic tasks A and B with their deadlines; the dispatcher always runs the task with the earlier deadline, so in some intervals priority A < priority B and in others priority A > priority B]
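The EDF dispatching illustrated here can be sketched as a unit-time simulation under the slide's assumptions (periodic tasks, deadline = end of period, known constant run-times, free context switches). The task encoding as (e, p) pairs is my choice for the sketch:

```python
import heapq

def edf_schedule(tasks, horizon):
    """Simulate preemptive EDF for periodic tasks (e, p) with deadline =
    end of period. Returns the task index run at each time unit (None = idle)."""
    ready = []      # ready jobs as [absolute_deadline, task_index, remaining]
    timeline = []
    for t in range(horizon):
        for i, (e, p) in enumerate(tasks):
            if t % p == 0:                  # new period -> release a job
                heapq.heappush(ready, [t + p, i, e])
        if ready:
            job = ready[0]                  # earliest deadline first
            timeline.append(job[1])
            job[2] -= 1                     # run this job for one time unit
            if job[2] == 0:
                heapq.heappop(ready)        # job finished
        else:
            timeline.append(None)
    return timeline

# two tasks: (e=1, p=3) and (e=2, p=5); utilization 1/3 + 2/5 < 1
print(edf_schedule([(1, 3), (2, 5)], 8))   # -> [0, 1, 1, 0, None, 1, 0, 1]
```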

  27. Rate Monotonic (RM) Scheduling – I • Classic algorithm for hard real-time systems with one CPU [Liu & Layland ‘73] • Preemptive scheduling based on static task priorities • Optimal: no other algorithm with static task priorities can schedule tasks that cannot be scheduled by RM • Assumptions: • requests for all tasks with deadlines are periodic • the deadline of a task is equal to the end of its period (start of the next) • independent tasks (no precedence) • run-time for each task is known and constant • context switches can be ignored • any non-periodic task has no deadline

  28. Rate Monotonic (RM) Scheduling – II • Process priority is based on task periods: • the task with the shortest period gets the highest static priority • the task with the longest period gets the lowest static priority • the dispatcher always selects the task request with the highest priority • Example: [figure: Task 1 with period p1 and Task 2 with period p2; p1 < p2 → Task 1 has the highest priority and preempts Task 2 when dispatched]
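The static RM priority assignment can be sketched directly (priority 0 denotes the highest priority here; the function name and encoding are mine):

```python
def rm_priorities(tasks):
    """Rate monotonic: static priorities -- the task with the shortest
    period gets the highest priority (rank 0 = highest)."""
    order = sorted(range(len(tasks)), key=lambda i: tasks[i][1])
    return {task: rank for rank, task in enumerate(order)}

# tasks as (processing time e, period p)
print(rm_priorities([(2, 100), (1, 10), (5, 50)]))  # {1: 0, 2: 1, 0: 2}
```

Because the priorities are static, they can be mapped onto fixed OS priority levels (e.g., SCHED_FIFO levels), which is harder with EDF's time-varying priorities.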

  29. EDF Versus RM – I • It might be impossible to prevent deadline misses in a strict, fixed-priority system: [figure: two periodic tasks A and B with deadlines, dispatched under different policies] • fixed priorities, A has priority, no dropping → waste of time and deadline misses • fixed priorities, A has priority, dropping → deadline misses • fixed priorities, B has priority, no dropping → waste of time and deadline misses • fixed priorities, B has priority, dropping → deadline miss • rate monotonic (as the first case) → RM may give some deadline violations, which are avoided by EDF • earliest deadline first → all deadlines met

  30. EDF Versus RM – II • EDF • dynamic priorities changing in time • overhead in priority switching • QoS calculation – maximal throughput: Σi Ri × ei ≤ 1 over all streams i, R – rate, e – processing time • RM • static priorities based on periods • may map priorities onto fixed OS priorities (as in Linux) • QoS calculation: Σi Ri × ei ≤ ln(2) over all streams i • NOTE: this means that EDF is usually more efficient than RM, i.e., if switches are free, EDF can schedule any workload with utilization ≤ 1, while RM only guarantees workloads with utilization ≤ ln(2)
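The two QoS tests can be compared side by side. For RM the sketch uses the Liu & Layland bound n(2^(1/n) − 1), which decreases towards the ln(2) ≈ 0.693 limit quoted on the slide as the number of streams grows:

```python
def utilization(streams):
    """Total utilization: sum of R_i * e_i over all streams i."""
    return sum(r * e for r, e in streams)

def edf_schedulable(streams):
    """EDF meets all deadlines iff utilization <= 1."""
    return utilization(streams) <= 1.0

def rm_schedulable(streams):
    """RM sufficient test: utilization <= n(2^(1/n) - 1), -> ln 2 ~ 0.693."""
    n = len(streams)
    return utilization(streams) <= n * (2 ** (1 / n) - 1)

# streams as (rate R in 1/ms, processing time e in ms): utilization 0.9
streams = [(1 / 2, 1), (1 / 5, 2)]
print(edf_schedulable(streams))  # True:  0.9 <= 1
print(rm_schedulable(streams))   # False: 0.9 >  2*(2^0.5 - 1) ~ 0.83
```

Note that the RM bound is only sufficient: a workload above it may still happen to be RM-schedulable, but it is no longer guaranteed.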

  31. SMART – I • Designed for multimedia and real-time applications • Based on proportional resource sharing • Principles • priority – high priority tasks should not suffer from low priority tasks • proportional sharing – allocate resources proportionally and distribute unused resources (work conserving) • tradeoff immediate fairness – less competitive processes (short-lived, interactive, I/O-bound, ...) get a higher share • graceful transitions – adapt smoothly to resource demand changes • notification – notify applications of resource changes • No admission control

  32. SMART – II • Tasks have importance and urgency • urgency – an immediate real-time constraint, short deadline (determines when a task will get resources) • importance – a high priority • importance is expressed by a tuple: [ priority p , biased virtual finishing time bvft ] • virtual finishing time: degree to which the share was consumed • bias: bonus for interactive tasks • Schedule based on urgency and importance • find the most important tasks – compare tuples: T1 > T2 ⇔ (p1 > p2) ∨ (p1 = p2 ∧ bvft1 > bvft2) • sort by urgency
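The tuple comparison can be spelled out in code. This follows the rule as given on the slide; reading the lost logical connectives as "or"/"and" is my interpretation, and the function name is mine:

```python
def outranks(t1, t2):
    """SMART importance comparison of tuples (priority p, biased virtual
    finishing time bvft): T1 > T2 <=> (p1 > p2) or (p1 == p2 and bvft1 > bvft2)."""
    p1, bvft1 = t1
    p2, bvft2 = t2
    return p1 > p2 or (p1 == p2 and bvft1 > bvft2)

print(outranks((3, 1.0), (2, 9.0)))  # True: higher priority always wins
print(outranks((2, 5.0), (2, 3.0)))  # True: priority tie broken on bvft
```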

  33. Evaluation of a Real-Time Scheduler – I • Tests performed • by IBM (1993) • executing tasks with and without EDF • on a 57 MHz, 32 MB RAM AIX Power 1 • Video playback program: • one real-time process • read compressed data • decompress data • present video frames via the X server to the user • the process requires 15 timeslots of 28 ms each per second → 42 % of the CPU time

  34. Evaluation of a Real-Time Scheduler – II [figure: laxity (remaining time to deadline) vs. task number, with 3 load processes competing with the video playback] • the real-time scheduler reaches all its deadlines • several deadline violations by the non-real-time scheduler

  35. Evaluation of a Real-Time Scheduler – III [figure: laxity (remaining time to deadline) vs. task number, varying the number of load processes competing with the video playback: only the video process, 4 other processes, 16 other processes] • NB! The EDF scheduler kept its deadlines

  36. Evaluation of a Real-Time Scheduler – IV • Tests again performed • by IBM (1993) • on a 57 MHz, 32 MB RAM AIX Power 1 • “Stupid” end-system program: • 3 real-time processes only requesting CPU cycles • each process requires 15 timeslots of 21 ms each per second → 31.5 % of the CPU time each → 94.5 % of the CPU time required for real-time tasks

  37. Evaluation of a Real-Time Scheduler – V [figure: laxity (remaining time to deadline) vs. task number, with 1 load process competing with the real-time processes] • the real-time scheduler reaches all its deadlines

  38. Evaluation of a Real-Time Scheduler – VI [figure: laxity (remaining time to deadline) vs. task number for processes 1, 2, and 3, with 16 load processes competing with the real-time processes] • Regardless of other load, the EDF scheduler reaches its deadlines (laxity almost equal to the 1-load-process scenario) • NOTE: the processes are scheduled in the same order

  39. Memory Management

  40. Copying on the Intel Hub Architecture [figure: a Pentium 4 processor (registers, caches) connects through the memory controller hub to RDRAM and through the I/O controller hub to PCI slots holding the disk and network card; data on the path application ↔ file system ↔ communication system is copied across these buses and through memory]

  41. Streaming Modes Using Copying [figure: traditional vs. streaming applications – in both, data is read from a HW device through the device driver and OS-independent abstraction layer(s) into user space and written back down through the kernel to another HW device; traditional applications perform application-specific data modifications in user space in between]

  42. Cost of Data Transfers – Example I • First-generation router built with a 133 MHz Intel Pentium • mean packet size 500 B • interrupt time of 10 µs, word access 50 ns • per-packet processing of 200 instructions (1.504 µs) • copy loop (4 instructions, 2 memory accesses → 130.08 ns per 4 bytes): register ← memory[read_ptr]; memory[write_ptr] ← register; read_ptr ← read_ptr + 4; write_ptr ← write_ptr + 4; counter ← counter – 1; if (counter ≠ 0) goto top of loop • per packet: copy + interrupt + processing = [(500/4) × 130.08 ns] + 10 µs + 1.504 µs ≈ 27.765 µs → at most 144.07 Mbps
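The slide's numbers can be reproduced in a few lines. Note that the per-4-byte copy cost is 130.08 ns (not µs), and the 27.765 µs total only comes out when copy, interrupt, and the 200-instruction per-packet processing are all included; the sketch assumes one instruction per 133 MHz cycle:

```python
cycle = 1 / 133e6                         # ~7.52 ns per instruction
per_word = 4 * cycle + 2 * 50e-9          # copy loop: 4 instr + 2 mem accesses,
                                          # ~130.08 ns per 4 bytes
per_packet = (500 / 4) * per_word         # copy a 500 B packet, 4 B at a time
per_packet += 10e-6                       # interrupt time
per_packet += 200 * cycle                 # 200 instructions of processing
print(round(per_packet * 1e6, 2))         # ~27.76 us per packet
print(round(500 * 8 / per_packet / 1e6))  # ~144 Mbps maximal throughput
```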

  43. Cost of Data Transfers – Example II • Copying in NetBSD 1.5 • measured by UniK/IFI (2000) • copyin(), copyout(), and memcpy(), via a special system call • 933 MHz P3 CPU • theoretical max.: 25.6 Gbps • Intel: larger copies are better • BUT: • throughput peaks at 2 – 8 KB • decreases at larger sizes • caching effects

  44. Cost of Data Transfers – Example II (cont.) • Assume sending 1 GB of data • the whole operation, reading from disk and sending to the network, takes about 10 s • reading 64 KB blocks from disk → 137.10 µs per copyout() • sending 4 KB packets → 1.65 µs per copyin() • in total: read + send = (16384 × 137.10 µs) + (262144 × 1.65 µs) = 2.679 s for copying alone • THUS, data movement costs should be kept small • careful management of contiguous media data • avoid unnecessary physical copy operations • apply appropriate buffer management schemes • The data movement overhead can be saved by removing physical in-memory copy operations, i.e., ZERO-COPY data paths
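The copy-cost arithmetic checks out; a quick sketch, taking 1 GB as 2^30 bytes:

```python
GB = 2 ** 30
reads = GB // (64 * 2 ** 10)     # 16384 copyout() calls of 64 KB each
sends = GB // (4 * 2 ** 10)      # 262144 copyin() calls of 4 KB each
copy_s = reads * 137.10e-6 + sends * 1.65e-6
print(reads, sends)              # 16384 262144
print(round(copy_s, 3))          # ~2.679 s of the ~10 s spent copying alone
```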

  45. Basic Idea of Zero-Copy Data Paths [figure: a user-space application above the kernel-space file system and communication system; the file system buffer (buf, b_data) and the network buffer (mbuf, m_data) point to the same memory, so data crosses the bus(es) without in-memory copying]

  46. Streaming Modes NOT Using Copying [figure: two data paths between HW devices through device drivers and OS-independent abstraction layer(s), without user-space copies] • Application streaming using zero-copy: • read data into a kernel buffer and send from there (read & send) • the application is responsible for timing • send: explicit send or automatic send • Kernel streaming using zero-copy: • a thread per stream performs the read and write operations • the application specifies the timing, but it is ensured by the thread • the stream is only created by the application (create stream) – then controlled by the kernel

  47. Existing Zero-Copy Streaming Mechanisms [figure labels: kernel streaming using zero-copy, application streaming using zero-copy] • Linux: sendfile() • between two descriptors (file and TCP socket) • bi-directional: disk–network and network–disk • needs TCP_CORK • AIX: send_file() • only TCP • uni-directional: disk–network • INSTANCE (MMBUF-based, in NetBSD 1.5): • by UniK/IFI (2000) • uni-directional: disk–network (network–disk is ongoing work) • stream_read() and stream_send() (zero-copy 1) • stream_rdsnd() (zero-copy 2) • also: splice(), stream(), IO-Lite, MMBUF, …
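As an illustration of the sendfile()-style path, Python exposes the same Linux system call as os.sendfile(); a minimal sketch, assuming a connected stream socket (the helper name and the 64 KB chunk size are my choices, not part of the mechanisms listed above):

```python
import os

def zero_copy_send(path, sock):
    """Stream a file to a socket with sendfile(): the kernel moves data
    from the page cache to the socket without a copy via user space."""
    with open(path, "rb") as f:
        size = os.fstat(f.fileno()).st_size
        offset = 0
        while offset < size:
            sent = os.sendfile(sock.fileno(), f.fileno(), offset, 64 * 1024)
            if sent == 0:        # nothing left to send
                break
            offset += sent
```

The application still issues one call per chunk (so it controls timing), but the data itself never enters user space.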

  48. INSTANCE CPU Time [figure: CPU time in seconds vs. packet size in KB] • Transferred 1 GB • Used disk blocks of 64 KB • Used packets of 1 – 8 KB • Results in seconds: • Gain larger than expected: • removed other operations as well, like the buffer cache look-up • some packet drop at the server saved about 0.2 s • simplified the chain of functions

  49. INSTANCE Zero-Copy Transfer Rate • Zero-copy transfer rate is limited by the network card and the storage system • saturated a 1 Gbps NIC and a 32-bit, 33 MHz PCI bus • reduced processing time by approximately 50 % • huge improvement in the number of concurrent streams → throughput increase of ~2.7 times per stream (can at least double the number of streams) [figure labels: approx. 12 Mbps, approx. 6 Mbps]

  50. The End: Summary
