
Concurrency in Ada



  1. Concurrency in Ada Programming Languages 1 Robert Dewar

  2. Concurrency in Ada • What concurrency is all about • Relation to operating systems • Language facilities vs library packages • POSIX threads • Ada concurrency • Real time support • Distributed programming

  3. What Concurrency is all About • Multiple threads of control within one program • Fairly closely coupled (single address space) • Each thread is independent • But can synchronize with other threads • One program has many threads
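
For instance, a minimal sketch (not on the slide) of one Ada program containing two independent threads of control:

      with Ada.Text_IO; use Ada.Text_IO;

      procedure Two_Threads is
         task A;
         task B;

         task body A is
         begin
            for I in 1 .. 3 loop
               Put_Line ("A says hello");
            end loop;
         end A;

         task body B is
         begin
            for I in 1 .. 3 loop
               Put_Line ("B says hello");
            end loop;
         end B;
      begin
         null;   -- the environment task waits here until A and B terminate
      end Two_Threads;

The interleaving of the two tasks' output is not determined; that independence, plus the occasional need to synchronize, is what the rest of the slides address.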

  4. Relation to Operating Systems • Typical Unix systems provide • Multiple processes • separate address spaces, separate scheduling • Lightweight processes/kernel threads • shared address space, separate scheduling • User level threads • shared address space, no separate scheduling

  5. Language Features vs Libraries • The library approach takes a standard sequential language, e.g. C • And provides a set of packages that provide concurrency • The C program makes calls to the library to create tasks etc.

  6. Problems with Library Method • Libraries may not be completely well defined and may not be portable • The language was not defined with concurrency in mind • e.g. are the library routines “thread-safe”? • are constructs well defined? • e.g. what is the rule for shared variables?

  7. Thread Safety • Suppose two threads of control both call a routine such as malloc. • In the middle of one malloc call, the thread is interrupted by a higher priority thread, or reaches end of time slice. • Another task calls malloc • Chaos???

  8. Shared Variables • Suppose we have a shared global variable VAR • One task does VAR := 1; • Another task does VAR := 256; • Again we have interrupts causing the statements to get intermingled • Is the result well defined? Or could we end up with VAR having the value 257 (pieces of both writes)?

  9. POSIX Threads • A standardized package of thread primitives • create thread • timer functions • synchronization mechanisms • etc. • Several versions • Lots left undefined

  10. Add Concurrency to Language • Algol-68 had simple semaphores and the notion of separate tasks • CSP (not really a programming language, though OCCAM is derived from it) had a simple channel mechanism • Simula-67 identified tasks with objects

  11. More on CSP (OCCAM) • Program consists of processes and channels • Process is code containing channel operations • Channel is a data object • All synchronization is via channels

  12. Channel Operations in CSP • Read data item D from channel C • C ? D • Write data item Q to channel C • C ! Q • If the reader accesses the channel first, it waits for the writer, and then both proceed after the transfer • If the writer accesses the channel first, it waits for the reader, and both proceed after the transfer

  13. Tasking in Ada • Declare a task type • The specification gives the entries:

      task type T is
         entry Put (Data : in Integer);
         entry Get (Result : out Integer);
      end T;

  • The entries are used to access the task

  14. Declaring Task Body • Task body gives the actual code of the task:

      task body T is
         X : Integer;   -- local per-thread declaration
      begin
         ...
         accept Put (Data : in Integer) do
            ...
         end Put;
         ...
      end T;

  15. Creating an Instance of a Task • Declare a single task • X : T; • or an array of tasks • P : array (1 .. 50) of T; • or a dynamically allocated task:

      type T_Access is access T;
      P : T_Access;
      ...
      P := new T;

  16. Task Execution • Each task executes independently, until • an accept statement • wait for someone to call the entry, then proceed with the rendezvous code, then both tasks go on their way • an entry call • wait for the addressed task to reach the corresponding accept statement, then proceed with the rendezvous, then both tasks go on their way.

  17. More on the Rendezvous • During the Rendezvous, only the called task executes, and data can be safely exchanged via the entry parameters • If accept does a simple assignment, we have the equivalent of a simple CSP channel operation, but there is no restriction on what can be done within a rendezvous
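
To make the rendezvous concrete, here is a hedged sketch (not on the slide) of a body for the task type T declared earlier; it alternates between accepting Put and Get, so the Integer moves safely through the entry parameters:

      task body T is
         X : Integer;
      begin
         loop
            accept Put (Data : in Integer) do
               X := Data;        -- only the called task runs during the rendezvous
            end Put;
            accept Get (Result : out Integer) do
               Result := X;
            end Get;
         end loop;
      end T;

A caller simply names a task object and an entry: given Agent : T; it can write Agent.Put (42); and later Agent.Get (V); where V is an Integer variable.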

  18. Termination of Tasks • A task terminates when it reaches the end of the begin-end code of its body • Tasks may either be very static (created at the start of execution and never terminating) • Or very dynamic, e.g. create a new task for each new radar trace in a radar system.

  19. The Delay Statement • Delay statements temporarily pause a task • delay xyz • where xyz is an expression of type Duration, causes execution of the thread to be delayed for (at least) the given amount of time • delay until tim • where tim is an expression of a time type, causes execution of the thread to be delayed until (at the earliest) the given time
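
A hedged sketch of both forms, assuming Ada.Calendar as the source of the time type:

      with Ada.Calendar; use Ada.Calendar;

      procedure Delay_Demo is
         Deadline : constant Time := Clock + 2.0;   -- a point in time 2 s from now
      begin
         delay 0.5;              -- relative: suspend for at least half a second
         delay until Deadline;   -- absolute: resume no earlier than Deadline
      end Delay_Demo;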

  20. Selective Accept • Select statement allows a choice of actions:

      select
         accept Entry1 (...) do ... end Entry1;
      or
         when Bla =>
            accept Entry2 (...);
      or
         delay ...ddd...;
         -- statements executed on timeout
      end select;

  • Take whichever open entry is called first, or if no call arrives by the end of the delay, do the statements after the delay.
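
A fuller hedged sketch (the names are illustrative, not from the slides): a one-slot buffer task whose alternatives are guarded, with a delay alternative that lets the task terminate after a second with no callers:

      task Buffer is
         entry Put (Data : in Integer);
         entry Get (Result : out Integer);
      end Buffer;

      task body Buffer is
         Value : Integer;
         Full  : Boolean := False;
      begin
         loop
            select
               when not Full =>
                  accept Put (Data : in Integer) do
                     Value := Data;
                  end Put;
                  Full := True;
            or
               when Full =>
                  accept Get (Result : out Integer) do
                     Result := Value;
                  end Get;
                  Full := False;
            or
               delay 1.0;
               exit;   -- no caller within a second: let the task terminate
            end select;
         end loop;
      end Buffer;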

  21. Timed Entry Call • Timed entry call allows a timeout to be set:

      select
         entry-call-statement
      or
         delay xxx;
         ...
      end select;

  • We try to do the entry call, but if the task won't accept it within xxx time, then do the statements after the delay.
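
A hedged sketch, reusing the illustrative Buffer task above (and assuming Ada.Text_IO is visible):

      select
         Buffer.Put (42);
         -- the rendezvous happened within the time limit
      or
         delay 0.2;
         Put_Line ("Buffer did not accept the call within 0.2 seconds");
      end select;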

  22. Conditional Entry Call • Make a call only if it will be accepted:

      select
         entry-call
         ...
      else
         statements
      end select;

  • If the entry call is accepted immediately, fine; otherwise execute the else statements.
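
Again a hedged sketch against the illustrative Buffer task (V is assumed to be an Integer variable):

      select
         Buffer.Get (V);
      else
         Put_Line ("Buffer has nothing for us; doing something else");
      end select;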

  23. Task Abort • Unconditionally terminate a task • abort taskname; • task is immediately terminated • (it is allowed to do finalization actions) • but whatever it was doing remains incomplete • code that can be aborted must be careful to leave things in a coherent state if that is important!

  24. Asynchronous Transfer of Control • Execute a section of code, aborting it after a specified time or event:

      select
         delay statement or entry call
         -- statements executed when the trigger completes
      then abort
         abortable statements
      end select;

  • The abortable statements start executing and are immediately aborted if the delay expires or the entry call completes.
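
A hedged sketch: abandon a computation if it has not finished within five seconds (Long_Computation is an illustrative, hypothetical procedure):

      select
         delay 5.0;
         Put_Line ("Gave up after 5 seconds");
      then abort
         Long_Computation;   -- abandoned the moment the delay expires
      end select;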

  25. Shared Variables • A shared variable is one accessed by more than one task • if variable is declared atomic, no restrictions • otherwise, we cannot have two tasks access the same variable without synchronizing (e.g. doing a rendezvous). • model is that variables can normally be in registers
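
A hedged sketch of the "declared atomic" case, using the standard Annex C pragma:

      Flag : Integer := 0;
      pragma Atomic (Flag);   -- each read or write of Flag is indivisible

With that declaration, two tasks may read and write Flag without further synchronization; without it, they must synchronize (for example via a rendezvous or a protected object) before sharing the variable.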

  26. Tasking Is Completely General • Any possible synchronization problem can be solved using the rendezvous • We know this because it is more powerful than CSP/Occam, which is itself general • That means that any synchronization primitive can be simulated using the RV

  27. An Example, the Semaphore • The idea of a (binary) semaphore • Two operations, p and v • p grabs the semaphore, or waits if it is not available • v releases the semaphore • A monitor is then:

      p (sem);
      statements
      v (sem);

  28. A Semaphore Using a Task, RV • The specification:

      task type Semaphore is
         entry P;
         entry V;
      end Semaphore;

  29. A Semaphore Using RV • The body of the semaphore is very simple:

      task body Semaphore is
      begin
         loop
            accept P;
            accept V;
         end loop;
      end Semaphore;

  30. Using the Semaphore Abstraction • Declare an instance of a semaphore • Lock : Semaphore; • Now we can use this semaphore to create a monitor:

      Lock.P;
      code to be protected in monitor
      Lock.V;

  31. The RV Semaphore • Very neat expression • Nice high level semantics • But awfully heavy if a real task is involved • A case of “abstraction inversion” • We expect to see tasks implemented in terms of low level stuff like semaphores, not the other way round.

  32. Protected Types and Objects • A protected type is a data object with locks • specification provides data, like a record, and the locked access routines • functions (read the data with read lock) • procedures (read/write the data with write lock) • entries (wait till some condition is met, then read/write the data with write lock)

  33. Protected Types and Objects • There is conceptually no separate thread of control. • Body provides code for the functions, procedures and entries • These are executed in the calling thread after obtaining necessary locks

  34. Semaphore Using Protected Type • Specification has the data and entries:

      protected type Semaphore is
         entry P;
         procedure V;
      private
         Grabbed : Boolean := False;
      end Semaphore;

  • P is an entry since we may have to wait

  35. Protected Type Semaphore • The body provides the code of P and V:

      protected body Semaphore is
         entry P when not Grabbed is
         begin
            Grabbed := True;
         end P;

         procedure V is
         begin
            Grabbed := False;
         end V;
      end Semaphore;

  36. Using the Protected Type Semaphore • Declare an instance of a semaphore • Lock : Semaphore; • Now we can use this semaphore to create a monitor:

      Lock.P;
      code to be protected in monitor
      Lock.V;

  • Note: this was cut and paste from the task slide
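
Putting the pieces together, a complete hedged sketch (the Worker tasks and Counter are illustrative, not from the slides): two tasks use the protected Semaphore as a monitor around a shared counter.

      with Ada.Text_IO; use Ada.Text_IO;

      procedure Semaphore_Demo is

         protected type Semaphore is
            entry P;
            procedure V;
         private
            Grabbed : Boolean := False;
         end Semaphore;

         protected body Semaphore is
            entry P when not Grabbed is
            begin
               Grabbed := True;
            end P;

            procedure V is
            begin
               Grabbed := False;
            end V;
         end Semaphore;

         Lock    : Semaphore;
         Counter : Integer := 0;

         task type Worker;

         task body Worker is
         begin
            for I in 1 .. 1_000 loop
               Lock.P;
               Counter := Counter + 1;   -- code protected by the monitor
               Lock.V;
            end loop;
         end Worker;

      begin
         declare
            Workers : array (1 .. 2) of Worker;
         begin
            null;   -- the block waits here until both workers terminate
         end;
         Put_Line ("Counter =" & Integer'Image (Counter));   -- prints 2000
      end Semaphore_Demo;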

  37. Requirements for Real Time • Eliminate non-determinism • pragma Task_Dispatching_Policy (FIFO_Within_Priorities); • means run till blocked, no time slicing • reduces non-determinism • typical of “real time threads”, e.g. in NT • Define priorities of tasks • exact specs for how priorities are respected • Define queuing protocols • first in, first out, or by priority of caller
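
A hedged sketch of how these choices are spelled in Ada 95 (the pragma names are the standard ones; the task name and priority value are illustrative):

      pragma Task_Dispatching_Policy (FIFO_Within_Priorities);   -- run until blocked, no time slicing
      pragma Queuing_Policy (Priority_Queuing);                  -- entry queues ordered by caller priority

      task Sensor_Reader is
         pragma Priority (10);   -- illustrative value within System.Priority
      end Sensor_Reader;

Note that the first two are configuration pragmas, so they apply to the whole partition rather than to a single unit.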

  38. Importance of Priorities • Proper priority assignment is important • Used to ensure important tasks are completed • Used to ensure the system is schedulable • Example: Rate Monotonic Scheduling • Collection of cyclic tasks • Each requires servicing at a fixed interval • Assign the highest priority to the shortest cycle • Regardless of the “importance” of the tasks • Ensures schedulability whenever it is possible
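
As a worked illustration (the Liu and Layland bound, a standard result not on the slide): for n cyclic tasks with compute time Ci and period Ti, rate monotonic priorities are guaranteed to work if U = C1/T1 + ... + Cn/Tn <= n * (2^(1/n) - 1). With two tasks of (C = 1, T = 4) and (C = 2, T = 6), U = 0.25 + 0.33 = 0.58, below the two-task bound of about 0.83, so both tasks always meet their deadlines.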

  39. Priority Inheritance • Guard against priority inversion • low priority task grabs resource X • high priority task needs resource X, waits • medium priority task preempts the low priority task and runs for a long time, holding up the high priority task • Solution: while the high priority task is waiting, lend its high priority to the low priority task
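
For protected objects, the standard Ada answer is the ceiling locking policy, a close relative of priority inheritance in which each protected object is given a ceiling priority at least as high as that of any caller. A hedged sketch (the object name and ceiling value are illustrative):

      pragma Locking_Policy (Ceiling_Locking);   -- configuration pragma

      protected Shared_Resource is
         pragma Priority (15);   -- ceiling priority for the object
         procedure Update;
      end Shared_Resource;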
