
Real-Time Systems Lecture 4


Presentation Transcript


  1. Real-Time Systems Lecture 4 Teachers: Olle Bowallius, Phone: 790 44 42, Email: olleb@isk.kth.se; Anders Västberg, Phone: 790 44 55, Email: vastberg@kth.se

  2. Synchronisation and Communication • The correct behaviour of a concurrent program depends on synchronisation and communication between its processes • Synchronisation: an action by one process only occurring after an action by another • Communication: the passing of information from one process to another • The two concepts are linked, since communication requires synchronisation, and synchronisation can be considered as content-less communication. • Data communication is usually based upon either shared variables or message passing.

  3. Independent Processes • Two processes are independent if: • both processes only read the shared variables, or • each process reads only variables that the other process does not write.

  4. Process • A process executes a sequence of statements. • Each statement is implemented by a sequence of one or more atomic actions, which are actions that indivisibly examine or change the program state. • The execution of a concurrent program results in an interleaving of the sequences of atomic actions executed by each process. • A particular execution of a concurrent program can be viewed as a history, or trace, of the sequence of states. • With n processes each executing m atomic actions, the number of possible histories is (n·m)!/(m!)^n; for example, two processes of three atomic actions each give 6!/(3!·3!) = 20 possible histories.

  5. Reentrant Function • A reentrant function can be used by one or more tasks without fear of data corruption • Reentrant functions either use only local variables or protect the data when global variables are used

void strcpy (char *dest, char *src)
{
    while (*dest++ = *src++)     /* copies characters up to and including '\0' */
        ;
    *dest = '\0';                /* redundant: the loop above already copied the terminator */
}

• Since copies of the arguments are placed on each task's stack, several tasks can call strcpy() without corrupting each other's pointers • Non-reentrant function:

int Temp;                        /* shared global variable */

void swap (int *x, int *y)
{
    Temp = *x;                   /* another task may overwrite Temp here */
    *x = *y;
    *y = Temp;
}
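A hypothetical Java analogue of the same point (the class and method names are mine, not from the slides): a shared scratch field makes a routine non-reentrant, while keeping the temporary in a local variable on each task's stack makes it reentrant.

public class SwapExample {
    private static int temp;              // shared by all callers: the source of the problem

    // Non-reentrant: another thread can overwrite 'temp' between the statements.
    public static void swapShared(int[] a, int i, int j) {
        temp = a[i];
        a[i] = a[j];
        a[j] = temp;
    }

    // Reentrant: the temporary lives on each caller's stack.
    public static void swapLocal(int[] a, int i, int j) {
        int t = a[i];
        a[i] = a[j];
        a[j] = t;
    }
}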

  6. Atomic Actions

y = 0; z = 0;
co  x = y + z;   //  y = 1; z = 2;  oc;

• The statement x = y + z is implemented as three atomic actions: • load y into a register • add z to the register • store the register into x. • Depending on the interleaving, the final value of x can be 0, 1, 2 or 3 (0 if both reads occur before the assignments, 1 if only y = 1 has taken effect, 2 if only z = 2 has, 3 if both have). • Because the three operations are not indivisible, two processes simultaneously updating a variable could follow an interleaving that produces an incorrect result.

  7. Granularity • Fine-grained atomic actions • implemented directly by hardware. • Coarse-grained atomic actions • provided by constructs that support mutual exclusion.

  8. Avoiding Interference • The parts of a process that access shared variables must be executed indivisibly with respect to each other • These parts are called critical sections • The required protection is called mutual exclusion

  9. Mutual Exclusion • Atomicity is assumed to be present at the memory level: if one process executes X := 5 simultaneously with another executing X := 6, the result will be either 5 or 6 (not some other value) • If two processes are updating a structured object, this atomicity only applies at the level of single word elements
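A Java aside of my own (not from the slides) illustrating word-level atomicity: the Java language does not guarantee that writes to non-volatile long and double fields are atomic (JLS §17.7), so a concurrent reader may in principle observe a value whose two 32-bit halves come from different writes.

public class TornRead {
    static long shared;                    // not volatile: a write may be seen half-done

    public static void main(String[] args) {
        Thread w1 = new Thread(() -> { while (true) shared = 0L;  });
        Thread w2 = new Thread(() -> { while (true) shared = -1L; });  // 0xFFFFFFFFFFFFFFFF
        w1.setDaemon(true);  w2.setDaemon(true);
        w1.start();          w2.start();

        while (true) {
            long v = shared;
            if (v != 0L && v != -1L) {     // neither written value: the read was torn
                System.out.println("torn read: 0x" + Long.toHexString(v));
                return;
            }
        }
    }
}

On most modern 64-bit JVMs this loop never terminates because the writes happen to be atomic anyway; the point is that the language does not promise it.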

  10. Critical Section • No two processes may be simultaneously inside their critical regions. • No assumptions may be made about speeds or the number of CPUs. • No process running outside its critical region may block other processes. • No process should have to wait forever to enter its critical region.

  11. Critical Section

  12. Race Condition • Two processes want to write a filename to the queue at the same time: • Process A reads in = 7 • Process A stores the value 7 in a local variable next_free_slot • Process B reads in = 7 • Process B stores the value 7 in its local variable next_free_slot • Process B stores its filename in slot 7 and updates in = 8 • Process A stores its filename in slot 7, overwriting B's entry, and sets in = 8 again, so B's file is lost.
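A small Java sketch of the same interleaving (the class and names are hypothetical, not from the slides); run it often enough and one filename can be lost:

public class SpoolerRace {
    static final String[] slots = new String[100];
    static int in = 0;                       // next free slot, shared and unprotected

    static void enqueue(String filename) {
        int nextFreeSlot = in;               // read the shared index (e.g. 7)
        // --- the other thread may run here and read the same value ---
        slots[nextFreeSlot] = filename;      // both threads then write slot 7
        in = nextFreeSlot + 1;               // and both set in = 8
    }

    public static void main(String[] args) throws InterruptedException {
        Thread a = new Thread(() -> enqueue("a.txt"));
        Thread b = new Thread(() -> enqueue("b.txt"));
        a.start();  b.start();
        a.join();   b.join();
        System.out.println(slots[0] + " " + slots[1] + " in=" + in);
        // Usually both files are queued; with unlucky timing one overwrites the other.
    }
}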

  13. Disable interrupts • Mutual exclusion can be achieved by disabling interrupts • Does not work on a multiprocessor system • Is normally not allowed for user-level code • Interrupts should only be disabled for very short periods in an RTS.

  14. Condition Synchronisation • Condition synchronisation is needed when a process wishes to perform an operation that can only sensibly, or safely, be performed if another process has itself taken some action or is in some defined state • E.g. a bounded buffer involves two condition synchronisations: • the producer processes must not attempt to deposit data into the buffer if the buffer is full • the consumer processes cannot be allowed to extract objects from the buffer if the buffer is empty. (Figure: circular buffer with head and tail pointers.)

  15. Busy Waiting • One way to implement synchronisation is to have processes set and check shared variables that are acting as flags • This approach works well for condition synchronisation but no simple method for mutual exclusion exists • Busy wait algorithms are in general inefficient; they involve processes using up processing cycles when they cannot perform useful work • Even on a multiprocessor system they can give rise to excessive traffic on the memory bus or network (if distributed)

  16. Busy Wait (Spin Locks)

process P1 {
    // do something
    while (flag == down)
        ;                      // spin until P2 raises the flag
    // do something else, after P2's action
}

process P2 {
    // do something
    flag = up;                 // signal to P1
    // do something else
}

  17. Busy Wait (Mutual Exclusion)

process P1 {
    while (true) {
        flag1 = up;
        while (flag2 == up)
            ;                  // wait
        // critical section
        flag1 = down;
        // non-critical section
    }
}

process P2 {
    while (true) {
        flag2 = up;
        while (flag1 == up)
            ;                  // wait
        // critical section
        flag2 = down;
        // non-critical section
    }
}

Possible livelock! If both processes raise their flags at the same time, each spins forever waiting for the other to lower its flag: both keep executing, but neither makes progress.

  18. Busy wait (Mutual Exclusion)

process P1 {
    while (flag2 == up)
        ;                      // wait
    flag1 = up;
    // critical section
    flag1 = down;
    // non-critical section
}

process P2 {
    while (flag1 == up)
        ;                      // wait
    flag2 = up;
    // critical section
    flag2 = down;
    // non-critical section
}

No mutual exclusion! Both processes can pass their while tests before either has raised its flag, and then both enter the critical section.

  19. Busy wait

process P1 {
    while (turn == 2)
        ;                      // wait
    // critical section
    turn = 2;                  // hand the turn to P2
    // non-critical section
}

process P2 {
    while (turn == 1)
        ;                      // wait
    // critical section
    turn = 1;                  // hand the turn to P1
    // non-critical section
}

P1 and P2 must take turns in the critical section. If P1 fails in its critical section, then P2 will never enter its critical section.

  20. Peterson's Algorithm

process P1 {
    flag1 = up;
    turn = 2;
    while (flag2 == up && turn == 2)
        ;                      // wait
    // critical section
    flag1 = down;
    // non-critical section
}

process P2 {
    flag2 = up;
    turn = 1;
    while (flag1 == up && turn == 1)
        ;                      // wait
    // critical section
    flag2 = down;
    // non-critical section
}

  21. Busy Wait • Protocols that use busy loops are difficult to design, understand and prove correct. • Testing programs may not examine rare interleavings that break mutual exclusion or lead to livelock. • Busy-wait loops are inefficient

  22. Semaphores • A semaphore is a non-negative integer variable that apart from initialization can only be acted upon by two procedures P (or WAIT) and V (or SIGNAL) • WAIT(S) If the value of S > 0 then decrement its value by one; otherwise delay the process until S > 0 (and then decrement its value). • SIGNAL(S) Increment the value of S by one. • WAIT and SIGNAL are atomic (indivisible). Two processes both executing WAIT operations on the same semaphore cannot interfere with each other and cannot fail during the execution of a semaphore operation
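As a sketch of these WAIT/SIGNAL semantics only (not of how any real kernel implements them), here is a minimal counting semaphore built on Java's own monitor; the class and method names are mine:

public class SimpleSemaphore {
    private int value;                         // always >= 0

    public SimpleSemaphore(int initial) {
        if (initial < 0) throw new IllegalArgumentException();
        value = initial;
    }

    // WAIT / P: delay until value > 0, then decrement, as one atomic step.
    public synchronized void acquire() throws InterruptedException {
        while (value == 0) {
            wait();
        }
        value--;
    }

    // SIGNAL / V: increment the value and wake one waiting thread.
    public synchronized void release() {
        value++;
        notify();
    }
}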

  23. Condition synchronisation

var consyn : semaphore;    (* initialised to 0 *)

process P1;                (* waiting process *)
    statement X;
    wait (consyn);
    statement Y;
end P1;

process P2;                (* signalling process *)
    statement A;
    signal (consyn);
    statement B;
end P2;

  24. Mutual Exclusion

sem mutex = 1;

process P1 {
    wait(mutex);
    // critical section
    signal(mutex);
    // non-critical section
}

process P2 {
    wait(mutex);
    // critical section
    signal(mutex);
    // non-critical section
}

  25. Barrier Synchronization

sem arrive1 = 0;
sem arrive2 = 0;

process P1 {
    ...
    signal(arrive1);   // announce arrival at the barrier
    wait(arrive2);     // wait for P2 to arrive
    ...
}

process P2 {
    ...
    signal(arrive2);   // announce arrival at the barrier
    wait(arrive1);     // wait for P1 to arrive
    ...
}

  26. Producers and Consumers

typeT buffer;
sem empty = 1;
sem full = 0;

process Producer[i] {
    while (true) {
        // produce data
        wait(empty);
        buffer = data;
        signal(full);
    }
}

process Consumer[i] {
    while (true) {
        // get data
        wait(full);
        result = buffer;
        signal(empty);
    }
}

  27. Binary and quantity semaphores • A general semaphore is a non-negative integer; its value can rise to any supported positive number • A binary semaphore only takes the value 0 and 1; the signalling of a semaphore which has the value 1 has no effect - the semaphore retains the value 1
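For illustration only: Java's library class java.util.concurrent.Semaphore is a general (counting) semaphore. Note that, unlike the binary semaphore described above, it does not cap the count at 1, so keeping it binary is the programmer's responsibility.

import java.util.concurrent.Semaphore;

public class SemaphoreKinds {
    public static void main(String[] args) throws InterruptedException {
        // General (counting) semaphore: up to 3 threads may hold a permit at once.
        Semaphore pool = new Semaphore(3);
        pool.acquire();                 // value 3 -> 2
        pool.acquire();                 // value 2 -> 1
        pool.release();                 // value 1 -> 2
        System.out.println("permits left: " + pool.availablePermits());

        // "Binary" use: initialised to 1 and used strictly as a lock.
        Semaphore mutex = new Semaphore(1);
        mutex.acquire();
        // ... critical section ...
        mutex.release();
    }
}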

  28. Bounded buffer

typeT buffer;
sem empty = n;
sem full = 0;
sem mutex = 1;

process Producer {
    while (true) {
        // produce data
        wait(empty);
        wait(mutex);
        insert(data);
        signal(mutex);
        signal(full);
    }
}

process Consumer {
    while (true) {
        // fetch data
        wait(full);
        wait(mutex);
        data = remove();
        signal(mutex);
        signal(empty);
    }
}

  29. Deadlock • A circular wait, in which each process holds a resource while waiting for a resource held by the next, causes deadlock (a two-lock sketch follows below).
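A minimal Java sketch of such a circular wait between two threads (names are hypothetical): each grabs one lock and then blocks forever waiting for the other's.

public class CircularWait {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (lockA) {
                pause(100);                           // give the other thread time to take B
                synchronized (lockB) { System.out.println("t1 done"); }
            }
        }).start();

        new Thread(() -> {
            synchronized (lockB) {
                pause(100);                           // give the other thread time to take A
                synchronized (lockA) { System.out.println("t2 done"); }
            }
        }).start();
        // With this timing the program usually hangs: each thread waits on the other forever.
    }

    static void pause(int ms) {
        try { Thread.sleep(ms); } catch (InterruptedException e) { }
    }
}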

  30. Deadlock • Deadlock occurs if the buffer is full: the producer blocks on wait(empty) while holding mutex, so the consumer blocks on wait(mutex) and can never signal empty.

typeT buffer;
sem empty = n;
sem full = 0;
sem mutex = 1;

process Producer {
    while (true) {
        // produce data
        wait(mutex);
        wait(empty);     // wrong order!
        insert(data);
        signal(mutex);
        signal(full);
    }
}

process Consumer {
    while (true) {
        // fetch data
        wait(full);
        wait(mutex);
        data = remove();
        signal(mutex);
        signal(empty);
    }
}

  31. Criticisms of semaphores • Semaphores are an elegant low-level synchronisation primitive; however, their use is error-prone • If a single semaphore operation is omitted or misplaced, the entire program can collapse: mutual exclusion may not be assured, and deadlock may appear just when the software is dealing with a rare but critical event • A more structured synchronisation primitive is required • No high-level concurrent programming language relies entirely on semaphores; they are important historically but are arguably not adequate for the real-time domain

  32. Dining Philosophers • A philosopher either eats or thinks. • A philosopher needs two forks to be able to eat the spaghetti. • When a philosopher gets hungry, she tries to acquire her left and right forks, one at a time. • How do you avoid deadlock or starvation?

  33. Dining Philosophers

sem fork[5] = {1, 1, 1, 1, 1};

process Philosopher[i] {          // i = 0 to 3
    while (true) {
        wait(fork[i]);            // get left fork
        wait(fork[i+1]);          // get right fork
        // eat
        signal(fork[i]);
        signal(fork[i+1]);
        // think
    }
}

process Philosopher[4] {          // breaks the symmetry to avoid deadlock
    while (true) {
        wait(fork[0]);            // get right fork first
        wait(fork[4]);            // then left fork
        // eat
        signal(fork[0]);
        signal(fork[4]);
        // think
    }
}

  34. Liveness • A set of processes is said to possess liveness if it is free from • livelocks • deadlocks • indefinite postponement (starvation).

  35. Semaphores in microC/OS-II

OS_EVENT *DispSem;

int main(void)
{
    OSInit();
    DispSem = OSSemCreate(1);          /* create the semaphore with initial value 1 */
    OSStart();
}

void DispTask(void *pdata)
{
    INT8U err;
    while (1) {
        OSSemPend(DispSem, 0, &err);   /* wait; timeout 0 means wait forever */
    }
}

void TaskX(void *pdata)
{
    INT8U err;
    while (1) {
        err = OSSemPost(DispSem);      /* signal */
    }
}

There is also a non-blocking OSSemAccept().

  36. Monitors • Monitors provide encapsulation and efficient condition synchronisation • The critical regions are written as procedures and are encapsulated together into a single module • All variables that must be accessed under mutual exclusion are hidden; all procedure calls into the module are guaranteed to be mutually exclusive • Only the operations are visible outside the monitor

  37. Condition Variables • Different semantics exist • In Hoare’s monitors: a condition variable is acted upon by two semaphore-like operators WAIT and SIGNAL • A process issuing a WAIT is blocked (suspended) and placed on a queue associated with the condition variable (cf semaphores: a wait on a condition variable always blocks unlike a wait on a semaphore) • A blocked process releases its hold on the monitor, allowing another process to enter • A SIGNAL releases one blocked process. If no process is blocked then the signal has no effect (cf semaphores)

  38. Readers/Writers Problem • (Figure: a block of data accessed by several readers and writers.) • How can monitors be used to allow many concurrent readers or a single writer, but not both? • Consider a file which needs mutual exclusion between writers, and between readers and writers, but not between multiple readers

  39. Hint • You will need an entry and an exit protocol for each role (one possible monitor is sketched below): • Reader: start_read ... stop_read • Writer: start_write ... stop_write
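One possible answer, sketched as a Java monitor (my code, following the method names in the hint; this simple version lets a steady stream of readers starve writers):

public class ReadersWriters {
    private int readers = 0;
    private boolean writing = false;

    public synchronized void startRead() throws InterruptedException {
        while (writing) wait();          // readers wait only for an active writer
        readers++;
    }

    public synchronized void stopRead() {
        readers--;
        if (readers == 0) notifyAll();   // the last reader lets a writer in
    }

    public synchronized void startWrite() throws InterruptedException {
        while (writing || readers > 0) wait();   // a writer needs the data alone
        writing = true;
    }

    public synchronized void stopWrite() {
        writing = false;
        notifyAll();                     // wake both waiting readers and writers
    }
}

A reader brackets each access with startRead()/stopRead(); a writer with startWrite()/stopWrite().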

  40. Criticisms of Monitors • The monitor gives a structured and elegant solution to mutual exclusion problems such as the bounded buffer • It does not, however, deal well with condition synchronization — requiring low-level condition variables • All the criticisms surrounding the use of semaphores apply equally to condition variables

  41. Synchronized Methods • Java provides a mechanism by which monitors can be implemented in the context of classes and objects • There is a lock associated with each object which cannot be accessed directly by the application but is affected by • the method modifier synchronized • block synchronization. • When a method is labeled with the synchronized modifier, access to the method can only proceed once the lock associated with the object has been obtained • Hence synchronized methods have mutually exclusive access to the data encapsulated by the object, if that data is only accessed by other synchronized methods • Non-synchronized methods do not require the lock and, therefore, can be called at any time

  42. Example of Synchronized Methods

public class SharedInteger {
    private int theData;

    public SharedInteger(int initialValue) {
        theData = initialValue;
    }

    public synchronized int read() {
        return theData;
    }

    public synchronized void write(int newValue) {
        theData = newValue;
    }

    public synchronized void incrementBy(int by) {
        theData = theData + by;
    }
}

SharedInteger myData = new SharedInteger(42);

  43. Block Synchronization • Provides a mechanism whereby a block can be labeled as synchronized • The synchronized keyword takes as a parameter an object whose lock it needs to obtain before it can continue • Hence synchronized methods are effectively implementable as

public int read() {
    synchronized (this) {
        return theData;
    }
}

• where this is the Java mechanism for obtaining a reference to the current object

  44. Warning • Used in its full generality, the synchronized block can undermine one of the advantages of monitor-like mechanisms: that of encapsulating the synchronization constraints associated with an object in a single place in the program • This is because it is not possible to understand the synchronization associated with a particular object just by looking at the object itself, when other objects can name that object in a synchronized statement • However, with careful use, this facility augments the basic model and allows more expressive synchronization constraints to be programmed

  45. Static Data • Static data is shared between all objects created from the class • To obtain mutually exclusive access to this data requires all objects to be locked • In Java, classes themselves are also objects, and therefore there is a lock associated with the class • This lock may be accessed either by labeling a static method with the synchronized modifier or by naming the class's object in a synchronized block statement • The latter can be obtained from the Class object associated with the object, e.g. via its getClass method • Note, however, that this class-wide lock is not obtained when synchronizing on the object itself

  46. Static Data

class StaticSharedVariable {
    private static int shared;
    ...

    public int Read() {
        synchronized (this.getClass()) {
            return shared;
        }
    }

    public void Write(int I) {
        synchronized (this.getClass()) {
            shared = I;
        }
    }
}

Could have used: public static synchronized void Write(int I)

  47. Waiting and Notifying • To obtain condition synchronization requires the methods provided in the predefined Object class:

public void wait() throws InterruptedException;
                          // also throws IllegalMonitorStateException
public void notify();     // throws IllegalMonitorStateException
public void notifyAll();  // throws IllegalMonitorStateException

• These methods should be used only from within methods which hold the object lock • If called without the lock, the exception IllegalMonitorStateException is thrown

  48. Waiting and Notifying • The wait method always blocks the calling thread and releases the lock associated with the object • A wait within a nested monitor releases only the inner lock • The notify method wakes up one waiting thread; the one woken is not defined by the Java language • Notify does not release the lock; hence the woken thread must wait until it can obtain the lock before proceeding • To wake up all waiting threads requires use of the notifyAll method • If no thread is waiting, then notify and notifyAll have no effect

  49. Thread Interruption • A waiting thread can also be awoken if it is interrupted by another thread • In this case the InterruptedException is thrown (see later in the course)

  50. Condition Variables • There are no explicit condition variables. An awoken thread should therefore usually re-evaluate the condition on which it is waiting (if more than one condition exists and they are not mutually exclusive).

public class BoundedBuffer {
    private int buffer[];
    private int first;
    private int last;
    private int numberInBuffer = 0;
    private int size;

    public BoundedBuffer(int length) {
        size = length;
        buffer = new int[size];
        last = 0;
        first = 0;
    }
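    // The remaining methods are not in the source; what follows is a plausible
    // sketch (an assumption, not the slides' code) of how the class might
    // continue, using wait/notifyAll for the "full" and "empty" conditions.

    public synchronized void put(int item) throws InterruptedException {
        while (numberInBuffer == size) {
            wait();                       // buffer full: wait for a get()
        }
        buffer[last] = item;
        last = (last + 1) % size;
        numberInBuffer++;
        notifyAll();                      // may wake a waiting consumer
    }

    public synchronized int get() throws InterruptedException {
        while (numberInBuffer == 0) {
            wait();                       // buffer empty: wait for a put()
        }
        int item = buffer[first];
        first = (first + 1) % size;
        numberInBuffer--;
        notifyAll();                      // may wake a waiting producer
        return item;
    }
}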
