
# Lecture 10: Synchronization (Chapter 6)






### Lecture 10: Synchronization (Chapter 6)

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    int sum; /* this data is shared by the thread(s) */

    void *runner(void *param); /* the thread */

    int main(int argc, char *argv[])
    {
        pthread_t tid;       /* the thread identifier */
        pthread_attr_t attr; /* set of thread attributes */

        if (argc != 2) {
            fprintf(stderr, "usage: a.out <integer value>\n");
            exit(1);
        }
        if (atoi(argv[1]) < 0) {
            fprintf(stderr, "%d must be >= 0\n", atoi(argv[1]));
            exit(1);
        }

        pthread_attr_init(&attr);                     /* get the default attributes */
        pthread_create(&tid, &attr, runner, argv[1]); /* create the thread */
        pthread_join(tid, NULL);                      /* wait for the thread to exit */

        printf("sum = %d\n", sum);
        return 0;
    }

    /* The thread will begin control in this function */
    void *runner(void *param)
    {
        int upper = atoi(param);
        int i;

        sum = 0;
        if (upper > 0) {
            for (i = 1; i <= upper; i++)
                sum += i;
        }
        return NULL;
    }

Name the Problem:

• a producer process produces information that is consumed by a consumer process

• unbounded buffer: places no practical limit on the size of the buffer

• bounded buffer: assumes that there is a fixed buffer size

• Shared data:

    #define BUFFER_SIZE 10

    typedef struct {
        . . .
    } item;

    item buffer[BUFFER_SIZE];
    int in = 0;    /* next free position in the buffer */
    int out = 0;   /* next full position in the buffer */
    int count = 0; /* number of items currently in the buffer (used below) */

PRODUCER:

    while (true) {
        /* produce an item and put it in nextProduced */
        while (count == BUFFER_SIZE)
            ; /* do nothing: buffer is full */
        buffer[in] = nextProduced;
        in = (in + 1) % BUFFER_SIZE;
        count++;
    }

CONSUMER:

    while (true) {
        while (count == 0)
            ; /* do nothing: buffer is empty */
        nextConsumed = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        count--;
        /* consume the item in nextConsumed */
    }

count++ could be implemented as

    register1 = count
    register1 = register1 + 1
    count = register1

count-- could be implemented as

    register2 = count
    register2 = register2 - 1
    count = register2

Consider this execution interleaving, with count = 5 initially:

    S0: producer executes register1 = count          {register1 = 5}
    S1: producer executes register1 = register1 + 1  {register1 = 6}
    S2: consumer executes register2 = count          {register2 = 5}
    S3: consumer executes register2 = register2 - 1  {register2 = 4}
    S4: producer executes count = register1          {count = 6}
    S5: consumer executes count = register2          {count = 4}

• Race condition: two or more processes read and write shared data, and the final result depends on the precise order in which they run

• Mutual exclusion: making sure that if one process is accessing shared memory, the others are excluded from doing the same thing

• Critical region (critical section): the part of the program where shared variables are accessed

1. Mutual Exclusion - If a process is executing in its critical section, then no other processes can be executing in their critical sections

2. Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely

3. Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted

Two assumptions:

• Each process executes at a nonzero speed

• No assumption is made concerning the relative speed of the N processes

Two-process solution (Peterson's solution)

Assume that the LOAD and STORE instructions are atomic

The two processes share two variables:

    int turn;
    boolean interested[2];

The variable turn indicates whose turn it is to enter the critical section.

The interested array is used to indicate if a process is ready to enter the critical section. interested[i] = true implies that process Pi is ready.

Algorithm for Process Pi:

    do {
        interested[i] = TRUE;               /* entry section */
        turn = j;
        while (interested[j] && turn == j)
            ; /* busy wait */

        /* critical section */

        interested[i] = FALSE;              /* exit section */

        /* remainder section */
    } while (TRUE);

Many systems provide hardware support for critical-section code:

• Uniprocessors - could disable interrupts

• Currently running code would execute without preemption

• Generally too inefficient on multiprocessor systems

• Modern machines provide special atomic hardware instructions

• Atomic = non-interruptible

• Either test a memory word and set its value (test-and-set)

• Or swap the contents of two memory words (swap)

General structure of a process using a lock:

    do {
        acquire lock
        critical section
        release lock
        remainder section
    } while (TRUE);

    boolean TestAndSet(boolean *target)
    {
        boolean rv = *target;
        *target = TRUE;
        return rv;
    }

Shared boolean variable lock, initialized to FALSE.

Solution:

    do {
        while (TestAndSet(&lock))
            ; /* do nothing */

        /* critical section */

        lock = FALSE;

        /* remainder section */
    } while (TRUE);

Does it implement the conditions of the critical-section solution? (It guarantees mutual exclusion and progress, but not bounded waiting.)

    void Swap(boolean *a, boolean *b)
    {
        boolean temp = *a;
        *a = *b;
        *b = temp;
    }

Shared boolean variable lock, initialized to FALSE; each process has a local boolean variable key.

Solution:

    do {
        key = TRUE;
        while (key == TRUE)
            Swap(&lock, &key);

        /* critical section */

        lock = FALSE;

        /* remainder section */
    } while (TRUE);

Does it implement the conditions of the critical-section solution? (Like the TestAndSet version, it guarantees mutual exclusion and progress, but not bounded waiting.)

Bounded-waiting mutual exclusion with TestAndSet (shared data: boolean waiting[n] and boolean lock, both initialized to FALSE):

    do {
        waiting[i] = TRUE;
        key = TRUE;
        while (waiting[i] && key)
            key = TestAndSet(&lock);
        waiting[i] = FALSE;

        /* critical section */

        j = (i + 1) % n;
        while ((j != i) && !waiting[j])
            j = (j + 1) % n;
        if (j == i)
            lock = FALSE;
        else
            waiting[j] = FALSE;

        /* remainder section */
    } while (TRUE);

Caches and Consistency in Multiprocessors

[Figure: a shared-memory multiprocessor - each processor has a private cache and is connected to memory through an interconnection network (ICN)]

• cache with each processor to hide latency for memory accesses

• miss in cache => go to memory to fetch data

• multiple copies of same address in caches

• caches need to be kept consistent

• write-invalidate

[Figure: write-invalidate - before writing X, the writer broadcasts "invalidate" over the ICN; its cache then holds the new value X', while every other cached copy of X is marked Invalid]

Before a write, an "invalidate" command goes out on the bus. All other processors are "snooping" on the bus and invalidate their cached copies. Invalid entries cause read faults, and the new values will be fetched from main memory.

• write-update (also called distributed write)

As soon as the cache line is written, the new value is broadcast on the bus, so all processors update their caches.

[Figure: write-update - after the broadcast over the ICN, every cache (and memory) holds the updated value X']