
CSC 552.201 - Advanced Unix Programming, Fall, 2008


Presentation Transcript


  1. CSC 552.201 - Advanced Unix Programming, Fall, 2008 Monday, December 1 Thread local storage, POSIX:SEM semaphores and POSIX:XSI IPC

  2. Thread local data • int pthread_key_create(pthread_key_t *key, void (*destructor)(void *)); • This function creates a thread-specific data key visible to all threads in the process. Key values returned by pthread_key_create() are opaque objects used to locate thread-specific data. Although the same key value may be used by different threads, the values bound to the key by pthread_setspecific() are maintained on a per-thread basis and persist for the life of the calling thread. The destructor may be NULL; if it is not, it is called on the key’s non-NULL value at thread exit. • The key acts like a hash index to access thread-local data.

  3. Thread local data access • int pthread_setspecific(pthread_key_t key, const void *value); • Binds the calling thread’s value for key to the pointer value. • value should point to a valid object that persists across calls. • void *pthread_getspecific(pthread_key_t key); • Returns the calling thread’s value for key, or NULL if this thread has not set one. • int pthread_key_delete(pthread_key_t key); • Deletes the key itself, process-wide; it does not invoke destructors or free per-thread values. • One key thus supports distinct mappings in multiple threads.
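A minimal sketch of these calls working together, assuming a hypothetical per-thread scratch buffer; get_thread_buffer() and its 256-byte size are illustrative, not course code:

    #include <pthread.h>
    #include <stdlib.h>

    static pthread_key_t buf_key;
    static pthread_once_t buf_once = PTHREAD_ONCE_INIT;

    static void free_buffer(void *value) {   /* destructor runs at thread exit */
        free(value);
    }

    static void make_key(void) {
        (void) pthread_key_create(&buf_key, free_buffer);
    }

    char *get_thread_buffer(void) {
        char *buf;
        (void) pthread_once(&buf_once, make_key); /* create the key exactly once */
        buf = pthread_getspecific(buf_key);
        if (buf == NULL) {                        /* first use in this thread */
            buf = calloc(1, 256);
            (void) pthread_setspecific(buf_key, buf);
        }
        return buf;    /* freed by free_buffer() when this thread exits */
    }

Every thread that calls get_thread_buffer() gets its own buffer through the same key, which is the per-thread mapping behavior described above.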

  4. Thread local storage example • A multithreaded database access API maintains per-thread caches for thread-local pre-commit and pre-abort database fetch and store operations. • Each database read() fetches the DBID -> value mappings for its thread. Other threads might read() or write() the same DBID prior to commit() or abort(). • Each thread must maintain a cache of the DBID -> value mappings it currently sees, along with the sets of DBIDs that it has created, modified, deleted, or re-created. • A thread’s commit() writes/commits all changes back to the DB and empties the thread-local cache. An abort() or close() empties the thread-local cache.
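One plausible shape for that per-thread cache, stored under a pthread key; all type and field names here are hypothetical, since the middleware’s real types are not shown in the slides:

    /* Hypothetical per-thread transaction cache; illustrative only. */
    struct dbid_value_map;                 /* DBID -> value mappings */
    struct dbid_set;                       /* a set of DBIDs */

    struct thread_db_cache {
        struct dbid_value_map *mappings;   /* values this thread currently sees */
        struct dbid_set *created;          /* DBIDs created in this transaction */
        struct dbid_set *modified;
        struct dbid_set *deleted;
        struct dbid_set *recreated;
    };
    /* commit() flushes this to the DB and empties it;
       abort() and close() just empty it. */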

  5. Thread local storage example • A multithreaded database access API maintains per-thread caches for thread-local pre-commit and pre-abort database fetch and store operations. • [Diagram: a DB function() call from 1 of N client threads sends (function, resultQueue, this, DBID) through a Transaction serviceQueue to a singleton database thread; each client thread’s thread-local cache holds the current transaction’s mappings, deltas, and resultQueue; the DB Transaction replies via the resultQueue.]

  6. Why use thread-local storage in this DB example? • This application uses thread local storage in part because it is middleware, in this case a wrapper that resides between client DB access functions and the underlying DB API calls. • It intercepts client calls to the DB access functions because the underlying DB is not robust for multithreaded commit() and abort() calls. • This middleware dedicates one worker thread to actually performing underlying DB commits and aborts. • DB calls in client threads do not have access to data parameters in the threads’ startup functions.

  7. POSIX:SEM semaphores (Ch. 14) • int sem_init(sem_t *sem, int pshared, unsigned int value); also int sem_destroy(sem_t *sem); • pshared is 0 for a single process, nonzero for interprocess sharing • value is 0 for a locked semaphore, > 0 for unlocked • int sem_wait(sem_t *sem); • blocks on a 0 sem_t value, decrements it if > 0 • or int sem_trywait(sem_t *sem); which fails instead of blocking • int sem_post(sem_t *sem); • Adds 1 to the sem_t value if no sem_wait callers are blocked; unblocks one waiting thread if one or more are blocked

  8. ~parson/UnixSysProg/semwait (excerpts, with source line numbers)
  50  sem_t sem_hasspace ;   // the buffer has space
      sem_t sem_hascontent ; // the buffer has content
      if ((errcode = sem_init(&sem_hasspace, 0, 1)) == -1) {
      if ((errcode = sem_init(&sem_hascontent, 0, 0)) == -1) {
          (void) sem_destroy(&sem_hasspace);
  Producer:
      if ((errcode = sem_wait(&(link->sem_hasspace))) == -1) {
  89  if ((errcode = sem_post(&(link->sem_hascontent))) == -1) {
  Consumer:
  107 if ((errcode = sem_wait(&(link->sem_hascontent))) == -1) {
  130 if ((errcode = sem_post(&(link->sem_hasspace))) == -1) {
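A self-contained sketch of the same two-semaphore pattern, simplified to a one-slot buffer shared by two threads (the semwait example uses a larger buffer and error handling omitted here):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t sem_hasspace, sem_hascontent;
    static int slot;

    static void *producer(void *arg) {
        (void) arg;
        for (int i = 0; i < 5; i++) {
            sem_wait(&sem_hasspace);     /* block until the slot is empty */
            slot = i;
            sem_post(&sem_hascontent);   /* wake the consumer */
        }
        return NULL;
    }

    static void *consumer(void *arg) {
        (void) arg;
        for (int i = 0; i < 5; i++) {
            sem_wait(&sem_hascontent);   /* block until the slot is full */
            printf("consumed %d\n", slot);
            sem_post(&sem_hasspace);     /* the slot is free again */
        }
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        sem_init(&sem_hasspace, 0, 1);   /* one free slot initially */
        sem_init(&sem_hascontent, 0, 0); /* no content initially */
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        sem_destroy(&sem_hascontent);
        sem_destroy(&sem_hasspace);
        return 0;
    }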

  9. Named semaphores can be opened and used across processes. • sem_t *sem_open(const char *name, int oflag, /* mode_t mode, unsigned int value */ ...); • name is a path-like name (not necessarily a file path) with a single leading “/” identifying the semaphore. • oflag can be O_CREAT, optionally combined with O_EXCL. • With O_CREAT, the mode_t argument sets permissions • S_IRUSR | S_IWUSR | S_IRGRP | S_IWGRP | S_IROTH | S_IWOTH • With O_CREAT, the value argument sets the initial value • Release with sem_close() and remove the name with sem_unlink(); sem_destroy() applies only to semaphores initialized with sem_init().
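A sketch of the named-semaphore lifecycle; the name "/csc552demo" is hypothetical:

    #include <semaphore.h>
    #include <fcntl.h>
    #include <sys/stat.h>
    #include <stdio.h>

    int main(void) {
        /* O_CREAT supplies both the mode and the initial value (1 = unlocked) */
        sem_t *sem = sem_open("/csc552demo", O_CREAT,
                              S_IRUSR | S_IWUSR, 1);
        if (sem == SEM_FAILED) { perror("sem_open"); return 1; }
        sem_wait(sem);             /* enter a cross-process critical section */
        puts("in critical section");
        sem_post(sem);             /* leave it */
        sem_close(sem);            /* drop this process's reference */
        sem_unlink("/csc552demo"); /* remove the name when no longer needed */
        return 0;
    }

Any other process that sem_open()s the same name shares the same semaphore, which is what distinguishes this interface from sem_init().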

  10. POSIX:XSI Inter-Process Communication (IPC) (Ch. 15) • Semaphore sets, shared memory, and type-tagged message queues (sem, shm, msg). • These mechanisms support synchronization and communication among UNIX processes. • Unlike pipes and named FIFOs, they do not use file descriptors or select() or poll(). • These mechanisms predate UNIX threads, commercially available sockets and POSIX.

  11. ~parson/UnixSysProg/ipcwait • Example code uses shared memory and a two-semaphore set to connect one producer process to two consumer processes. • Semaphore sets • Multiple, atomically updated semaphores in 1 set • ftok(), semget() (to open), semctl(), semop() • Shared memory • Selected virtual pages in multiple processes map to the same shared physical pages. • ftok(), shmget() (to open), shmctl(), shmat(), shmdt()
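A minimal sketch of the shared-memory call sequence above; the key path "/tmp/ipcdemo" and project id 'R' are hypothetical, and the path must already exist for ftok():

    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include <stdio.h>

    int main(void) {
        key_t key = ftok("/tmp/ipcdemo", 'R');  /* derive a key from a path */
        int shmid = shmget(key, 4096, IPC_CREAT | 0600);
        if (shmid == -1) { perror("shmget"); return 1; }
        char *mem = shmat(shmid, NULL, 0);      /* map into this process */
        if (mem == (char *) -1) { perror("shmat"); return 1; }
        mem[0] = 'x';                 /* visible to every process attached */
        shmdt(mem);                   /* unmap */
        shmctl(shmid, IPC_RMID, NULL);/* remove the segment when done */
        return 0;
    }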

  12. ipcs and ipcrm shell utilities -bash-3.00$ ipcs -p Shared Memory: m 1526726770 0x1008ca1 --rw------- parson faculty 4334 4334 Semaphores: s 100663397 0x8ca1 --ra------- parson faculty -bash-3.00$ pmap 4334 4334: ./ipcwait 10000 16K r-x-- /export/home/faculty/parson/UnixSysProg/ipcwait/ipcwait • 8K rwx-- /export/home/faculty/parson/UnixSysProg/ipcwait/ipcwait . . . FF250000 8K rwxs- [ shmid=0x5b000072 ]

  13. ~parson/UnixSysProg/msgipcwait • Example code uses a message queue to connect one producer process to two consumer processes. • Message queues • Each message includes a “long” type tag field that receivers can use to select from among typed messages. • ftok(), msgget() (to open), msgctl(), msgsnd(), msgrcv()
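A sketch of the message-queue calls listed above; the key path, project id, and type value 2 are hypothetical:

    #include <sys/ipc.h>
    #include <sys/msg.h>
    #include <string.h>
    #include <stdio.h>

    struct msgbuf_demo {
        long mtype;                  /* receivers can select on this tag */
        char mtext[64];
    };

    int main(void) {
        key_t key = ftok("/tmp/ipcdemo", 'Q');
        int qid = msgget(key, IPC_CREAT | 0600);
        if (qid == -1) { perror("msgget"); return 1; }

        struct msgbuf_demo out = { 2, "hello" };
        msgsnd(qid, &out, strlen(out.mtext) + 1, 0);

        struct msgbuf_demo in;
        /* msgtyp 2: receive only messages whose mtype == 2 */
        msgrcv(qid, &in, sizeof in.mtext, 2, 0);
        printf("got type %ld: %s\n", in.mtype, in.mtext);

        msgctl(qid, IPC_RMID, NULL); /* remove the queue */
        return 0;
    }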

  14. Single-threaded, multiprocess, and multithreaded server loops. • Use select() or poll() to monitor all incoming and possibly outgoing fds of interest (see the sketch below). • In a single-threaded, single-process system, perform time-bounded work, then return to the service loop • Especially appropriate for real-time or small-footprint, reactive systems • Forking or threading could apply to some requests • Alternatively, use accept() or another blocking system call in a server thread to receive service requests. • fork() a worker process to perform concurrent work • or pthread_create() worker threads; maintain a thread pool
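A minimal sketch of the single-threaded select() service loop in the first bullet; listenfd setup and the set of client fds are assumed, not shown:

    #include <sys/select.h>

    void service_loop(int listenfd) {
        fd_set readfds;
        for (;;) {
            FD_ZERO(&readfds);
            FD_SET(listenfd, &readfds);  /* real servers also add client fds */
            if (select(listenfd + 1, &readfds, NULL, NULL, NULL) <= 0)
                continue;                /* interrupted or error; retry */
            if (FD_ISSET(listenfd, &readfds)) {
                /* accept() here, then do time-bounded work and return to
                   the loop, or hand the new fd to a forked worker process
                   or a pooled thread, per the bullets above */
            }
        }
    }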

  15. Programming assignment 4 (modify a copy of assignment 2 or 3) • Each chess plugin starts one or more threads to monitor its incoming data streams, blocking on read(), and copying the data stream to a log file and, for gnuchess, to an output stream. • Threads reading stdout from gnuchess or pchess must detect moves as in assignment 2. • They invoke a callback function that passes the move back to the main thread via a condition variable and a queue of moves (see the sketch below). The main thread blocks on this condition variable. There is no select() or poll() loop. • Move injection from the main thread into the stdin of a child game (gnuchess or pchess) must use a mutex to protect writes into the child’s stdin stream, but only if there are multiple writers (e.g., xboard to gnuchess). • Callbacks signal end-of-file and errors on the child data connections. • The main thread must handle signals as in assignment 2. The child threads must mask out all signals.
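A hedged sketch of the callback-to-main-thread handoff described above: a mutex-protected FIFO of moves plus a condition variable. The names (move_node, enqueue_move, dequeue_move) and the 16-byte move text are hypothetical, not assignment 4 code:

    #include <pthread.h>
    #include <stdlib.h>
    #include <string.h>

    struct move_node { char text[16]; struct move_node *next; };

    static struct move_node *head, *tail;    /* FIFO of detected moves */
    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t qcond = PTHREAD_COND_INITIALIZER;

    /* Called from a reader thread's callback when it detects a move. */
    void enqueue_move(const char *move) {
        struct move_node *n = malloc(sizeof *n);
        strncpy(n->text, move, sizeof n->text - 1);
        n->text[sizeof n->text - 1] = '\0';
        n->next = NULL;
        pthread_mutex_lock(&qlock);
        if (tail) tail->next = n; else head = n;
        tail = n;
        pthread_cond_signal(&qcond);         /* wake the main thread */
        pthread_mutex_unlock(&qlock);
    }

    /* The main thread blocks here instead of in a select()/poll() loop. */
    struct move_node *dequeue_move(void) {
        pthread_mutex_lock(&qlock);
        while (head == NULL)
            pthread_cond_wait(&qcond, &qlock);
        struct move_node *n = head;
        head = n->next;
        if (head == NULL) tail = NULL;
        pthread_mutex_unlock(&qlock);
        return n;                            /* caller frees the node */
    }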
