
Java for High Performance Computing


Presentation Transcript


  1. Java for High Performance Computing Multithreaded and Shared-Memory Programming in Java http://www.hpjava.org/courses/arl Instructor: Bryan Carpenter Pervasive Technology Labs Indiana University dbcarpen@indiana.edu

  2. Java as a Threaded Language • In C, C++, etc it is possible to do multithreaded programming, given a suitable library. • e.g. the pthreads library. • Thread libraries provide one approach to doing parallel programming on computers with shared memory. • Another approach is OpenMP, which uses compiler directives. This will be discussed later. • Unlike these (traditional HPC) languages, Java integrates threads into the basic language specification in a much tighter way. • Every Java Virtual Machine must support threads. • Although this close integration doesn’t exactly make multithreaded programming in Java easy, it does help avoid common programming errors, and keeps the semantics clean. dbcarpen@indiana.edu

  3. Features of Java Threads • Java provides a set of synchronization primitives based on the monitor and condition variable paradigm of C.A.R. Hoare. • Underlying functionality is similar to POSIX threads, for example. • Syntactic extension for threads is (deceptively?) small: • synchronized attribute on methods. • synchronized statement. • volatile keyword. • Other thread management and synchronization captured in the Thread class and related classes. • But the presence of threads has a more wide-ranging effect on the language specification and JVM implementation. • e.g., the Java memory model. dbcarpen@indiana.edu

  4. Contents of this Lecture Set • Introduction to Java Threads. • Mutual Exclusion. • Synchronization between Java Threads using wait() and notify(). • Other features of Java Threads. • Shared Memory Parallel Computing with Java Threads • We review select parallel applications and benchmarks of Java on SMPs from the recent literature. • Special Topic: JOMP • JOMP is a prototype implementation of the OpenMP standard for Java. • Suggested Exercises dbcarpen@indiana.edu

  5. Java Thread Basics dbcarpen@indiana.edu

  6. Creating New Threads in a Java Program • Any Java thread of execution is associated with an instance of the Thread class. Before starting a new thread, you must create a new instance of this class. • The Java Thread class implements the interface Runnable. So every Thread instance has a method: public void run() { . . . } • When the thread is started, the code executed in the new thread is the body of the run() method. • Generally speaking the new thread ends when this method returns. dbcarpen@indiana.edu

  7. Making Thread Instances • There are two ways to create a thread instance (and define the thread run() method). Choose at your convenience: • Extend the Thread class and override the run() method, e.g.:
  class MyThread extends Thread {
      public void run() {
          System.out.println("Hello from another thread") ;
      }
  }
  . . .
  Thread thread = new MyThread() ;
  • Create a separate Runnable object and pass it to the Thread constructor, e.g.:
  class MyRunnable implements Runnable {
      public void run() {
          System.out.println("Hello from another thread") ;
      }
  }
  . . .
  Thread thread = new Thread(new MyRunnable()) ;
  dbcarpen@indiana.edu

  8. Starting a Thread • Creating the Thread instance does not in itself start the thread running. • To do that you must call the start() method on the new instance: thread.start() ; This operation causes the run() method to start executing concurrently with the original thread. • In our example the new thread will print the message “Hello from another thread” to standard output, then immediately terminate. • You can only call the start() method once on any Thread instance. Trying to “restart” a thread causes an exception to be thrown. dbcarpen@indiana.edu

  9. Example: Multiple Threads
  class MyThread extends Thread {

      MyThread(int id) { this.id = id ; }

      public void run() {
          System.out.println("Hello from thread " + id) ;
      }

      private int id ;
  }
  . . .
  Thread [] threads = new Thread [p] ;

  for(int i = 0 ; i < p ; i++)
      threads [i] = new MyThread(i) ;

  for(int i = 0 ; i < p ; i++)
      threads [i].start() ;
  dbcarpen@indiana.edu

  10. Remarks • This is one way of creating and starting p new threads to run concurrently. • The output might be something like (for p = 4):
  Hello from thread 3
  Hello from thread 0
  Hello from thread 2
  Hello from thread 1
  Of course there is no guarantee of order (or atomicity) of outputs, because the threads are concurrent. • One might worry about the efficiency of this approach for large numbers of threads (massive parallelism). dbcarpen@indiana.edu

  11. JVM Termination and Daemon Threads • When a Java application is started, the main() method of the application is executed in a singled-out thread called the main thread. • In the simplest case—if the main method never creates any new threads—the JVM keeps running until the main() method completes (and the main thread terminates). • If the JVM was started with the java command, the command finishes. • If the main() method creates new threads, then by default the JVM terminates when all user-created threads have terminated. • More generally there are system threads executing in the background, concurrent with the user threads (e.g. threads might be associated with garbage collection). These threads are marked as daemon threads—meaning just that they don’t have the property of “keeping the JVM alive”. So, more strictly, the JVM terminates when all non-daemon threads terminate. • Ordinary user threads can create daemon threads by applying the setDaemon() method to the thread instance before starting it. dbcarpen@indiana.edu
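
  As a concrete illustration (not taken from the slides), here is a minimal sketch of marking a helper thread as a daemon before starting it:

  Thread housekeeping = new Thread(new Runnable() {
      public void run() {
          while (true) {
              // ... periodic background work would go here ...
              try {
                  Thread.sleep(1000);        // pause between iterations
              } catch (InterruptedException e) {
                  return;                    // give up if interrupted
              }
          }
      }
  });
  housekeeping.setDaemon(true);              // must be called before start()
  housekeeping.start();                      // the JVM may now exit even while this thread runs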

  12. Mutual Exclusion dbcarpen@indiana.edu

  13. Avoiding Interference • In any non-trivial multithreaded (or shared-memory-parallel) program, interference between threads is an issue. • Generally interference (or a race condition) occurs if two threads are trying to do operations on the same variables at the same time. This often results in corrupt data. • But not always. It depends on the exact interleaving of instructions. This non-determinism is the worst feature of race conditions. • A popular solution is to provide some kind of lock primitive. Only one thread can acquire a particular lock at any particular time. The concurrent program can then be written so that operations on a given group of variables are only ever performed by threads that hold the lock associated with that group. • In POSIX threads, for example, the lock objects are called mutexes. dbcarpen@indiana.edu

  14. Example use of a Mutex (in C)
  Thread A: pthread_mutex_lock(&my_mutex) ;
  Thread A: /* critical region */
  Thread A: tmp1 = count ;
  Thread A: count = tmp1 + 1 ;
  Thread B: pthread_mutex_lock(&my_mutex) ;      /* blocked: A holds the lock */
  Thread A: pthread_mutex_unlock(&my_mutex) ;
  Thread B: /* critical region */
  Thread B: tmp2 = count ;
  Thread B: count = tmp2 - 1 ;
  Thread B: pthread_mutex_unlock(&my_mutex) ;
  dbcarpen@indiana.edu

  15. Pthreads-style mutexes • In POSIX threads, a mutex is allocated then initialized with pthread_mutex_init(). • Once a mutex is initialized, a thread can acquire a lock on it by calling e.g. pthread_mutex_lock(). While it holds the lock it performs some update on the variables guarded by the lock (critical region). Then the thread calls pthread_mutex_unlock() to release the lock. • Other threads that try to call pthread_mutex_lock() while the critical region is being executed are blocked until the first thread releases the lock. • This is fine, but opportunities for error include: • There is no built-in association between the lock object (mutex) and the set of variables it guards—it is up to the program to maintain a consistent association. • If you forget to call pthread_mutex_unlock() after pthread_mutex_lock(), deadlocks will occur. dbcarpen@indiana.edu

  16. Monitors • Java addresses these problems by adopting a version of the monitors proposed by C.A.R. Hoare. • Every Java object is created with its own lock (and every lock is associated with an object—there is no way to create an isolated mutex). In Java this lock is often called the monitor lock. • Methods of a class can be declared to be synchronized. • The object’s lock is acquired on entry to a synchronized method, and released on exit from the method. • Synchronized static methods need slightly different treatment. • Assuming methods generally modify the fields (instance variables) of the objects they are called on, this leads to a natural and systematic association between locks and the variables they guard: viz. a lock guards the instance variables of the object it is attached to. • The critical region becomes the body of the synchronized method. dbcarpen@indiana.edu

  17. Example use of Synchronized Methods
  Thread A: … call to counter.increment() …
  Thread A: // body of synchronized method
  Thread A: tmp1 = count ;
  Thread A: count = tmp1 + 1 ;
  Thread B: … call to counter.decrement() …      // blocked: A holds the monitor lock
  Thread A: … counter.increment() returns …
  Thread B: // body of synchronized method
  Thread B: tmp2 = count ;
  Thread B: count = tmp2 - 1 ;
  Thread B: … counter.decrement() returns …
  dbcarpen@indiana.edu
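
  The counter object referred to above could be declared along these lines (a sketch consistent with the slide, not code taken from it):

  public class Counter {

      private int count = 0;

      public synchronized void increment() {
          int tmp1 = count;        // read the shared field
          count = tmp1 + 1;        // write back, still holding the lock
      }

      public synchronized void decrement() {
          int tmp2 = count;
          count = tmp2 - 1;
      }

      public synchronized int value() {
          return count;
      }
  }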

  18. Caveats • This approach helps to encourage good practices, and make multithreaded Java programs less error-prone than, say, multithreaded C programs. • But it isn’t magic: • It still depends on correct identification of the critical regions, to avoid race conditions. • The natural association between the lock of the object and its fields relies on the programmer following conventional patterns of object oriented programming (which the language encourages but doesn’t enforce). • By using the synchronized construct (see later), programs can break this association altogether. • There are plenty more insidious ways to introduce deadlocks, besides accidentally forgetting to release a lock! • Concurrent programming is hard, and if you start with the assumption Java somehow makes concurrent programming “easy”, you are probably going to write some broken programs! dbcarpen@indiana.edu

  19. Example: A Simple Queue
  public class SimpleQueue {

      private Node front, back ;

      public synchronized void add(Object data) {
          if (front != null) {
              back.next = new Node(data) ;
              back = back.next ;
          } else {
              front = new Node(data) ;
              back = front ;
          }
      }

      public synchronized Object rem() {
          Object result = null ;
          if (front != null) {
              result = front.data ;
              front = front.next ;
          }
          return result ;
      }
  }
  dbcarpen@indiana.edu

  20. Remarks • This queue is implemented as a linked list with a front pointer and a back pointer. • The method add() adds a node to the back of the list; the method rem() removes a node from the front of the list. • The Node class just has a data field (type Object) and a next field (type Node). • The rem() method immediately returns null when the queue is empty. • The following slide gives an example of what could go wrong without mutual exclusion. It assumes two threads concurrently add nodes to the queue. • In the initial state, Z is the last item in the queue. In the final state, the X node is orphaned, and the back pointer is null. dbcarpen@indiana.edu

  21. The Need for Synchronized Methods [Diagram: without synchronization, Thread A (add(X)) and Thread B (add(Y)) interleave. Both execute back.next = new Node(…) against the same last node Z before either updates back, so B's node Y overwrites the link to X; after both threads then execute back = back.next, the X node is orphaned and back ends up null. Corrupt data structure!] dbcarpen@indiana.edu

  22. The synchronized construct • The keyword synchronized also appears in the synchronized statement, which has syntax like: synchronized (object) { … critical region … } • Here object is a reference to any object. The synchronized statement first acquires the lock on this object, then executes the critical region, then releases the lock. • Typically you might use this (the current object) as the lock object, somewhere inside a non-synchronized method, when the critical region is smaller than the whole method body (see the sketch below). • In general, though, the synchronized statement allows you to use the lock in any object to guard any code. dbcarpen@indiana.edu
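
  For instance, only the statements that touch shared state need to hold the lock (a sketch: the field total and the method transform() are illustrative, not from the slides):

  private double total;                        // shared field guarded by this object's lock

  public double addSample(double x) {
      double weighted = transform(x);          // purely thread-local work: no lock needed

      synchronized (this) {                    // critical region smaller than the method
          total += weighted;
          return total;
      }
  }

  private double transform(double x) {         // placeholder for some expensive computation
      return x * x;
  }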

  23. Performance Overheads of synchronized • Acquiring locks obviously introduces an overhead in execution of synchronized methods. See, for example: “Performance Limitations of the Java Core Libraries”, Allan Heydon and Marc Najork (Compaq), Proceedings of ACM 1999 Java Grande Conference. • Many of the utility classes in the Java platform (e.g. Vector, etc) were originally specified to have synchronized methods, to make them safe for the multithreaded environment. • This is now generally thought to have been a mistake: newer replacement classes (e.g. ArrayList) usually don’t have synchronized methods—it is left to the user to provide the synchronization, as needed, e.g. through wrapper classes. dbcarpen@indiana.edu
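
  For example, the standard Collections.synchronizedList() wrapper turns an unsynchronized ArrayList into a thread-safe view when that is actually required:

  import java.util.ArrayList;
  import java.util.Collections;
  import java.util.List;

  public class SyncListDemo {
      public static void main(String[] args) {
          List<String> shared = Collections.synchronizedList(new ArrayList<String>());
          shared.add("hello");                 // individual method calls are synchronized

          synchronized (shared) {              // iteration still needs an explicit lock
              for (String s : shared) {
                  System.out.println(s);
              }
          }
      }
  }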

  24. General Synchronization dbcarpen@indiana.edu

  25. Beyond Mutual Exclusion • The mutual exclusion provided by synchronized methods and statements is an important special sort of synchronization. • But there are other interesting forms of synchronization between threads. Mutual exclusion by itself is not enough to implement these more general sorts of thread interaction (not efficiently, at least). • POSIX threads, for example, provides a second kind of synchronization object called a condition variable to implement more general inter-thread synchronization. • In Java, condition variables (like locks) are implicit in the definition of objects: every object effectively has a single condition variable associated with it. dbcarpen@indiana.edu

  26. A Motivating Example • Consider the simple queue from the previous example. • If we try to remove an item from the front of the queue when the queue is empty, SimpleQueue was specified to just return null. • This is reasonable if our queue is just meant as a data structure buried somewhere in an algorithm. But what if the queue is a message buffer in a communication system? • In that case, if the queue is empty, it may be more natural for the “remove” operation to block until some other thread added a message to the queue. dbcarpen@indiana.edu

  27. Busy Waiting • One approach would be to add a method that polls the queue until data is ready:
  public Object get() {
      while(true) {
          Object result = rem() ;
          if (result != null)
              return result ;
      }
  }
  (Note that this polling method must not itself be synchronized: if it held the object's lock while spinning, no other thread could ever enter add() to supply the data.) • This works, but it may be inefficient to keep doing the basic rem() operation in a tight loop, if these machine cycles could be used by other threads. • This isn’t clear cut: sometimes busy waiting is the most efficient approach. • Another possibility is to put a sleep() operation in the loop, to deschedule the thread for some fixed interval between polling operations (see the sketch below). But then we lose responsiveness. dbcarpen@indiana.edu
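
  A sleep-based polling variant might look like the following sketch (the 10 ms interval is arbitrary):

  public Object get() throws InterruptedException {
      while (true) {
          Object result = rem();
          if (result != null) {
              return result;
          }
          Thread.sleep(10);                    // deschedule between polls; costs responsiveness
      }
  }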

  28. wait() and notify() • In general a more elegant approach is to use the wait() and notify() family of methods. These are defined in the Java Object class. • Typically a call to a wait() method puts the calling thread to sleep until another thread wakes it up again by calling a notify() method. • In our example, if the queue is currently empty, the get() method would invoke wait(). This causes the get() operation to block. Later when another thread calls add(), putting data on the queue, the add() method invokes notify() to wake up the “sleeping” thread. The get() method can then return. dbcarpen@indiana.edu

  29. A Simplified Example
  public class Semaphore {

      int s ;

      public Semaphore(int s) { this.s = s ; }

      public synchronized void add() {
          s++ ;
          notify() ;
      }

      public synchronized void get() throws InterruptedException {
          while(s == 0)
              wait() ;
          s-- ;
      }
  }
  dbcarpen@indiana.edu

  30. Remarks I • Rather than a linked list we have a simple counter, which is required always to be non-negative. • add() increments the counter. • get() decrements the counter, but if the counter was zero it blocks until another thread increments the counter. • The data structures are simplified, but the synchronization features used here are essentially identical to what would be needed in a blocking queue (left as an exercise). • You may recognize this as an implementation of a classical semaphore—an important synchronization primitive in its own right. dbcarpen@indiana.edu

  31. Remarks II • wait() and notify() must be used inside synchronized methods of the object they are applied to. • The wait() operation “pauses” the thread that calls it. It also releases the lock which the thread holds on the object (for the duration of the wait() call: the lock will be claimed again before continuing, after the pause). • Several threads can wait() simultaneously on the same object. • If any threads are waiting on an object, the notify() method “wakes up” exactly one of those threads. If no threads are waiting on the object, notify() does nothing. • A wait() method may throw an InterruptedException (rethrown by get() in the example). This will be discussed later. • Although the logic in the example doesn’t strictly require it, universal lore has it that one should always put a wait() call in a loop, in case the condition that caused the thread to sleep has not been resolved when the wait() returns (a programmer flouting this rule might use if in place of while). dbcarpen@indiana.edu

  32. Another Example
  public class Barrier {

      private int n, generation = 0, count = 0 ;

      public Barrier(int n) { this.n = n ; }

      public synchronized void synch() throws InterruptedException {
          int genNum = generation ;
          count++ ;
          if(count == n) {
              count = 0 ;
              generation++ ;
              notifyAll() ;
          } else
              while(generation == genNum)
                  wait() ;
      }
  }
  dbcarpen@indiana.edu

  33. Remarks • This class implements barrier synchronization—an important operation in shared memory parallel programming. • It synchronizes n threads: when n threads make calls to synch(), the first n-1 block until the last one has entered the barrier. • The method notifyAll() generalizes notify(). It wakes up all threads currently waiting on this object. • Many authorities consider use of notifyAll() to be “safer” than notify(), and recommend always to use notifyAll(). • In the example, the generation number labels the current, collective barrier operation: it is only really needed to control the while loop round wait(). • And this loop is only really needed to conform to the standard pattern of wait()-usage, mentioned earlier. dbcarpen@indiana.edu
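
  A typical use of this Barrier class might look like the following sketch, in which p worker threads synchronize between two phases of a computation (the value of p and the phase bodies are illustrative):

  final int p = 4;                             // number of worker threads (arbitrary here)
  final Barrier barrier = new Barrier(p);

  for (int i = 0; i < p; i++) {
      new Thread(new Runnable() {
          public void run() {
              try {
                  // ... phase 1 of this worker's computation ...
                  barrier.synch();             // wait until all p workers reach this point
                  // ... phase 2, which may read results written in phase 1 ...
              } catch (InterruptedException e) {
                  return;
              }
          }
      }).start();
  }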

  34. Concluding Remarks on Synchronization • We illustrated with a couple of simple examples that wait() and notify() allow various interesting patterns of thread synchronization (or thread communication) to be implemented. • In some sense these primitives are sufficient to implement “general” concurrent programming—any pattern of thread synchronization can be implemented in terms of these primitives. • For example you can easily implement message passing between threads (left as an exercise…) • This doesn’t mean these are necessarily the last word in synchronization: e.g. for scalable parallel processing one would like a primitive barrier operation more efficient than the O(n) implementation given above. dbcarpen@indiana.edu

  35. Other Features of Java Threads dbcarpen@indiana.edu

  36. Other Features • This lecture isn’t supposed to cover all the details—for those you should look at the spec! • But we mention here a few other features you may find useful. dbcarpen@indiana.edu

  37. Join Operations • The Thread API has a family of join() operations. These implement another simple but useful form of synchronization, by which the current thread can simply wait for another thread to terminate, e.g.:
  Thread child = new MyThread() ;
  child.start() ;
  … Do something in current thread …
  child.join() ;   // wait for child thread to finish
  dbcarpen@indiana.edu
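
  Note that join() can throw InterruptedException, so complete code must handle it; for example, the p threads created in the earlier example could be joined like this:

  for (int i = 0; i < p; i++) {
      try {
          threads[i].join();                   // block until thread i terminates
      } catch (InterruptedException e) {
          // the waiting thread was itself interrupted; handle as appropriate
      }
  }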

  38. Priority and Name • Threads have properties priority and name, which can be defined by suitable setter methods, before starting the thread, and accessed by getter methods. dbcarpen@indiana.edu
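
  For example (the name "worker-1" is arbitrary, and the priority is only a hint to the scheduler):

  Thread worker = new MyThread();
  worker.setName("worker-1");                  // shows up in stack traces and debuggers
  worker.setPriority(Thread.MAX_PRIORITY);     // scheduling hint, not a guarantee
  worker.start();

  System.out.println(worker.getName() + " has priority " + worker.getPriority());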

  39. Sleeping • You can cause a thread to sleep for a fixed interval using the sleep() methods. • This operation is distinct from and less powerful than wait(). It is not possible for another thread to prematurely wake up a thread paused using sleep(). • If you want to sleep for a fixed interval, but allow another thread to wake you beforehand if necessary, use the variants of wait() with timeouts instead. dbcarpen@indiana.edu

  40. Deprecated Thread Methods • There is a family of methods of the Thread class that was originally supposed to provide external “life-or-death” control over threads. • These were never reliable, and they are now officially “deprecated”. You should avoid them. • If you have a need to interrupt a running thread, you should explicitly write the thread in such a way that it listens for interrupt conditions (see the next slide). • If you want to run an arbitrary thread in such a way that it can be killed and cleaned up by an external agent, you probably need to fork a separate process to run it. • The most interesting deprecated methods are: • stop() • destroy() • suspend() • resume() dbcarpen@indiana.edu

  41. Interrupting Threads • Calling the method interrupt() on a thread instance requests cancellation of the thread execution. • It does this in an advisory way: the code for the interrupted thread must be written to explicitly test whether it has been interrupted, e.g.:
  public void run() {
      while(!interrupted())
          … do something …
  }
  Here interrupted() is a static method of the Thread class which determines whether the current thread has been interrupted. • If the interrupted thread is executing a blocking operation like wait() or sleep(), the operation may end with an InterruptedException. An interruptible thread should catch this exception and terminate itself. • Clearly this mechanism depends wholly on suitable implementation of the thread body. The programmer must decide at the outset whether it is important that a particular thread be responsive to interrupts—often it isn’t. dbcarpen@indiana.edu
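
  Putting these pieces together, an interruptible thread body might be structured like this sketch (the doWork() method is an illustrative placeholder):

  class Worker extends Thread {

      public void run() {
          try {
              while (!interrupted()) {         // poll the interrupt status
                  doWork();                    // one unit of work
                  sleep(100);                  // a blocking call that responds to interrupt()
              }
          } catch (InterruptedException e) {
              // interrupted while sleeping (or waiting): fall through and terminate
          }
          // clean up here; returning from run() ends the thread
      }

      private void doWork() {
          // placeholder for the thread's real work
      }
  }

  Another thread requests cancellation simply by calling worker.interrupt() on the Worker instance.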

  42. Thread Groups • There is a mechanism for organizing threads into groups. This may be useful for imposing security restrictions on which threads can interrupt other threads, for example. • Check out the API of the ThreadGroup class if you think this may be important for your application. dbcarpen@indiana.edu

  43. Thread-Local Variables • An instance of the ThreadLocal class stores a value that has a different, local copy in every thread. • For example, suppose you implemented the MPI message-passing interface, mapping each MPI process to a Java thread. You decide that the “world communicator” should be a static variable of the Comm class. But then how do you get the rank() method to return a different process ID for each thread, so that this kind of thing works? int me = Comm.WORLD.rank() ; • One approach would be to store the process ID in the communicator object in a thread-local variable. • Another approach would be to use a hash map keyed by Thread.currentThread(). • Check the API of the ThreadLocal class for details. dbcarpen@indiana.edu
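
  A sketch of the thread-local approach (Comm, WORLD, and rank() are the hypothetical MPI-style API from the slide; setRank() is an invented initialization hook):

  public class Comm {

      public static final Comm WORLD = new Comm();

      // Each thread sees its own copy of the value stored here.
      private final ThreadLocal<Integer> myRank = new ThreadLocal<Integer>();

      // Called once in each "process" thread during start-up.
      public void setRank(int rank) {
          myRank.set(rank);
      }

      // Returns the ID of whichever thread makes the call
      // (assumes setRank() was called earlier in that thread).
      public int rank() {
          return myRank.get();
      }
  }

  With this in place, int me = Comm.WORLD.rank() ; returns the rank that was assigned to the calling thread.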

  44. Volatile Variables • Suppose the value of a variable must be accessible by multiple threads, but for some reason you decided you can't afford the overheads of synchronized methods or the synchronized statement. • Presumably effects of race conditions have been proved innocuous. • In general Java does not guarantee that—in the absence of lock operations to force synchronization of memory—the value of a variable written by one thread will be visible to other threads. • But if you declare a field to be volatile: volatile int myVariable ; the JVM is supposed to synchronize the value of any thread-local (cached) copy of the variable with central storage (making it visible to all threads) every time the variable is updated. • The exact semantics of volatile variables, and the Java memory model in general, is still controversial, see for example: “A New Approach to the Semantics of Multithreaded Java”, Jeremy Manson and William Pugh, http://www.cs.umd.edu/~pugh/java/memoryModel/ dbcarpen@indiana.edu
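
  One common, relatively safe use is a stop flag that one thread writes and another only reads (a sketch):

  class Loop implements Runnable {

      private volatile boolean done = false;   // writes become visible to other threads

      public void requestStop() {              // called from some other thread
          done = true;
      }

      public void run() {
          while (!done) {
              // ... do one iteration of work ...
          }
      }
  }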

  45. Thread-based Parallel Applications and Benchmarks dbcarpen@indiana.edu

  46. Threads on Symmetric Multiprocessors • Most modern implementations of the Java Virtual Machine will map Java threads into native threads of the underlying operating system. • For example these may be POSIX threads. • On multiprocessor architectures with shared memory, these threads can exploit multiple available processors. • Hence it is possible to do true parallel programming using Java threads within a single JVM. dbcarpen@indiana.edu

  47. Select Application Benchmark Results • We present some results, borrowed from the literature in this area. • Two codes are described in “High-Performance Java Codes for Computational Fluid Dynamics” C. Riley, S. Chatterjee, and R. Biswas, ACM 2001 Java Grande/ISCOPE • LAURA is a finite-volume flow solver for multiblock, structured grids. • 2D_TAG is a triangular adaptive grid generation tool. dbcarpen@indiana.edu

  48. Parallel Speedup of LAURA Code dbcarpen@indiana.edu

  49. Parallel Speedup of 2D_TAG Code dbcarpen@indiana.edu

  50. Remarks • LAURA speedups are generally fair, and are outstanding with the Jalapeno JVM on PowerPC (unfortunately this is the only JVM that isn’t freely available.) • LAURA is “regular”, and the parallelization strategy needs little or no locking. • 2D_TAG results are only reported for PowerPC (presumably worse on other platforms). This code is very dynamic, very OO, and the naïve version uses many synchronized methods, hence poor performance. • The two optimized versions cut down the amount of locking by partitioning the grid, and only using locks in accesses to edge regions. • As noted earlier, synchronization in Java is quite expensive. dbcarpen@indiana.edu
