
Introduction to PETSc

Presentation Transcript


  1. Introduction to PETSc VIGRE Seminar, Wednesday, November 8, 2006

  2–6. Parallel Computing How (basically) does it work? • Assign each processor a number • The same program goes to all • Each uses separate memory • They pass information back and forth as necessary
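
In code, that model is only a few lines of MPI. The following is a minimal sketch added for illustration (it is not from the slides; the file name is an assumption):

      /* hello_mpi.c -- minimal sketch of the model above (not from the slides) */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
        int rank, size, token;

        MPI_Init(&argc, &argv);                  /* the same program starts on every processor */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);    /* each processor is assigned a number (rank) */
        MPI_Comm_size(MPI_COMM_WORLD, &size);    /* total number of processors                 */

        token = 100 + rank;                      /* each rank works in its own, separate memory */

        if (size > 1) {
          if (rank == 0)                         /* information is passed back and forth        */
            MPI_Send(&token, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
          else if (rank == 1) {
            MPI_Recv(&token, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", token);
          }
        }

        MPI_Finalize();
        return 0;
      }

Compiled with an MPI wrapper (e.g. mpicc hello_mpi.c -o hello_mpi) and launched with mpirun, every processor runs the same executable but follows its own path through it.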

  7–12. Parallel Computing Example 1: Matrix-Vector Product [worked figures not captured in this transcript]
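
Since the figures do not survive, the sketch below shows one common decomposition for y = Ax: each processor owns a contiguous block of rows of A plus a full copy of x, computes its piece of y, and the pieces are then exchanged. The decomposition is an assumption made for illustration, not necessarily the one the slides drew:

      /* matvec_rows.c -- sketch: y = A x with A distributed by rows (assumed decomposition) */
      #include <stdio.h>
      #include <stdlib.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
        int    rank, size, i, j, n = 8, nlocal;
        double *Alocal, *x, *ylocal, *y;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        nlocal = n / size;                            /* rows owned by this processor (assumes size divides n) */
        Alocal = malloc(nlocal * n * sizeof(double)); /* my block of rows, stored densely */
        x      = malloc(n * sizeof(double));          /* every processor holds all of x   */
        ylocal = malloc(nlocal * sizeof(double));     /* my piece of the result           */
        y      = malloc(n * sizeof(double));          /* the assembled result             */

        for (i = 0; i < nlocal; i++)                  /* sample data: A = 2*I, x = all ones */
          for (j = 0; j < n; j++)
            Alocal[i*n + j] = (rank*nlocal + i == j) ? 2.0 : 0.0;
        for (j = 0; j < n; j++) x[j] = 1.0;

        for (i = 0; i < nlocal; i++) {                /* my rows times the whole vector     */
          ylocal[i] = 0.0;
          for (j = 0; j < n; j++) ylocal[i] += Alocal[i*n + j] * x[j];
        }

        /* every processor passes its piece of y to all the others */
        MPI_Allgather(ylocal, nlocal, MPI_DOUBLE, y, nlocal, MPI_DOUBLE, MPI_COMM_WORLD);

        if (rank == 0)
          for (i = 0; i < n; i++) printf("y[%d] = %g\n", i, y[i]);

        free(Alocal); free(x); free(ylocal); free(y);
        MPI_Finalize();
        return 0;
      }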

  13–15. Parallel Computing Example 2: Matrix-Vector Product [worked figures not captured in this transcript]

  16–19. Parallel Computing Example 3: Matrix-Matrix Product [worked figures not captured in this transcript]

  20–21. Parallel Computing Example 4: Block Diagonal Product [worked figures not captured in this transcript]

  22–25. Parallel Computing When is it worth it to parallelize? • There is a time cost associated with passing messages • The amount of message passing depends on the problem and the program (algorithm) • Therefore, the benefits depend more on the structure of the problem and the program than on the size/speed of the parallel network (diminishing returns)

  26–28. Parallel Networks How do I use multiple processors? • This depends on the network, but most networks use some variation of PBS (a job scheduler) together with mpirun or mpiexec • A parallel program needs to be submitted as a batch job

  29. Parallel Networks • Suppose I have a program myprog that gets its data from data.dat and that I call in the following fashion when using only one processor:
      ./myprog -f data.dat
  To run it in parallel, I would write a file myprog.pbs that looks like the following:

  30. Parallel Networks These are the headers that tell the job scheduler how to handle your job:
      #PBS -q compute                               (name of the processing queue; not necessary on all networks)
      #PBS -N myprog                                (the name of the job)
      #PBS -l nodes=2:ppn=1,walltime=00:10:00       (number of nodes and processes per node, and the maximum time to allow the job to run)
      #PBS -o /home/me/mydir/myprog.out             (where the output of the program should be written)
      #PBS -e /home/me/mydir/myprog.err             (where the error stream should be written)

  31. Parallel Networks Although what follows depends on the MPI software that the network runs, it should look something like this:
      cd $PBS_O_WORKDIR                                               (makes the processors run the program in the directory where myprog.pbs is saved)
      mpirun -machinefile $PBS_NODEFILE -np 2 myprog -f mydata.dat    (tells the MPI software which nodes it may use and how many processes to start; note that the command line arguments follow as usual)
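
Assembled into a single file, the myprog.pbs described on slides 29–31 reads as follows (queue name, paths, and resource limits are the slides' illustrative values):

      #PBS -q compute
      #PBS -N myprog
      #PBS -l nodes=2:ppn=1,walltime=00:10:00
      #PBS -o /home/me/mydir/myprog.out
      #PBS -e /home/me/mydir/myprog.err

      cd $PBS_O_WORKDIR
      mpirun -machinefile $PBS_NODEFILE -np 2 myprog -f mydata.dat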

  32–33. Parallel Networks • Once the .pbs file is written, it can be submitted to the job scheduler with qsub: qsub myprog.pbs • You can check to see if your job is running with the command qstat.

  34. Parallel Networks • Some systems (but not all) will allow you to simulate running your program in parallel on one processor, which is useful for debugging: mpirun -np 3 myprog -f mydata.dat

  35–38. Parallel Networks What parallel systems are available? • RTC: Rice Terascale Cluster: 244 processors. • ADA: Cray XD1: 632 processors. • caamster: CAAM department exclusive: 8(?) processors.

  39–42. PETSc What do I use PETSc for? • File I/O with “minimal” understanding of MPI • Vector- and matrix-based data management (in particular: sparse) • Linear algebra routines familiar from the famous serial packages
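
For a feel of what those bullets look like in code, here is a minimal vector example, written as a sketch against the 2.3.x calling sequences used elsewhere in this seminar (later PETSc releases change some signatures, e.g. VecDestroy takes a pointer); the file name and sizes are assumptions:

      /* vec_demo.c -- minimal sketch of PETSc vector usage (assumes 2.3.x-style calling sequences) */
      #include "petscvec.h"

      int main(int argc, char **argv)
      {
        Vec            x;
        PetscInt       n   = 100;
        PetscScalar    one = 1.0, dot;
        PetscErrorCode ierr;

        ierr = PetscInitialize(&argc, &argv, PETSC_NULL, PETSC_NULL);CHKERRQ(ierr); /* wraps MPI_Init */

        ierr = VecCreate(PETSC_COMM_WORLD, &x);CHKERRQ(ierr);   /* one vector shared by all processes   */
        ierr = VecSetSizes(x, PETSC_DECIDE, n);CHKERRQ(ierr);   /* PETSc decides who owns which entries */
        ierr = VecSetFromOptions(x);CHKERRQ(ierr);              /* storage type chosen at run time      */

        ierr = VecSet(x, one);CHKERRQ(ierr);                    /* set every entry to 1                 */
        ierr = VecDot(x, x, &dot);CHKERRQ(ierr);                /* a parallel reduction under the hood  */
        ierr = PetscPrintf(PETSC_COMM_WORLD, "x . x = %g\n", (double)PetscRealPart(dot));CHKERRQ(ierr);

        ierr = VecDestroy(x);CHKERRQ(ierr);                     /* VecDestroy(&x) in newer releases     */
        ierr = PetscFinalize();CHKERRQ(ierr);
        return 0;
      }

The same program runs unchanged on one processor or many; the number of processes is chosen at mpirun time, as on slide 31.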

  43–44. PETSc • At the moment, ada and caamster (and harvey) have PETSc installed • You can download and install PETSc on your own machine (requires Cygwin on Windows) for educational and debugging purposes

  45–46. PETSc • PETSc builds on existing software (BLAS and LAPACK); which implementations to use can be specified at configuration • Has a (slower) debugging configuration and a (faster, less error-checked) optimized configuration
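
Both choices on these slides are made when PETSc is configured. A hedged example, using option names from the 2.3.x configure system (the exact flags and the library path are assumptions):

      ./config/configure.py --with-blas-lapack-dir=/usr/lib --with-debugging=1   (debugging build)
      ./config/configure.py --with-blas-lapack-dir=/usr/lib --with-debugging=0   (optimized build)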

  47–48. PETSc • The installation comes with documentation, examples, and manual pages. • The biggest part of learning how to use PETSc is learning how to use the manual pages.

  49. PETSc • It is extremely useful to have an environment variable PETSC_DIR in your shell of choice, which gives the path to the PETSc installation, e.g.:
      PETSC_DIR=/usr/local/src/petsc-2.3.1-p13/
      export PETSC_DIR

  50. PETSc Makefile
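
The content of this final slide is not captured in the transcript. For orientation, a typical makefile for a PETSc 2.3.x program looks roughly like the sketch below; it assumes PETSC_DIR is set as on slide 49 and a hypothetical source file myprog.c (the actual slide may have differed):

      # Hedged sketch of a PETSc 2.3.x-style makefile (not the slide's actual content).
      # A PETSC_ARCH variable may also be required, depending on the installation,
      # and recipe lines must begin with a tab.

      include ${PETSC_DIR}/bmake/common/base

      myprog: myprog.o chkopts
      	-${CLINKER} -o myprog myprog.o ${PETSC_LIB}
      	${RM} myprog.o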
