

  1. Epetra Concepts: Data management using Epetra. Michael A. Heroux, Sandia National Laboratories. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AC04-94AL85000.

  2. Outline • Creating Objects. • Performance Optimizations. • Petra Object Model. • Parallel Data Redistribution.

  3. A Simple Epetra/AztecOO Program

  // Header files omitted…
  int main(int argc, char *argv[]) {
    MPI_Init(&argc,&argv);                // Initialize MPI
    Epetra_MpiComm Comm(MPI_COMM_WORLD);  // (serial alternative: Epetra_SerialComm Comm;)

    // ***** Map puts same number of equations on each PE *****
    int NumMyElements = 1000;
    Epetra_Map Map(-1, NumMyElements, 0, Comm);
    int NumGlobalElements = Map.NumGlobalElements();

    // ***** Create x and b vectors *****
    Epetra_Vector x(Map);
    Epetra_Vector b(Map);
    b.Random();                           // Fill RHS with random #s

    // ***** Create an Epetra matrix tridiag(-1,2,-1) *****
    Epetra_CrsMatrix A(Copy, Map, 3);
    double negOne = -1.0;
    double posTwo = 2.0;
    for (int i=0; i<NumMyElements; i++) {
      int GlobalRow = A.GRID(i);
      int RowLess1 = GlobalRow - 1;
      int RowPlus1 = GlobalRow + 1;
      if (RowLess1 != -1)
        A.InsertGlobalValues(GlobalRow, 1, &negOne, &RowLess1);
      if (RowPlus1 != NumGlobalElements)
        A.InsertGlobalValues(GlobalRow, 1, &negOne, &RowPlus1);
      A.InsertGlobalValues(GlobalRow, 1, &posTwo, &GlobalRow);
    }
    A.FillComplete();                     // Transform from GIDs to LIDs

    // ***** Create Linear Problem *****
    Epetra_LinearProblem problem(&A, &x, &b);

    // ***** Create/define AztecOO instance, solve *****
    AztecOO solver(problem);
    solver.SetAztecOption(AZ_precond, AZ_Jacobi);
    solver.Iterate(1000, 1.0E-8);

    // ***** Report results, finish *****
    cout << "Solver performed " << solver.NumIters() << " iterations." << endl
         << "Norm of true residual = " << solver.TrueResidual() << endl;
    MPI_Finalize();
    return 0;
  }

  4. Typical Flow of Epetra Object Construction
• Construct Comm: Any number of Comm objects can exist; Comms can be nested (e.g., serial within MPI).
• Construct Map: Maps describe the parallel layout; a map is typically associated with more than one computational object; two maps (source and target) define an export/import object.
• Construct x, b, A: Computational objects; compatibility is assured via a common map.

  5. Trilinos/packages/epetraext/test/inout/ (Part 1)

  EpetraExt::BlockMapToMatrixMarketFile("Test_map.mm", *map,
      "Official EpetraExt test map",
      "This is the official EpetraExt test map generated by the EpetraExt regression tests");
  EpetraExt::RowMatrixToMatrixMarketFile("Test_A.mm", *A,
      "Official EpetraExt test matrix",
      "This is the official EpetraExt test matrix generated by the EpetraExt regression tests");
  EpetraExt::VectorToMatrixMarketFile("Test_x.mm", *x,
      "Official EpetraExt test initial guess",
      "This is the official EpetraExt test initial guess generated by the EpetraExt regression tests");
  EpetraExt::VectorToMatrixMarketFile("Test_xexact.mm", *xexact,
      "Official EpetraExt test exact solution",
      "This is the official EpetraExt test exact solution generated by the EpetraExt regression tests");
  EpetraExt::VectorToMatrixMarketFile("Test_b.mm", *b,
      "Official EpetraExt test right hand side",
      "This is the official EpetraExt test right hand side generated by the EpetraExt regression tests");

  6. Trilinos/packages/epetraext/test/inout/ (Part 2)

  Epetra_Map * map1;
  Epetra_CrsMatrix * A1;
  Epetra_Vector * x1;
  Epetra_Vector * b1;
  Epetra_Vector * xexact1;

  EpetraExt::MatrixMarketFileToMap("Test_map.mm", comm, map1);

  if (map->SameAs(*map1)) {
    if (verbose) cout << "Maps are equal. In/Out works." << endl;
  } else {
    if (verbose) cout << "Maps are not equal. In/Out fails." << endl;
  }

  // Read Matrix:
  EpetraExt::MatrixMarketFileToCrsMatrix("Test_A.mm", *map1, A1);
  // Alternates:
  EpetraExt::MatrixMarketFileToCrsMatrix("Test_A.mm", comm, A1); // Only need comm
  EpetraExt::MatlabFileToCrsMatrix("Test_A.dat", comm, A1);      // Read (row,col,val) triples (no MM header)

  // Read Vectors (equivalent MultiVector versions exist):
  EpetraExt::MatrixMarketFileToVector("Test_x.mm", *map1, x1);
  EpetraExt::MatrixMarketFileToVector("Test_xexact.mm", *map1, xexact1);
  EpetraExt::MatrixMarketFileToVector("Test_b.mm", *map1, b1);

  7. Inout Summary
• Reads from / writes to Matlab, HDF5 or Matrix Market compatible files.
• Works for any distributed map, matrix, vector or multivector.
• If a map is read in on the same number of processors it was written from, the map layout will be preserved.

  8. Epetra Performance Optimization Guide (SAND2005-1668)
• Topics:
  • 3rd Party Libraries: BLAS and LAPACK
  • Epetra_MultiVector Data Layout
  • Epetra_CrsGraph Construction
  • Epetra_CrsMatrix Construction
  • Selecting the Right Sparse Matrix Class
  • Parallel Data Redistribution
  • General Practices
• Tiered approach to practices.

  9. Practice Categories. Each practice falls into one of three categories:
• VSR (Very Strongly Recommended): Practices necessary for Epetra to perform well.
• SR (Strongly Recommended): Practices that are definitely a good thing or that have proved to be valuable.
• R (Recommended): Practices that are probably a good idea.

  10. Epetra Computational Classes
• Epetra contains the base class Epetra_CompObject.
• It is a small class, but the classes that derive from it are exactly those that are the focus of the guide.

  11. Fast BLAS (VSR)
• Link to high-performance BLAS for Epetra_MultiVector, Epetra_VbrMatrix and Epetra_SerialDense performance.
• Options:
  • ATLAS (all platforms).
  • Goto BLAS (PC).
  • vecLib (Mac).
  • Native BLAS on legacy platforms.

  12. Epetra_MultiVector Layout (VSR)
• Create Epetra_MultiVector objects with strided storage.
• Improves block vector updates and block dot products.
• Improves sparse matrix multiplies/solves.
A sketch of strided construction follows.
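A minimal sketch of strided multivector storage, assuming an existing Epetra_Map named Map; the default Epetra_MultiVector constructor already allocates one contiguous, strided block, which ExtractView exposes:

  Epetra_MultiVector V(Map, 5);       // 5 vectors in one contiguous allocation
  double* values;
  int stride;                         // leading dimension of the local block
  V.ExtractView(&values, &stride);    // column j starts at values + j*stride
  bool ok = V.ConstantStride();       // true for this constructor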

  13. Epetra_CrsGraph/Matrix Construction
• Construct Epetra_CrsGraph objects first (VSR):
  • When constructing multiple Epetra_CrsMatrix or Epetra_VbrMatrix objects, much of the overhead in sparse matrix construction is related solely to the graph structure of the matrix.
  • By pre-constructing the Epetra_CrsGraph object, this expense is incurred only once and then amortized over multiple matrix constructions.
  • Note: Even when constructing a single matrix, matrix construction is often done within a nonlinear or time-stepping iteration. In both cases, it is best to construct the Epetra_CrsGraph object once and then reuse it.
• When constructing Epetra_CrsGraph/Matrix objects, carefully manage the nonzero profile and set StaticProfile to true (VSR):
  • Although it is very convenient to use the flexible insertion capabilities of Epetra_CrsGraph/Matrix, the performance and memory cost can be substantial.
  • StaticProfile is an optional argument to the Epetra_CrsGraph/Matrix constructors that forces the constructor to pre-allocate all storage for the graph, using the argument NumIndicesPerRow as a strict upper limit on the number of indices that will be inserted.
• After calling FillComplete(), call OptimizeStorage() [now automatic] (VSR):
  • The OptimizeStorage() method frees memory that is used only when submitting data to the object.
  • The storage that remains is packed for better cache performance due to better spatial locality.
  • If StaticProfile is true, the packing of data is fast and cheap, if needed at all.
  • If StaticProfile is false, the graph data is not contiguously stored, so contiguous space must be allocated for all entries, data copied from the existing arrays, and the old arrays deleted. This can be a substantial memory overhead and could even double the high-water memory mark. However, the performance improvement for matrix operations can range from 20% to a full doubling of performance.
A sketch of graph reuse follows.
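A minimal sketch of graph pre-construction and reuse, assuming an existing Epetra_Map named Map; the index-insertion loop is schematic:

  Epetra_CrsGraph Graph(Copy, Map, 3, true);  // StaticProfile=true: 3 indices/row, preallocated
  // ... Graph.InsertGlobalIndices(rowGID, numIndices, colGIDs) for each local row ...
  Graph.FillComplete();

  // Both matrices share the prebuilt structure; only values are supplied,
  // e.g., via ReplaceGlobalValues()/SumIntoGlobalValues().
  Epetra_CrsMatrix A(Copy, Graph);
  Epetra_CrsMatrix B(Copy, Graph);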

  14. Epetra_CrsGraph/Matrix (cont)
• Use Epetra_FECrsMatrix if you have many repeated indices, then extract the associated Epetra_CrsGraph for subsequent use (VSR):
  • Although there is no Epetra_FECrsGraph class (something we may introduce in the future), it is typically best to use the Epetra_FECrsMatrix class to construct matrices when you have many repeated indices.
  • This is typically the case when forming global stiffness matrices from local element stiffness matrices in a finite element assembly loop.
  • Note: Even though there is no Epetra_FECrsGraph class, once an Epetra_FECrsMatrix (call it myFEMatrix) has been constructed, there is an Epetra_CrsGraph that can be accessed via myFEMatrix.Graph() and used to construct further matrices with the same pattern as myFEMatrix.
• When inserting indices into Epetra_CrsGraph/Matrix objects, avoid large numbers of repeated indices (SR):
  • To reduce time costs, indices inserted into a row are not checked for redundancy at insertion time. Instead, when FillComplete() is called, indices are sorted and redundant indices are removed.
  • Because of this approach, repeated indices increase the memory used by a given row, at least until FillComplete() is called.
• Submit large numbers of graph/matrix indices at once (SR):
  • When inserting indices, submit many entries at once if possible.
  • Ideally, it is best to submit all indices for a given row at once.
A sketch of FE-style assembly follows.
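A minimal sketch of finite-element style assembly with repeated indices, assuming an existing Epetra_Map named Map; numElems and the 2x2 element matrix are hypothetical:

  Epetra_FECrsMatrix K(Copy, Map, 9);
  double elem[4] = { 1.0, -1.0,
                    -1.0,  1.0 };           // 2x2 element stiffness, row-major
  for (int e = 0; e < numElems; ++e) {      // hypothetical element loop
    int nodes[2] = { e, e + 1 };            // neighboring elements repeat GIDs
    K.InsertGlobalValues(2, nodes, elem);   // duplicates merged at assembly
  }
  K.GlobalAssemble();                       // ships off-processor data, calls FillComplete()

  Epetra_CrsMatrix M(Copy, K.Graph());      // reuse the assembled pattern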

  15. Picking a Sparse Matrix Class • Epetra_CrsMatrix: Compressed Row Storage scalar entry matrix. • Epetra_FECrsMatrix: Finite Element Compressed Row Storage matrix. • Epetra_VbrMatrix: Variable Block Row block entry matrix. • Epetra_FEVbrMatrix: Finite Element Variable Block Row matrix. • Epetra_JadMatrix: Supports vectorization of sparse matrix-vector multiplication. • Your own: Use the Epetra_BasicRowMatrix class as a base class. Requires implementation of 4 virtual methods (and probably the SpMV too).

  16. Picking a Sparse Matrix Class (cont)
• Use Epetra_FEVbrMatrix to construct a matrix with dense block entries and repeated summations into block entries (VSR).
• Use Epetra_VbrMatrix to construct a matrix with dense block entries and few repeated submissions (R):
  • Block entries should be of size 4 or larger before Vbr formats are preferable to Crs formats.
• Use Epetra_FECrsMatrix to construct a matrix with scalar entries and repeated summations into entries (VSR):
  • Epetra_FECrsMatrix is designed to handle problems such as finite element, finite volume or finite difference problems where a single degree of freedom is tracked at a single mesh point, and matrix entries are summed into the global stiffness matrix one finite element or control volume cell at a time.
• Use Epetra_CrsMatrix in all other cases (VSR):
  • Epetra_CrsMatrix is the simplest and most natural matrix data structure for people who are used to thinking about sparse matrices. Unless you have one of the above three situations, you should use Epetra_CrsMatrix.
• Use Epetra_RowMatrix to access matrix entries (VSR):
  • If you are writing code to use an existing Epetra sparse matrix object, use the Epetra_RowMatrix interface to access the matrix.
  • This provides compatibility with all of Epetra's sparse matrix classes, and allows users of your code to provide their own custom implementations of Epetra_RowMatrix if needed. (See the sketch below.)
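A minimal sketch of coding against the abstract interface; MaxLocalDiag is a hypothetical helper (assumes <vector>, <cmath>, <algorithm>) that works for any Epetra_RowMatrix implementation:

  double MaxLocalDiag(const Epetra_RowMatrix& A) {
    std::vector<double> vals(A.MaxNumEntries());
    std::vector<int>    inds(A.MaxNumEntries());
    double maxDiag = 0.0;
    for (int i = 0; i < A.NumMyRows(); ++i) {
      int n = 0;
      A.ExtractMyRowCopy(i, A.MaxNumEntries(), n, &vals[0], &inds[0]);
      int rowGID = A.RowMatrixRowMap().GID(i);       // compare global IDs
      for (int k = 0; k < n; ++k)
        if (A.RowMatrixColMap().GID(inds[k]) == rowGID)
          maxDiag = std::max(maxDiag, std::fabs(vals[k]));
    }
    return maxDiag;  // local result; a parallel code would reduce via A.Comm().MaxAll(...)
  }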

  17. Other RowMatrix Adapters
• In addition, there are numerous other implementations of Epetra_RowMatrix provided in other packages. For example:
• Epetra_MsrMatrix:
  • This class is provided within AztecOO.
  • The constructor for Epetra_MsrMatrix takes a single argument, namely an existing AZ_DMSR matrix.
  • Epetra_MsrMatrix does not make a copy of the Aztec matrix. Instead it makes the Aztec matrix act like an Epetra_RowMatrix object.
• Ifpack filters:
  • Ifpack uses the Epetra_RowMatrix interface to provide modified views of existing Epetra_RowMatrix objects.
  • For example, Ifpack_DropFilter creates a new Epetra_RowMatrix from an existing one by dropping all matrix values below a certain tolerance.

  18. Parallel Data Redistribution
• Use Isorropia to balance load (see the sketch below):
  • Any serious use of Epetra for scalable performance must ensure that work and data are balanced across the parallel machine.
  • Isorropia provides an interface to the Zoltan library that can greatly improve the load balance for typical sparse matrix computations.
• Use Epetra_OffsetIndex to improve performance of multiple redistributions:
  • Often the data distribution that is optimal for constructing sparse matrices is not optimal for solving the related system of equations.
  • As a result, the matrix is constructed in one distribution and then redistributed for the solve.
  • Epetra_OffsetIndex precomputes the offsets for the redistribution pattern, making repeated redistributions cheaper.
  • This is particularly important for very sparse (e.g., circuit) matrices.
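A minimal sketch of rebalancing with Isorropia, assuming an assembled Epetra_CrsMatrix A; createBalancedCopy is the convenience entry point (the exact header and overload set depend on the Trilinos version):

  #include "Isorropia_Epetra.hpp"
  // New matrix whose rows are repartitioned by Zoltan to balance the load:
  Epetra_CrsMatrix* balancedA = Isorropia::Epetra::createBalancedCopy(A);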

  19. General Practices
• Check method return codes (see the sketch below):
  • Almost all methods of Epetra classes provide an integer return argument.
  • This argument is set to zero if no errors or warnings occurred.
  • However, a number of performance-sensitive methods will communicate potential performance problems by returning a positive error code.
  • For example, if a matrix insertion routine must re-allocate memory because the allocated row storage is too small, a return code of 1 is set. Since memory reallocation can be expensive, this can be an indication that the user should increase the nonzero profile values (NumEntriesPerRow).
• Compile Epetra with aggressive optimization:
  • Epetra performance can benefit greatly from aggressive compiler optimization settings.
  • In particular, aggressive optimization for Fortran can be especially important. Configuring Epetra (or Epetra as part of Trilinos) with the following compiler flags works well on 32-bit PC platforms with the GCC compilers:
      ../configure CXXFLAGS="-O3" CFLAGS="-O3" \
                   FFLAGS="-O5 -funroll-all-loops -malign-double"
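A minimal sketch of return-code checking, reusing the insertion call from the slide 3 example:

  int err = A.InsertGlobalValues(GlobalRow, 1, &posTwo, &GlobalRow);
  if (err < 0)
    cerr << "Insert failed: error " << err << endl;        // genuine error
  else if (err > 0)
    cerr << "Row storage reallocated (code " << err        // performance warning:
         << "); consider a larger nonzero profile." << endl;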

  20. LAL Foundation: Petra
• Petra provides a "common language" for distributed linear algebra objects (operator, matrix, vector).
• Petra provides distributed matrix and vector services.
• Three implementations are under development.

  21. Petra Object Model
• Performs redistribution of distributed objects (import/export):
  • Parallel permutations.
  • "Ghosting" of values for local computations.
  • Collection of partial results from remote processors.
• Base class for all distributed objects:
  • Performs all communication.
  • Requires Check, Pack, Unpack methods from the derived class.
• Abstract interface for sparse all-to-all communication:
  • Supports construction of a pre-recorded "plan" for data-driven communications.
  • Examples:
    • Gather/scatter of off-processor x/y values when computing y = Ax.
    • Gathering overlap rows for overlapping Schwarz.
    • Redistribution of matrices, vectors, etc.
• Abstract interface to the parallel machine:
  • Shameless mimic of the MPI interface.
  • Keeps the MPI dependence in a single class (through all of Trilinos!).
  • Allows a trivial serial implementation.
  • Opens the door to novel parallel libraries (shmem, UPC, etc.).
• Graph class for structure-only computations:
  • Reusable matrix structure.
  • Pattern-based preconditioners.
  • Pattern-based load balancing tools.
• Basic sparse matrix class:
  • Flexible construction process.
  • Arbitrary entry placement on the parallel machine.
• Describes the layout of distributed objects:
  • Vectors: number of vector entries on each processor and their global IDs.
  • Matrices/graphs: rows/columns managed by a processor.
  • Called "Maps" in Epetra.
• Dense distributed vectors and matrices:
  • Simple local data structure.
  • BLAS-able, LAPACK-able.
  • Ghostable, redistributable.
  • RTOp-able.

  22. Petra Implementations
• Three versions are under development:
• Epetra (Essential Petra):
  • Current production version.
  • Restricted to real, double-precision arithmetic.
  • Uses a stable core subset of C++ (circa 2000).
  • Interfaces accessible to C and Fortran users.
• Tpetra (Templated Petra):
  • Next-generation C++ version.
  • Templated scalar and ordinal fields.
  • Uses namespaces and the STL: improved usability/efficiency.
  • Advanced node architecture, multiprecision support.
• Jpetra (Java Petra):
  • Pure Java; portable to any JVM.
  • Interfaces to Java versions of MPI, LAPACK and BLAS.

  23. Details about Epetra Maps
• Note: The focus here is on Maps (not BlockMaps).
• Getting beyond the standard use case…

  24. 1-to-1 Maps
• 1-to-1 map (definition): A map is 1-to-1 if each GID appears only once in the map (and is therefore associated with only a single processor).
• Certain operations in parallel data repartitioning require 1-to-1 maps. Specifically:
  • The source map of an import must be 1-to-1.
  • The target map of an export must be 1-to-1.
  • The domain map of a 2D object must be 1-to-1.
  • The range map of a 2D object must be 1-to-1.
A sketch of a map that is not 1-to-1 follows.
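A minimal sketch on two processors, assuming an existing Epetra_Comm named Comm; GID 2 appears in both local lists, so the map is not 1-to-1:

  int gids0[] = {0, 1, 2};                  // PE 0 owns GIDs 0, 1, 2
  int gids1[] = {2, 3};                     // PE 1 owns GIDs 2, 3 (2 is shared)
  bool pe0 = (Comm.MyPID() == 0);
  Epetra_Map overlapMap(-1, pe0 ? 3 : 2, pe0 ? gids0 : gids1, 0, Comm);
  bool oneToOne = overlapMap.IsOneToOne();  // false: GID 2 is replicated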

  25. 2D Objects: Four Maps
• Epetra 2D objects:
  • CrsMatrix, FECrsMatrix
  • CrsGraph
  • VbrMatrix, FEVbrMatrix
• Have four maps:
  • RowMap: On each processor, the GIDs of the rows that processor will "manage". (Typically a 1-to-1 map.)
  • ColMap: On each processor, the GIDs of the columns that processor will "manage". (Typically NOT a 1-to-1 map.)
  • DomainMap: The layout of domain objects (the x vector/multivector in y = Ax). (Must be 1-to-1.)
  • RangeMap: The layout of range objects (the y vector/multivector in y = Ax). (Must be 1-to-1.)
A sketch of querying the four maps follows.
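A minimal sketch, assuming an assembled Epetra_CrsMatrix A; all four maps can be queried once FillComplete() has been called:

  const Epetra_Map& rowMap    = A.RowMap();
  const Epetra_Map& colMap    = A.ColMap();     // defined once indices are local
  const Epetra_Map& domainMap = A.DomainMap();
  const Epetra_Map& rangeMap  = A.RangeMap();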

  26. Sample Problem
(Figure: a 3x3 matrix A applied to a vector x producing y, i.e. y = A x; the partitioning cases on the next three slides refer to this problem.)

  27. Case 1: Standard Approach
• First 2 rows of A, elements of y and elements of x kept on PE 0.
• Last row of A, element of y and element of x kept on PE 1.
PE 0 contents:
  • RowMap = {0, 1}
  • ColMap = {0, 1, 2}
  • DomainMap = {0, 1}
  • RangeMap = {0, 1}
PE 1 contents:
  • RowMap = {2}
  • ColMap = {1, 2}
  • DomainMap = {2}
  • RangeMap = {2}
Notes:
• Rows are wholly owned.
• RowMap = DomainMap = RangeMap (all 1-to-1).
• ColMap is NOT 1-to-1.
• Call to FillComplete: A.FillComplete(); // no-argument version assumes DomainMap = RangeMap = RowMap

  28. Case 2: Twist 1
• First 2 rows of A, first element of y, and last 2 elements of x kept on PE 0.
• Last row of A, last 2 elements of y, and first element of x kept on PE 1.
PE 0 contents:
  • RowMap = {0, 1}
  • ColMap = {0, 1, 2}
  • DomainMap = {1, 2}
  • RangeMap = {0}
PE 1 contents:
  • RowMap = {2}
  • ColMap = {1, 2}
  • DomainMap = {0}
  • RangeMap = {1, 2}
Notes:
• Rows are wholly owned.
• RowMap, DomainMap and RangeMap all differ (though each is 1-to-1).
• ColMap is NOT 1-to-1.
• Call to FillComplete: A.FillComplete(DomainMap, RangeMap);

  29. Case 2: Twist 2
• First row of A, part of the second row of A, first element of y, and last 2 elements of x kept on PE 0.
• Last row of A, part of the second row of A, last 2 elements of y, and first element of x kept on PE 1.
PE 0 contents:
  • RowMap = {0, 1}
  • ColMap = {0, 1}
  • DomainMap = {1, 2}
  • RangeMap = {0}
PE 1 contents:
  • RowMap = {1, 2}
  • ColMap = {1, 2}
  • DomainMap = {0}
  • RangeMap = {1, 2}
Notes:
• Rows are NOT wholly owned.
• RowMap, DomainMap and RangeMap all differ (DomainMap and RangeMap are 1-to-1).
• RowMap and ColMap are NOT 1-to-1.
• Call to FillComplete: A.FillComplete(DomainMap, RangeMap);

  30. What Does FillComplete Do?
• A bunch of stuff.
• One task is to create (if needed) import/export objects to support distributed matrix-vector multiplication:
  • If ColMap ≠ DomainMap, create an Import object.
  • If RowMap ≠ RangeMap, create an Export object.
• A few rules:
  • Rectangular matrices always require A.FillComplete(DomainMap, RangeMap); (see the sketch below).
  • DomainMap and RangeMap must be 1-to-1.
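A minimal sketch for a rectangular matrix, assuming an Epetra_Comm named Comm and global sizes m > n; the 1-to-1 domain and range maps must be passed explicitly:

  Epetra_Map RangeMap(m, 0, Comm);        // m equations (rows)
  Epetra_Map DomainMap(n, 0, Comm);       // n unknowns (columns)
  Epetra_CrsMatrix R(Copy, RangeMap, 0);
  // ... InsertGlobalValues(...) for each row ...
  R.FillComplete(DomainMap, RangeMap);    // required: matrix is not square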

  31. Parallel Data Redistribution
• Epetra vectors, multivectors, graphs and matrices are distributed via one of the map objects.
• A map is basically a partitioning of a list of global IDs:
  • IDs are simply labels; there is no need to use contiguous values (the Directory class handles the details for general ID lists).
  • There is no a priori restriction on replicated IDs.
• If we are given:
  • a source map, and
  • a set of vectors, multivectors, graphs and matrices (or other distributable objects) based on the source map,
• then redistribution is performed by:
  • specifying a target map with a new distribution of the global IDs,
  • creating an Import or Export object using the source and target maps, and
  • creating vectors, multivectors, graphs and matrices that are redistributed (to the target map layout) using the Import/Export object.

  32. Goal: epetra/ex9.cpp
Before export:
  PE 0: xxx(0)=10, xxx(1)=10, xxx(2)=10
  PE 1: xxx(1)=20, xxx(2)=20, xxx(3)=20
After Export/Add:
  PE 0: yyy(0)=10, yyy(1)=30, yyy(2)=30, yyy(3)=20
  PE 1: (no elements)

  33. Example: epetra/ex9.cpp

  int main(int argc, char *argv[]) {
    MPI_Init(&argc, &argv);
    Epetra_MpiComm Comm(MPI_COMM_WORLD);

    int NumGlobalElements = 4;  // global dimension of the problem
    int NumMyElements;          // local nodes
    Epetra_IntSerialDenseVector MyGlobalElements;

    if (Comm.MyPID() == 0) {
      NumMyElements = 3;
      MyGlobalElements.Size(NumMyElements);
      MyGlobalElements[0] = 0;
      MyGlobalElements[1] = 1;
      MyGlobalElements[2] = 2;
    } else {
      NumMyElements = 3;
      MyGlobalElements.Size(NumMyElements);
      MyGlobalElements[0] = 1;
      MyGlobalElements[1] = 2;
      MyGlobalElements[2] = 3;
    }

    // create a map
    Epetra_Map Map(-1, MyGlobalElements.Length(),
                   MyGlobalElements.Values(), 0, Comm);

    // create a vector based on map
    Epetra_Vector xxx(Map);
    for (int i=0; i<NumMyElements; ++i)
      xxx[i] = 10*(Comm.MyPID()+1);

    if (Comm.MyPID() == 0) {
      double val = 12;
      int pos = 3;
      xxx.SumIntoGlobalValues(1, 0, &val, &pos);
    }
    cout << xxx;

    // create a target map, in which all elements are on proc 0
    int NumMyElements_target;
    if (Comm.MyPID() == 0)
      NumMyElements_target = NumGlobalElements;
    else
      NumMyElements_target = 0;
    Epetra_Map TargetMap(-1, NumMyElements_target, 0, Comm);
    Epetra_Export Exporter(Map, TargetMap);

    // work on vectors
    Epetra_Vector yyy(TargetMap);
    yyy.Export(xxx, Exporter, Add);
    cout << yyy;

    MPI_Finalize();
    return(EXIT_SUCCESS);
  }

  34. Output: epetra/ex9.cpp

  > mpirun -np 2 ./ex9.exe
  Epetra::Vector (PE0 before export)
     MyPID  GID  Value
         0    0     10
         0    1     10
         0    2     10
  Epetra::Vector (PE1 before export)
         1    1     20
         1    2     20
         1    3     20
  Epetra::Vector (PE0 after export)
     MyPID  GID  Value
         0    0     10
         0    1     30
         0    2     30
         0    3     20
  Epetra::Vector (PE1 after export)
     (no local elements)

  35. Import vs. Export
• Import (Export) means the calling processor knows what it wants to receive (send).
• The distinction between Import and Export matters to the user; the implementations are almost identical.
• Import (Export) objects can be used to do an Export (Import) as a reverse operation (see the sketch below).
• When the mapping is bijective (1-to-1 and onto), either Import or Export is appropriate.
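A minimal sketch of reverse-mode use, assuming vectors overlapVec and uniqueVec built on an overlapping map and a 1-to-1 map, respectively:

  Epetra_Export Exporter(OverlapMap, UniqueMap);
  uniqueVec.Export(overlapVec, Exporter, Add);     // forward: combine shared entries
  overlapVec.Import(uniqueVec, Exporter, Insert);  // reverse: same plan, opposite direction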

  36. Example: 1D Matrix Assembly
-u_xx = f,  u(a) = 0,  u(b) = 1
(Figure: interval [a, b] with interior grid points x1, x2, x3; PE 0 owns the left subdomain, PE 1 the right, and x2 lies on the boundary between them.)
• 3 equations: find u at x1, x2 and x3.
• The equation for u at x2 gets a contribution from PE 0 and PE 1.
• We would like to compute the partial contributions independently,
• then combine the partial results.

  37. Two Maps
• We need two maps:
  • Assembly map:
    • PE 0: { 1, 2 }.
    • PE 1: { 2, 3 }.
  • Solver map:
    • PE 0: { 1, 2 } (we arbitrate ownership of 2).
    • PE 1: { 3 }.
A sketch of constructing both maps follows.
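A minimal sketch of the two maps on two processors, assuming an Epetra_Comm named Comm and the GID lists above (user-defined GIDs via the general Epetra_Map constructor):

  bool pe0 = (Comm.MyPID() == 0);
  int asmGIDs0[] = {1, 2}, asmGIDs1[] = {2, 3};  // GID 2 appears twice: not 1-to-1
  int solGIDs0[] = {1, 2}, solGIDs1[] = {3};     // ownership of 2 arbitrated to PE 0
  Epetra_Map AssemblyMap(-1, 2,           pe0 ? asmGIDs0 : asmGIDs1, 0, Comm);
  Epetra_Map SolverMap  (-1, pe0 ? 2 : 1, pe0 ? solGIDs0 : solGIDs1, 0, Comm);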

  38. End of Assembly Phase
• At the end of the assembly phase we have AssemblyMatrix:
  • On PE 0: equation 1 and the PE 0 part of equation 2.
  • On PE 1: the PE 1 part of equation 2 and equation 3. (Row 2 is shared.)
• We want to assign all of equation 2 to PE 0 for use with the solver.
• NOTE: For a class of Neumann-Neumann preconditioners, the above layout is exactly what we want.

  39. Export Assembly Matrix to Solver Matrix

  Epetra_Export Exporter(AssemblyMap, SolverMap);
  Epetra_CrsMatrix SolverMatrix(Copy, SolverMap, 0);
  SolverMatrix.Export(AssemblyMatrix, Exporter, Add);
  SolverMatrix.FillComplete();

  40. Matrix Export
Before export:
  PE 0: equation 1; PE 0 part of equation 2.
  PE 1: PE 1 part of equation 2; equation 3.
After Export/Add:
  PE 0: equation 1; complete equation 2.
  PE 1: equation 3.

  41. Example: epetraext/ex2.cpp

  int main(int argc, char *argv[]) {
    MPI_Init(&argc,&argv);
    Epetra_MpiComm Comm(MPI_COMM_WORLD);
    int MyPID = Comm.MyPID();
    int n = 4;

    // Generate laplace_2d gallery matrix
    Trilinos_Util::CrsMatrixGallery G("laplace_2d", Comm);
    G.Set("problem_size", n*n);
    G.Set("map_type", "linear");   // Linear map initially

    // Get the LinearProblem.
    Epetra_LinearProblem *Prob = G.GetLinearProblem();
    // Get the exact solution.
    Epetra_MultiVector *sol = G.GetExactSolution();
    // Get the rhs (b) and lhs (x).
    Epetra_MultiVector *b = Prob->GetRHS();
    Epetra_MultiVector *x = Prob->GetLHS();

    // Repartition graph using Zoltan
    EpetraExt::Zoltan_CrsGraph *ZoltanTrans = new EpetraExt::Zoltan_CrsGraph();
    EpetraExt::LinearProblem_GraphTrans *ZoltanLPTrans =
      new EpetraExt::LinearProblem_GraphTrans(
        *(dynamic_cast<EpetraExt::StructuralSameTypeTransform<Epetra_CrsGraph>*>(ZoltanTrans)));

    cout << "Creating Load Balanced Linear Problem\n";
    Epetra_LinearProblem &BalancedProb = (*ZoltanLPTrans)(*Prob);

    // Get the rhs (b) and lhs (x)
    Epetra_MultiVector *Balancedb = Prob->GetRHS();
    Epetra_MultiVector *Balancedx = Prob->GetLHS();
    cout << "Balanced b: " << *Balancedb << endl;
    cout << "Balanced x: " << *Balancedx << endl;

    MPI_Finalize();
    return 0;
  }

  42. Need for Import/Export • Solvers for complex engineering applications need expressive, easy-to-use parallel data redistribution: • Allows better scaling for non-uniform overlapping Schwarz. • Necessary for robust solution of multiphysics problems. • We have found import and export facilities to be a very natural and powerful technique to address these issues.

  43. Tpetra Briefly: Current Classes and Hierarchy
• Teuchos::Object [external]
  • Tpetra::Directory< Ordinal >
  • Tpetra::DistObject< Ordinal, Scalar >
    • Tpetra::MultiVector< Ordinal, Scalar >
      • Tpetra::Vector< Ordinal, Scalar >
  • Tpetra::Distributor< Ordinal >
  • Tpetra::Export< Ordinal >
  • Tpetra::Import< Ordinal >
  • Tpetra::Map< Ordinal >
  • Tpetra::Operator< Ordinal, Scalar >
    • Tpetra::CrsMatrix< Ordinal, Scalar >
  • Tpetra::Platform< Ordinal >
    • Tpetra::MpiPlatform< Ordinal >
    • Tpetra::SerialPlatform< Ordinal >

  44. Summary
• Petra Object Model:
  • Flexible, scalable parallel data model.
  • Extending to support abstract/concrete compute node access.
• Epetra:
  • Workhorse data classes, used by 1000s.
  • Incremental, evolutionary development ongoing.
• Tpetra:
  • Future workhorse.
  • Multicore/GPU support.
  • Multi-precision.
• Tpetra and Epetra:
  • Epetra is not abandoned.
  • Adapters wrap Epetra objects as Tpetra objects.
