
MA/CS 471



Presentation Transcript


1. MA/CS 471, Lecture 25, Fall 2002: Adaptive Mesh Resolution

2. Goal
• When we solve a PDE we should typically provide an estimate of how good the solution is.
• In fact, we should aim to keep working until the approximate solution is within a given tolerance of the actual solution.

3. Static Mesh Distribution
• So far we have considered the case of a static grid.
• The sequence of events has been:
1. Generate the geometry in WinUSEMe
2. Grid inside WinUSEMe (using Shewchuk's Triangle)
3. Save the mesh to a .neu file
4. Create a partition file for N processors
5. Queue the job for N processors
6. Load the N pieces of the mesh onto the N processors
7. Invert the stiffness matrix (mat)
8. Print out the error, and save the solution
9. Load the solution into pgplot, Matlab, ...
10. View the results...
11. If a better result is needed, go back to step 2.

4. Difficulties for the Static Version
• Notice that we end up having to resubmit the parallel job every time we wish to change the mesh resolution.
• Also, there is a lot of disk access going on (mesh → file → umMESH → refined mesh → file → ...).
• There is no guided effort to achieve a final accuracy.

5. Dynamic Mesh Distribution
• OK, so clearly the 11-stage process is far too involved.
• A better (more integrated) approach would be (a sketch of the resulting driver loop follows this list):
1. Generate the geometry in WinUSEMe
2. Grid inside WinUSEMe (using Shewchuk's Triangle)
3. Save the mesh to a .neu file
4. Queue the job for N processors
5. Load the N pieces of the mesh onto the N processors
6. Use ParMETIS to redistribute the nodes
7. Invert the stiffness matrix (mat)
8. Estimate the error (remember, we normally do not know the solution of Laplace's equation on an arbitrary geometry)
9. Mark areas or elements to refine
10. If the solution error estimate is within tolerance for all elements, quit
11. Increase the mesh resolution where needed
12. Go to step 6
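A minimal sketch of steps 5-12 as a driver loop, assuming repartition(), solve(), mark(), and refine() stand in for the project's own routines (these names are placeholders, not library calls):

```c
#include <mpi.h>

/* Assumed project routines (hypothetical names). */
void repartition(void);   /* step 6: ParMETIS redistribution        */
void solve(void);         /* step 7: invert the stiffness matrix    */
int  mark(double tol);    /* steps 8-9: flag elements, return count */
void refine(void);        /* step 11: split the flagged elements    */

/* Loop until no processor flags any element (steps 10 and 12). */
void adapt_until_converged(double tol)
{
    int local, global;
    do {
        repartition();
        solve();
        local = mark(tol);
        /* Every processor must agree on convergence. */
        MPI_Allreduce(&local, &global, 1, MPI_INT, MPI_SUM,
                      MPI_COMM_WORLD);
        if (global > 0)
            refine();
    } while (global > 0);
}
```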

6. Project Continued
• OK, so here is where we go next.
• We are going to cheat and use the exact solution to tell us whether we have converged to a given tolerance.
• We are going to use a nested element refinement, as follows:

7. Possible Element Refinements (that we will allow)
[Figure: the allowed element subdivisions]

8. Example Small Mesh Refinement
[Figure: the original mesh]

9. Example Small Mesh Refinement
[Figure: the original mesh, with the element error estimate checked on each element]

10. Example Small Mesh Refinement
[Figure: check the element error estimate, then refine the flagged elements]

11. Example Small Mesh Refinement
[Figure: after refining, neighboring elements are split so that the mesh “conforms”]

12. Final Team Project Description
1. If you have a working parallel direct solver, use it; otherwise use your working conjugate gradient code.
2. Integrate the ParMETIS library so that you can do the mesh partition on the fly (unless your direct solver already takes care of it): http://www-users.cs.umn.edu/~karypis/metis/parmetis/index.html
3. Create a routine which builds an array of length Nelements, with each entry containing a flag indicating whether that element should be split (use |exact - approx| > tolerance as your refinement criterion). A sketch follows this list.
4. Create a routine which takes the array from stage 3 and makes sure that if an element edge is split, then any element sharing that edge also has that edge split. Also, if the shared edge is broken across processors, then the split must propagate to the other processor. A sketch of the local sweep follows this list.
5. Create a routine which does garbage collection, to make sure that memory use does not balloon as meshes are repeatedly created and discarded.
6. Make sure that your code scales, i.e. use upshot to verify that large amounts of time are not being spent in MPI_Send, MPI_Recv, ... If the code is troubled, recode with MPI_Isend, MPI_Irecv and MPI_Wait, and restructure so that you are doing work while you are waiting for the messages to arrive (see the nonblocking sketch after this list).
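A minimal sketch of stage 3, assuming exact_at() and approx_at() are the team's own evaluators of the known solution and the computed solution on element e (hypothetical names, not part of the slides):

```c
#include <math.h>
#include <stdlib.h>

/* Hypothetical evaluators supplied elsewhere in the project code. */
double exact_at(int e);    /* exact solution sampled on element e    */
double approx_at(int e);   /* computed solution sampled on element e */

/* Stage 3: one flag per element; 1 means "split this element". */
int *build_refine_flags(int Nelements, double tol)
{
    int *flag = malloc(Nelements * sizeof(int));
    for (int e = 0; e < Nelements; ++e)
        flag[e] = (fabs(exact_at(e) - approx_at(e)) > tol);
    return flag;
}
```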
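For stage 4, a simplified local sweep; the cross-processor propagation is omitted, edge_neighbor is an assumed element-to-neighbor table, and flagging the whole neighboring element is a simplification of splitting just the shared edge:

```c
/* Stage 4 (local part): flagged elements force their edge neighbors to
   split too; iterate to a fixed point.  edge_neighbor[e][k] is the
   element across edge k of triangle e, or -1 on the domain boundary.
   In the parallel code, flags on partition-boundary edges would also
   be exchanged with the owning processor on every pass. */
void make_conforming(int Nelements, int (*edge_neighbor)[3], int *flag)
{
    int changed = 1;
    while (changed) {
        changed = 0;
        for (int e = 0; e < Nelements; ++e) {
            if (!flag[e]) continue;
            for (int k = 0; k < 3; ++k) {
                int nb = edge_neighbor[e][k];
                if (nb >= 0 && !flag[nb]) {
                    flag[nb] = 1;
                    changed = 1;
                }
            }
        }
    }
}
```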
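For stage 6, the usual restructuring pattern: post MPI_Irecv and MPI_Isend for every neighboring partition, do the work that needs no remote data, then wait. The buffer layout and neighbor lists are assumptions, not part of the slides:

```c
#include <mpi.h>
#include <stdlib.h>

/* Exchange halo data with all neighboring partitions while overlapping
   communication with local work.  nneigh, neigh[], sendbuf[], recvbuf[]
   and count[] are assumed to be set up by the partitioning code. */
void exchange_and_work(int nneigh, const int *neigh,
                       double **sendbuf, double **recvbuf,
                       const int *count, void (*interior_work)(void))
{
    MPI_Request *reqs = malloc(2 * nneigh * sizeof(MPI_Request));

    for (int n = 0; n < nneigh; ++n)
        MPI_Irecv(recvbuf[n], count[n], MPI_DOUBLE, neigh[n], 0,
                  MPI_COMM_WORLD, &reqs[n]);
    for (int n = 0; n < nneigh; ++n)
        MPI_Isend(sendbuf[n], count[n], MPI_DOUBLE, neigh[n], 0,
                  MPI_COMM_WORLD, &reqs[nneigh + n]);

    interior_work();   /* elements that need no remote data */

    MPI_Waitall(2 * nneigh, reqs, MPI_STATUSES_IGNORE);
    free(reqs);
}
```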

13. Final Project Testing Phase
• Try this out on some test meshes of your own creation.
• Build your right-hand side function (f) and mesh so that zero-Dirichlet boundary conditions are satisfied and the solution is some smooth u (a worked example follows).
• Each team member must try a different mesh and solution.
• The group MUST create a web site (this is a portion of your grade). However, only post your grade after the project deadline.
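One standard manufactured solution, assuming the model problem is the Poisson problem $-\Delta u = f$ with $u = 0$ on the boundary of the unit square (the square domain is an assumption; any geometry on which $u$ vanishes at the boundary works the same way):

$$u(x,y) = \sin(\pi x)\,\sin(\pi y), \qquad f(x,y) = -\Delta u = 2\pi^2 \sin(\pi x)\,\sin(\pi y).$$

Here $u$ vanishes on all four edges of $[0,1]^2$, so the zero-Dirichlet condition holds exactly, and $u$ is smooth, so the refinement loop has a clean target to converge to.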

14. Extra Credit
• OK, so we are cheating by using the exact solution to figure out whether to refine or not.
• Alternatively, we can try some heuristics:
• Compute the normal derivative of the numerical solution at the edges of an element.
• If the jump in the normal derivative is larger than some tolerance, mark the edge for splitting (a sketch of this indicator follows).
• OR: go online and find some alternative “error estimates for finite element solution of Laplace's equation”.
• ONLY DO THIS IF EVERYTHING ELSE IS WORKING.
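A minimal sketch of the jump indicator, assuming linear triangles (so each element's solution gradient is constant); gradL, gradR and the edge normal are assumed to come from the team's own data structures:

```c
#include <math.h>

/* Jump in the normal derivative across an interior edge.  gradL and
   gradR are the constant solution gradients on the two elements that
   share the edge; (nx, ny) is the unit normal of that edge. */
int edge_needs_split(const double gradL[2], const double gradR[2],
                     double nx, double ny, double tol)
{
    double jump = (gradL[0] - gradR[0]) * nx + (gradL[1] - gradR[1]) * ny;
    return fabs(jump) > tol;   /* 1 means: mark the edge for splitting */
}
```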

15. Final Team Project (cont'd)
• OK, you should split up the work, with parts 2, 3, 4, and (5 & 6) of the project description as the partition.
• The COMPLETE project is due on 12/02/02.
• ** Warning ** this is just after the Thanksgiving break, so I strongly suggest you get it working by 11/27/02.
• There will be a 1/2 point deduction for every day it is late after 12/02/02.
• All lab time will be devoted to this effort, and attendance at labs is compulsory.
