
Parallel, Probabilistic Path Planning



Presentation Transcript


  1. Parallel, Probabilistic Path Planning Nathan Ickes May 19, 2004 6.846 Parallel Processing: Architecture and Applications

  2. Rapidly-Exploring Random Trees • RRTs are good at quickly finding a workable path

  3. Rapidly-Exploring Random Trees Building an RRT:
     1. Pick a random point a in space
     2. Find the node b in the tree that is closest to a
     3. Drive the robot from b towards a, producing a new point c
     4. If the path from b to c is collision-free, add c to the tree
     Biasing an RRT towards a goal: on some iterations, pick the goal position as a instead of a random point
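These four steps translate directly into a short serial loop. The listing below is an illustrative sketch only, not code from the slides: the point and node types, the goal-bias probability, and every helper function are assumptions, with names chosen to mirror the identifiers in the OpenMP listing on slide 4.

     /* Illustrative serial version of the four steps above (not the slides' code).
      * All types, the goal-bias probability, and the helper functions are assumed. */
     #include <stdlib.h>

     #define GOAL_BIAS 0.05   /* assumed: fraction of iterations aimed straight at the goal */

     typedef struct { double x, y; } point_t;
     typedef struct node { point_t p; struct node *parent; } node_t;

     extern point_t goal_point;                                   /* assumed */
     extern point_t planner_pick_random_point(void);              /* assumed */
     extern node_t *planner_find_closest_point(point_t a);        /* assumed */
     extern node_t *planner_drive_towards(point_t a, node_t *b);  /* assumed: NULL on collision */
     extern void    tree_append_node(node_t *c);                  /* assumed */
     extern int     planner_reached_goal(const node_t *c);        /* assumed */

     int planner_run_serial(int max_iter)
     {
         for (int i = 0; i < max_iter; i++) {
             /* Step 1: usually a random point; occasionally the goal itself (biasing). */
             point_t a = ((double)rand() / RAND_MAX < GOAL_BIAS)
                             ? goal_point
                             : planner_pick_random_point();

             node_t *b = planner_find_closest_point(a);   /* step 2 */
             node_t *c = planner_drive_towards(a, b);     /* step 3 */

             if (c) {                                     /* step 4 */
                 tree_append_node(c);
                 if (planner_reached_goal(c))
                     return 0;                            /* workable path found */
             }
         }
         return 1;                                        /* iteration budget exhausted */
     }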

  4. Parallelizing with OpenMP • RRT has one major, global data structure • Easily parallelized on a shared-memory machine

     int planner_run(int max_iter)
     {
         #pragma omp parallel private(i)
         #pragma omp for schedule(dynamic) nowait
         for (i = 0; i < max_iter; i++) {
             #pragma omp atomic
             planner_iterations++;                /* shared iteration counter */

             a = planner_pick_random_point();
             b = planner_find_closest_point(a);
             c = planner_drive_towards(a, b);
             if (c) {
                 #pragma omp critical             /* only the tree insertion is serialized */
                 tree_append_node(c);
                 if (c == goal)
                     return 0;                    /* goal reached, as on the original slide
                                                     (a strictly conforming program would set
                                                     a shared flag rather than return from
                                                     inside the parallel loop) */
             }
         }
         return 1;
     }
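A program like this would normally be built with the compiler's OpenMP flag and run with the thread count set in the environment; the source file name below is only a placeholder:

     gcc -fopenmp -O2 planner.c -o planner     # enable the OpenMP pragmas
     OMP_NUM_THREADS=4 ./planner               # run with four worker threads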

  5. OpenMP Results Time to execute 100,000 iterations:
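A measurement like this is typically taken with OpenMP's wall-clock timer. The harness below is an assumed sketch, not the author's benchmarking code; planner_reset() is hypothetical, and planner_run() refers to the routine on slide 4.

     /* Illustrative timing harness (assumed, not from the slides). */
     #include <stdio.h>
     #include <omp.h>

     extern void planner_reset(void);        /* assumed */
     extern int  planner_run(int max_iter);  /* the slide-4 routine */

     int main(void)
     {
         planner_reset();
         double t0 = omp_get_wtime();        /* wall-clock time, in seconds */
         planner_run(100000);                /* 100,000 iterations, as on the slide */
         double t1 = omp_get_wtime();
         printf("elapsed: %.3f s with %d threads\n", t1 - t0, omp_get_max_threads());
         return 0;
     }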

  6. Parallelizing with MPI • Can’t use pointers! • Network latency is huge!
     Master-slave architecture: a master process maintains the tree, while slave processes iterate the algorithm. Slaves generate new nodes and send them to the master, but must wait for the master to add them to the tree and broadcast tree updates back. Problem: the master can’t update the tree fast enough.
     Cooperative architecture: every process generates new nodes and adds them to its own tree, and new nodes are broadcast to the other processes. Processes work largely independently due to network latency.
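As a concrete illustration of the cooperative architecture, the sketch below has every rank extend its own tree for a batch of iterations and then exchange the freshly generated nodes with a single collective. It is not the author's code: the node serialization, the batch size, and the helpers planner_iterate_local() and tree_merge_remote() are all assumptions.

     /* Sketch of the cooperative architecture described above (assumed, not the
      * author's code). Nodes are serialized as fixed-size records of doubles. */
     #include <mpi.h>
     #include <stdlib.h>

     #define BATCH        32   /* local iterations between exchanges        */
     #define NODE_DOUBLES 4    /* e.g. x, y, parent_x, parent_y per node    */

     extern void planner_iterate_local(int max_nodes, double *out); /* assumed: fills up to
                                                                        max_nodes serialized
                                                                        nodes, padding unused
                                                                        slots with NaN */
     extern void tree_merge_remote(const double *nodes, int count); /* assumed: skips padding */

     void planner_run_mpi(int rounds)
     {
         int rank, size;
         MPI_Comm_rank(MPI_COMM_WORLD, &rank);
         MPI_Comm_size(MPI_COMM_WORLD, &size);

         double  local[BATCH * NODE_DOUBLES];
         double *remote = malloc(sizeof(double) * BATCH * NODE_DOUBLES * size);

         for (int r = 0; r < rounds; r++) {
             /* Every process extends its own copy of the tree independently. */
             planner_iterate_local(BATCH, local);

             /* Broadcast the newly generated nodes to all other processes.
              * A fixed-size batch keeps the collective simple. */
             MPI_Allgather(local,  BATCH * NODE_DOUBLES, MPI_DOUBLE,
                           remote, BATCH * NODE_DOUBLES, MPI_DOUBLE,
                           MPI_COMM_WORLD);

             /* Fold everyone else's nodes into the local tree copy. */
             for (int p = 0; p < size; p++)
                 if (p != rank)
                     tree_merge_remote(&remote[p * BATCH * NODE_DOUBLES], BATCH);
         }
         free(remote);
     }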

  7. Conclusions • RRT works well on a shared-memory machine • OpenMP makes it easy to parallelize RRT, and provides a significant performance increase • RRT is harder to implement with MPI, and doesn’t work as well: the tree is one global data structure, and the task can’t be divided into large chunks
