
High-Speed Autonomous Navigation with Motion Prediction for Unknown Moving Obstacles






Presentation Transcript


  1. High-Speed Autonomous Navigation with Motion Prediction for Unknown Moving Obstacles. Dizan Vasquez, Frederic Large, Thierry Fraichard and Christian Laugier. INRIA Rhône-Alpes & GRAVIR Lab, France. IROS 2004.

  2. Objective • To design techniques allowing a vehicle to navigate in an environment populated with moving obstacles whose future motion is unknown. • Two constraints: • Limited response time: a function of the environment's dynamicity. • Need to reason about the future: prediction. • For how long is a prediction valid?

  3. Autonomous Navigation: Approaches • Reactive approaches [Arkin, Simmons, Borenstein, etc.]: no look-ahead. • “Improved” reactive approaches [Khatib, Montano, Ulrich, etc.]: lack of generality. • Iterative planning approaches [Hsu, Veloso]: too slow for highly dynamic environments. • Iterative partial planning [Fraichard, Frazzoli, Petti].

  4. Autonomous Navigation: Proposed Solution • An iterative partial planning approach. • Fast motion planning: the Velocity Obstacle concept [Fiorini, Shiller] is used in an iterative motion planner that produces a safe plan for a given time interval. • Motion prediction for moving obstacles: the typical behavior of moving obstacles is learned and then used for motion prediction.

  5. Motion Planning: Principle • Iterative planner: plans are computed within a given time interval. • Incremental calculation of a partial trajectory. • Uses a model of the future (prediction). • Based on the A* algorithm. • Uses the Non-Linear Velocity Obstacle concept to speed up the calculation [Large, Shiller]. • Real time. • Adapts to changes.

  6. Motion Planning: Velocity Obstacles A Non-Linear Velocity Obstacle (NLVO) is the set of all linear velocities of the robot that, when kept constant over a given time interval, induce a collision with an obstacle before the end of that interval (the time horizon).
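As a rough illustration of this definition (not the authors' implementation), the sketch below tests whether a candidate constant velocity would collide with a moving obstacle, given the obstacle's predicted positions over the planning interval; every velocity that fails the test belongs to the NLVO. The disc-shaped collision model and all names are assumptions.

```python
import numpy as np

def in_nlvo(candidate_velocity, robot_pos, obstacle_traj, times, safety_radius):
    """True if holding `candidate_velocity` constant over `times` leads to a
    collision with the predicted obstacle trajectory, i.e. the velocity lies
    inside the Non-Linear Velocity Obstacle (NLVO).

    obstacle_traj: (N, 2) predicted obstacle positions at the instants `times`.
    safety_radius: sum of robot and obstacle radii (disc approximation).
    """
    for t, obs_pos in zip(times, obstacle_traj):
        robot_future = robot_pos + candidate_velocity * t
        if np.linalg.norm(robot_future - obs_pos) < safety_radius:
            return True          # collision before the time horizon
    return False

# Example: keep only the collision-free velocities from a sampled set.
times = np.linspace(0.0, 5.0, 50)                         # 5 s time horizon
obstacle_traj = np.stack([2.0 + 0.5 * times,              # obstacle moving along x
                          np.zeros_like(times)], axis=1)
candidates = [np.array([vx, vy]) for vx in np.linspace(-1, 1, 5)
                                 for vy in np.linspace(-1, 1, 5)]
free = [v for v in candidates
        if not in_nlvo(v, np.zeros(2), obstacle_traj, times, 0.6)]
```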

  7. Motion Planning: A* Implementation • Nodes: dated states. • Links: motions (velocities). • Velocities are expanded using a heuristic with two criteria: a time-to-collision cost and a time-to-goal cost.
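The planner is only described at the level of these bullets, so the following is a minimal, hypothetical sketch of an A*-style search over dated states: each edge holds one sampled velocity constant for a fixed time step, the cost so far is the elapsed time, and the heuristic is the remaining time to the goal at maximum speed. The `in_nlvo(pos, v, t)` predicate, the velocity sampling, and all parameters are assumptions, not the paper's.

```python
import heapq
import numpy as np

def plan(start, goal, in_nlvo, dt=0.5, horizon=5.0, vmax=1.0, tol=0.5):
    """Tiny A*-style search over dated states (position, time).

    in_nlvo(pos, v, t): assumed predicate telling whether holding velocity v
    from pos at time t collides before the next step.
    """
    start, goal = np.asarray(start, float), np.asarray(goal, float)
    velocities = [np.array([vx, vy]) for vx in (-vmax, 0.0, vmax)
                                     for vy in (-vmax, 0.0, vmax)]
    open_list = [(np.linalg.norm(goal - start) / vmax, 0.0,
                  tuple(start), [tuple(start)])]
    while open_list:
        _, t, pos, path = heapq.heappop(open_list)
        pos = np.asarray(pos)
        if np.linalg.norm(goal - pos) < tol:
            return path                         # partial plan reaching the goal
        if t >= horizon:
            continue                            # do not expand beyond the horizon
        for v in velocities:
            if in_nlvo(pos, v, t):
                continue                        # velocity would lead to a collision
            nxt = pos + v * dt
            g = t + dt                          # cost so far: elapsed time
            h = np.linalg.norm(goal - nxt) / vmax   # time-to-goal estimate
            heapq.heappush(open_list, (g + h, g, tuple(nxt), path + [tuple(nxt)]))
    return None

# Example with no obstacles: the predicate never reports a collision.
path = plan([0.0, 0.0], [3.0, 0.0], lambda pos, v, t: False)
```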

  8. Motion Planning: Updating the Tree • Instead of rebuilding the tree at each step, we update it. • Past configurations are pruned, except for the currently open node. • If a collision is detected, another node is chosen in the remaining tree and explored from the root.
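A short sketch of that update step, with an assumed tree structure: nodes dated before the current time are dropped unless they lead to the currently open node, and subtrees rooted at a node that is now predicted to collide are discarded so the search can continue from another surviving node.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    state: tuple                  # (x, y, time): a dated configuration
    children: list = field(default_factory=list)

def prune_past(node, now, keep):
    """Drop children dated before `now`, except those on the path to the
    currently open node (passed in `keep`)."""
    node.children = [c for c in node.children
                     if c.state[2] >= now or any(c is k for k in keep)]
    for c in node.children:
        prune_past(c, now, keep)

def drop_colliding(node, in_collision):
    """Remove subtrees whose root is now predicted to collide, so another
    node of the remaining tree can be chosen and explored instead."""
    node.children = [c for c in node.children if not in_collision(c.state)]
    for c in node.children:
        drop_colliding(c, in_collision)
```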

  9. Motion Prediction: Traditional Approaches • Motion equations and state estimation, e.g. [Zhu90]. • Fast. • Easy to implement. • Estimate the state and its uncertainty [Kalman60]. • Short time horizon. • The equations are not general (what about intentional behaviour?).
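For contrast with the learning-based approach that follows, here is a hedged sketch of the kind of predictor this slide refers to: a constant-velocity extrapolation with a Kalman-style prediction step whose covariance grows at every step. This is a generic illustration, not the cited implementations.

```python
import numpy as np

def predict_constant_velocity(x, P, dt=0.5, q=0.1, steps=10):
    """Propagate state x = [px, py, vx, vy] and covariance P forward in time
    with a constant-velocity model (Kalman prediction step only).

    The covariance grows with every step, which is why this style of
    prediction is only useful over short time horizons.
    """
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = q * np.eye(4)                              # simplistic process noise
    prediction = []
    for _ in range(steps):
        x = F @ x
        P = F @ P @ F.T + Q
        prediction.append((x.copy(), P.copy()))
    return prediction

# Example: obstacle at the origin moving at 1 m/s along x, predicted for 5 s.
future = predict_constant_velocity(np.array([0.0, 0.0, 1.0, 0.0]), np.eye(4) * 0.01)
```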

  10. Motion Prediction: Learning-Based Approaches • Hypothesis: in a given environment, objects do not move randomly but follow a pattern. • Steps: • Learning. • Prediction. • General. • Long Time Horizon. • Real-Time Capability. • Prediction of unobserved behaviors. • Unstructured Environments. [TadokoroEtAl95] [KruseEtAl96] [BennewitzEtAl02]

  11. Motion Prediction: Proposed Approach The approach we propose is defined by: • A similarity measure. • Use of pairwise clustering algorithms. • A cluster representation. • Calculation of the probability of belonging to a cluster.

  12. Motion Prediction: Learning Stage 1. Dissimilarity measure: observed trajectories → dissimilarity matrix. 2. Pairwise clustering algorithm: dissimilarity matrix → trajectory clusters. 3. Calculation of cluster representation: trajectory clusters → cluster mean values and standard deviations.
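Step 2 of this pipeline can be illustrated with an off-the-shelf routine. The sketch below assumes a precomputed symmetric dissimilarity matrix D (slide 13) and a chosen number of clusters, and uses scipy's complete-link hierarchical clustering, one of the pairwise algorithms mentioned on slide 16; any other pairwise clustering algorithm could be substituted.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_trajectories(D, n_clusters):
    """Pairwise clustering from a symmetric dissimilarity matrix D (step 2).
    Returns one cluster label per observed trajectory."""
    condensed = squareform(D, checks=False)        # square matrix -> condensed form
    Z = linkage(condensed, method='complete')      # complete-link hierarchy
    return fcluster(Z, n_clusters, criterion='maxclust')

# Toy 4x4 dissimilarity matrix with two obvious groups.
D = np.array([[0.0, 0.1, 2.0, 2.1],
              [0.1, 0.0, 2.2, 2.0],
              [2.0, 2.2, 0.0, 0.2],
              [2.1, 2.0, 0.2, 0.0]])
labels = cluster_trajectories(D, n_clusters=2)     # e.g. array([1, 1, 2, 2])
```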

  13. Motion Prediction: Dissimilarity Measure (Figure: the dissimilarity compares two trajectories Ti and Tj through the points di and dj they occupy at each time t.)
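The slide itself is a figure that did not survive extraction, so the following is only a plausible reading of its symbols: the dissimilarity between trajectories Ti and Tj is taken here as the average Euclidean distance between the points di and dj they occupy at the same (resampled) time index. The exact formula is on the slide; this code is an assumption-laden illustration.

```python
import numpy as np

def resample(traj, n):
    """Linearly resample an (M, 2) trajectory to n points so that trajectories
    of different lengths can be compared point by point."""
    s = np.linspace(0.0, 1.0, len(traj))
    t = np.linspace(0.0, 1.0, n)
    return np.stack([np.interp(t, s, traj[:, k]) for k in range(traj.shape[1])], axis=1)

def dissimilarity(Ti, Tj, n=50):
    """Average distance between corresponding points di, dj of the two
    trajectories after resampling (an assumed reading of the lost figure)."""
    a = resample(np.asarray(Ti, float), n)
    b = resample(np.asarray(Tj, float), n)
    return float(np.mean(np.linalg.norm(a - b, axis=1)))
```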

  14. Motion Prediction: Cluster Representation • Cluster Mean-Value. • Cluster Standard Deviation.
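The two formulas were on the slide image and are not in the transcript; what follows is only a plausible reconstruction consistent with the rest of the talk: the cluster mean-value as the point-wise average of the member trajectories, and the standard deviation as their spread around that mean.

```python
import numpy as np

def cluster_representation(members):
    """members: (K, N, 2) array of K member trajectories resampled to N points.

    Returns the assumed cluster mean-value (point-wise average trajectory) and
    the assumed cluster standard deviation (spread of the members around it).
    """
    mean = members.mean(axis=0)                              # (N, 2) mean trajectory
    point_dists = np.linalg.norm(members - mean, axis=2)     # (K, N) point distances
    std = float(np.sqrt(np.mean(point_dists ** 2)))          # overall spread
    return mean, std
```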

  15. Motion Prediction: Prediction Stage • The probability of belonging to a cluster is modeled as a Gaussian. • Prediction: maximum likelihood or sampling.
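The Gaussian itself is on the slide image; as a sketch only, the likelihood of a partially observed trajectory under each cluster is modeled below as a Gaussian in its dissimilarity to the (truncated) cluster mean, and the prediction is read off the most likely cluster. The truncation assumes the observation and the cluster means share the same sampling; helper names are hypothetical.

```python
import numpy as np

def predict(partial, clusters, dissimilarity):
    """Maximum-likelihood prediction for a partially observed trajectory.

    clusters: list of (mean_trajectory, std) pairs from the learning stage.
    The membership probability is modeled as a Gaussian in the dissimilarity
    between the observation and the cluster mean truncated to the same length
    (assumes both are sampled at the same rate).
    """
    best_mean, best_like = None, -1.0
    for mean, std in clusters:
        d = dissimilarity(partial, mean[:len(partial)])
        like = np.exp(-0.5 * (d / std) ** 2) / (std * np.sqrt(2.0 * np.pi))
        if like > best_like:
            best_mean, best_like = mean, like
    # The predicted future motion is the remainder of the most likely cluster mean;
    # sampling from the clusters according to their likelihoods is the alternative.
    return best_mean, best_like
```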

  16. Motion Prediction: Experimental Results • Implementation using Complete-Link Hierarchical Clustering and Deterministic Annealing Clustering. • Benchmarked against Expectation-Maximization Clustering as described in [Bennewitz02].

  17. Motion Prediction: Experimental Results • Evaluation using a performance measure. • Tests were run on simulated data.

  18. Motion Planning: Results Experiments were performed in a simulated environment.

  19. Conclusions In this paper, a navigation approach is proposed. It consists of two components: • A learning-based motion prediction technique able to produce long-term motion estimates. • An iterative motion planner based on the concept of the Non-Linear Velocity Obstacle, which adapts its scope according to the available time.

  20. Perspectives • Work on a real system installed in the laboratory's parking lot. • Research on the prediction of unknown behaviors.

  21. Thank You!

  22. PWE: Computing the Number of Clusters

  23. Experimental Results: Generating the Training Set (cont.) • The points corresponding to the control points are generated using Gaussian distributions with a fixed standard deviation. • The motion is simulated by advancing in fixed steps from the last control point towards the next one, according to a Gaussian distribution; the next control point is considered reached when the distance to it falls below a given threshold. • Step 2 is repeated until the last control point is reached.
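A small sketch implementing the generation procedure just described; the step size, standard deviations, and threshold are illustrative values, not those used in the experiments.

```python
import numpy as np

def simulate_trajectory(control_points, step=0.2, sigma_cp=0.3, sigma_dir=0.1,
                        threshold=0.4, rng=None):
    """Generate one training trajectory from a list of nominal control points.

    1. Each control point is perturbed with Gaussian noise of fixed std dev.
    2. The simulated object advances in fixed steps from the last reached
       control point towards the next one, with Gaussian noise on the heading.
    3. A control point counts as reached when the object is closer than the
       threshold; step 2 is repeated until the last control point is reached.
    """
    rng = rng or np.random.default_rng()
    cps = [np.asarray(c, float) + rng.normal(0.0, sigma_cp, 2) for c in control_points]
    pos = cps[0].copy()
    traj = [pos.copy()]
    for target in cps[1:]:
        while np.linalg.norm(target - pos) > threshold:
            heading = target - pos
            heading /= np.linalg.norm(heading)
            a = rng.normal(0.0, sigma_dir)                 # Gaussian heading noise
            rot = np.array([[np.cos(a), -np.sin(a)],
                            [np.sin(a),  np.cos(a)]])
            pos = pos + step * rot @ heading
            traj.append(pos.copy())
    return np.array(traj)

# Example: one trajectory through three control points.
example = simulate_trajectory([(0.0, 0.0), (5.0, 0.0), (5.0, 5.0)])
```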

  24. Some Important Concepts • Configuration. • Motion. • Motion estimation. • Time horizon.

  25. PWE: Deterministic Annealing Cluster membership is computed iteratively: INITIALIZE the memberships and the cluster representatives at random; temperature T ← T₀; WHILE T > T_final DO s ← 0; REPEAT Estimation: compute the memberships as a function of the representatives; Maximization: compute the representatives from the memberships; s ← s+1; UNTIL the memberships and the representatives converge; T ← ηT; keep the converged values as the starting point for the next temperature; END;
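The symbols in this pseudocode were lost with the slide; as a hedged, simplified illustration of the idea, the sketch below runs centroid-based deterministic annealing: soft memberships are computed at a temperature T, the representatives are re-estimated from them, and T is gradually lowered. This is not the paper's exact pairwise formulation.

```python
import numpy as np

def deterministic_annealing(X, k, T0=5.0, T_final=0.01, eta=0.9, rng=None):
    """Simplified deterministic annealing clustering (centroid-based sketch).

    X: (n, d) data points; k: number of clusters.
    At each temperature, the Estimation step computes soft memberships from
    distances, and the Maximization step recomputes the representatives.
    """
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    T = T0
    while T > T_final:
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)   # (n, k)
        # Estimation: memberships get sharper as T decreases.
        p = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / T)
        p /= p.sum(axis=1, keepdims=True)
        # Maximization: representatives as membership-weighted means.
        centers = (p.T @ X) / (p.sum(axis=0)[:, None] + 1e-12)
        T *= eta                                                        # cool down
    return centers, p

# Example: two well-separated blobs of 2-D points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.2, (20, 2)), rng.normal(3, 0.2, (20, 2))])
centers, memberships = deterministic_annealing(X, k=2)
```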

  26. Experimental Results: Performance Measure 1. Select a starting fraction of the test trajectory. 2. Select the cluster from the cluster set whose mean has the maximum likelihood for that fraction. 3. Calculate the distance between the full test trajectory and that cluster mean: this is the error value.

  27. Experimental Results: Learning Stage Results

  28. Experimental Results: Learning Stage Results (cont.)

  29. Experimental Results: Cluster Examples

  30. Conclusions: Contributions • We have proposed an approach based on three calculations: • Dissimilarity Measure. • Cluster Mean-Value. • Probability of Belonging to a Cluster.

  31. Conclusions: Contributions (cont…) • We have implemented our approach using Complete-Link and Deterministic Annealing Clustering. • We have implemented the approach presented in [Bennewitz02]. • According to our performance measure, our technique performs better than the one based on Expectation-Maximization.

  32. PWE: Comparison with EME PWE: • Finds the groups and their representations in two steps. • Computes the value of K with the Complete-Link algorithm. • Can use any pairwise clustering algorithm. • Represents the clusters with the mean trajectory. EME: • Finds the groups and their representations simultaneously. • Computes the value of K with an incremental algorithm. • Uses the Expectation-Maximization algorithm. • Represents the clusters with Gaussian distributions.

  33. EM-Based Estimation (EME) We consider this technique [Bennewitz02] to be the state of the art for our problem. Learning: • Finds the groups and their representations (sequences of Gaussians) simultaneously. • Uses the EM (Expectation-Maximization) algorithm. • Finds the number of clusters using an incremental algorithm. Estimation: • Based on computing the likelihood of an observed partial trajectory o_partial under each of the paths θk, as a product of probabilities.

  34. EM-Based Estimation (EME): The EM Algorithm Computes the assignments cik and the paths θk: • Expectation: compute the expected value E[cik] under the current paths θk. • Maximization: assume cik = E[cik] and compute new paths θ'k. • Set θk = θ'k and repeat.
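As a hedged sketch of the EM idea described here (a strong simplification, not the model of [Bennewitz02]): trajectories are resampled to a common length, each path θk is a mean trajectory with a fixed isotropic variance, the Expectation step computes the expected assignments E[cik], and the Maximization step refits the paths as assignment-weighted means.

```python
import numpy as np

def em_paths(trajs, k, iters=20, sigma=0.5, rng=None):
    """Simplified EM for learning motion paths (illustrative only).

    trajs: (n, N, 2) trajectories resampled to a common length N.
    Each path theta_k is a mean trajectory with fixed isotropic variance sigma^2.
    """
    rng = rng or np.random.default_rng(0)
    theta = trajs[rng.choice(len(trajs), k, replace=False)].astype(float)
    for _ in range(iters):
        # Expectation: E[c_ik] from the likelihood of each trajectory under each path.
        d2 = ((trajs[:, None] - theta[None]) ** 2).sum(axis=(2, 3))    # (n, k)
        logp = -0.5 * d2 / sigma ** 2
        logp -= logp.max(axis=1, keepdims=True)                        # for stability
        c = np.exp(logp)
        c /= c.sum(axis=1, keepdims=True)
        # Maximization: new paths theta'_k as assignment-weighted mean trajectories.
        theta = np.einsum('ik,ind->knd', c, trajs) / (c.sum(axis=0)[:, None, None] + 1e-12)
    return theta, c
```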

  35. EM-Based Estimation (EME): Estimation The likelihood of a trajectory di under a path θk is computed as a product of the point-wise probabilities.

  36. Experimental Results: Performance Measure Function PerformanceMetric(χ, C, percentage): result ← 0; FOR each trajectory χi in the test set χ DO compute χi^percentage; find the cluster Ck with the highest likelihood for χi^percentage; result ← result + δ(χi, μk); END FOR; result ← result / Nχ;
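A direct Python transcription of this pseudocode; the likelihood and δ helpers are assumed to come from the earlier sketches (prediction stage and dissimilarity measure) and are passed in as parameters.

```python
def performance_metric(test_set, clusters, percentage, likelihood, delta):
    """Transcription of the PerformanceMetric pseudocode above.

    test_set: list of full test trajectories chi_i.
    clusters: list of (mean_trajectory mu_k, std) pairs from the learning stage.
    likelihood(partial, cluster): likelihood of the observed fraction under a cluster.
    delta(traj, mu): distance between the full trajectory and a cluster mean.
    """
    result = 0.0
    for chi in test_set:
        n_obs = max(1, int(len(chi) * percentage))
        partial = chi[:n_obs]                          # chi_i^percentage
        best = max(clusters, key=lambda c: likelihood(partial, c))
        result += delta(chi, best[0])                  # error w.r.t. its mean mu_k
    return result / len(test_set)
```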

  37. EM-Based Estimation (EME): Advantages / Drawbacks • Long time horizons. • Makes no assumptions about the shape of the trajectories. • Estimates the number of clusters. • It is not able to predict trajectories it has never observed.
