January 18, 2007 Class lecture #2


Presentation Transcript


  1. Big Picture: Overview of Issues and Challenges in Autonomous Robotics + Impact on Practical Implementation Details January 18, 2007 Class lecture #2

  2. Announcements/Questions • Course mailing list set up: If you haven’t received a “welcome” message from this mailing list, see (TA) right away. • Any questions about Assignment #1?

  3. Today’s Objective: Understand the big picture + challenges, and their realization in terms of practical implementation details
  Remember last time -- issues we’ll study this term:
  • Robot control architectures
  • Biological foundations
  • Design of behavior-based systems
  • Representation issues
  • Sensing
  • Adaptation
  • Multi-robot systems
  • Path planning
  • Navigation
  • Localization
  • Mapping

  4. Same issues, from the robot’s perspective
  • Where am I? [localization]
  • How do I interpret my sensor feedback to determine my current state and surroundings? [sensor processing/perception]
  • How do I make sense of noisy sensor readings? [uncertainty management]
  • How do I fuse information from multiple sensors to improve my estimate of the current situation? [sensor fusion]
  • What assumptions should I make about my surroundings? [structured/unstructured environments]
  • How do I know what to pay attention to? [focus-of-attention]

  5. More issues from the robot’s perspective
  • What should my control strategy be to ensure that I respond quickly enough? [control architecture]
  • How should I make decisions? [reasoning, task arbitration]
  • Where do I want to be, and how do I get there? [path planning, navigation]
  • I have lots of choices of actions to take -- what should I do in my current situation? [action selection]
  • How should I change over time to respond to a dynamic environment? [learning, adaptation]
  • Why doesn’t the same action that worked in this situation before work now? [hidden state]
  • How should I work with other robots? [multi-robot cooperation, communication]

  6. Functional Modules of a (Hypothetical) Intelligent Mobile Robot
  [Diagram: Sensors feed Sensor Fusion/Interpretation; internal modules include Localization/Mapping, Path Planning, Navigation, World Model, Uncertainty Manager, Focus-of-Attention Manager, Learner/Adapter, Communicator, Action Selector/Decision Maker, Fast Reaction Executor, and the Application-Specific Mission Task; a Controller drives the Effectors]

  7. Mobile Robot Software Challenge: Usually, all “reasoning” functions reside on a single processor
  [Same functional-module diagram as slide 6]

  8. How do we organize all this?
  [Same functional-module diagram as slide 6]
  • Typical organizations:
  • Hierarchical
  • Behavior-based / Reactive
  • Hybrid
  Focus of classes 4-10

  9. Functional Modules Related to Control Architecture
  [Same functional-module diagram as slide 6]

  10. Hierarchical Organization
  [Diagram: Perception / Build World Model → Control → Action → Environment, in a single loop]
  Earliest robot control projects used this approach, with limited success.
  Focus of class 4: Hierarchical Organization

  11. Behavior-Based / Reactive: Based on Biological Paradigm
  Philosophy: “The world is its own best model; therefore, don’t try to build another world model”
  [Diagram: several parallel Sensing → Control → Action layers, each coupled directly to the Environment]
  More recent robot projects use this approach, with better success.
  Focus of classes 5-10: Biological Parallels, Behavior-Based Paradigm and Design

  12. Typical mobile robot implementation architecture
  • Essentially: a PC on wheels/legs/tracks/...
  [Diagram: Sensors → special-purpose processor for sensor processing (with memory) → standard PC running Linux → special-purpose processor for motor control (with memory) → Effectors]

  13. Implication for Robot Control Code
  • Two options:
  • Separate threads with appropriate interfaces, interrupts, etc.
  • Single process with “operating-system”-like time-slicing of procedures
  • Usually: a combination of both
  • For now: let’s examine a single process with “operating-system”-like time-slicing of procedures

  14. Simple program: Follow walls and obey human operator commands
  We want all of this to happen “in parallel” on a single processor.
  • Assume we have the following functions:
  • Communications to operator interface -- commands such as “stop”, “go”, etc.
  • Sonar: used to follow wall and avoid obstacles
  [Diagram: Sonar and human operator commands feed the Wall Follower, Obstacle Avoider, and Communicator modules; a Controller & Arbitrator drives the Wheels]

  15. Typical “single process” control approach to achieve functional parallelism

  int wall_follower( )          { /* one time slice of work */ }
  int obstacle_avoider( )       { /* one time slice of work */ }
  int communicator( )           { /* one time slice of work */ }
  int controller_arbitrator( )  { /* decides what action to take */ }

  main() {
    for (;;) {
      wall_follower();
      obstacle_avoider();
      communicator();
      controller_arbitrator();
    }
  }

  Note of caution: it is up to the programmer to ensure that each function returns once its “time slice” is completed.

  16. Example Control Command in Player/Stage

  PlayerClient robot("localhost");
  Position2dProxy pp(&robot, 0);
  …
  for (;;) {
    …
    pp.SetSpeed(speed, turnrate);
    …
  }

  The robot continues to execute the given velocity command until another command is issued. Thus, the duration of the “time slice” is important.

  17. In Robot Design Choices, Must Consider Real-World Challenges
  Recall from last meeting -- software challenges:
  • Autonomous: robot makes the majority of decisions on its own; no human-in-the-loop control (as opposed to teleoperated)
  • Mobile: robot does not have a fixed base (e.g., wheeled, as opposed to a manipulator arm)
  • Unstructured: environment has not been specially designed to make the robot’s job easier
  • Dynamic: environment may change unexpectedly
  • Partially observable: robot cannot sense the entire state of the world (i.e., “hidden” states)
  • Uncertain: sensor readings are noisy; effector output is noisy
  Let’s look at these in more detail…

  18. Examples and Effect of Unstructured Environment
  • Examples of unstructured environments:
  • Nearly all natural (non-man-made) environments: deserts, forests, fields
  • To some extent, man-made environments not specifically designed for robots
  • Impact:
  • Difficult to make assumptions about sensing expectations
  • Difficult to make assumptions about environmental characteristics

  19. Example of Taking Advantage of a Semi-Structured Environment
  • In most man-made buildings, we can assume landmarks such as hallways and intersections are available;
  • This allows merging of measurements and development of techniques that can overcome accumulated error and deal with “loop closure”.

  20. Sources and Effect of Dynamic Environment
  • Sources of a dynamic environment:
  • Other robots/agents in the area: teammates, adversaries, neutral actors
  • Natural events (e.g., rain, smoke, haze, moving sun, power outages, etc.)
  • Impact:
  • Assumptions made at the beginning of a mission may become invalid
  • The sensing/action loop must be tight enough that environment changes don’t invalidate decisions

  21. Example of Effect of Dynamic Environment

  Possible control code:

  while (forever) do: {
    free = check_for_obstacle_to_goal();
    if (free)
      move_straight_to_goal();
    sleep(a_while);
  }

  [Figure: robot’s current position and direction of motion, with a straight-line path to the goal]

  22. Causes and Effect of Partially Observable Environment
  • Causes of a partially observable environment:
  • Limited-resolution sensors
  • Reflection, occlusion, multi-pathing, absorption
  • Example: glass walls -- laser sensors are tricked
  • Impact:
  • Same actions in the “same” state may result in different outcomes

  23. Sources and Effect of Uncertainty/Noise
  • Sources of sensor noise:
  • Limited-resolution sensors
  • Sensor reflection, multi-pathing, absorption
  • Poor-quality sensing conditions (e.g., low lighting for cameras)
  • Sources of effector noise:
  • Friction: constant or varying (e.g., carpet vs. vinyl vs. tile; clean vs. dirty floor)
  • Slippage (e.g., when turning or on a dusty surface)
  • Varying battery level (drainage during mission)
  • Impact:
  • Sensors are difficult to interpret
  • The same action has different effects when repeated
  • Incomplete information for decision making

  24. Example of Effect of Noise on Robot Control Code: “Exact” Motions vs. Servo Loops
  Two possible control strategies:
  (1) “Exact” motions:
  • Turn right by amount θ
  • Go forward by amount d
  (2) Servo loop:
  • If to the left of the desired trajectory, turn right.
  • If to the right of the desired trajectory, turn left.
  • If in line with the desired trajectory, go straight.
  • If error to the desired trajectory is large, go slow.
  • If error to the desired trajectory is small, go fast.
  [Figure: robot’s current position & orientation, turning by θ and driving distance d to the goal]

  25. Consider effect of noise: “Exact” control method
  • “Exact” method: θ1, d1 are the actual angle and distance traveled (vs. the commanded θ, d). Noise → the robot overshoots the goal and has to turn back toward it (θ2, d2), possibly repeatedly.
  [Figure: commanded turn θ and drive d toward the goal; the actual motion θ1, d1 misses, requiring a corrective θ2, d2]
  Doesn’t give good performance.

  26. Consider effect of noise: Servo method
  • Servo method: small, repeatedly corrected steps toward the goal.
  [Figure: robot’s path under the servo loop, curving onto the desired trajectory from its current position & orientation]
  Much better performance in the presence of noise.

  27. Other Functional Modules We’ll Study
  Focus of classes 11-12: Sensing
  [Same functional-module diagram as slide 6]

  28. Other Functional Modules We’ll Study
  Focus of classes 15-16: Adaptive Behavior
  [Same functional-module diagram as slide 6]

  29. Other Functional Modules We’ll Study
  Focus of classes 17-18: Multi-Robot Systems
  [Same functional-module diagram as slide 6]

  30. Other Functional Modules We’ll Study
  Focus of classes 19-24: Navigation, Localization, Path Planning, Mapping
  [Same functional-module diagram as slide 6]

  31. Your Final Project Will Look at the Application-Specific Task
  [Same functional-module diagram as slide 6]

  32. A Quick Tutorial on Writing Player/Stage Programs
  First, set up an additional environment variable:
  • Be sure to have /usr/local/lib in your LD_LIBRARY_PATH (in .cshrc, or whatever is used by your shell):

  setenv LD_LIBRARY_PATH /usr/local/lib

  or you can have more directories in your path (colon-separated), such as:

  setenv LD_LIBRARY_PATH /usr/local/lib:/usr/local/lib/pkgs/Python-2.4.1/lib

  Second, for compiling: modify the Makefile example posted on the class website (see handouts for today on the Schedule page).
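The `setenv` lines above are csh/tcsh syntax. For bash or zsh users, the equivalent would be something like the following (the paths are the course machines' locations and may differ on your system):

```shell
# Prepend /usr/local/lib, keeping any existing LD_LIBRARY_PATH entries.
export LD_LIBRARY_PATH=/usr/local/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "$LD_LIBRARY_PATH"
```

The `${VAR:+:$VAR}` expansion appends the old value (with a colon separator) only if one was already set, avoiding a stray trailing colon.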

  33. Player/Stage Documentation • Some available online: http://playerstage.sourceforge.net/index.php?src=doc • You’ll especially be interested in the Player client libraries, and the libplayerc++ module • Also, if you like to browse header files for class/method/function definitions, you can find them on our machines here: /pkgs/player-stage-2.0.3/include/player-2.0/libplayerc++

  34. Example Complete Program in Player/Stage (slight variant of example0.cc)

  #include <iostream>
  #include <libplayerc++/playerc++.h>

  int main(int argc, char *argv[])
  {
    using namespace PlayerCc;
    using namespace std;

    PlayerClient robot("localhost");
    SonarProxy sp(&robot, 0);
    Position2dProxy pp(&robot, 0);

    for (;;) {
      double turnrate, speed;

      robot.Read();           // read from the proxies
      cout << sp << endl;     // print out sonars for fun

      // do simple collision avoidance
      if ((sp[0] + sp[1]) < (sp[6] + sp[7]))
        turnrate = dtor(-20); // turn 20 degrees per second
      else
        turnrate = dtor(20);

      if (sp[3] < 0.500)
        speed = 0;
      else
        speed = 0.100;

      pp.SetSpeed(speed, turnrate); // command the motors
    }
  }

  35. Helpful tool: playerv
  • Client GUI that visualizes sensor data from the player server, from the robot’s perspective.
  • Can also be used to teleoperate the robot.
  • Used instead of a player client program (such as laserobstacleavoid).
  • Run something like “player everything.cfg”, then start up playerv in another window.
  • Use the pull-down Devices menu to subscribe to various devices (such as position2d, sonar, laser, etc.)
  [Screenshots: one window showing position, laser, blobfinder, and ptz data; another showing position and sonar]

  36. Explore!!
  • Try out the different programs already provided for you
  • Make some changes and see what happens
  • Pick out some example programs, look up the function definitions, and be sure you understand what they’re doing.

  37. Preview of Next Class
  • History of Intelligent Robotics:
  • Earliest robots to present-day state of the art
  • Evolution of control approaches from hierarchical to behavior-based
