
Robot Intelligence



Presentation Transcript


  1. Robot Intelligence Kevin Warwick

  2. Reactive Architectures I: Subsumption
  • Perhaps the best known reactive architecture, developed in the '80s by Rodney Brooks
  • Each behaviour is defined in a layer, which:
    • takes sensory input
    • produces the required robot motor output
  • Each layer has a defined level of competence and hence an associated priority
  • Examples:
    • collision avoidance (low competence, high priority)
    • path follow (higher competence, lower priority)
    • wander aimlessly
    • build map (high competence, low priority)
    • look for changes

  3. Reactive Architectures I: Subsumption
  • Each layer is integrated into a subsumption architecture whereby, for each orthogonal mode, there is only one actual output
    • the position of the robot is orthogonal to (independent of), say, the position of a pan/tilt camera platform
  • A lower competence can always subsume (or suppress) the output from a higher competence
  • The "default behaviour" is always the lowest-competence one (see the arbitration sketch below)
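A minimal sketch of the arbitration idea above, assuming the priority ordering from the previous slide (low competence = high priority). The behaviour names, sensor fields and command format are illustrative, not taken from Brooks's original system.

```python
# Minimal subsumption-style arbiter: the highest-priority layer that
# produces a command suppresses everything below it.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Behaviour:
    name: str
    priority: int                            # lower number = higher priority
    act: Callable[[dict], Optional[dict]]    # returns a motor command, or None

def collision_avoidance(sensors):
    # Highest priority: only fires when an obstacle is close.
    if sensors["range"] < 0.3:
        return {"v": 0.0, "w": 1.0}          # stop and turn
    return None

def path_follow(sensors):
    return {"v": 0.5, "w": -0.8 * sensors["path_error"]}

def wander(sensors):
    return {"v": 0.3, "w": 0.1}              # default: lowest competence

LAYERS = sorted(
    [Behaviour("collision_avoidance", 0, collision_avoidance),
     Behaviour("path_follow", 1, path_follow),
     Behaviour("wander", 2, wander)],
    key=lambda b: b.priority,
)

def arbitrate(sensors):
    """One output per (orthogonal) motor mode: the first layer that
    produces a command subsumes everything below it."""
    for layer in LAYERS:
        command = layer.act(sensors)
        if command is not None:
            return command
    raise RuntimeError("no behaviour produced output")

print(arbitrate({"range": 0.2, "path_error": 0.1}))  # collision avoidance wins
```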

  4. Reactive Architectures II: Motor Schema
  • Individual motor behaviours are defined based on sensory input (much the same as subsumption)
  • The output of each schema is a velocity vector representing direction and speed
  • The difference from subsumption is that a number of schemas may be active at any one time
  • The emergent behaviour is a combination of groups of motor schemas (see the sketch below), so there is an element of co-operation between motor schemas
  • Disadvantages of motor schemas?
    • How can groups of motor schemas be combined, other than by the designer?
    • How can changes in the active group be effected?
  • Arbitration of behaviours can be carried out through sequencing
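A sketch of the combination step: each active schema emits a velocity vector and the emergent command is their weighted vector sum. The particular schemas, gains and sensor fields are assumptions for illustration.

```python
# Motor-schema combination: emergent behaviour as a weighted vector sum.
import math

def move_to_goal(sensors):
    gx, gy = sensors["goal"]
    d = math.hypot(gx, gy)
    return (gx / d, gy / d)                  # unit vector toward the goal

def avoid_obstacle(sensors):
    ox, oy = sensors["obstacle"]
    d = math.hypot(ox, oy)
    return (-ox / d**2, -oy / d**2)          # repulsion grows as object nears

ACTIVE_SCHEMAS = [(move_to_goal, 1.0), (avoid_obstacle, 0.5)]  # (schema, gain)

def emergent_velocity(sensors):
    vx = vy = 0.0
    for schema, gain in ACTIVE_SCHEMAS:
        sx, sy = schema(sensors)
        vx += gain * sx
        vy += gain * sy
    return vx, vy

print(emergent_velocity({"goal": (4.0, 3.0), "obstacle": (1.0, 0.5)}))
```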

  5. Reactive Architectures III: Ego-Behaviour
  • Both subsumption and motor schema architectures rely on each behaviour operating without any feedback on the emergent behaviour of the system
  • Each behaviour is also fixed in terms of its input-to-output mapping
  • An alternative approach employs a strategy for changing the way a behaviour contributes to the emergent behaviour, based on:
    • knowledge of the emergent behaviour (feedback)
    • self-awareness of the behaviour itself
  • This is effected by giving each behaviour an Ego

  6. Ego Behaviour • The Ego itself is here defined through a simple variable gain PD controller where the gains are updated using fuzzy logic to either • strengthen the contribution of the behaviour or • withdraw the behaviour from contributing

  7. Ego-Behaviour Experiments: 1
  • Two behaviours are present:
    • Cs is a strong ego-behaviour and wants to get to –1.5
    • Cw is a weak ego-behaviour and wants to get to +1
  • After 1 second, Cw realises that it is not able to compete and withdraws

  8. Ego-Behaviour Experiments: 2
  • Three behaviours are present:
    • Cs is a strong ego-behaviour and wants to get to –1.5
    • Cm1 is a medium ego-behaviour and wants to get to +1
    • Cm2 is a medium ego-behaviour and wants to get to +2

  9. Ego-Behaviour Experiments: 2 – continued
  • Three behaviours are present
  • After 0.6 seconds, the stronger ego-behaviour is overcome by both medium behaviours acting in co-operation
  • The emergent behaviour swings in favour of Cm2, and Cm1 drops out after 1 second

  10. Ego-Behaviour Experiments: 3
  • Tele-assisted viewing example
  • The "hot spot" is a camera view centred on a tool rack
  • In this scenario the operator moves the slave manipulator towards the tool rack
  • The emergent behaviour of the automated camera view tracks the end of the slave until it enters the hot spot
  • The ego-behaviour associated with fixating the camera on the centre of the hot spot then becomes dominant, stabilising the camera on the centre
  • After the slave has moved away from the hot spot, the camera resumes tracking the slave tip

  11. Evolutionary Robotics
  • Evolutionary robotics falls under the category of artificial life
  • Artificial life is the study of "life as it could be", based on understanding the principles and simulating the mechanisms of real biological life forms
  • Evolutionary robotics, as the name suggests, borrows from our knowledge of the principles of biological evolution to evolve robot controllers, sensors and/or physical morphology from the bottom up

  12. Artificial Evolution
  • Extended genetic algorithms are used to evolve controllers, bodies, sensors and/or actuators
  • Simulation is used extensively, both to evaluate agent behaviours without damage to real robots and to evaluate, in a reasonable amount of time, the vast number of generations that evolution requires
  • Typically, only then is a final behaviour tested on a real robot

  13. How and what to evolve?
  • Highly recurrent, free-form neural networks are usually used to control robot behaviours; their distributed structure makes them well suited to evolution (a minimal update step is sketched below)
  • Typically, a fixed robot body is used
  • However, the genetic description can also define sensor morphology and complete body shape
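A sketch of one update step of such a controller, where every neuron may feed back to every other. The layer sizes, tanh activation and flat-weight genome encoding are illustrative assumptions, not the encoding used in any particular paper.

```python
# One tick of a "free-form" recurrent network decoded from a flat genome.
import numpy as np

N_SENSORS, N_NEURONS, N_MOTORS = 4, 8, 2
GENOME_LEN = N_NEURONS * N_SENSORS + N_NEURONS * N_NEURONS + N_MOTORS * N_NEURONS

def decode(genome):
    """Split a flat genome into input, recurrent, and output weight matrices."""
    w_in = genome[: N_NEURONS * N_SENSORS].reshape(N_NEURONS, N_SENSORS)
    rest = genome[N_NEURONS * N_SENSORS:]
    w_rec = rest[: N_NEURONS * N_NEURONS].reshape(N_NEURONS, N_NEURONS)
    w_out = rest[N_NEURONS * N_NEURONS:].reshape(N_MOTORS, N_NEURONS)
    return w_in, w_rec, w_out

def step(state, sensors, weights):
    """One controller tick: full recurrence over the hidden state."""
    w_in, w_rec, w_out = weights
    state = np.tanh(w_in @ sensors + w_rec @ state)
    return state, w_out @ state              # new state, motor outputs

rng = np.random.default_rng(0)
weights = decode(rng.normal(size=GENOME_LEN))
state = np.zeros(N_NEURONS)
state, motors = step(state, rng.normal(size=N_SENSORS), weights)
print(motors)
```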

  14. How a behaviour is evolved
  • The task we wish to solve has to be defined
  • A suitable simulation is required to test, quantitatively, the ability of agents to solve the given task
  • This quantitative measure, or "fitness", is used by the genetic algorithm to produce successive generations of agents until a suitable level of proficiency has been acquired (a sketch of this loop follows)
  • The proficient behaviour can then be transferred to a real robot
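A bare-bones version of the generational loop just described. The fitness() placeholder stands in for a full simulation run; the population size, mutation scale and truncation selection are illustrative choices, not the "extended" GA from the slides.

```python
# Skeleton genetic-algorithm loop for evolving controller genomes.
import numpy as np

rng = np.random.default_rng(1)
POP, GENS, GENOME_LEN = 50, 100, 112

def fitness(genome):
    # Placeholder: a real system would simulate the robot with this genome
    # and score task performance. Here we just reward genomes close to an
    # arbitrary target vector.
    return -np.sum((genome - 0.5) ** 2)

population = rng.normal(size=(POP, GENOME_LEN))
for gen in range(GENS):
    scores = np.array([fitness(g) for g in population])
    elite = population[np.argsort(scores)[-POP // 5:]]    # keep the top 20%
    # Refill the population with mutated copies of the elite.
    children = elite[rng.integers(len(elite), size=POP - len(elite))]
    children = children + rng.normal(scale=0.1, size=children.shape)
    population = np.vstack([elite, children])

best = population[np.argmax([fitness(g) for g in population])]
print("best fitness:", fitness(best))
```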

  15. A typical evolved “free-form” neural network controller.

  16. Artificial Evolution
  • Complex behaviours and structures can be evolved in simulation
  • Even for simple tasks, evolution can produce surprisingly complex and life-like solutions
  • If a suitable simulation is used, these behaviours and structures are transferable to real-world robots

  17. Robot Sensing – Key Points
  • Cost
  • Weight
  • Reliability
  • Functionality
  • Simplicity
  • Power requirements/weight
  • Computing requirements – on board?
  • Application driven – what is required?

  18. Vision
  • Is this needed?
  • Can be expensive – computationally and financially
  • Can take time
  • Human-like sensing – for a human world?

  19. Machine Vision
  • Image transformation – camera/CCD array
  • Image analysis – filtering, edge detection, line finding – colour, texture? (an edge-detection sketch follows)
  • Image understanding – AI methods, segmentation, blocks world
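A minimal example of the image-analysis stage: edge detection with Sobel kernels over a toy greyscale image. Pure NumPy, no vision library; the image itself is a made-up example with one vertical edge.

```python
# Sobel edge detection: gradient magnitude from two 3x3 kernels.
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d(img, kernel):
    """Naive valid-mode 2-D sliding window (correlation, as usual in vision)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def edge_magnitude(img):
    gx = convolve2d(img, SOBEL_X)
    gy = convolve2d(img, SOBEL_Y)
    return np.hypot(gx, gy)                  # gradient magnitude per pixel

# Toy image: dark left half, bright right half -> a vertical edge.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
print(edge_magnitude(img).round(1))
```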

  20. Range Finding/Triangulation
  • Passive – correspondence problem
  • Active triangulation – spot sensing
  • Time-of-flight ranging – sonar/laser (a worked example follows)
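Time-of-flight ranging reduces to one line of arithmetic: the pulse travels to the target and back, so range = speed × round-trip time / 2. The speeds are standard textbook values; the echo times are made-up examples.

```python
# Time-of-flight range from a round-trip echo time.
SPEED_OF_SOUND = 343.0      # m/s in air at ~20 C (sonar)
SPEED_OF_LIGHT = 3.0e8      # m/s (laser)

def tof_range(round_trip_s, speed):
    return speed * round_trip_s / 2.0        # halve: out and back

print(tof_range(5.8e-3, SPEED_OF_SOUND))     # ~1 m via sonar
print(tof_range(6.7e-9, SPEED_OF_LIGHT))     # ~1 m via laser
```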

  21. Proximity Sensing
  • Mechanical switch
  • Inductive/capacitive sensors – C = εA/d – one plate on the robot, one on the object – a change in area (or gap) produces a change in capacitance (a worked example follows)
  • Magnetic sensors – reed/Hall
  • Optical position – phototransistor, optical interrupter, optical reflector
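A worked example of the parallel-plate relation C = εA/d from the slide: as the sensed object closes the gap d, the capacitance rises. The plate area and gap values are made-up illustrations.

```python
# Capacitive proximity: C = eps_r * eps_0 * A / d.
EPSILON_0 = 8.854e-12       # permittivity of free space, F/m

def capacitance(area_m2, gap_m, eps_r=1.0):
    return eps_r * EPSILON_0 * area_m2 / gap_m

for gap_mm in (10.0, 5.0, 1.0):
    c = capacitance(area_m2=1e-4, gap_m=gap_mm * 1e-3)
    print(f"gap {gap_mm:4.1f} mm -> C = {c * 1e12:.3f} pF")   # C rises as d shrinks
```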

  22. Tactile Sensing
  • Probably not necessary for a typical industrial mobile robot
  • Needed when a robot performs delicate assembly:
    • sense force in joints
    • sense touch
    • sense slip

  23. Robot Intelligence
  • The required intelligence will depend on the sensor/actuator arrangement
  • Intellectual capabilities will depend on sensor/actuator capabilities
  • Sensors, actuators and brain (computer) will all be different to their human/animal versions
  • RI is evolving at techno-rates, not biological rates
  • So where will it be in 2035?
