Robot Action Selection Through Subsymbolic Planning

Results

Experiments were performed in two different environments, with a corresponding task in each, using a Marvin mobile robot (see figure 2). In the first environment the robot's task was to stop at a user-defined distance from a wall, using a behavioural repertoire of two actions (move forwards and move backwards). In the second environment the robot's task was to stop at a specified position and orientation in its environment, using a repertoire of four actions (forwards, backwards, left, and right). Eight ultrasonic range finders were used as network input. The subsymbolic planning mechanism is able to determine task-achieving sequences of actions without using a symbolic representation, instead applying previously acquired knowledge in novel ways. In all experiments Marvin was able to plan its moves to the goal location successfully.

Introduction

The ability to decide which action(s) to perform in order to reach a particular goal is of utmost importance to mobile robots. Planning enables the robot to identify those actions within its repertoire of possible manoeuvres that will lead to a specified goal, possibly combining actions in a completely novel manner and thus establishing new overall capabilities by `assembling' existing skills in new ways. We present the experimental results of a subsymbolic action planning mechanism that exploits the agent's perceptual and behavioural space, relying not on an (externally supplied) symbolic representation, but on an acquired subsymbolic representation of perceptual and behavioural space as the robot experiences it. The robot's representation of perceptual space is learned through interaction with the environment, associated with behavioural space, and subsequently used to determine sequences of actions that will take the robot from its starting position to an externally specified goal.
Method

The subsymbolic action planning mechanism has two basic operational phases:

Exploration Phase
The robot tries out actions from its behavioural repertoire and associates each performed action with the resulting change in its sensor perception. To do so the robot first learns to cluster its sensor perception space using a self-organising feature map (SOFM), and subsequently places action links between pairs of perceptual clusters (perceptual pre-conditions and post-conditions).

Application Phase
We initially place the robot at a goal location in its environment, where it perceives and stores its goal sensor perception. Subsequently we move the robot to an arbitrary location in the environment (the start position), where it perceives its start sensor perception. The task for the robot is to plan how to move to the goal perception. After sampling the perception at both the goal and start locations, the mechanism feeds these perceptions to the subsymbolic planning mechanism, establishing the start and goal clusters in the perceptual space of the SOFM. Reaction-diffusion dynamics are then used to find a path from the start perceptual cluster to the goal perceptual cluster, exploiting the existing action links between perceptual clusters. The first action of the shortest plan is executed, the sensory perception is sampled at the resulting location, and another planning step is performed. This process is repeated until the robot's perception falls into the same cluster as the goal sensor perception.

Transferring to a Different Robot

Transferring the subsymbolic action planning mechanism to a Magellan Pro robot did not require any changes other than to the calls to the robot's sensors and motors. The Magellan Pro is equipped with 16 ultrasonic sonars and 16 infra-red range sensors, all of which were used. The different number of sensors, their different technical characteristics and the different behaviour of the actuators did not necessitate changes to the mechanism.
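The two phases above can be sketched in code. This is a minimal illustration, not the authors' implementation: the class and method names are hypothetical, the SOFM is simplified to nearest-centroid quantisation over fixed unit weight vectors, and the reaction-diffusion dynamics are approximated by a breadth-first wavefront spreading from the goal cluster over the learned action links.

```python
import numpy as np
from collections import deque

class SubsymbolicPlanner:
    """Sketch of the two-layer mechanism: perceptual clusters plus
    perception-action-perception links (all names hypothetical)."""

    def __init__(self, cluster_centres):
        # Stand-in for trained SOFM unit weight vectors
        self.centres = np.asarray(cluster_centres, dtype=float)
        # Exploration phase fills this: (pre_cluster, action) -> post_cluster
        self.links = {}

    def cluster(self, perception):
        # Best-matching unit: nearest cluster centre to the sensor vector
        d = np.linalg.norm(self.centres - np.asarray(perception, dtype=float), axis=1)
        return int(np.argmin(d))

    def record(self, pre_perception, action, post_perception):
        # Exploration phase: link the perceptual pre- and post-conditions
        self.links[(self.cluster(pre_perception), action)] = self.cluster(post_perception)

    def first_action(self, start_perception, goal_perception):
        # Application phase: one planning step, returning the first action
        # of the shortest plan (None if already at the goal or no plan exists)
        start, goal = self.cluster(start_perception), self.cluster(goal_perception)
        if start == goal:
            return None
        # Invert the links so the wavefront can spread backwards from the goal
        back = {}
        for (pre, action), post in self.links.items():
            back.setdefault(post, []).append(pre)
        dist = {goal: 0}
        frontier = deque([goal])
        while frontier:
            c = frontier.popleft()
            for pre in back.get(c, []):
                if pre not in dist:
                    dist[pre] = dist[c] + 1
                    frontier.append(pre)
        # Choose the action whose post-condition cluster is closest to the goal
        best = None
        for (pre, action), post in self.links.items():
            if pre == start and post in dist:
                if best is None or dist[post] < best[0]:
                    best = (dist[post], action)
        return None if best is None else best[1]
```

In use, the robot would repeatedly call `first_action` with its current perception, execute the returned action, resample its sensors, and replan, exactly mirroring the repeat-until-goal-cluster loop described above.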
This makes the subsymbolic action planning mechanism robust in real-world applications, where sensors and motors do not have identical, predictable characteristics.

Conclusion

Using the presented subsymbolic action planning mechanism, the robot was able to plan motions, applying previously acquired knowledge in novel ways. The mechanism does not use metric, sensor-specific information at all, but merely clustered, abstracted sensory perceptions. This means that one can connect a wide range of different sensors and leave the robot to learn how to plan with the new sensor apparatus. The planning mechanism proved robust with respect to faulty sensors. Likewise, structural asymmetries of the robot were overcome by the learning and clustering process.

Aim

Previous work in planning identifies the elements of behavioural space (the actions) and the permissible operators applicable to them manually, and uses standard search techniques to find paths through the search space, as demonstrated for instance in STRIPS. These approaches have several shortcomings:
- Because of the manual identification of actions, only part of the available behavioural space may be covered.
- Likewise, only part of the available perceptual space may be covered.
- Actual perceptions and actions may differ from those defined by the human supervisor, due to noise and variation in sensory perception, leading to brittle and unreliable performance.

The aim of the presented experimental results is to investigate the feasibility of a subsymbolic planning mechanism for action selection that is based on the robot's perceptual and behavioural space, as the robot learns it by interacting with its environment.

Further Information

John Pissokas and Ulrich Nehmzow, Robot Action Selection through Subsymbolic Planning, Department of Computer Science, University of Essex, Technical Report CSM-374, October 2002.
John Pissokas, Ulrich Nehmzow
Department of Computer Science, University of Essex, Wivenhoe Park, CO4 3SQ, UK

Figure 1: The architecture of the subsymbolic action planning mechanism. The two layers of the mechanism are depicted: the first layer, a SOFM, performs the perception clustering; the second layer represents perception-action-perception links.
Figure 2: The Marvin robot has eight ultrasonic sensors, placed as indicated in the figures.
Figure 3: A typical task in the two-dimensional experiment: the robot is placed at the goal location first (bottom figure), then lifted to the start location (top figure) and instructed to determine a sequence of actions that will take it to the goal.
Figure 4: The sequence of actions (shown in the perceptual space of the SOFM) that the robot performed to solve the task depicted in figure 3. The grid represents the perceptual space, with the intersections being the centres of the perceptual clusters.
Figure 5: The Magellan Pro robot has sixteen ultrasonic sensors, sixteen infrared sensors and a colour camera (not used here).