Advanced Examples and Ideas
PowerPoint Slideshow about 'Advanced Examples and Ideas' - jeneva


Three Layer Evolutionary Approach

Local perceptions, such as “bald head” or “long beard”

Encoded behaviors or internal states

Time intervals

Evolve Behaviors

Evolve Motions

Evolve Perceptions

Motions as timed sequences of encoded actions, for instance RFRFLL

Global perceptions, possibly encoded, such as “narrow corridor” or “beautiful princess”

Behaviors such as “go forward until you find a wall, else turn randomly right or left”
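The encoded-motion idea above can be made concrete with a tiny decoder. This is only an illustrative sketch: the grid world, the function name `run_motion`, and the exact meaning of the letters (R = turn right, L = turn left, F = step forward) are assumptions consistent with the "RFRFLL" example, not the course's actual encoding.

```python
def run_motion(motion, pos=(0, 0), heading=0):
    """Execute an encoded motion string on a unit grid.

    heading: 0=N, 1=E, 2=S, 3=W; 'R'/'L' rotate in place, 'F' steps one cell.
    """
    steps = {0: (0, 1), 1: (1, 0), 2: (0, -1), 3: (-1, 0)}
    x, y = pos
    for action in motion:
        if action == 'R':
            heading = (heading + 1) % 4
        elif action == 'L':
            heading = (heading - 1) % 4
        elif action == 'F':
            dx, dy = steps[heading]
            x, y = x + dx, y + dy
    return (x, y), heading
```

For example, `run_motion("RFRFLL")` walks one cell east, then one cell south, and ends facing north again.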

Evolve in hierarchy

  • Together or separately

  • Feedback from model or from real world

  • First evolve motions and encode them.

  • Then evolve behaviors.

  • Finally develop perceptions.

Go to the end of the corridor and then look for food.

If you see a beautiful princess, go to her and bow low.

If you see a dragon, escape.

Evolve in hierarchy

Avoid obstacles

Execute optimal motions

Save energy

Look for energy sources in advance

Execute actions that you enjoy

What if robot likes to play soccer and sees the ball but is low on energy?

Optimizing a motion

Parking a Truck


Find the control

Solving this analytically would be very difficult


Question: How to represent the chromosomes?

Here you see several snapshots of a “movie” about parking a truck: stages of the solution process.

Another example

Learning Obstacle Avoiding

Similar to Braitenberg Vehicle but has 8 sensors


Input and output data are some form of multiple-valued (MV) logic


  • How would you represent chromosomes?

  • Design Crossovers?

Robot can move freely but has to avoid obstacles

This can serve as the lowest level of behaviors in a subsumption or other behavioral architecture for all your robots.


Remember the goal when you create the fitness function

The key to success often lies in the fitness function

Number of collisions vs. time of learning

When you train longer, you decrease the number of collisions
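A minimal sketch of a collision-aware fitness function for this obstacle-avoidance task; the inputs, the function name, and the penalty weight are illustrative assumptions, not the course's actual function.

```python
def fitness(distance_covered, collisions, w_collision=10.0):
    """Reward the distance covered in a trial, penalize every collision.

    w_collision trades off exploration against safety (assumed value).
    """
    return distance_covered - w_collision * collisions
```

Tuning `w_collision` changes what the GA learns: a tiny weight breeds reckless but fast robots, a huge one breeds robots that barely move.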

Evolutionary Methods

  • Optimization problems:

    • Single-objective optimization problems

    • Multi-objective optimization problems


More examples of problems in which we use evolutionary algorithms and similar methods.

  • Search Problems (Path search)

  • Optimal multi-robot coordination

  • Multi-task optimization

  • Optimal motion planning of robot arms (trajectory planning of manipulators)

  • Motion optimization (optimization of controller parameters - morphology in different control schemas)

    • PID (PI)

    • Fuzzy

    • Neural

    • Hybrid (neuro-fuzzy)

  • Path planning and tracking (mobile robots)

  • Vision – computational optimization


What are these “other algorithms”?

  • Evolutionary Algorithms - Related techniques:

    • Ant colony optimization (ACO)

    • Particle swarm optimization

    • Differential evolution

    • Memetic algorithm (MA)

    • Simulated annealing

    • Stochastic optimization

    • Tabu search

    • Reactive search optimization (RSO)

    • Harmony search (HS)

    • Non-Tree Genetic programming (NT GP)

    • Artificial Immune Systems (AIS)

    • Bacteriological Algorithms (BA)

You can try them in your homework 1 if GA or GP is too easy for you.

Using them gives you a higher chance of creating a successful, superior method for a new problem

GA Operators

  • Selection

    • Roulette

    • Tournament

    • Stochastic sampling

    • Rank based selection

    • Boltzmann selection

    • Nonlinear ranking selection

  • Crossover

    • One point

    • Multiple points

  • Mutation

Read in Auxiliary Slides about these methods. Or invent your own operators for your problem.
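The listed operators can be sketched in a few lines. This is a hedged illustration (the function names, the Python language, and the two-parent list representation are assumptions); real GAs layer many refinements on top of these basics.

```python
import random

def tournament_select(pop, fitnesses, k=2, rng=random):
    """Tournament selection: return the fittest of k randomly sampled individuals."""
    idxs = rng.sample(range(len(pop)), k)
    return pop[max(idxs, key=lambda i: fitnesses[i])]

def one_point_crossover(a, b, rng=random):
    """One-point crossover: swap the tails of two equal-length genotypes."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:], b[:cut] + a[cut:]

def mutate(genes, rate, alphabet, rng=random):
    """Replace each gene by a random symbol with probability `rate`."""
    return [rng.choice(alphabet) if rng.random() < rate else g for g in genes]
```

Multiple-point crossover, roulette, rank-based, and Boltzmann selection follow the same pattern with a different sampling rule.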

Your design parameters to be decided

  • Genotype length

    • Fixed length genotype

    • Variable-length genotype

  • Population

    • Fixed population

    • Variable population

    • Species inside population

    • Geometrical separation

Drawbacks of GA

  • Time-consuming when dealing with a large population

  • premature convergence

  • Dealing with multiple-objective problems; remedies include:


  • Niches

  • Islands

  • Pareto approach

  • Others

More examples of using GA in robotics

Trajectory Planning Problems

GA and Trajectory Planning

  • GA techniques for a robot arm to identify the optimal trajectory based on minimum joint torque requirements (P. Garg and M. Kumar, 2002)

  • Path planning method based on a GA while adopting the direct kinematics and the inverse dynamics (Pires and Machado, 2000)

  • Point-to-point trajectory planning of a flexible redundant robot manipulator (FRM) in joint space (S. G. Yue et al., 2002)

  • Point-to-point trajectory planning for a 3-link (redundant) robot arm; the objective function minimizes traveling time and space (Kazem and Mahdi, 2008)

Projects from past years

Optimal path generation of robot manipulators

  • Control Schema

  • Robotic arm – kinematic model

  • Controller type

  • Objective function - optimal path

  • Optimization algorithm (method)

  • GA uses smooth operators and avoids sharp jumps in the parameter values.


  • Adaptive control schema: track the control error function between the outputs of a real system and a mathematical model

  • What do we optimize?

  • Which parameters must be optimized?

  • How many objectives (single-objective or multi-objective)?

  • Collision free? (How to model collision in GA?)


  • Three-joint manipulator

  • A three-joint robotic manipulator system has three inputs and three outputs.

  • The inputs are the torques applied to the joints and the outputs are the velocities of the joints

  • No ripples


Design of robotic controllers

  • For an n-DOF robot we will have n inputs ui, i = 1…n (input ui ↔ joint i)

  • Controller

    • PID (PI)

    • Neural network (multilayer perceptron, recurrent NN, RBF based NN)

    • Fuzzy

    • Neuro-Fuzzy (hybrid)


Use of Neural Networks

  • NN: we must adapt the weights and possibly also the biases

    The chromosome:

  • Adapt the weights

Fuzzy Logic

  • Fuzzy Logic

  • Aggregation of rules

  • Defuzzification

  • Free-of-obstacles workspace (Mucientes et al., 2007)

  • wall-following behavior in a mobile robot

Learning Fuzzy Logic Controllers

  • Learning of fuzzy rule-based controllers

  • Find a rule for the system

    Step 1: evaluate population;

    Step 2: eliminate bad rules and fill up population;

    Step 3: scale the fitness values;

    Step 4: repeat Steps 5 to 9 for NI iterations;

    Step 5: select the individuals of the population;

    Step 6: crossover and mutate the individuals;

    Step 7: evaluate population;

    Step 8: eliminate bad rules and fill up population;

    Step 9: scale the fitness values.

    Step 10: Add the best rule to the final rule set.

    Step 11: Penalize the selected rule.

    Step 12: If the stop conditions are not fulfilled go to Step 1
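The iterative scheme above can be sketched as a toy Python loop. This is a heavily simplified illustration, not the published algorithm: rules are abstract values, the population is refilled with random rules instead of crossover and mutation (Steps 5-6), and the penalty of Step 11 is modeled as a fitness discount.

```python
import random

def learn_rule_set(evaluate, random_rule, n_rules=3, pop_size=20,
                   iters=30, seed=0):
    """Toy iterative rule learning: evolve a population, add its best
    rule to the final rule set (Step 10), penalize that rule (Step 11),
    and repeat until n_rules rules are collected (Step 12)."""
    rng = random.Random(seed)
    penalized, rule_set = set(), []

    def fitness(rule):  # penalized rules score much lower (Step 11)
        f = evaluate(rule)
        return f * 0.1 if rule in penalized else f

    for _ in range(n_rules):
        pop = [random_rule(rng) for _ in range(pop_size)]
        for _ in range(iters):                   # Steps 4-9, simplified:
            pop.sort(key=fitness, reverse=True)  # evaluate + eliminate bad
            pop = (pop[:pop_size // 2]           # rules, refill with random
                   + [random_rule(rng) for _ in range(pop_size // 2)])
        rule_set.append(max(pop, key=fitness))   # Step 10
        penalized.add(rule_set[-1])              # Step 11
    return rule_set
```

With rules modeled as integers 0-9 and `evaluate(r) = r`, the learner collects distinct high-scoring rules, since each selected rule is penalized before the next round.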

Encoding fuzzy controls

  • The chromosome encodes the rules:

  • Sn is constant in this application, but it can also be a variable to be optimized

  • wall-following behavior of the robot

    • the robot is exploring an unknown area

    • moving between two points in a map

  • Requirements

    • maintain a suitable distance from the wall that is being followed

    • to move at a high velocity whenever the layout of the environment permits

    • avoid sharp movements (progressive turns and changes in velocity)

Path-based robot behaviors

  • The requirements are “encoded” in the universes of discourse and precisions of the variables

    • right-hand distance (RD)

    • the distances quotient (DQ), based on left-hand distance

    • Orientation

    • linear velocity of the robot (LV)

    • Linear acceleration

    • Angular velocity

  • Path of the robot (simulated environments)

Fast, reliable, no harm to robot or to environment

  • This is useful for our PSU Guide Robot

    • Do not harm humans

    • Do not harm robot


  • Fixed points case: the desired Cartesian path Pt is given; the problem is to find the set of joint paths P that minimizes the cumulative error between the desired and real paths along the trajectory

    Pk is the kinematic model

  • Free end points case

Find the set of joint paths, then smooth it

Minimize the cumulative error

Weighted Global Fitness

  • fitness function (minimization)

  • Global fitness: Linear function of individual objectives

    fot – excessive driving (sum of all maximum torques); fq – total joint traveling distance of the manipulator; fc – total Cartesian trajectory length; tT – total time consumed for robot motion

  • Penalty function

  • Population initialization (probability distribution)

    • Random uniform

    • Gaussian
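The linear global fitness described above can be sketched as below. The argument order (fot, fq, fc, tT) and the weight values are assumptions; the slide does not fix them.

```python
def global_fitness(objectives, weights):
    """Weighted linear aggregation of individual objectives (minimization):
    F = sum_i w_i * f_i. Entries are assumed to be (fot, fq, fc, tT)."""
    if len(objectives) != len(weights):
        raise ValueError("need exactly one weight per objective")
    return sum(w * f for w, f in zip(weights, objectives))
```

A penalty term for constraint violations (e.g., collisions) can simply be added to the returned value before comparison.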

Example

Drug Delivery Problem

Drug delivery using microrobots (Tao et al., 2005)

  • A GA-based area coverage approach for robot path planning.

  • Drawbacks of most currently available drug delivery methods are that the drug target area, delivery amount, and release speed are hard to control precisely.

  • It is very difficult or impossible to eliminate side effects.

  • Open issues

    • actively control the delivery process

    • Access to appropriate areas that cannot be reached using traditional devices

  • Current Issues

    • On-line path planning (solve unexpected obstacles problem)

    • Optimal path planning (efficiency, path planning)


  • A microcontroller is used to guide the robot movement

  • GA-based approach uses fine grid cell decomposition for area coverage

  • Because the robot will move cell by cell, the start point of chromosomes has to be changed dynamically whenever the robot reaches the center of a cell

  • The end point of a chromosome is not fixed and needs to be determined by applying GA operators.

  • The robot may move from the center of a cell to its 8 adjacent cells along 8 directions.

  • Some obstacles are unknown before drug delivery (the robot discovers these obstacles during the motion)
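A hypothetical encoding for this grid-coverage GA: a chromosome is a sequence of moves, each one of the 8 directions to an adjacent cell, decoded cell by cell from the robot's position. The direction numbering, names, and grid model are illustrative assumptions, not taken from the paper.

```python
# 8 compass directions from a cell: 0=N, 1=NE, 2=E, ... 7=NW (assumed order)
DIRS = {0: (0, 1), 1: (1, 1), 2: (1, 0), 3: (1, -1),
        4: (0, -1), 5: (-1, -1), 6: (-1, 0), 7: (-1, 1)}

def decode_path(start, chromosome):
    """List the cells visited when executing a direction chromosome."""
    path = [start]
    x, y = start
    for d in chromosome:
        dx, dy = DIRS[d]
        x, y = x + dx, y + dy
        path.append((x, y))
    return path
```

Because the robot moves cell by cell, re-decoding from the robot's current cell is exactly the dynamic start-point update the slide describes.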


  • New mutation operators

    • Travel further

    • Delete

    • Reverse delete

    • Stretch

    • Shortcut

  • The algorithm keeps track of the visited nodes

  • Extension to operations research?
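One of the mutation operators listed above might look like this "shortcut" sketch. The semantics are an assumption (the slides only name the operator): if the path visits a cell it has already visited, cut out the loop in between.

```python
def shortcut(path):
    """'Shortcut' mutation sketch (assumed semantics): remove any loop,
    i.e. the segment between two visits to the same cell."""
    out, index = [], {}          # index maps cell -> position in out
    for cell in path:
        if cell in index:
            cut = index[cell]
            for c in out[cut + 1:]:   # forget cells inside the loop
                del index[c]
            out = out[:cut + 1]
        else:
            index[cell] = len(out)
            out.append(cell)
    return out
```

'Delete', 'reverse delete', and 'stretch' would be analogous edits on the same path representation.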

Other applications using evolutionary algorithms

  • Autonomous mobile robot navigation - Path planning using ant colony optimization and fuzzy cost function evaluation (Garcia, et. al, 2009).

  • Legged Robots and Evolutionary Design

  • Optimal path and gait generations (Pratihar, Deb, and Ghosh, 2002) – 0/1 absence or presence of rule

  • six-legged robot

  • Collision-free coordination of multiple robots (Peng and Akella, 2005)

What is better: this or this?

  • We want to optimize both functions f1 and f2

Biobjective means two objectives to reach

  • We have x and y, two objectives here

Pareto solutions for different algorithms

Pareto Front

Pareto front

  • The single-objective optimization problem (SOP) leads to the minimization (or maximization) of one cost function, more or less complex; that is, a single objective is taken into account.

  • Conversely, the multi-objective optimization problem takes into account two or more objectives that have to be minimized (or maximized) simultaneously.

  • Some objectives can be in competition, so a simultaneous minimization is not possible, only a trade-off among them.

    • Sometimes the number of objectives can be high, 16 or more, which makes the multi-objective optimization problem (MOP) an interesting and challenging area of research

Example of Pareto Optimization of two parameters

Optimization of Airplane Wings


  • In most of the design space the red method is better than the blue method.

  • It is good to use many Pareto methods and modify parameters.

  • Two objectives: Maximize lift, and minimize drag

Multi-Pareto

  • We optimize many parameters,

  • We may switch between subsets of them.

  • Subsets can have two elements each.


Three-dimensional Minimization Problem


Pareto Front

General multiobjective optimization problem

  • The multiobjective optimization problem can be generally formulated as the minimization of vector objectives Jt(x) subject to a number of constraints and bounds:
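The formulation referred to above can be written out as follows; this is a reconstruction in standard notation (the symbols $J_i$, $g_k$, and the bound vectors are assumptions where the slide omitted the formula):

$$
\min_{\mathbf{x}} \ \mathbf{J}(\mathbf{x}) = [J_1(\mathbf{x}), \dots, J_m(\mathbf{x})]^T
\quad \text{subject to} \quad
g_k(\mathbf{x}) \le 0,\ k = 1, \dots, p,
\qquad \mathbf{x}_l \le \mathbf{x} \le \mathbf{x}_u
$$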

Pareto-optimal set

  • In the case of competing objectives a trade-off is involved; such a problem usually has no unique solution.

    • Instead, we can admit a set of equally valid, non-dominated alternative solutions, known as the Pareto-optimal set

  • In what follows we assume without loss of generality that all the function objectives must be minimized.

    • If we have a maximization case for fi, we simply minimize the function -fi.

  • For any two points, usually named candidate solutions, V1 and V2, V1 dominates V2 in the Pareto sense (P-dominance) if and only if the following condition holds:
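The P-dominance condition, and the Pareto front it induces, can be sketched directly (a minimal illustration, assuming minimization of all objectives as stated above; the function names are ours):

```python
def dominates(v1, v2):
    """v1 P-dominates v2 (minimization): v1 is no worse in every
    objective and strictly better in at least one."""
    return (all(a <= b for a, b in zip(v1, v2))
            and any(a < b for a, b in zip(v1, v2)))

def pareto_front(points):
    """Return the non-dominated objective vectors among `points`."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

Note that dominance is a partial order: for competing objectives, e.g. (1, 2) and (2, 1), neither point dominates the other, which is exactly why a whole set of trade-off solutions survives.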

The Pareto set

  • The Pareto set is the set of PO (Pareto-optimal) solutions in the design domain, and the Pareto front (PF) is the set of PO solutions in the objective domain.

  • The most popular way of solving the MOP (Multi-Objective Optimization Problem) is to reduce the minimization problem to a scalar form by aggregating the objectives in a weighted sum, with the sum of weights constant:
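The weighted-sum aggregation mentioned above has the standard form (a reconstruction; nonnegative weights summing to one is the usual convention):

$$
F(\mathbf{x}) = \sum_{i=1}^{m} w_i J_i(\mathbf{x}),
\qquad \sum_{i=1}^{m} w_i = 1, \quad w_i \ge 0
$$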

  • The weighted sum method has a serious drawback: it usually fails in the case of a nonconvex PF.

Nice properties

  • GA can provide an elegant solution for the trade-off between minimizing the cost function for each variable and minimizing the total cost (or another variable).

  • Non-convex solutions

  • “Immigrants”: a possible solution for jumping out of local minima.

  • Dealing with many variables (e.g. 16 variables)

Multi-Robots

  • Pareto optimal multi-robot coordination with acceleration constraints (Jung and Ghrist, 2008)

    • collection of robots sharing a common environment

    • each robot constrained to move on a roadmap in its configuration space

    • each robot wishes to travel to a goal while optimizing elapsed time considering vector-valued (Pareto) optima

    • all illegal or collision sets are removed.

Conclusions

  • GA is not a universal panacea for optimization problems.

  • Coding the problem into a genotype is the most important challenge!

  • The best selection schema of individuals for the crossover operator is difficult to choose a priori (tournament selection seems to be the most promising)

  • A number of parameters are determined empirically:

    • Size of population

    • pc and pm, even if values inspired by biology are often given

    • Other parameters in hybrid or more sophisticated GA

Good properties

  • One of the most important elements in the design of a decoder-based evolutionary algorithm is its genotypic representation.

  • The genotype-decoder pair must exhibit efficiency, locality, and heritability to enable effective evolutionary search

  • Locality and heritability:

    • small changes in genotypes should correspond to small changes in the solutions they represent,


    • solutions generated by crossover should combine features of their parents
