Mobile Intelligent Systems 2004

Course Responsibility:

Ola Bengtsson


Course Examination

  • 5 Exercises – Matlab exercises – Written report (groups of two students - individual reports)

  • 1 Written exam – At the end of the period


Course information

  • Lectures

  • Exercises (w. 15, 17-20) – will be available from the course web page

  • Paper discussions (w. 14-15, 17-21) – questions will be available from the course web page => Course literature

  • Web page: http://www.hh.se/staff/boola/MobIntSys2004/



Autonomous robots of today

  • Stationary robots
    - Assembly
    - Sorting
    - Packing

  • Mobile robots
    - Service robots
      - Guarding
      - Cleaning
      - Hazardous env.
    - Humanoid robots

Trilobite ZA1, Image from: http://www.electrolux.se


A typical mobile system

[Figure: block diagram of a typical mobile system – lasers (bearing, and bearing + range), micro controllers / control systems, emergency stop, panels, encoders, motors, and the kinematic model]


Mobile Int. Systems – Basic questions

  • 1. Where am I? (Self localization)

  • 2. Where am I going? (Navigation)

  • 3. How do I get there? (Decision making / path planning)

  • 4. How do I interact with the environment? (Obstacle avoidance)

  • 5. How do I interact with the user? (Human–Machine interface)



Course contents

  • Basic statistics

  • Kinematic models, Error predictions

  • Sensors for mobile robots

  • Different methods for sensor fusion: the Kalman filter, Bayesian methods and others

  • Research issues in mobile robotics


Basic statistics – Statistical representation – Stochastic variable

Battery lasting time: X = 5 hours ± 1 hour

X can have many different values

Continuous – The variable can have any value within the bounds

Discrete – The variable can have specific (discrete) values


Basic statistics – Statistical representation – Stochastic variable

Another way of describing the stochastic variable, i.e. by another form of bounds

Probability distribution

In 68%: x11 < X < x12

In 95%: x21 < X < x22

In 99%: x31 < X < x32

In 100%: −∞ < X < ∞

The value to expect is the mean value => Expected value

How much X varies from its expected value => Variance


Expected value and Variance

The standard deviation σX is the square root of the variance.
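The defining equations were images in the original slides; a standard reconstruction, for a continuous stochastic variable X with density f_X(x), is:

    E[X] = m_X = \int_{-\infty}^{\infty} x \, f_X(x) \, dx

    V[X] = \sigma_X^2 = E\big[(X - m_X)^2\big] = \int_{-\infty}^{\infty} (x - m_X)^2 \, f_X(x) \, dx

    \sigma_X = \sqrt{V[X]}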


Gaussian (Normal) distribution

By far the most widely used probability distribution, because of its nice statistical and mathematical properties

Normal distribution:
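The density itself appeared as an image in the original; the standard univariate form, with mean m_X and standard deviation \sigma, is:

    f_X(x) = \frac{1}{\sigma \sqrt{2\pi}} \exp\!\left( -\frac{(x - m_X)^2}{2\sigma^2} \right)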

Within ±1σ of the mean: ~68.3%

Within ±2σ: ~95%

Within ±3σ: ~99%

etc.

What does it mean if a specification says that a sensor measures a distance [mm] and has an error that is normally distributed with zero mean and σ = 100 mm? (It means that roughly 68% of the readings will lie within ±100 mm of the true distance.)



Linear combinations (1)

X1 ~ N(m1, σ1)

X2 ~ N(m2, σ2)

Y = X1 + X2 ~ N(m1 + m2, sqrt(σ1² + σ2²))

This property, that Y remains Gaussian when the stochastic variables are combined linearly, is one of the great properties of the Gaussian distribution!
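A minimal numerical check of this property (a Python/NumPy sketch, not part of the original slides; the parameter values are made-up examples):

    import numpy as np

    rng = np.random.default_rng(0)
    m1, s1 = 2.0, 0.5   # assumed example mean/std for X1
    m2, s2 = 3.0, 1.2   # assumed example mean/std for X2

    x1 = rng.normal(m1, s1, 1_000_000)
    x2 = rng.normal(m2, s2, 1_000_000)
    y = x1 + x2

    # The sample mean/std of Y should match m1 + m2 and sqrt(s1^2 + s2^2)
    print(y.mean(), m1 + m2)          # ~5.0 vs 5.0
    print(y.std(), np.hypot(s1, s2))  # ~1.3 vs 1.3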


Linear combinations (2)

We measure a distance with a device that has normally distributed errors.

Do we gain anything by making many measurements and using the average value instead?

What will the expected value of Y be?

What will the variance (and standard deviation) of Y be?

If you are using a sensor that gives a large error, how would you best use it?
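The usual answers to these questions (the equations were images in the slides): for the average Y = (1/N) Σ Xi of N independent measurements Xi ~ N(m, σ),

    E[Y] = m, \qquad V[Y] = \frac{\sigma^2}{N}, \qquad \sigma_Y = \frac{\sigma}{\sqrt{N}}

So a sensor with a large error is best used by repeating the measurement and averaging: the standard deviation shrinks by a factor \sqrt{N}.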


Linear combinations (3)

With d and α un-correlated => V[d, α] = 0 (Actually the co-variance, which is defined later)

d̄i is the mean value and the error Δdi ~ N(0, σd)

ᾱi is the mean value and the error Δαi ~ N(0, σα)


Linear combinations (4)

D = {the total distance} is calculated as before, since it is simply the sum of all d's

The expected value and the variance become:
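A standard reconstruction of the lost equations, assuming N independent increments di with means d̄i and a common error variance σd²:

    D = \sum_{i=1}^{N} d_i, \qquad E[D] = \sum_{i=1}^{N} \bar{d}_i, \qquad V[D] = N \, \sigma_d^2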


Linear combinations (5)

Θ = {the heading angle} is calculated as before, since it is simply the sum of all α's, i.e. the sum of all changes in heading

The expected value and the variance become:
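Analogously (again a reconstruction of equations that were images), with N independent heading increments αi:

    \Theta = \sum_{i=1}^{N} \alpha_i, \qquad E[\Theta] = \sum_{i=1}^{N} \bar{\alpha}_i, \qquad V[\Theta] = N \, \sigma_\alpha^2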

What if we want to predict X (the distance parallel to the ground) or Y (the height above the ground) from our measured d's and α's?


Non-linear combinations (1)

X(N) is the previous value of X plus the latest movement (in the X direction)

The estimate of X(N) becomes:

This equation is non-linear as it contains the term:

and for X(N) to become Gaussian distributed, this equation must be replaced with a linear approximation around the mean values. To do this we can use a first-order Taylor expansion. By this approximation we also assume that the error is rather small!

With perfectly known ΘN−1 and αN the equation would have been linear!
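The update equation itself was an image; a plausible reconstruction, consistent with the partial derivatives used on the following slides (the exact form in the original may differ), is:

    X_N = X_{N-1} + d_N \cos(\Theta_{N-1} + \alpha_N)

The cosine of the uncertain heading is the non-linear term referred to above.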


Non-linear combinations (2)

Use a first-order Taylor expansion and linearize X(N) around the mean values.

This equation is linear as all error terms are multiplied by constants and we can calculate the expected value and the variance as we did before.
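A sketch of the linearization, assuming the reconstructed model above, with all partial derivatives evaluated at the mean values:

    X_N \approx \bar{X}_N + \frac{\partial X_N}{\partial X_{N-1}} \Delta X_{N-1} + \frac{\partial X_N}{\partial d_N} \Delta d_N + \frac{\partial X_N}{\partial \Theta_{N-1}} \Delta \Theta_{N-1}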


Non-linear combinations (3)

The variance becomes (calculated exactly as before):

Two really important things should be noticed: first, the linearization only affects the calculation of the variance; second (which is even more important), the above equation consists of the partial derivatives of the estimate with respect to our uncertain parameters, squared and multiplied by their variances!
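In general form, for a function f of uncorrelated uncertain parameters X1, …, Xn (this is the standard statement of the law named on the next slide; derivatives are evaluated at the mean values):

    V[f(X_1, \dots, X_n)] \approx \sum_{i=1}^{n} \left( \frac{\partial f}{\partial X_i} \right)^{2} V[X_i]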


Non-linear combinations (4)

This result is very good => an easy way of calculating the variance => the law of error propagation

The partial derivatives of the estimate become:
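Under the reconstructed update X_N = X_{N-1} + d_N cos(Θ_{N-1} + α_N) (an assumption, as the original equation was an image), the partial derivatives are:

    \frac{\partial X_N}{\partial X_{N-1}} = 1, \qquad \frac{\partial X_N}{\partial d_N} = \cos(\Theta_{N-1} + \alpha_N), \qquad \frac{\partial X_N}{\partial \Theta_{N-1}} = -d_N \sin(\Theta_{N-1} + \alpha_N)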


Non-linear combinations (5)

The plot shows the variance of X for time steps 1, …, 20; as can be seen, the variance (and the standard deviation) is constantly increasing.

Parameters used: σd = 1/10, σα = 5/360


Multidimensional Gaussian distributions (1)

The Gaussian distribution can easily be extended to several dimensions by replacing the variance (σ²) with a co-variance matrix (CVM) and the scalars (x and mX) with column vectors.

The CVM describes (consists of):

1) the variances of the individual dimensions => diagonal elements

2) the co-variances between the different dimensions => off-diagonal elements

Note: the CVM is symmetric and positive definite.
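The n-dimensional density (shown as an image in the original) has the standard form, with mean vector m_X and co-variance matrix C:

    f_X(x) = \frac{1}{(2\pi)^{n/2} \, |C|^{1/2}} \exp\!\left( -\frac{1}{2} (x - m_X)^{T} C^{-1} (x - m_X) \right)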


MGD (2)

Eigenvalues => the variances along the principal axes (their square roots give the standard deviations)

Eigenvectors => rotation of the ellipses
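A minimal Python/NumPy sketch (not from the slides; the co-variance matrix is an assumed example) extracting the ellipse parameters:

    import numpy as np

    # Assumed example co-variance matrix
    C = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

    eigvals, eigvecs = np.linalg.eigh(C)      # C is symmetric => eigh

    semi_axes = np.sqrt(eigvals)              # std devs along principal axes
    major = eigvecs[:, np.argmax(eigvals)]    # direction of the major axis
    angle = np.arctan2(major[1], major[0])    # rotation of the 1-sigma ellipse

    print("semi-axes (1-sigma):", semi_axes)
    print("rotation [rad]:", angle)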


MGD (3)

The co-variance between two stochastic variables is calculated as:

Which for a discrete variable becomes:

And for a continuous variable becomes:
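Standard reconstructions of the three equations (images in the original):

    C[X, Y] = E\big[ (X - m_X)(Y - m_Y) \big]

    C[X, Y] = \sum_{i} (x_i - m_X)(y_i - m_Y) \, p(x_i, y_i)

    C[X, Y] = \iint (x - m_X)(y - m_Y) \, f_{XY}(x, y) \, dx \, dy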


MGD (4) - Non-linear combinations

The state variables (x, y, Θ) at time k+1 become:
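The transition equations were an image; a plausible reconstruction using the same odometry model as before, with d(k) the measured distance increment and α(k) the measured heading change (the exact form in the original may differ):

    x(k+1) = x(k) + d(k) \cos\!\big( \Theta(k) + \alpha(k) \big)
    y(k+1) = y(k) + d(k) \sin\!\big( \Theta(k) + \alpha(k) \big)
    \Theta(k+1) = \Theta(k) + \alpha(k)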


MGD (5) - Non-linear combinations

We know that to calculate the variance (or co-variance) at time step k+1 we must linearize Z(k+1), e.g. by a Taylor expansion, but we also know that this is done by the law of error propagation, which for matrices becomes:
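The standard matrix form of the law (the equation itself was an image), with C_X(k) the state co-variance and C_U(k) the measurement co-variance:

    C_Z(k+1) = f_X \, C_X(k) \, f_X^{T} + f_U \, C_U(k) \, f_U^{T}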

With fX and fU are the Jacobian matrices (w.r.t. our uncertain variables) of the state transition matrix.


MGD (6) - Non-linear combinations

The uncertainty ellipses for X and Y (for time steps 1 … 20) are shown in the figure.

