Biologically Inspired Computing: Introduction


This is lecture one of `Biologically Inspired Computing'.

Contents: course structure, motivation for BIC, BIC vs classical computing, overview of BIC techniques.

Go to my home page, www.macs.hw.ac.uk/~dwcorne/, find my Teaching Materials page, and go on from there.

Course Contents

Course Delivery

Lecturers: DC, NT, PF

- All the slides (all online)
- A few additional papers and notes provided online

Exam + Coursework (whole module):

- Exam: 50%
- DC coursework: 35% (programming/experiments assignment 25%, question sheet 10%)
- PF coursework: 15%

- DC = David Corne, will generally lecture about bio-inspired methods for optimisation, with a focus on evolutionary computation (aka genetic algorithms) – broadly this is about how certain aspects of nature (evolution, swarm behaviour) lead to very effective optimisation and design methods.
- PF = Pier Frisco, will generally lecture about molecular computing – how computation is done within biological cells – and how that can be exploited, and how it inspires new ideas in computer science.
- NT = Nick Taylor, will generally lecture about neural computation – this is perhaps the most widely exploited bio-inspired technique, which underpins how we can build machines that learn from examples.


Marks (BSc)

The quiz sheets add up to 25 questions, each worth 0.4 marks. The programming assignment is worth 10 marks.

The MSc students will have some additional reading materials; the web site will clearly indicate this. The exams will be different. The MSc coursework will be based on the BSc coursework, but a bit harder, with more to do.

Lecture 1:

- Classical computation vs biological computation
- Motivation for biologically inspired computation
- Overview of several biologically inspired algorithms

By the end you should have:

- An understanding of what `classical computing' is, and what kinds of tasks it is naturally suited for
- An understanding of what classical computing is not good at
- An appreciation of how computation and problem solving are manifest in biological systems
- An appreciation of the fact that many examples of computations done by biological systems are not yet matched by what we can do with computers
- An understanding of the motivation (consequent on the above) for studying how computation is done in nature
- A first basic knowledge of the main currently and successfully used BIC methods

- The fridge story
- How do you tell the difference between a dog and a cat?
- How do you tell the difference between a male and a female face?
- How do you design a perfect flying machine?
- How would we design the software for a robot that could make a cup of tea in your kitchen?
- What happens if you:
  - Cut off a salamander's tail?
  - Cut off a section of a CPU?

Classical computing is good at:

Number-crunching

Thought-support (glorified pen-and-paper)

Rule-based reasoning

Constant repetition of well-defined actions.

Classical computing is bad at:

Pattern recognition

Robustness to damage

Dealing with vague and incomplete information

Adapting and improving based on experience

- Automatically locate a small outburst of violent behaviour in a football crowd
- Classify a plant species from a photograph of a leaf.
- Design robust railway timetables
- Make a cup of tea?

These two things tend to come up a lot when we think of what we would like to be able to do with software, but usually can't do. But these are things that seem to be done very well indeed in biology. So it seems like a good idea to study how these things are done in biology – i.e. (usually) how computation is done by biological machines.

Pattern recognition is often called classification. Formally, a classification problem is like this:

We have a set of things, S (e.g. images, videos, smells, vectors, …). We have n possible classes, c1, c2, …, cn, and we know that everything in S should be labelled with precisely one of these classes.

In computational terms, the problem is: can we design a computational process that takes a thing s (from S) as its input, and always outputs the correct class label for s?
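As a minimal sketch of that formal definition, here is a classifier as a function from things to labels. The nearest-centroid rule, the toy "size, fluffiness" features and the cat/dog data are all invented for illustration; they stand in for whatever learned process actually does the labelling.

```python
def make_classifier(examples):
    """examples: dict mapping class label -> list of feature vectors."""
    centroids = {}
    for label, vectors in examples.items():
        n = len(vectors)
        # Average the example vectors of each class into one centroid.
        centroids[label] = [sum(v[i] for v in vectors) / n
                            for i in range(len(vectors[0]))]

    def classify(s):
        # Output the one label whose centroid is closest to s.
        def dist(c):
            return sum((a - b) ** 2 for a, b in zip(s, c))
        return min(centroids, key=lambda label: dist(centroids[label]))

    return classify

# Toy data: 2-D (size, fluffiness) vectors for two classes.
clf = make_classifier({"cat": [[3, 8], [4, 9]], "dog": [[8, 5], [9, 4]]})
print(clf([3.5, 8.5]))  # a small fluffy thing -> "cat"
```

The hard part of real classification is, of course, getting from a raw image or sound to features for which a simple rule like this works at all.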

[The slides here show nine pictured example problems, numbered 1–9, referred to below.]

- The idea of these examples is to:
- Remind you that pattern recognition is something you do easily, and all the time, and you (probably) do it much better than we can do with classical computation. (e.g. 1, 2, 3, 5)
- Remind (or inform) you that such complex pattern recognition problems are not yet done well by software (e.g. 1, 2, 3, 5)
- Indicate that there are some very important problems that we would like to solve with software (9, 8, 6, 2, 7 are obvious, but of course we would like to do all nine and much more), which are classification problems, and note that these are just as hard as examples 1, 2, 3, 5.
- So, hopefully we can learn how brains do 1, 2, 5 etc …, so that we can build machines that find land mines, tell fake from genuine signatures, diagnose disease, and so on …

The business end of the brain is made of lots of neurons, joined in networks. Our own computations are performed in/by this network.

The brain is a complex tangle of neurons, connected by synapses. When neurons are active, they send signals to others. A neuron with lots of `strong' active inputs will become active. And, when connected neurons are active at the same time, the link between them gets stronger.

So, suppose these neurons happen to be active when you see a fluffy animal with big eyes, small ears and a pointed face …

… and suppose your mother then says “Cat”, which excites this additional neuron. Links will then strengthen between the active neurons. So when you see a similar animal again, this neuron will probably automatically be activated, helping you classify it.

A slightly different group of neurons will respond to dogs, and sometimes both the “cat” and “dog” groups will be active, but one will be more active than the other …
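The "fire together, wire together" idea above can be sketched in a few lines. The unit labels, the learning rate and the single-presentation setup are all invented for illustration; this is a cartoon of Hebbian strengthening, not a model of any real brain circuit.

```python
def hebbian_update(weights, activity, rate=0.1):
    """Strengthen the link between every pair of co-active units."""
    n = len(activity)
    for i in range(n):
        for j in range(n):
            if i != j:
                # Link grows only when both ends are active together.
                weights[i][j] += rate * activity[i] * activity[j]
    return weights

# Four "feature" units (fluffy, big eyes, small ears, pointed face)
# plus a fifth "cat" unit, excited by mother saying "Cat".
n = 5
weights = [[0.0] * n for _ in range(n)]
seeing_cat = [1, 1, 1, 1, 1]          # all five active at the same time
weights = hebbian_update(weights, seeing_cat)
print(weights[0][4])  # feature-to-"cat" link has strengthened: 0.1
```

After many such presentations, activating the feature units alone would already push strong signals into the "cat" unit, which is the classification story told above.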

What happens if we damage a single neuron (remember, in reality there will be thousands involved in simple classification-style computations)? Compare this with damaging a line of code.

In classical computing we provide rules; but biology seems to learn gradually from examples.

Which is the best?

We have 3 items (item 1: 20 kg; item 2: 75 kg; item 3: 60 kg). Suppose we want to find the subset of items with total weight closest to 100 kg.

The eight possible subsets, written as bit-strings (one bit per item), are:

110, 000, 101, 100, 111, 001, 011, 010

Well done, you just searched the space of possible subsets. You also found the optimal one. If the above set of subsets is called S, and the subsets themselves are s1, s2, s3, etc., you just optimised the function “closest_to_100kg(s)”; i.e. you found the s which minimises the function |weight(s) − 100|.

In general, optimisation means that you are trying to find the best solution you can (usually in a short time) to a given problem.
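The mental search you just did can be written as a brute-force loop over the same eight bit-strings. This exhaustive approach is only a baseline: it is fine for 2^3 = 8 subsets, but hopeless when S has something like 10^30 members.

```python
from itertools import product

item_weights = [20, 75, 60]   # kg, as in the example above
target = 100

best, best_err = None, float("inf")
# Enumerate every subset as a bit-string (one bit per item).
for bits in product([0, 1], repeat=len(item_weights)):
    total = sum(w for w, b in zip(item_weights, bits) if b)
    err = abs(total - target)   # the function we are minimising
    if err < best_err:
        best, best_err = bits, err

print(best, best_err)  # (1, 1, 0): items 1 and 2, total 95 kg, error 5
```

The rest of the course is largely about what to do when this loop cannot possibly finish.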

We always have a set S of all possible solutions: S = {s1, s2, s3, …}.

S may be small (as just seen). S may be very, very, very, very large (e.g. all possible timetables for a 500-exam/3-week diet) – in fact something like 10^30 is typical for real problems. S may be infinitely large – e.g. all real numbers.

We wish to design something, and we want the best possible (or at least a very good) design. The set S is the set of all possible designs. It is always much too large to search through one by one; nevertheless, we want to find good examples in S.

In nature, this problem seems to be solved wonderfully well, again and again and again, by evolution. Nature has designed millions of extremely complex machines, each almost ideal for its tasks (assuming an environment that doesn't go haywire), using evolution as the only mechanism. Clearly, this is worth trying for solving problems in science and industry.

Evolutionary algorithms:

Use nature's evolution mechanism to evolve solutions to all kinds of problems. E.g. to find a very aerodynamic wing design, we essentially simulate the evolution of a population of wing designs. Good designs stay in the population and breed; poor designs die out. EAs are highly successful and come in many variants, and there is a lot to learn about how to apply them well to new problems. We will do quite a lot on EAs. EAs are all about optimisation; however, classification is also an optimisation problem, so EAs work there too …
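A minimal sketch of the breed-and-die-out loop just described, not any specific EA from the course: bit-strings evolve towards maximising the number of 1s (the "OneMax" toy problem), standing in for a real design such as a wing shape. Population size, mutation rate and generation count are arbitrary choices.

```python
import random

random.seed(0)
LENGTH, POP, GENS, MUT = 20, 10, 60, 0.05

def fitness(ind):
    return sum(ind)                      # count of 1-bits: higher is better

def mutate(ind):
    # Flip each bit with small probability MUT.
    return [1 - b if random.random() < MUT else b for b in ind]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)    # one-point crossover
    return a[:cut] + b[cut:]

# Random initial population of candidate "designs".
pop = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:POP // 2]             # good designs stay and breed
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP - len(parents))]
    pop = parents + children             # poor designs die out

best = max(pop, key=fitness)
print(fitness(best))
```

Replace `fitness` with, say, a wing-drag simulator and the same loop becomes a design tool; that flexibility is a large part of why EAs are so widely used.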

[Pictured: a genetically optimized three-dimensional truss with improved frequency response, and an EA-optimized concert-hall design, which improves on human designs in terms of sound quality averaged over all listening points.]

Swarm Intelligence

How do swarms of birds, fish, etc. manage to move so well as a unit? How do ants manage to find the best sources of food in their environment? Answers to these questions have led to some very powerful new optimisation methods that are different from EAs. These include ant colony optimisation and particle swarm optimisation.

Also, only by studying how real swarms work are we able to simulate realistic swarming behaviour (e.g. as done in Jurassic Park, Finding Nemo, etc.).
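To make one of those methods concrete, here is a bare-bones particle swarm optimisation sketch minimising f(x) = (x − 3)^2 in one dimension. The inertia and attraction constants are conventional textbook-style values, chosen for illustration rather than tuned.

```python
import random

random.seed(1)

def f(x):
    return (x - 3) ** 2          # the function being minimised

N, STEPS = 15, 100
W, C1, C2 = 0.7, 1.5, 1.5        # inertia, personal pull, social pull

xs = [random.uniform(-10, 10) for _ in range(N)]   # particle positions
vs = [0.0] * N                                     # particle velocities
pbest = xs[:]                     # each particle's best position so far
gbest = min(xs, key=f)            # the swarm's best position so far

for _ in range(STEPS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        # Each particle is pulled towards its own best and the swarm's best.
        vs[i] = (W * vs[i]
                 + C1 * r1 * (pbest[i] - xs[i])
                 + C2 * r2 * (gbest - xs[i]))
        xs[i] += vs[i]
        if f(xs[i]) < f(pbest[i]):
            pbest[i] = xs[i]
    gbest = min(pbest, key=f)

print(round(gbest, 3))  # very close to the true minimum at x = 3
```

The same update rule works unchanged in many dimensions, which is where PSO earns its keep.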

Artificial Life and Cellular Automata

This is a research area that tries to learn what the fundamental computational structures and processes are that are necessary for the things that seem to go hand-in-hand with life – for example, growth and reproduction. One of the fruits of ALife is a family of simple rule-based systems called L-systems, which can be used to produce very lifelike images of plants and are used in computer graphics. Meanwhile, Cellular Automata (CA) are very simple computational systems that produce very complex behaviour, including `lifelike' reproduction. CAs, as we will see, are also very useful for explaining/simulating biological pattern generation and other behaviours.
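An L-system is almost nothing but parallel string rewriting. The example below is the standard textbook "algae" L-system (not from these slides): plant-drawing L-systems work the same way, just with a bigger alphabet whose symbols are interpreted as turtle-graphics drawing commands.

```python
# Rewrite rules: every A becomes AB, every B becomes A,
# applied to all symbols in parallel at each step.
rules = {"A": "AB", "B": "A"}

def step(s):
    return "".join(rules.get(ch, ch) for ch in s)

s = "A"
for _ in range(5):
    s = step(s)
print(s)  # ABAABABAABAAB -- string lengths follow the Fibonacci numbers
```

Five rewriting steps already give visible self-similar structure, which is why such tiny rule sets can generate convincing branching plants.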

Neural Computing

Pattern recognition using neural networks is the most widely used form of BIC in industry and science. We will learn about the most common and successful types of neural network.
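As a first taste, here is the perceptron, the simplest neural-network learning rule (the course will cover more capable network types). It is a sketch with invented data: it learns the logical AND function from examples, rather than from hand-written rules.

```python
# Training examples: (inputs, target output) for logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b, rate = [0.0, 0.0], 0.0, 0.1     # weights, bias, learning rate

def predict(x):
    # Fire (output 1) if the weighted input exceeds the threshold.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

for _ in range(20):                   # a few passes over the examples
    for x, target in data:
        error = target - predict(x)
        # Nudge weights towards reducing the error on this example.
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        b += rate * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1]
```

The "learn gradually from examples" contrast with rule-based classical computing, raised earlier, is exactly what this loop does.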

This is Stanley, winner of the DARPA Grand Challenge – a great example of bio-inspired computing winning over all other entries, which were largely `classical'.

Other BIC techniques

There are many other BIC areas under research which are not yet as successful in practice as those we concentrate on in the course. But we will look at the most prominent `other' techniques. At the moment these are:

Artificial Immune System methods – which lead to algorithms for optimisation and classification based on the workings of the human immune system.

Foraging algorithms – which lead to optimisation methods based on how herds of animals decide where to graze. These are different from the current main algorithms that have arisen from swarm intelligence.

Week 1 Self-Study & Quiz

Before we get into looking at Evolutionary Algorithms (as well as other methods that do optimisation), we need to understand certain things about optimisation, such as:

- When we need clever methods to do it, and when we don't
- What alternatives there are to EAs – there is no point designing an EA for an optimisation problem if it can be solved far more simply

So the additional material and associated quiz this week is about optimisation problems in general, and some key pure computer-science things you need to know. The next lecture will then introduce evolutionary algorithms.