
A Computational Introduction to the Brain-Mind

Juyang (John) Weng

Michigan State University

East Lansing, MI 48824 USA

[email protected]


Human Physical and Mental Development

Studies on the adult brain

Studies on how the brain develops

Machine Mental Development



Totipotency

  • Stem cells and somatic cells

  • Genomic equivalence:

    • All cells are totipotent: each cell's genome is sufficient to guide development from a single cell to the entire adult body

  • Consequence: the developmental program is cell-centered


Genomic Equivalence

  • Each somatic cell carries the complete genome in its nucleus

  • Evidence: cloning (e.g., sheep Dolly)

  • Consequences:

    • The genome is cell-centered, directing each individual cell to develop within that cell's environment

    • No genome is dedicated to more than one cell

    • Cell learning is “in place”: a neuron has no extracellular learner; learning must be fully accomplished by each cell itself while it interacts with the cell's environment


How to Measure Problems in AI

  • Time and space complexity?

  • High or low “level”?

  • Tasks that look intelligent when a machine does them?

  • Rational or irrational?

  • Handling uncertainty?


Task Muddiness

  • Independent of problem domain

  • Independent of technology level

  • Independent of the performer: machines or animals

  • Can be quantified

  • Helps us understand why AI is difficult

  • Helps us see the essence of intelligence

  • Can be used to evaluate intelligent machines

  • Helps us appreciate human intelligence


Task Muddiness

  • Agent independent

  • Categories only

  • Each category can be extended

  • Categories adopted to model task muddiness:

    • Environment

    • Input

    • Output

    • Internal state

    • Goal



Task Executor

  • Human agent: the human is the sole executor

  • Machine agent: dual task executors

    • A task is given to a human

    • The human programs a machine agent

    • The agent executes




2-D Muddiness Frame

[Figure: example tasks (computer chess, sonar-based navigation, language translation, visual recognition) placed in a 2-D frame whose axes are the size of the input and the rawness of the input]


Composite Muddiness

m = m1 · m2 · m3 · … · mn
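The formula treats composite muddiness as the product of the individual muddiness factors, so one very muddy factor (e.g., raw, high-dimensional input) dominates the whole measure. A minimal sketch of that reading; the factor scores below are illustrative, not from the slides:

```python
from math import prod

def composite_muddiness(factors):
    """Composite muddiness m = m1 * m2 * ... * mn over the individual factor scores."""
    return prod(factors)

# Illustrative scores only (not from the slides): a clean symbolic task
# versus a raw, open-ended one; muddiness compounds multiplicatively.
chess_like = composite_muddiness([1, 1, 2, 1])
vision_like = composite_muddiness([8, 9, 7, 6])
print(chess_like, vision_like)  # 2 vs. 3024
```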



Traditional Manual Development

A: agent

H: human

Ec: Ecological condition

T: Task

A = H(Ec, T)


New Autonomous Development

Autonomous inside the skull

A: agent

H: human

Ec: Ecological condition

A = H(Ec)
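The two formulas differ only in whether the task T is available to the human programming effort H. A minimal, self-contained sketch of the contrast; every name below is illustrative rather than taken from the slides:

```python
def manual_development(ecological_conditions, task):
    """Traditional manual development, A = H(Ec, T):
    the human hand-codes task-specific knowledge into the agent."""
    return {"ecology": ecological_conditions,
            "program": f"hand-coded rules for: {task}"}

def autonomous_development(ecological_conditions):
    """Autonomous development, A = H(Ec):
    the human writes only a task-nonspecific developmental program;
    tasks are unknown at programming time and learned after 'birth'."""
    return {"ecology": ecological_conditions,
            "program": "task-nonspecific developmental program"}

agent_a = manual_development("office corridor", "deliver mail")
agent_b = autonomous_development("office corridor")  # same ecology, no task given
```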


Mode of Development: AA-Learning

AA-learning: automated animal-like learning

[Figure: a closed brain interacting with the world through unbiased sensors, biased sensors, and effectors]


Existing Machine Learning Types

  • Supervised learning: class labels (or actions) are given in training

  • Unsupervised learning: class labels (or actions) are not given in training

  • Reinforcement learning: class labels (or actions) are not given in training, but a reinforcement signal (score) is given


New Classification for Machine Learning

  • Need to consider whether internal states can be imposed after the task is given

  • 3-tuple (s, e, b): symbolic internal representation, effector, biased sensor

    • s (state): whether an internal state is imposable after the task is given

    • e (effector): whether the effector is imposed

    • b (biased sensor): whether a biased sensor is used


8 Types of Machine Learning

Learning types 0-7 are based on the 3-tuple (s, e, b):

symbolic internal representation (s = 1), effector imposed (e = 1), biased sensors used (b = 1)
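Each binary choice in (s, e, b) doubles the number of learning types, giving 2^3 = 8 types numbered 0-7. A small sketch of the enumeration; the convention that the type number is the binary value of (s, e, b), and the wording of the negative labels, are assumptions made here for illustration:

```python
from itertools import product

# Enumerate the 8 learning types from the 3-tuple (s, e, b).
# Assumed convention: type number = 4*s + 2*e + b.
for s, e, b in product((0, 1), repeat=3):
    type_id = 4 * s + 2 * e + b
    description = ", ".join([
        "symbolic internal representation" if s else "non-symbolic internal representation",
        "effector imposed" if e else "effector not imposed",
        "biased sensors used" if b else "biased sensors not used",
    ])
    print(f"Type {type_id}: {description}")
```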


The Developmental Approach

  • Enable a machine to perform autonomous mental development (AMD)

  • Impractical to faithfully duplicate biological AMD

  • Hardware: Embodiment (a robot)

  • Software: A developmental program

    • Task nonspecific

    • AA-learning mode, from the “birth” time through the “life” span



Developmental Program vs Traditional Learning

[1] For tasks unknown at programming time.


Motives of Research for Development

  • Developmental mechanisms are easier to program: lower level, more systematic, task-independent, clearly understandable

  • Relieve humans from intractable programming tasks: vision, speech, language, complex behaviors, consciousness

  • User-friendly machines and robots: humans issue high-level commands to machines

  • Highly adaptive manufacturing systems (e.g., self-trainable, reconfigurable machining systems)

  • Help to understand human intelligence


Task Nonspecificity

  • That a program is not task-specific means:

    • Open to muddy environment

    • Tasks are unknown at programming time

    • “The brain” is closed after birth

    • Learn an open number of muddy tasks after birth

  • Avoid trivial cases:

    • A thermostat

    • A robot that does task A when temperature is high and does task B when temperature is low

    • A robot that does simple reinforcement learning


8 Requirements for Practical AMD

Eight necessary operational requirements:

  • Environmental openness: muddy environments

  • High-dimensional sensing

  • Completeness in internal representation for each age group

  • Online

  • Real-time speed

  • Incremental: updates for each fraction of a second (e.g., 10-30 Hz)

  • Perform while learning

  • Scale up to large memory

Existing work (other than SAIL) has aimed at some, but not all, of these requirements.

SAIL addresses all eight requirements together.


Definition of AA-Learning

  • A machine M conducts AA-learning if its operation mode is as follows: for t = t0, t1, t2, ..., the brain program f recursively updates the brain B, the sensory input/output x, and the effector input/output z
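A minimal sketch of this operation mode, assuming the recursive update takes the form (B(t+1), z(t+1)) = f(B(t), x(t), z(t)); the slide names only B, x, and z, so the exact argument list is an assumption:

```python
def aa_learning(f, brain, read_sensors, effectors, steps):
    """Run the AA-learning loop: at each discrete time t the brain program f
    updates the brain B and the effector output z from the current sensory
    input x and the previous effector state."""
    z = effectors  # initial effector state
    for t in range(steps):
        x = read_sensors(t)           # sensory input/output at time t
        brain, z = f(brain, x, z)     # recursive update of brain and effectors
    return brain, z

# Usage with toy stand-ins (illustrative only):
toy_f = lambda B, x, z: (B + [x], x + z)
final_brain, final_z = aa_learning(toy_f, brain=[], read_sensors=lambda t: t,
                                   effectors=0, steps=5)
```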


The Central Nervous System

  • The forebrain

  • The midbrain and hindbrain

  • The spinal cord

Kandel, Schwartz and Jessell 2000


Brodmann Areas (1909)

Kandel, Schwartz and Jessell 2000


Sensory and Motor Pathways

My hypothesis: the brain has complex networks that emerge largely shaped by signal statistics (Weng IJCNN 2010)

Adapted from Kandel, Schwartz and Jessell 2000



Brain’s Vision System

The brain has only two exposed ends to interact with the environment: the sensory end and the motor end

Weng IJCNN 2010


Triple Loops

Weng IJCNN 2010



Area as a Building Block

Weng IJCNN 2010


Neurons as Feature Detectors: The Lobe Component Model

Weng et al. WCCI 2006

  • Biologically motivated:

    • Hebbian learning

    • lateral inhibition

  • Partition the input space into c regions

    • X = R1 ∪ R2 ∪ ... ∪ Rc

  • Lobe component i: the principal component of the region Ri
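A minimal sketch of this idea under the stated motivations: winner-take-all competition stands in for lateral inhibition, and each winner's weight vector is updated with a Hebbian-like (response times input) running average over its region. The real CCI LCA update uses an amnesic average with dual optimality (next slide); the plain running average here is a deliberate simplification, and all constants are illustrative.

```python
import numpy as np

def lca_sketch(X, c, epochs=5, seed=0):
    """Simplified lobe-component learner: c neurons partition the input space;
    each neuron's weight vector tracks the dominant direction of its region."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), size=c, replace=False)].astype(float)  # init from data
    counts = np.ones(c)
    for _ in range(epochs):
        for x in X:
            responses = V @ x                     # pre-responses of all neurons
            i = int(np.argmax(responses))         # lateral inhibition -> one winner
            counts[i] += 1
            w = 1.0 / counts[i]                   # plain running-average rate (simplified)
            V[i] = (1 - w) * V[i] + w * responses[i] * x   # Hebbian-like update (y * x)
    return V

# Toy usage: two clusters of 2-D inputs, two lobe components.
X = np.vstack([np.random.randn(100, 2) + [3, 0], np.random.randn(100, 2) + [0, 3]])
V = lca_sketch(X, c=2)
```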



Dual Optimality of CCI LCA

  • Spatial optimality leads to the best target: given the number of neurons (a limited resource), the target synaptic weight vectors minimize the representation error based on the “observation” x

  • Temporal optimality leads to the best runner to the target: given the limited experience up to time t, find the best direction and step size for each t based on the “observation” u = r x

Weng & Luciw TAMD vol. 1, no. 1, 2009
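In CCI LCA, the temporal optimality is realized by an amnesic average: each neuron mixes its old weight vector with the new observation u = r x using retention and learning rates that depend on its own update count n and an amnesic parameter μ. A hedged sketch of that single-step recursion (the w1, w2 form follows the published amnesic-average description; the numbers in the usage line are illustrative):

```python
def amnesic_average_update(v, u, n, mu):
    """One amnesic-average step: v_new = w1 * v_old + w2 * u,
    where w1 + w2 = 1 and mu >= 0 gives more weight to recent observations
    than a plain running average (mu = 0 recovers the ordinary mean)."""
    w1 = (n - 1 - mu) / n      # retention rate
    w2 = (1 + mu) / n          # learning rate
    return [w1 * vi + w2 * ui for vi, ui in zip(v, u)]

# Toy usage: the 3rd update of a 2-D lobe-component vector.
v = amnesic_average_update(v=[1.0, 0.0], u=[0.8, 0.6], n=3, mu=0.0)
```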




Plasticity Schedule

[Plot: the plasticity parameter μ(t) versus time t, with the landmarks t1, t2, the level 2, and r = 10000 marked]
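The plot on this slide is, in the CCI LCA literature, a three-sectioned amnesic function μ(t): zero for young neurons, a linear ramp between t1 and t2, then slow growth at rate 1/r so mature neurons keep some plasticity. A sketch under that assumption; the t1 and t2 defaults below are invented for illustration, while c = 2 and r = 10000 are read off the plot labels:

```python
def plasticity_mu(t, t1=20, t2=200, c=2.0, r=10000.0):
    """Three-sectioned plasticity (amnesic) schedule:
    0 for young neurons, a linear ramp to c between t1 and t2,
    then slow growth with slope 1/r so old neurons stay plastic."""
    if t <= t1:
        return 0.0
    if t <= t2:
        return c * (t - t1) / (t2 - t1)
    return c + (t - t2) / r
```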






From FA to ED network

  • FA: s_n = f(s_l, a_m)

    s: state; a: input symbol

  • ED:

    • The internal area learns: y_i = f_y(s_l, a_m)

    • The motor area learns: s_n = f_z(y_i)

    • s: a numeric pattern of z, a sample of the Z space

    • a: a numeric pattern of x, a sample of the X space

    • y: a numeric pattern of y, a sample of the Y space
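To make the correspondence concrete: a symbolic FA stores its transition function f as a lookup table, while an ED network must learn the same mapping from numeric patterns, with the state as a pattern over the motor area Z and the input symbol as a pattern over the sensory area X, via its internal area Y. The table below is a standard FA; the one-hot pattern encoding is an illustrative assumption, not the network's actual representation.

```python
# A tiny finite automaton: next_state = f(last_state, input_symbol).
fa_transitions = {
    ("s0", "a"): "s1",
    ("s0", "b"): "s0",
    ("s1", "a"): "s1",
    ("s1", "b"): "s0",
}

def fa_step(state, symbol):
    return fa_transitions[(state, symbol)]

# The ED view of the same mapping: states and symbols become numeric patterns
# (one-hot vectors here, purely for illustration); the network must learn
# y_i = f_y(s_l, a_m) in the internal area and s_n = f_z(y_i) in the motor area.
state_pattern = {"s0": [1, 0], "s1": [0, 1]}     # samples of the Z space
symbol_pattern = {"a": [1, 0], "b": [0, 1]}      # samples of the X space

print(fa_step("s0", "a"))   # -> "s1"
```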


Training and Tests

Luciw & Weng IJCNN 2010



Three Types of Information Flow

  • Different directions for different intents

  • Mixed modes are possible

  • There are no “if-then-else” type switches


For any FA there is an ED network

Marvin Minsky at MIT criticized ANNs

FA: Finite Automaton

ED: Epigenetic Developer

Relation: An ED network can learn any FA

Weng IJCNN 2010


Almost Perfect Disjoint Test Using Temporal Context

Luciw, Weng & Zeng ICDL 2008


More Views, Better Confidence

Externally sensed → internally generated context



Complex Text Processing

  • New sentence problem

    • Recognize new sentences from synonyms

  • Word sense disambiguation problem

    • Temporal context

  • Part of speech tagging problem

    • Label words according to part of speech

  • Chunking problem

    • Group sequences of words and classify them by syntactic labels

Weng, Zhang, Chi & Xue ICDL 2009


Recent Events on AMD

  • ICDL series: http://cogsci.ucsd.edu/~triesch/icdl/

    • Workshop on Development and Learning (WDL) 2000, MSU, MI USA

    • 2nd International Conf. on Development and Learning (ICDL’02): MIT, MA USA

    • 3rd ICDL (2004): San Diego, CA USA

    • 4th ICDL (2005): Osaka, Japan

    • 5th ICDL (2006): Bloomington IN, USA

    • 6th ICDL (2007): London, UK

    • 7th ICDL (2008): Monterey, CA, USA

    • 8th ICDL (2009): Shanghai, China

    • 9th ICDL (2010): Ann Arbor, MI, USA

    • 10th ICDL (2011), Frankfurt, Germany

  • EpiRob workshop series, 01, 02, 03, 04, 05, 06, 07, 08, 09, 10

  • AMD Technical Committee of the IEEE Computational Intelligence Society: http://www.ieee-cis.org/AMD/

  • AMD Newsletters: http://www.cse.msu.edu/amdtc/amdnl/

  • IEEE Transactions on Autonomous Mental Development: http://www.ieee-cis.org/pubs/tamd/


Now and Future

  • Now (not many people agree):

    • Humans are beginning to understand roughly how the brain-mind works

  • Future (not too far):

    • Systematic breakthroughs in artificial intelligence along all fronts:

      • Vision

      • Speech

      • Natural language

      • Robotics

      • Creative intelligence

    • A new industry:

      • New type of software industry

      • Cloud computing for brain-scale applications

      • Service robots and smart toys entering homes

      • Robots widely used in public environments

