Lecture 1
MATHEMATICS OF THE BRAIN
with an emphasis on the problem of a universal learning computer (ULC) and a universal learning robot (ULR)
Victor Eliashberg
Consulting Professor, Stanford University, Department of Electrical Engineering
1. User understanding.
2. Repairman understanding.
3. Programmer (educator) understanding.
4. Systems developer understanding.
5. Salesman understanding.
1. BIOLOGICALLY-INSPIRED ENGINEERING (bionics)
Formulate biologically-inspired engineering / mathematical problems. Try to solve these problems in the most efficient engineering way.
This approach has had great success in engineering: the universal programmable computer vs. the human computer, the car vs. the horse, the airplane vs. the bird.
It has not met with similar success in simulating human cognitive functions.
2. SCIENTIFIC / ENGINEERING (reverse engineering = hacking)
Formulate biologically-inspired engineering or mathematical hypotheses. Study the implications of these hypotheses and try to falsify the hypotheses. That is, try to eliminate biologically impossible ideas!
We believe this approach has a better chance to succeed in the area of brain-like computers and intelligent robots than the first one. Why?
So far the attempts to define the concepts of learning and intelligence per se as engineering/mathematical concepts have led to less interesting problems than the original biological problems.
12 cranial nerves; ~10^10 neurons in each hemisphere
31 pairs of spinal nerves; ~10^7 neurons
The cerebellar network shown has ~10^11 granule (Gr) cells and ~2.5×10^7 Purkinje (Pr) cells. A single Pr cell receives ~10^5 synapses from the T-shaped axons of the Gr cells.
Memory is stored in such matrices
Cerebellum: N = 2.5×10^7 × 10^5 = 2.5×10^12 synapses; B = 2.5 TB.
Neocortex: N = 10^10 × 10^4 = 10^14 synapses; B = 100 TB.
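The capacity estimates above can be reproduced with simple arithmetic; a minimal sketch, assuming (as the slide does) roughly one byte of storage per synapse:

```python
# Back-of-the-envelope memory estimates, assuming ~1 byte per synapse.

def synapse_bytes(n_cells: float, synapses_per_cell: float) -> float:
    """Total synapse count, read as bytes of storage."""
    return n_cells * synapses_per_cell

TB = 1e12  # bytes per terabyte (decimal)

cerebellum = synapse_bytes(2.5e7, 1e5)  # Purkinje cells x synapses per cell
neocortex = synapse_bytes(1e10, 1e4)    # neurons x synapses per neuron

print(f"Cerebellum: {cerebellum / TB:.1f} TB")  # 2.5 TB
print(f"Neocortex:  {neocortex / TB:.1f} TB")   # 100.0 TB
```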
External system (W,D)
Computing system, B, simulating
the work of human nervous system
Sensorimotor devices, D
Human-like robot (D,B)
External world, W
B(t) is a formal representation of B at time t, where t = 0 is the beginning of learning. B(0) is an untrained brain. B(0) = (H(0), g(0)), where
H(0) = H is the representation of the brain hardware,
g(0) is the representation of initial knowledge (the state of LTM).
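The decomposition B(t) = (H, g(t)) can be sketched as a data structure; a minimal illustration, with field names of my choosing rather than from the source:

```python
from dataclasses import dataclass, field
from typing import Any, Dict

@dataclass
class Brain:
    """B(t) = (H, g(t)): hardware H is fixed for all t;
    the LTM state g changes as learning proceeds."""
    H: Any                                            # brain hardware, constant in t
    g: Dict[str, Any] = field(default_factory=dict)   # state of LTM at time t

def untrained(hardware: Any) -> Brain:
    """B(0): an untrained brain, whose g(0) holds only initial knowledge
    (here sketched as an empty LTM)."""
    return Brain(H=hardware, g={})
```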
External system (W,D)
During training, motor signals (M) can be controlled by the teacher or by the learner (AM). Sensory signals (S) are received from the external system (W, D).
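The interaction just described can be sketched as a loop; a minimal illustration, with all names (run, world_step, brain_step, teacher) being my own rather than from the source:

```python
def run(world_step, brain_step, teacher=None, steps=10):
    """One (W, D)-(B) interaction loop. During training the teacher
    controls the motor signal M; otherwise M is the learner's own
    output AM. Sensation S is returned by the external system (W, D)."""
    S = None
    for t in range(steps):
        AM = brain_step(S)                    # learner's proposed motor signal
        M = teacher(t, S) if teacher else AM  # teacher overrides M during training
        S = world_step(M)                     # external system (W, D) responds
    return S

# Toy example: the "world" increments the motor signal, the "brain" echoes it.
final_S = run(lambda M: M + 1, lambda S: (S or 0), steps=3)  # -> 3
```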
TWO TYPES OF LEARNING
Working memory, episodic memory, and mental imagery
Type 1: Context-sensitive grammars
Type 2: Context-free grammars
Type 3: Finite-state machines
Type 4: Combinatorial machines
(the lowest computing power)
Fundamental constraint associated with the general levels of computing power
Traditional ANN models fall below the red line. Symbolic systems go above the red line, but they require a read/write memory buffer. The brain has no such buffer.
Fundamental problem: How can the human brain achieve the highest level of computing power without a memory buffer?
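The gap between the levels of computing power can be made concrete with a standard example: the language a^n b^n is recognizable with one counter (a minimal read/write memory), but not by any finite-state machine. A sketch, not from the source:

```python
import re

def fsm_matches(s: str) -> bool:
    """A finite-state machine (here a regular expression) can only
    check the shape a*b* -- it cannot compare the two counts."""
    return re.fullmatch(r"a*b*", s) is not None

def counter_matches(s: str) -> bool:
    """A single counter (a minimal memory buffer) recognizes the
    context-free language a^n b^n, beyond finite-state power."""
    n = 0
    i = 0
    while i < len(s) and s[i] == "a":
        n += 1
        i += 1
    while i < len(s) and s[i] == "b":
        n -= 1
        i += 1
    return i == len(s) and n == 0

# "aabb" is in a^n b^n; "aab" is not, though both pass the finite-state check.
```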
a b c b a c
General structure of universal programmable systems of different types
PROM stands for Programmable Read-Only Memory.
In psychological terms, PROM can be thought of as long-term memory (LTM). The letter G alludes to the notion of synaptic gain.
Memory buffer, e.g., a tape
Data inputs to ILTM
INPUT LONG-TERM MEMORY (ILTM)
DECODING, INPUT LEARNING
E-STATES (dynamic STM and ITM)
MODULATION, NEXT E-STATE PROCEDURE
Modulated (biased) similarity function
Data inputs to OLTM
Selected subset of active locations of OLTM
OUTPUT LONG-TERM MEMORY (OLTM)
ENCODING, OUTPUT LEARNING
Data outputs from OLTM
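One read-out step of the ILTM/OLTM scheme above can be sketched as an associative recall: a similarity function compares the input with the vectors stored in ILTM, E-states can bias (modulate) that similarity, and the selected location determines the output read from OLTM. A minimal sketch under my own naming assumptions, not the author's implementation:

```python
import numpy as np

def recall(x, ILTM, OLTM, bias=None):
    """One associative read-out: similarity of input x to each ILTM row,
    optionally biased by an E-state vector (modulation), selects the
    winning location; OLTM supplies the output stored at that location."""
    sim = ILTM @ x          # similarity of x to each stored input vector
    if bias is not None:
        sim = sim + bias    # modulation by E-states (dynamic STM/ITM)
    k = int(np.argmax(sim))  # selected active location
    return OLTM[k]           # data output from OLTM

# Example: two stored input/output pairs.
ILTM = np.array([[1.0, 0.0], [0.0, 1.0]])  # stored input vectors (rows)
OLTM = np.array([[10.0], [20.0]])          # associated output vectors (rows)
print(recall(np.array([0.9, 0.1]), ILTM, OLTM))  # -> [10.]
```

A bias favoring the second location flips the selection even for the same input, which is the sense in which the E-states make the similarity function context-dependent.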
~4,000 inner hair cells; ~12,000 outer hair cells