Presentation Transcript


  1. COST B27, Istanbul, Turkey WG1: Serbia and Montenegro • A discrete operator model of episodic memory • Amplitude-modulated carrier frequency model of neurophysiological oscillations Aleksandar Kalauzi Department for biophysics, neuroscience and biomedical engineering, Center for multidisciplinary studies, University of Belgrade, Serbia and Montenegro

  2. 1. A discrete operator model of episodic memory Could laws of motion of artificial recurrent neural networks in their state spaces be applied to describe episodic memory?

  3. Recurrent neural networks [Figure: a fully connected network of n formal neurons, labelled 1, 2, …, n] • RNN: artificial networks with an arbitrary number (n) of formal neurons, all mutually connected. • The mathematical description of each neuron is not relevant in this approach; neurons may differ. It is only required that all neuronal outputs are discrete and finite. E.g.: Oj(k) = Trj(Intj(wij, Oi(k-1))), i = 1,…,n, where Oj(k) – output of the j-th neuron in the k-th tact; wij – weighting coefficient between the i-th and j-th neuron; Intj( ) – input integration function of the j-th neuron; Trj( ) – transfer function of the j-th neuron.
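A minimal sketch of such a network in Python, assuming binary threshold neurons (one concrete choice for Intj and Trj; the model only requires discrete, finite outputs):

```python
import numpy as np

def make_operator(W, threshold=0.0):
    """Return O: state -> state for a fully connected RNN of n neurons.

    Here Int_j is a weighted input sum and Tr_j a hard threshold --
    an illustrative choice, since the model allows any discrete neuron.
    """
    def O(state):
        inputs = W @ np.array(state)                      # Int_j(w_ij, O_i(k-1))
        return tuple(int(v) for v in inputs > threshold)  # Tr_j: 0/1 output
    return O

n = 4
rng = np.random.default_rng(0)
O = make_operator(rng.normal(size=(n, n)))
print(O((1, 0, 1, 0)))   # the network's next state (one operator application)
```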

  4. Mapping points in state space Under these conditions, a discrete and finite network state space (still amorphous at this point) can be introduced, consisting of points p = {o1, o2,…,on}. If left to move through its state space, the network acts as an operator O, mapping points: p2 = O(p1), p3 = O(p2) = O(O(p1)), …, pm = O(pm-1) = O(O(…(O(p1))…)). The series {p1,…,pm} is known as the orbit of point p1. Because the space is discrete and finite, sooner or later the points begin to repeat themselves, i.e. pm = pj, j < m.
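The orbit can be followed mechanically until a repetition occurs; a sketch, assuming the operator is a Python callable on hashable states, as above:

```python
def orbit(O, p1):
    """Follow p1, O(p1), O(O(p1)), ... until a point repeats.

    Returns the list of distinct points and the index j of the first
    repeated point: points[:j] is the linear part, points[j:] the cycle.
    """
    seen = {p1: 0}
    points, p = [p1], p1
    while True:
        p = O(p)
        if p in seen:            # guaranteed in a discrete, finite space
            return points, seen[p]
        seen[p] = len(points)
        points.append(p)
```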

  5. Point orbit graph If we introduce an oriented graph, with nodes corresponding to space points and branches corresponding to operator actions, the orbit of point p1 is represented by a chain p1 → p2 → … → pm-1 → pm = pj, which closes back on itself. There is a linear (optional, blue) part and a circular (compulsory, red) part of the graph. If O(pm-1) = pm-1, the circular part reduces to a fixed point.

  6. Operator graph If we unite the orbit graphs of all points in the network state space, we get the operator graph (it may have more than one connected component). [Figure: an operator graph with three connected components, CC1, CC2, CC3]

  7. Properties of operator graphs 1) A connected component must end either with a circular sub-graph or a fixed point (a cyclic or point attractor). 2) Non-circular parts are organized into tree-like structures (which are optional). 3) There are “starting points”. A network cannot reach them by itself (it must be put there externally). 4) For a constant operator, the graph attached to it is fixed.

  8. Regular and singular regions The input degree of a node pk, iDk, is defined as iDk = Card{pi: pk = O(pi)}, i.e. the number of branches entering the node. It is important, since it defines where the operator is regular (where O-1 exists). Region of regularity (“regular region”) of the state space: R = {pk: iDk = 1}. Singular regions may be quantitatively differentiated by the degree of singularity, d: Sd = {pk: iDk = d}. Some statistics: the average degree of singularity, dav, of a region P = {p1,…,pm} can be calculated as dav = (1/m) Σk=1..m iDk.
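For finite spaces the operator can equally well be written as a dictionary; a sketch of the input degrees and the average degree of singularity, assuming dav is the arithmetic mean of iDk over the region (the slide's formula is taken in that sense):

```python
from collections import Counter

def input_degrees(O, space):
    """iD_k = Card{p_i : p_k = O(p_i)} for every point p_k of the space."""
    counts = Counter(O[p] for p in space)     # O as a dict: point -> point
    return {p: counts.get(p, 0) for p in space}

def average_singularity(O, region, space):
    iD = input_degrees(O, space)
    return sum(iD[p] for p in region) / len(region)

O = {1: 2, 2: 3, 3: 3, 4: 3}                  # point 3 is singular: iD = 3
print(input_degrees(O, {1, 2, 3, 4}))         # {1: 0, 2: 1, 3: 3, 4: 0}
```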

  9. Composition of operators Since we have not introduced any analytical tool yet, the only possible operation between operators is their multiplication (composition): On = On-1 * … * O2 * O1. Operator graphs are very practical for performing this operation. If for a given point pi1 O1: pi1→pi2; O2: pi2→pi3; …; On-1: pin-1→pin, then On: pi1→pin. In particular, if O1 = O2 = … = On-1, multiplication becomes the power operation: On = (O1)n-1.
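With the dictionary representation, composition and powers are short one-liners (a sketch):

```python
def compose(O2, O1):
    """(O2 * O1)(p) = O2(O1(p)), i.e. first O1, then O2."""
    return {p: O2[O1[p]] for p in O1}

def power(O, n):
    """O applied n times: (O)^n."""
    result = {p: p for p in O}                # the unity (identity) operator
    for _ in range(n):
        result = compose(O, result)
    return result
```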

  10. Graph of On • Raising an operator to the n-th power means simply connecting every n-th point of the original operator graph. • Consequences for separate connected components: 1) Attractor regions remain, but their points are rearranged. For a cyclic attractor of length n = n1*n2*…*nk, where n1, n2,…, nk denote prime factors, (O)nj splits it into nj cyclic attractors, each of length n/nj, while (O)n produces n fixed points. 2) For a big enough n, “trees” are degraded to “grass”, i.e. the maximal length of all linear sub-graphs is 1. 3) 1) + 2) => there exists a large enough m such that the CC of (O)m*n+1 contains the original attractor, but the maximal length of linear sub-graphs does not exceed 1.
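A sketch illustrating the splitting of a cyclic attractor under powers (6 = 2*3, so O² should yield two 3-cycles and O³ three 2-cycles):

```python
def power(O, n):                              # as in the previous sketch
    result = {p: p for p in O}
    for _ in range(n):
        result = {p: O[result[p]] for p in O}
    return result

def cycle_lengths(O):
    """Lengths of the cyclic attractors of a finite operator graph."""
    lengths, seen = [], set()
    for start in O:
        path, p = {}, start
        while p not in path and p not in seen:
            path[p] = len(path)
            p = O[p]
        if p in path:                         # walked onto a new cycle
            lengths.append(len(path) - path[p])
        seen.update(path)
    return lengths

O = {i: (i + 1) % 6 for i in range(6)}        # one cyclic attractor of length 6
print(cycle_lengths(power(O, 2)))             # [3, 3]    (n_j = 2)
print(cycle_lengths(power(O, 3)))             # [2, 2, 2] (n_j = 3)
print(cycle_lengths(power(O, 6)))             # [1, 1, 1, 1, 1, 1]: six fixed points
```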

  11. An example of O2 If we observe CC1 of the operator graph on slide 6, its square is presented on the right side. [Figure: CC1 and its square, which splits into two components, CC1a and CC1b]

  12. Graphs of idempotent operators • Idempotent operators, written as I, are defined as those for which I2 = I. • According to the graph properties of On, it is possible to prove that graphs of idempotents consist only of fixed points and non-cyclic sub-graphs of length 1. [Figure: an idempotent of order 6] The order of an idempotent is the number of “starting” (blue) points. Order 1 idempotents are elementary modifiers of operators.
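A sketch of a first order idempotent and a check of I² = I, under the dictionary representation used above:

```python
def first_order_idempotent(space, p, q):
    """Identity everywhere except p -> q (p != q, q fixed), so I * I = I.

    Its graph is fixed points plus one non-cyclic sub-graph of length 1
    (the single "starting" point p) -- an idempotent of order 1.
    """
    I = {x: x for x in space}
    I[p] = q
    return I

space = set(range(8))
I = first_order_idempotent(space, 3, 5)
assert {x: I[I[x]] for x in space} == I       # idempotency: I^2 = I
```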

  13. ANN and BNN memory storage In artificial neural network (ANN) theory, e.g. Hopfield networks, information is stored in the attractors. This storing is stable, but static. However, in biological neuronal networks (BNN), such as cortical sub-networks where episodes are stored, content is recalled as a series of actions, pictures, etc. Episodes do NOT exhibit any repetitive (cyclic) behaviour. Respecting these properties, and according to the operator graph model, episodes should be stored in “tree trunks”, while “tree crowns” represent a collection of association states for each episode. [Figure: an ANN attractor holding an “episode” vs. a BNN tree with an association zone]

  14. Storing, recalling and losing information Every episodic memory model should be able to explain three processes: A) Storing the information-episode (learning) B) Recalling the information (remembering) C) Losing or modifying the information (forgetting). How are these processes explained by the operator graph model?

  15. B) Recalling information (remembering) • The network acts as a CONSTANT operator when holding or recalling (not storing) information. The corresponding graph is fixed. In order to recall an episode, it is sufficient for the network to be put externally into one of the “tree crown” states and let it follow the point orbit until the attractor is reached. • Obviously, there exist regions of the state space where episodes are stored (Rs) and regions where they are not (Rn). With aging, Rs increases and Rn decreases. According to the model, the Rs graph consists of trees ending with either cyclic attractors of reduced length or fixed points. We do not know, however, what the graph structure or topology of Rn looks like (some Rn graph properties will be suggested three slides ahead).

  16. A) Storing information (learning) Modeling this process is more complex. The operator corresponding to the network after receiving some new information must be different from the one before (e.g. the new graph acquires an additional tree). Mathematically, if O1 and O2 represent the network operators before and after storing, respectively, the process may be modeled as O2 = S * O1, where S denotes the “storing operator”. Many expressions could be introduced for S. One approach is to present it as a composition of first order idempotents: S = I1 * I2 * ... * In, since an episode consists of a series of connected points in the state space, and first order idempotents act as elementary operator modifiers. However, a property of operator compositions, called local singularity transfer (LST), imposes some restrictions on the nature of O1 if this approach is to work correctly.

  17. Design of the “storing operator” S Suppose the information to be stored consists of the following state space points: p1, p2, ..., ps, in this exact order. Operator O2 should therefore contain this sequence as a linear sub-graph. If operator O1: p1—>p1’, then Is: p1’—>p2. Since we do not want to modify any other part of O1, all other points of the space should be mapped onto themselves by Is. This procedure is repeated for every target point pj, j = 1, 2, ..., s. [Figure: O1 maps p1 to p1’; the idempotent Is redirects p1’ to p2, leaving the rest of the space fixed]
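A sketch of the whole procedure, assuming O1 is given as a dict and, per the LST restriction discussed two slides ahead, behaves as the unity operator on the episode region; the idempotents are applied from the end of the episode backwards, as in the example on the following slides (I3, then I2, then I1):

```python
def store_episode(O1, episode):
    """O2 = I1 * I2 * ... * O1: embed the episode as a linear sub-graph."""
    O2 = dict(O1)
    for p, p_next in reversed(list(zip(episode, episode[1:]))):
        target = O2[p]                        # p' : the current image of p
        I = {x: x for x in O2}                # first order idempotent:
        I[target] = p_next                    #   I: p' -> p_next, rest fixed
        O2 = {x: I[O2[x]] for x in O2}        # compose: O2 <- I * O2
    return O2

space = set(range(8))
O1 = {x: x for x in space}                    # the "ideal" O1: unity operator
O2 = store_episode(O1, [4, 2, 7, 5])
assert (O2[4], O2[2], O2[7]) == (2, 7, 5)     # orbit of 4: 4 -> 2 -> 7 -> 5
```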

  18. An example (1) [Figure: the graph of O1, the first order idempotent I3, and the composed graph I3 * O1, with the “episode” points marked]

  19. An example (2) [Figure: the graph of I3 * O1, the idempotent I2, and the composed graph I2 * I3 * O1]

  20. An example (3) [Figure: the graph of I2 * I3 * O1, the idempotent I1, and the composed graph I1 * I2 * I3 * O1, now containing the whole “episode”]

  21. An example (4) Finally, the “storing operator”, S, is obtained by composing all individual first order idempotents: S = I1 * I2 * I3. [Figure: the graph of S]

  22. Local Singularity Transfer (LST) [Figure: graphs of S, O1 and O2 = S * O1] (O1: pi, pj, ..., pk —> pm) Λ (S: pm —> pn) => (O2: pi, pj, ..., pk —> pn), i.e. if several points are mapped by O1 onto the same point pm, and S moves pm to pn, then all of them are mapped by O2 onto pn.

  23. Local Singularity Transfer (LST) (2) LST means that points which are later to be inserted into the episode “backbone” must not form singularities with any of the previous episode points. The idempotent shown below will not be able to “break” the singularity, but will disrupt the previously formed backbone. The “ideal” O1 is the unity (identity) operator. [Figure: a correct O2 vs. an erroneous O2 produced when a singularity is present]

  24. Increase of average operator singularity degree as information is being stored If operator composition is a valid model for storing information, LST implies that it is not possible to separate different points once they are mapped to the same (singular) point. All local singularities are transferred => the operator average degree of singularity increases as information is being stored. Does this mean that in Rn (regions of the BNN state space without any stored information) the O1 average singularity degree is low? Is the operator there close to regular? Is it close to the unity operator? If O1 is unity everywhere in Rn, and S is unity everywhere except in the new episode region, O2 = S * O1 will work directly.

  25. General operator modifications (GOM) However, it is still an open question whether operator modification by composition (OMC) is a valid model for information storing (learning) in BNN. Hippocampo-cortical action and OMC? Theoretically, we may introduce general operator modifications (GOM), without entering into the underlying mechanisms: by “moving” n directed graph branches, we say that O2 is obtained from O1 by n elementary GOMs. In that case, there are no limitations imposed by LST concerning the preservation of local singularities; singularities may be “broken”. It would be an interesting task to study whether other semi-analytical methods exist (like OMC) which do not obey LST.

  26. Analytical apparatus If an analytical apparatus is to be introduced, such as vector spaces over finite discrete fields, the whole rich theory enters the game. The linearity issue becomes interesting (linearity imposing further limitations on O1 —> O2 flexibility). In linear operator graphs, all nodes have an equal number (if ≠ 0) of input branches => 1) the “storing operator” may generally be non-linear (first order idempotents surely are); 2) the “episode operator” (O1) in Rs is probably non-linear, but association regions and episode-containing regions may separately be close to linear; 3) if the “episode operator” in Rn is close to unity, it is close to linear by definition.

  27. C) Losing or modifying the information (forgetting) Modeling this process also involves operator modifications. We shall study them using elementary GOMs. As opposed to the directed, designed operator S used in learning, in this case any random elementary GOM (eGOM) will disturb the existing operator graph structure. Different eGOMs will have different impacts on the existing graph. It is therefore important to see how many topologically different results can be obtained, starting from an existing (possibly very elaborate) operator graph containing stored episodes. We have found seven so far. In actual BNNs, the real forgetting process may be a combination of these seven different types.

  28. Losing information (2) Let point pi belong to the m-th tree, Tm(O), of a connected component of the operator graph of O. The previous zone of the point pi, written as Zpm(pi,O), is the set of all points which have pi as part of their orbit. Let us further denote by Aq(O) all points belonging to the attractor, and by Nq(O) all other points of the q-th connected component. [Figure: the previous zone Zpm(pi,O) of a point pi within the tree Tm(O)]

  29. Losing information (3) Let an eGOM, G, induce the following change: G: {(pi→pj) → (pi→pk)}. If pi ∈ Nq(O), four cases can be distinguished: 1) if pk ∈ Zpm(pi,O), then a new connected component is formed. All points between pk and pi become the new attractor (a fixed point if pk = pi). [Figure: the branch from pi is redirected from pj into the previous zone Zpm(pi,O)]

  30. Losing information (4) 2) if pk ∈ Tm(O)\Zpm(pi,O), i.e. any other point of the m-th tree, then Zpm(pi,O) is simply displaced within the same tree. The number of connected components, as well as the number of trees, remains unchanged. [Figure: Zpm(pi,O) is reattached at pk within Tm(O)]

  31. Losing information (5) 3) if pk ∈ Tn(O) ⊂ Nq(O)\Tm(O), i.e. a different, n-th tree, then Zpm(pi,O) is displaced to this other tree. The number of connected components, as well as the number of trees, remains unchanged, except if pi is the last point of the m-th tree, when the number of trees is reduced by one. [Figure: Zpm(pi,O) is moved from Tm(O) to Tn(O)]

  32. Losing information (6) 4) if pk ∈ Aq(O), i.e. the attractor of CCq, then Zpm(pi,O) forms a new tree. The number of connected components remains unchanged, while the number of trees is increased by one, except if pi is the last point of the m-th tree, when the number of trees is unchanged as well. [Figure: Zpm(pi,O) is reattached directly to the attractor Aq(O)]

  33. Losing information (7) If pi ∈ Aq(O), three cases can be distinguished: 5) if pk ∈ CCq = Aq(O) ∪ Nq(O), i.e. any point of the same, q-th connected component, a new attractor (pk, ..., pi) is formed within the same CCq. The number of CCs remains unchanged, but the trees are rearranged. [Figure: the branch from pi ∈ Aq(O) is redirected to pk within CCq]

  34. Losing information (8) 6) if pk ∈ Tn(O) ⊂ Nr(O) ⊂ CCr, i.e. a point of the n-th tree of a different, r-th connected component, Aq(O) is broken and the whole q-th CC becomes part of Tn, as the previous zone Zpn(pi,O1) of the point pi under the new operator O1. The number of CCs is reduced by one, the number of trees by nTq(O) (the number of trees in the vanished CCq of the operator O). [Figure: the attractor Aq(O) is opened and attached to Tn(O) of CCr]

  35. Losing information (9) 7) if pk ∈ Ar(O) ⊂ CCr, i.e. a point of the attractor of a different, r-th connected component, Aq(O) is broken and the whole CCq becomes a new tree of CCr, as the previous zone Zpn(pi,O1) of the point pi under the new operator O1. The number of CCs is reduced by one, the number of trees by nTq(O) - 1 (nTq(O) is the number of trees in the vanished CCq of the operator O). [Figure: the attractor Aq(O) is opened and attached directly to Ar(O)]
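A sketch of applying one eGOM and watching the number of connected components change; in this toy example pi and pk both lie on attractors of different CCs, so case 7 above applies and the two CCs merge:

```python
def egom(O, pi, pk):
    """Elementary GOM G: the branch (pi -> pj) becomes (pi -> pk)."""
    O1 = dict(O)
    O1[pi] = pk
    return O1

def n_components(O):
    """Connected components of the operator graph (union-find)."""
    parent = {p: p for p in O}
    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]     # path halving
            p = parent[p]
        return p
    for p, q in O.items():
        parent[find(p)] = find(q)
    return len({find(p) for p in O})

O = {0: 1, 1: 2, 2: 0, 3: 4, 4: 3}            # a 3-cycle CC and a 2-cycle CC
print(n_components(O))                        # 2
print(n_components(egom(O, 3, 0)))            # 1: case 7, the CCs merge
```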

  36. Future? • To try to answer as many questions as possible from the previous slides. • To link hippocampo-cortical interaction more precisely with the operator model. • To see how other biochemical and physiological processes (sleep, especially REM) that change the BNN could be modeled using the presented approach. • Pure theoretical sophistication – a more elaborate analytical approach: algebra, automata theory, continual operator theory, chaos and fractals...

  37. 2. Amplitude-modulated carrier frequency model of neurophysiological oscillations In discrete Fourier analysis, an intercomponental “carrier” sine wave has a frequency lying between two adjacent Fourier components. [Figure: spectra of a sine wave between two bins (Δω ≠ 0) and exactly on a bin (Δω = 0)]
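A numerical sketch of the distinction, assuming nothing beyond a plain DFT: a sine sitting exactly on a Fourier bin occupies a single component, while an intercomponental one (Δω ≠ 0) leaks across many:

```python
import numpy as np

N, fs = 256, 128.0                            # 256 samples at 128 Hz: bins 0.5 Hz apart
t = np.arange(N) / fs
on_bin  = np.sin(2 * np.pi * 10.00 * t)       # 10.00 Hz = exactly bin 20
between = np.sin(2 * np.pi * 10.25 * t)       # halfway between bins 20 and 21

for name, x in [("on bin", on_bin), ("intercomponental", between)]:
    amps = np.abs(np.fft.rfft(x))
    big = np.sum(amps > 0.01 * amps.max())    # bins above 1% of the peak
    print(f"{name}: {big} significant Fourier components")
```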

  38. Phase shifts of Fourier components For two intercomponental signals, the phase shifts of the Fourier components are given by: [formulas not preserved in the transcript]

  39. Amplitude-modulated sine wave [Figure/formulas not preserved in the transcript]

  40. Carrier frequency phase shift Given the Fourier components of two channels and their phase shift [formulas not preserved], and according to the previous two slides, the interchannel carrier frequency phase shift can be calculated as the (vector) mean of the FC phase shifts within a frequency range (fi1, fi2), provided the signals are both intercomponental and amplitude-modulated.
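A sketch of this computation, assuming the per-component phase shifts are read off the cross-spectrum and averaged as unit vectors (circular mean); the band limits and the test signals are illustrative only:

```python
import numpy as np

def carrier_phase_shift(x1, x2, fs, f_lo, f_hi):
    """Vector mean of the FC phase shifts of x1 vs. x2 over (f_lo, f_hi)."""
    X1, X2 = np.fft.rfft(x1), np.fft.rfft(x2)
    freqs = np.fft.rfftfreq(len(x1), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    dphi = np.angle(X1[band] * np.conj(X2[band]))   # per-component phase shifts
    return np.angle(np.sum(np.exp(1j * dphi)))      # circular (vector) mean

fs, N = 128.0, 512
t = np.arange(N) / fs
env = 1 + 0.5 * np.sin(2 * np.pi * 1.0 * t)         # amplitude modulation, 1 Hz
x1 = env * np.sin(2 * np.pi * 10.2 * t)             # intercomponental carrier
x2 = env * np.sin(2 * np.pi * 10.2 * t - 0.4)       # same carrier, 0.4 rad behind
print(carrier_phase_shift(x1, x2, fs, 8.0, 12.0))   # approximately 0.4
```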

  41. Phase potential (1) For a particular montage with Nc channels, the phase potential of a channel is a statistical measure of the phase of this channel in relation to the others [formula not preserved]. E.g., for an 8-channel montage, the phase potential of channel 3 combines its phase shifts with respect to the remaining seven channels.
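Since the slide's formula was not preserved in the transcript, the sketch below only illustrates one plausible reading: the phase potential of channel j as the sum, over the remaining channels, of the interchannel carrier phase shifts:

```python
# Reuses carrier_phase_shift(...) from the previous sketch.
def phase_potentials(channels, fs, f_lo, f_hi):
    """One hypothetical phase potential: the sum of carrier phase shifts
    of channel j against every other channel of the montage."""
    Nc = len(channels)
    return [sum(carrier_phase_shift(channels[j], channels[i], fs, f_lo, f_hi)
                for i in range(Nc) if i != j)
            for j in range(Nc)]
```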

  42. Phase potential (2) Phase potentials of a healthy individual. EEG, 8-channel longitudinal bipolar montage, 2 min of alpha activity. Note the stability and “order” of the carrier frequency phase potentials (right), compared to the separate FC ones (left).

  43. Phase potential (3) Topographic distribution of the EEG alpha carrier phase potential for 5 healthy individuals (S1–S3, P1, P2); circle radius is proportional to the phase potential magnitude, distinguishing “leading” from “following” channels.

  44. Bibliography: 1) Kalauzi A., Spasić S. (2000). Neural networks as functional operators (in Serbian). XLIV ETRAN Conference, Proceedings, Book I, 151-154. 2) Kalauzi A., Andjus R. K. (1990). Frequency domain integration of phase shifts of Fourier components of human alpha EEG activity. Arch. Biol. Sci. 42 (3-4), 7P-8P. 3) Kalauzi A., Andjus R. K. (1994). A new concept of EEG synchronization. Arch. Biol. Sci. 46 (3-4), 21P-22P. 4) Kalauzi A., Janković B., Ćulić M., Šaponjić J., Rajšić N., Šuljagić S. (1998). Phase demodulation of EEG signals. XLII ETRAN Conference, Proceedings, Book IV, 204-207.
