

  1. How a new understanding of biological immune functions can shape our vision of what a caring system could be. Hugues Bersini, IRIDIA – ULB, Brussels. Francisco Varela: 1946 - 2001

  2. Plan • Self/non-self in classical immunology • Computer simulation differentiating between the classical self-recognition and the alternative self-assertion views of immunology • A toy model illustrating the key ingredients of an organic caring system based on the self-assertion view • And what about computer safety?

  3. I. The biological attacks against classical self/non-self immunology

  4. Classical immunology. Main actors: B cells, antibodies, macrophages and antigens. Antigens = non-self; antibodies = self. Self binds only non-self and destroys it. Self does not bind self.

  5. Problems with self and non-self • (Polly Matzinger and many others): • Why are we not protected from air, food, fetuses, tumors…? Why do we tolerate a lot of non-self? • Why are a lot of lymphocytes autoreactive with no sign of disease? Why is autoimmunity always there? Why do we mount an immune response to a lot of self? • If any, where does the self/non-self frontier really lie? • Matzinger substitutes this frontier with a new one: danger/non-danger, but it is still a dichotomous behaviour with a key role given to pattern recognition. • But again, where does this dichotomy come from? • Matzinger might be more or less radical depending on the viewpoint: • Danger from the outside vs danger from the inside

  6. Idiotypic immune networks: Jerne, Varela and Coutinho (photographs of Varela and Jerne)

  7. The most radical alternative: self-assertion • 15 years ago: the Paris immunological group (Varela, Coutinho, Stewart, Bersini…), in the continuation of Jerne's network model • Tauber (2002) • Antigenicity becomes a question of degree • « Self » evokes one kind of response, « non-self » another • Different reaction mechanisms do not superimpose on different kinds of impact • « Non-self » is a perturbation of the current state of the network above a certain threshold, stimulating a rejection • The immune system only knows itself; no recognition is at play. • Host defense might not be the only function; it might even be a side effect. • Francisco Varela's view: homeostasis; the system self-develops an efficient communication pathway in order to create (assert) and maintain a coherent self • John Stewart reverses the classical vision of immunology: rejection and memory are side effects of homeostatic maintenance.

  8. II. Simple computer simulations differentiating between the self-recognition and self-assertion views.

  9. 2D cellular automata type of simulation

  10. Very small simulation of CA type (JTB 94). Parameters on the figure: (cx, cy), (Xo, Yo), L. affinity = Ci(t) * ( L − ( |2Xo − cx − x| + |2Yo − cy − y| ) / 2 ); Affj = Σi affinityOfCelli
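
A minimal Python sketch of this affinity computation, reading (Xo, Yo) as the centre of the 2D shape space and L as the maximal affinity range, and assuming negative affinities are clipped to zero; the function names and data layout are mine, not the original JTB 94 code.

```python
def affinity(ci, x, y, cx, cy, x0, y0, L):
    """Affinity contributed by cell i (concentration ci, at (x, y)) to an entity
    at (cx, cy); (x0, y0) is read as the centre of the 2D shape space, so high
    affinity means (x, y) is close to the mirror image of (cx, cy) through the
    centre. Clipping negative values to zero is an assumption."""
    return max(0.0, ci * (L - (abs(2 * x0 - cx - x) + abs(2 * y0 - cy - y)) / 2))

def total_affinity(cx, cy, cells, x0, y0, L):
    """Aff_j: the sum over every cell i of the affinity it sends to position (cx, cy)."""
    return sum(affinity(c["conc"], c["x"], c["y"], cx, cy, x0, y0, L) for c in cells)
```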

  11. Dynamics: the classical view. Evolution of cell concentration Cj(t): if (low < Affj < high) then Cj(t) = Cj(t) + 1, else Cj(t) = Cj(t) − 1; if Cj(t) = 0, cell j disappears from the system. In the classical view: Affj = Σi affinityOfAntigeni (with low and high the bounds of the stimulation window shown on the figure).
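
A hedged sketch of this update rule, reusing the affinity helpers above; the concrete low/high values are illustrative only, not taken from the slides.

```python
LOW, HIGH = 10.0, 100.0   # stimulation window bounds (illustrative values)

def update_cell(cell, aff_j):
    """Classical-view dynamics: the concentration Cj grows by 1 when the total
    affinity Aff_j falls inside the (low, high) window and decays by 1 otherwise.
    Returns False when Cj reaches 0, i.e. when cell j disappears from the system."""
    cell["conc"] += 1 if LOW < aff_j < HIGH else -1
    return cell["conc"] > 0
```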

  12. Evolution of antigen concentration: if (Affj > low) then Cj(t) = Cj(t) − k*(Affj/low), where k is a time rate; if Cj(t) = 0, antigen j disappears from the system.
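
The corresponding antigen update, again as a sketch with parameters passed in rather than taken from the slides.

```python
def update_antigen(antigen, aff_j, low, k):
    """Antigen decay: once the affinity it receives exceeds `low`, the antigen
    concentration drops by k * (Aff_j / low), with k a time rate.
    Returns False when the concentration reaches 0 and the antigen disappears."""
    if aff_j > low:
        antigen["conc"] -= k * (aff_j / low)
    return antigen["conc"] > 0
```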

  13. SEE DEMO: first the self-recognition view, then the self-assertion view (cells)

  14. The alternative view: birth of the network. Affj = a·Σi affinityOfCelli + b·Σi affinityOfAntigeni. If a = 0, we are back to the previous case. No difference between the antigenic and the antibody perturbation.
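
The same sketch extended to the network case; it reuses the affinity function sketched above, and the (a, b) weights are plain parameters.

```python
def total_affinity_network(cx, cy, cells, antigens, a, b, x0, y0, L):
    """Network view: Aff_j mixes the affinities received from the other cells
    (weight a) and from the antigens (weight b). With a = 0 this reduces to the
    classical, antigen-only case; otherwise cell and antigen perturbations are
    treated identically."""
    from_cells = sum(affinity(c["conc"], c["x"], c["y"], cx, cy, x0, y0, L) for c in cells)
    from_antigens = sum(affinity(g["conc"], g["x"], g["y"], cx, cy, x0, y0, L) for g in antigens)
    return a * from_cells + b * from_antigens
```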

  15. See Demo

  16. The most interesting outcome • The shape space is divided into two zones: a tolerant one and a reactive one • This division is self-asserted by the system • The system learns what it accepts and what it rejects • Here this division, self-asserted by the system, is the amplification of a random effect • In general, this division depends on the history of the encounters, i.e. the zone filled first is the most likely to be tolerated → the negative selection of reactive cells

  17. Self is indeed learned • Not imposed from outside • No more prior dichotomy • The recognition does not induce the treatment that follows. • One unique behaviour gives two reactive outcomes • The network's role: it maintains itself, defines zones of tolerance and reactivity, provides an inertial memory, and compromises between memory and adaptivity

  18. III. Engineering artificial immune systems. "Artificial immune systems (AIS) are adaptive systems, inspired by theoretical immunology and observed immune functions, principles and models, which are applied to problem solving." The first attempt was Bersini and Varela (1990). Today's most popular and successful applications: computer defensive systems → a virus needs an antivirus. In the US: the Kephart, Forrest and Dasgupta groups. In the UK: Timmis and de Castro. ICARIS conferences (2002, 2003, 2004). But still a modest field.

  19. The classical self-recognition view still predominates when engineering the immune system • Above all: a pattern recognition system able to distinguish between dangerous impacts and harmless ones… • Thus defend against the dangerous ones and possibly cure the damage • Finally, memorize the encounter • Notice: the dichotomy is between dangerous and not dangerous • Forrest, Dasgupta, … • But still a strong ability for recognition of the impact is at play • Since the treatment follows from this recognition • What was engineered were « pattern recognition and learning » and « clustering and outlier detection » • I guess it's a pity, since the alternative would be much more fruitful both in biological and in engineering terms.

  20. The self-recognition view might not be the most fruitful one because: • Too much emphasis on recognition abilities • It leads to classification, clustering and outlier detection algorithms, but are they really new? • Do we need immunology to build efficient antivirus systems today based on pattern recognition abilities? • This is where immunology joins common sense on what a defensive system should be. • No immunologist is needed to set up an alarm in a house or to post good police officers in sensitive places.

  21. Toy model and key ingredients of our alternative self-assertion proposal • A complex system of interacting variables • The state of the system must be viable → viability or homeostasis • External impacts occur on some sensitive variables • Need for a frontier including agents capable of three behaviours: observing and filtering some of the variables, and curing the system by acting on some of the variables • The agents are learning agents based on memorization and a statistical analysis of the impacts • Filtering is based on pattern recognition, BUT: • Not the impact alone but the couple (impact, state) is taken into account by the agents. • Care can be achieved by acting on some internal variables which might be different from the impacted ones. In the following, every aspect has to be understood in a rather metaphoric sense and can be bought separately.

  22. 1. The complex system • It is the system to protect, composed of interacting variables. For a complex system to be treated, physical knowledge is not enough: we still need input/output observations to infer the internal behavior.

  23. 2. The state of the system • Like a Hopfield neural network → it is a metaphor. • Described by 10 variables, each between −1 and +1 • Needs to possess a viability zone • It evolves in time, here like a neural network, but in general as any internal state dynamics

  24. Viability zone. When the system exits its viability zone, it is in a fatal situation. The closer the system can remain to its viability center, the healthier it is. The viability zone allows for a degree of "wellbeing".
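
A hedged sketch of slides 23 and 24, assuming a tanh-coupled update for the ten variables and a spherical viability zone centred at the origin; the coupling matrix, radius and norm are my assumptions, not the original toy model.

```python
import numpy as np

N_VARS = 10
rng = np.random.default_rng(0)
W = rng.normal(scale=0.3, size=(N_VARS, N_VARS))   # illustrative coupling matrix

def step(state):
    """Hopfield-like evolution of the ten interacting variables; tanh keeps
    every variable within [-1, +1]."""
    return np.tanh(W @ state)

VIABILITY_RADIUS = 0.5             # illustrative value, not from the slides
CENTER = np.zeros(N_VARS)          # the viability center is assumed to be the origin

def is_viable(state):
    """The system is in a fatal situation once it leaves its viability zone."""
    return np.linalg.norm(state - CENTER) < VIABILITY_RADIUS

def wellbeing(state):
    """Degree of wellbeing: the closer the state stays to the viability center,
    the healthier the system."""
    return max(0.0, 1.0 - np.linalg.norm(state - CENTER) / VIABILITY_RADIUS)
```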

  25. 3. Impacts on some sensitive variables • Impact a variable Xj with a given amount « a »
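
Continuing the same sketch (numpy already imported above), an external impact on one sensitive variable:

```python
def impact(state, j, a):
    """External impact: perturb the sensitive variable X_j by an amount a,
    clipping the result so every variable stays within [-1, +1]."""
    new_state = state.copy()
    new_state[j] = np.clip(new_state[j] + a, -1.0, 1.0)
    return new_state
```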

  26. 4. Agents on the frontier

  27. Monitoring, filtering and curing • Agents can monitor the effect of the impacts • If the impact does not exist in its data base, an agent adds it with an initial "score" or risk indicator • Each impact is stored, and the pattern matching is done with a given granularity → instance-based learning. • The monitoring agent then modifies the score: negatively if, during the observation period, the system exits its viability zone, and positively if during this same period it remains inside or even gets closer to the center of the viability zone. • Agents will filter the impact if its score becomes too negative • If the system exits its viability zone, agents will try to cure it by acting, if possible, on the impacted variables or, if impossible, by exploiting positive impacts stored in the data base. • KEY DIFFERENCE BETWEEN THE IMPACT AND THE COUPLE (IMPACT, STATE)
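
A possible sketch of the monitoring and filtering agents, storing couples (impact, state) in a plain dictionary; the granularity, scoring increments and filtering threshold are illustrative assumptions, and numpy comes from the sketch above.

```python
GRANULARITY = 0.25   # pattern-matching resolution of the instance-based learning (illustrative)

def key(var, amount, state):
    """Instance key: the couple (impact, state) discretised at the chosen granularity."""
    return (var, int(round(amount / GRANULARITY)),
            tuple(int(round(s / GRANULARITY)) for s in state))

def observe(db, var, amount, state, exited_viability):
    """Monitoring agent: an unseen couple is added with a neutral initial score;
    the score then decreases when the system exits its viability zone during the
    observation period and increases when it stays inside (or gets closer to the center)."""
    entry = db.setdefault(key(var, amount, state),
                          {"var": var, "amount": amount,
                           "state": np.array(state, dtype=float), "score": 0.0})
    entry["score"] += -1.0 if exited_viability else +1.0

def should_filter(db, var, amount, state, threshold=-3.0):
    """Filtering agent: block an impact whose learned score has become too negative."""
    entry = db.get(key(var, amount, state))
    return entry is not None and entry["score"] < threshold
```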

  28. Accounting for the state of the system • 3 times the same impact I, but in 3 different states. • The self-recognition vision rejects them all, since there exists at least one case for which I is bad (here B). • This is good for the false negatives (it rejects enough) but very bad for the false positives (it rejects too much). • Now, A and C are not damaging. • They are even beneficial. Not taking the state into account can lead to a too conservative protection policy and, even worse, to missing some curing opportunities.

  29. The damaging effect of the fifth glass of wine should not prevent the first one

  30. Curing agents • When the variable targeted by the impact is accessible → just invert the impact • Otherwise → curing agents look at the data base and, as a function of the state, select a beneficial impact on the accessible variables. • Accounting for the state is fundamental here • Curing amounts to regulating some internal variables of the system, including the ones not directly impacted. • A cure can further be classified as bad if its effect is not the one expected.
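
A corresponding sketch of a curing agent, reusing the impact() helper and the database entries above; the nearest-state selection heuristic is my assumption, not the original model's rule.

```python
def cure(db, state, var, amount, accessible_vars):
    """Curing agent. If the impacted variable is accessible, simply invert the
    impact; otherwise search the data base for a positively scored impact acting
    on an accessible variable, preferring the one learned in the state closest
    to the current one, and apply it."""
    if var in accessible_vars:
        return impact(state, var, -amount)
    candidates = [e for e in db.values()
                  if e["score"] > 0 and e["var"] in accessible_vars]
    if not candidates:
        return state    # nothing beneficial learned yet: no cure available
    best = min(candidates, key=lambda e: np.linalg.norm(e["state"] - state))
    return impact(state, best["var"], best["amount"])
```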

  31. 5. The data base, the learning and the statistics • Each impact or couple (impact, state) is maintained in a data base with a score and a classification (good, bad, curing) • An impact is never good or bad from the outset; it is learned to be so. Nothing is known a priori. • The granularity of the instance-based strategy is an important parameter to tune. • A compromise between adding a new impact and modifying the score of an existing one. • The system evolves in time: new impacts come in and the status of existing impacts can always change. • Learning by experience is a key ingredient • It is unavoidable when interacting with complex systems.

  32. A model? At the end, a nearest neighbour classifier
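
Seen from the outside, the learned data base indeed behaves like a nearest neighbour classifier over couples (impact, state); a sketch of that reading, with an assumed distance combining state closeness and impact amount.

```python
def classify(db, var, amount, state):
    """Nearest-neighbour reading of the data base: a new couple (impact, state)
    receives the status of its closest stored instance on the same variable
    (good if that instance's score is positive, bad otherwise)."""
    same_var = [e for e in db.values() if e["var"] == var]
    if not same_var:
        return "unknown"
    nearest = min(same_var,
                  key=lambda e: np.linalg.norm(e["state"] - state) + abs(e["amount"] - amount))
    return "good" if nearest["score"] >= 0 else "bad"
```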

  33. Results of the simulation: measures over time (per 1000 rounds)

  34. (Simulation results, per 1000 rounds)

  35. (Simulation results, per 1000 rounds)

  36. Results of the simulation: measures over time

  37. IV. What about computer safety, related works and conclusions

  38. Kephart and collaborators (IBM) and « Symantec Digital Immune System »

  39. Discovery of a new virus • By verifying file integrity and checksums • By pattern matching against well-known viruses and viral effects • Capture of a sample, which is sent to a central analysis facility → infection of decoy programs • Automatic analysis in order to deduce a prescription • Transmission of the prescription to the user • Diffusion of the prescription to all users
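
As a hedged illustration of the file-integrity/checksum step only (not Kephart's or Symantec's actual pipeline), a minimal baseline-and-compare sketch:

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 digest per file: a minimal stand-in for the
    'file integrity and checksum' verification step."""
    return {str(p): hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def changed_files(baseline, paths):
    """Files whose current digest no longer matches the recorded baseline,
    i.e. candidates for further (e.g. decoy-based) analysis."""
    current = snapshot(paths)
    return [p for p in current if baseline.get(p) != current[p]]
```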

  40. The AIS ARTIS (Forrest and collaborators). On the basis of data on self, learn and cluster self in order to recognize non-self.

  41. The AIS ARTIS: detector generation. The negative selection algorithm.
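
A sketch of negative selection in the spirit of Forrest's algorithm, using the classic r-contiguous-bits matching rule; the binary alphabet, string length and r value are illustrative choices, not ARTIS's actual configuration.

```python
import random

def r_contiguous(a, b, r=4):
    """r-contiguous-bits rule: two equal-length strings match if they share
    at least r contiguous identical symbols at the same positions."""
    return any(a[i:i + r] == b[i:i + r] for i in range(len(a) - r + 1))

def negative_selection(self_strings, n_detectors, length, match=r_contiguous):
    """Generate random candidate detectors and keep only those matching no self
    string; the survivors later flag anything they do match as non-self.
    (If self covers almost the whole space, generation may take very long.)"""
    detectors = []
    while len(detectors) < n_detectors:
        candidate = "".join(random.choice("01") for _ in range(length))
        if not any(match(candidate, s) for s in self_strings):
            detectors.append(candidate)
    return detectors
```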

  42. LAN Protection

  43. Conclusions

  44. How do we compare – 1) the impact • Essentially intrusion detection systems. • What is good or bad as an intrusion should be defined both from outside and from inside, depending on the system and its current state. Information on the impact alone is far from enough. • Otherwise, too many false positives and missed curing opportunities. • The system should be capable of "external" and "internal" monitoring. Agents on the frontier should look outside and inside. • Interesting internal variables for computers: "virtual swap", "memory and CPU usage".

  45. 2) Viability: how to define it? • What is a healthy system? One that maintains its state in its viability zone → but how to define this zone? • The advantage of a zone → it implies a degree of wellbeing. • In our toy problem, it was defined a priori. • For Forrest, it needs to be learned by observing the most frequent situations → viable = most frequent • Basically: what is frequent is good and what is rare is bad? • Viability might be adaptive • Going in the right direction: Forrest's more recent work on system calls and Kephart's file integrity detection. • A virus can be learned to be one instead of being imposed a priori.

  46. 3) Learning and adaptation • Learning is a key mechanism for complex systems. Continuously and autonomously adapt the data base to new impacts. • Impact status can change over time • Also adapt the data base to various system configurations and various states. • Learning allows a "case by case" defensive strategy • What is good for my computer could be bad for yours. • What is good for my computer now can be bad two minutes from now. • Importance of the statistics, the coding and the granularity of the data.

  47. 4) Cares and cures • In general, there is no cure in "intrusion detection" systems • In our case, there is • Cure by using the knowledge of the impact (by inverting the impact) • BUT ALTERNATIVELY, cure by treating variables which can be effective, although these variables might not be the impacted ones. • But a cure can be damaging in its turn.

  48. A current evolution: from safety to health, from antivirus to antispam • Dramatic problems are rare, but an increasing number of little disturbances to deal with is to be expected. • This is as true for the immune system as for our more and more open, integrative and complex systems of tomorrow. • The proposal here need not be taken as a whole package. Only part of it could be of interest: the couple (impact, state), viability, learning, pattern recognition, external and internal cure.
