Principles of Complex Systems

Presentation Transcript

  1. Principles of Complex Systems: How to think like nature, Part II. Russ Abbott. (Does nature really think?)

  2. Complex systems overview.
  Part 1. Introduction and motivation. Overview: unintended consequences; mechanism, function, and purpose; levels of abstraction; emergence; introduction to NetLogo. Emergence, levels of abstraction, and the reductionist blind spot. Modeling, thought externalization, and how engineers and computer scientists think.
  Part 2. Evolution and evolutionary computing. Innovation: exploration and exploitation. Platforms: distributed control and systems of systems. Groups: how nature builds systems; the wisdom of crowds. Summary/conclusions: remember this if nothing else.
  There are lots of echoes and repeated themes from one section to another.

  3. The fundamental dilemma of science. The functionalist claim (emergence): there are autonomous higher-level laws of nature; Fodor cites Gresham's law. The reductionist position: how can that be if everything can be reduced to the fundamental laws of physics? My answer: it can all be explained in terms of levels of abstraction.

  4. Game of Life: Gliders. A 2-dimensional cellular automaton. The Game of Life rules determine everything that happens on the grid:
  • A dead cell with exactly three live neighbors becomes alive.
  • A live cell with either two or three live neighbors stays alive.
  • In all other cases, a cell dies or remains dead.
  The "glider" pattern: nothing really moves; just cells going on and off.
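The three rules above are easy to express directly in code. This is a minimal Python sketch (my own illustration, not from the presentation) that represents the grid as a set of live-cell coordinates and checks the key glider fact: after four generations, the pattern reappears shifted one cell diagonally.

```python
from itertools import product

def step(live):
    """Apply the Game of Life rules to a set of (row, col) live cells."""
    # Count live neighbors for every cell adjacent to some live cell.
    counts = {}
    for (r, c) in live:
        for dr, dc in product((-1, 0, 1), repeat=2):
            if (dr, dc) != (0, 0):
                counts[(r + dr, c + dc)] = counts.get((r + dr, c + dc), 0) + 1
    # Birth: a dead cell with exactly 3 neighbors; survival: a live cell with 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# One phase of the glider pattern.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)

# After 4 steps the glider has "moved" one cell down and one cell right,
# even though nothing moved: cells merely went on and off.
assert state == {(r + 1, c + 1) for (r, c) in glider}
```

Nothing in `step` knows about gliders; the translation is a pattern we notice, exactly the point of slide 6.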

  5. The Game of Life File > Models Library > Computer Science > Cellular Automata > Life Click Open

  6. Gliders are causally powerless. A glider does not change how the rules operate or which cells will be switched on and off. A glider doesn't "go to a cell and turn it on." A Game of Life run will proceed in exactly the same way whether one notices the gliders or not. A very reductionist stance. But one can write down equations that characterize glider motion and predict whether, and if so when, a glider will "turn on" a particular cell. What is the status of those equations? Are they higher-level laws? Gliders, like shadows, don't "do" anything; the rules are the only "forces"!

  7. Game of Life as a programming platform. Amazing as they are, gliders are also trivial: once we know how to build a glider, it's simple to make as many of them as we want. One can build a library of Game of Life patterns and their interaction APIs. By suitably arranging these patterns, one can simulate a Turing machine (Paul Rendell, http://rendell.server.org.uk/gol/tmdetails.htm). A second level of emergence. What does it mean to compute with shadows? Emergence is not particularly mysterious.

  8. Downward causation. The unsolvability of the TM halting problem entails the unsolvability of the GoL halting problem: "reduce" GoL unsolvability to TM unsolvability by constructing a TM within the GoL. How strange! We can conclude something about the GoL because we know something about Turing machines. Yet Turing machines are just shadows in the GoL world, and the theory of computation is not derivable from the GoL rules. This entailment is a form of downward causation. Paul Davies, "The physics of downward causation," in Philip Clayton and Paul Davies (eds.), The Re-Emergence of Emergence, 2006.

  9. A GoL Turing machine …
  • … is an entity. Like a glider, it is recognizable; it has reduced entropy; it persists and has coherence, even though it is nothing but patterns created by cells going on and off.
  • … obeys laws from the theory of computability, which is independent of the GoL rules. Reductionism still holds: everything that happens on a GoL grid is a result of the application of the GoL rules and nothing else.
  • … is thus a GoL phenomenon that obeys laws that are independent of the GoL rules while at the same time being completely determined by the GoL rules.
  Just as Schrödinger said: "Living matter, while not eluding the 'laws of physics' … is likely to involve 'other laws,' [which] will form just as integral a part of [its] science."

  10. Level of abstraction: causally reducible yet ontologically real. A level of abstraction is a collection of entities and relationships that can be described independently of their implementation, e.g., a Turing machine; biological entities; every computer application, such as PowerPoint. When implemented, a level of abstraction is causally reducible to its implementation: you can look at the implementation to see how it works. Its independent description makes it ontologically real: how it behaves depends on its description at its level of abstraction, which is independent of its implementation. The description can't be reduced away to the implementation without losing information. If the level of abstraction is about nature, reducing it away is bad science.

  11. Supervenience. Developed originally in the philosophy of mind in an attempt to link mind and brain.
  • A set of predicates H (for Higher-level) about a world supervenes on a set of predicates L (for Lower-level) if it is never the case that two states of affairs of that world assign the same configuration of truth values to the elements of L but different configurations of truth values to the elements of H.
  • In other words, L fixes H. Or: no change in H without a change in L.
  • Think of L as statements in physics and H as statements in a higher-level ("special") science.
  • Or of L as statements in a computer program and H as the specification of the program's functionality.
  • Or of L as a description of cells on the GoL grid (which are on and which are off) and H as a description of the patterns (like gliders) on the grid.

  12. Supervenience example. Consider the world in two different states: bits 0-4 = (t, t, t, t, t) and (t, t, f, t, t).
  L1: {Bit 0 is on. Bit 1 is on. Bit 2 is on. Bit 3 is on. Bit 4 is on.}
  L2: {Bit 0 is on. Bit 1 is on. Bit 3 is on. Bit 4 is on.}
  H: {"An odd number of bits is on": True, False. "The bits that are on are the start of the Fibonacci sequence": False, False. "The bits that are on represent the value 27": False, True. …}
  H supervenes on L1: the truth value of each statement in H depends on the truth values of the statements in L1. But H does not supervene on L2: the H statement "An odd number of bits is on" can be either true or false (by varying bit 2) without changing the truth values in L2, since L2 ignores bit 2.
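The definition can be checked mechanically. In this Python sketch (my own illustration, not from the slides), a world is a tuple of five booleans, predicates are functions of a world, and H supervenes on L iff no two worlds agree on every L-predicate while disagreeing on some H-predicate.

```python
from itertools import product

def supervenes(H, L, worlds):
    """H supervenes on L iff there is no pair of worlds that agree on
    every predicate in L but disagree on some predicate in H."""
    for w1, w2 in product(worlds, repeat=2):
        if all(p(w1) == p(w2) for p in L) and any(h(w1) != h(w2) for h in H):
            return False
    return True

worlds = list(product([False, True], repeat=5))   # all 5-bit world states

L1 = [lambda w, i=i: w[i] for i in range(5)]       # "Bit i is on" for bits 0..4
L2 = [lambda w, i=i: w[i] for i in (0, 1, 3, 4)]   # same, but ignores bit 2
H  = [lambda w: sum(w) % 2 == 1]                   # "An odd number of bits is on"

assert supervenes(H, L1, worlds)        # L1 fixes H
assert not supervenes(H, L2, worlds)    # varying bit 2 changes H but not L2
```

The failing case is exactly the slide's: (t, t, t, t, t) and (t, t, f, t, t) agree on L2 yet differ on the parity predicate in H.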

  13. Evolution as a level of abstraction • Darwin and Wallace’s theory of evolution by natural selection is expressed in terms of • entities • their properties • how suitable the properties of the entities are for the environment • populations • reproduction • etc. • These concepts are a level of abstraction. • The theory of evolution is about entities at that level of abstraction. • Let’s assume that it’s (theoretically) possible to trace how any state of the world—including the biological organisms in it—came about by tracking elementary particles • Even so, it is not possible to express the theory of evolution in terms of elementary particles. • Reducing everything to the level of physics, i.e., naïve reductionism, results in a blind spot regarding higher level entities and the laws that govern them.

  14. How are levels of abstraction built? By adding persistent constraints to what exists. Constraints "break symmetry" by ruling out possible future states. (One should be able to relate this to symmetry breaking more generally.) This is easy in software: software constrains a computer to operate in a certain way; software (or a pattern set on a Game of Life grid) "breaks the symmetry" of possible sequences of future states. How does nature build levels of abstraction? Two ways. Energy wells produce static entities: atoms, molecules, solar systems, … Activity patterns use imported energy to produce dynamic entities: biological entities, social entities, hurricanes; here the constraint is imposed by the homeostatic processes that the dynamic entity employs to maintain its structure. A constrained system operates differently (has additional laws: the constraints) from one that isn't constrained. Isn't this just common sense? A GoL configured as a TM acts differently from a random configuration.

  15. Not surprising: a constrained system is likely to obey special rules. How can you use two tablespoons of water to break a window? 1. Spoon the water into an ice cube tray. 2. Freeze the water, thereby constraining its molecules into a rigid lattice structure. 3. Remove the frozen water from the tray. 4. Hurl the "water stone" at the window.

  16. Not surprising: a constrained system is likely to obey special rules. Frozen water implements a solid. It can be used like a solid, and it obeys the laws of solids. (That's because it is a solid, which is an abstraction.) Is this a trivial observation? Is it just common sense? So if we constrain the GoL to act like a TM, it shouldn't be surprising that it is governed by TM laws. A phase transition often signals the imposition or removal of a constraint.

  17. Categories of entities

  18. Does nature use levels of abstraction? • Given the imposition of some (random) constraints, what entities result? Two possibilities. • There are none, or they don’t persist. Back to nature’s drawing board. • They persist and by their interaction create a new level of abstraction. • Nature then asks: what can I build on top of that? (Think James Burke’s Connections.) • Software developers do the same thing. • It’s all very bottom-up—and in nature’s case random. Each new entity or level of abstraction creates a range of possible laws/mechanisms that didn’t exist before. • These could not have been “deduced” from lower levels—except through exhaustive enumeration—any more than a new piece of software can be “deduced” from the programming language in which it is written.

  19. Principle of ontological emergence. Extant levels of abstraction are those whose implementations have materialized and whose environments enable their persistence. In some sense it is possible to “deduce” the theory of every natural process and reconstruct the universe, but the reconstruction will involve random constructions or exhaustive trials.

  20. Practical corollary: feasibility ranges Creating or breaking a level of abstraction frequently corresponds to a phase transition. Physical levels of abstraction are implemented only within feasibility ranges. When the feasibility range is exceeded a phase transition generally occurs. Require contractors to identify the feasibility range within which the implementation will succeed and describe the steps taken to ensure that those feasibility ranges are honored—and what happens if they are not. (Think O-rings.)

  21. Principles of Complex Systems: How to think like nature Modeling, the externalization of thought, and how engineers and computer scientists think Russ Abbott

  22. Modeling problems: the difficulty of looking downward. It is not possible to find a non-arbitrary base level for models; what are we leaving out that might matter? (Use Morse code to transmit messages on encrypted lines.) There are no good models of biological arms races. Insects vs. plants: bark, bark boring, toxin, anti-toxin, … Geckos use the Van der Waals "force" to climb. We can only model unimaginative enemies: models of computer security or terrorism will always be incomplete. Nature is not segmented into a strictly layered hierarchy.

  23. Modeling problems: the difficulty of looking upward. We don't know how to build models that can notice emergent phenomena and characterize their interactions; we don't know what we aren't noticing. (Use our commercial airline system to deliver mail/bombs: exploit an existing process.) Model gravity as an agent-based system and ask the system to find the equation of the earth's orbit. Once told what to look for, the system can find the ellipse (via genetic programming). But it won't notice the yearly cycle of the seasons, even though it is similarly emergent. We can only model unimaginative enemies: models of computer security or terrorism will always be incomplete.

  24. Turning dreams into reality • Computer Scientists and Engineers both turn dreams (ideas) into reality—systems that operate in the world. • But we do it in very different ways. • Humanists turn reality into dreams. —Debora Shuger. • Mathematicians turn coffee into theorems. —Paul Erdos.

  25. How do we externalize thought? Which one is different? Why?

  26. Intellectual leverage in computer science: executable externalized thought. • Computer languages enable executable externalized thought, different from (nearly) all other forms of externalized thought throughout history! Software is both intentional (it has meaning) and executable. All other forms of externalized thought (except music) require a human being to interpret them. • The bit provides a floor that is both symbolic and real. Bits are symbolic, physically real, and atomic; absolutely solid and concrete, in a virtual sort of way. Bits don't have error bars. We can build (ontologically real) levels of abstraction above them. • But the bit limits realistic modeling: e.g., there are no good models of evolutionary arms races and many other multi-scale (biological) phenomena, which have no justifiable floor. • Challenge: build a computer modeling framework that supports dynamically varying floors.

  27. Intellectual leverage in engineering: mathematical modeling. • Engineering gains intellectual leverage through mathematical modeling and functional decomposition. • Models approximate an underlying reality (physics); they don't create ontologically independent entities. • Engineering is both cursed and blessed by its attachment to physicality. There is no reliable floor (like the bit) in the material world. Engineering systems often fail because of unanticipated interactions among well-designed components, e.g., acoustic coupling that could not be identified in isolation from the operation of the full system (National Academy of Engineering, Design in the New Millennium, 2000). • But if a problem appears, engineers (like scientists) can dig down to a lower level to solve it.

  28. Engineers and computer scientists are different, almost as different as Venus and Mars. • Engineers are grounded in physics: ultimately there is nothing besides physics. Even though engineers build things that have very different (emergent) properties from their components, engineers tend to think at the level of physics. When designing systems, engineers start with an idea and build down to the physics, using functional decomposition and successive approximation. Engineering is (proudly) applied physics. • Computer scientists live in a world of abstractions: physics has very little to do with computer science worlds. For computer scientists, there is more than physics, but we may have a hard time saying what it is (emergence). When designing systems, computer scientists start with the bit and build up to the idea, using levels of abstraction. Computer science is (cautiously) applied philosophy. "Engineering is the art of directing the great sources of power in nature for the use and convenience of man." —Thomas Tredgold (1828)

  29. Principles of Complex Systems: How to think like nature Evolution: how nature thinks Russ Abbott

  30. Peppered moths: evolution in action • Originally, the vast majority of peppered moths in Manchester, England had light coloration—which camouflaged them from predators since they blended into the light-colored trees. • With the industrial revolution: • Pollution blackened the trees. • Light-colored moths died off. • Dark-colored moths flourished. • With improved environmental standards, light-colored peppered moths have again become common.

  31. Try it out File > Models Library > Biology > Evolution > Peppered Moths Click Open

  32. The evolutionary process. There is a population of elements. The elements are capable of making copies of themselves, perhaps with variants (mutations) and perhaps by combining with other elements. The environment affects the likelihood of an element surviving and reproducing. This results in "evolution by natural (i.e., environmental) selection." Darwin likened it to breeding, with the environment playing the role of the breeder.
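The loop just described (population, variation, environmental selection) can be sketched in a few lines. This toy Python version is my own illustration (the deck's peppered-moth model is in NetLogo): a single "coloring" value per element evolves toward what the environment favors, with the fitness function playing the breeder's role.

```python
import random

def evolve(population, fitness, mutate, generations=100):
    """Minimal evolution-by-selection loop: the environment (fitness)
    biases which variants survive and reproduce."""
    for _ in range(generations):
        # Selection: fitter elements are more likely to become parents.
        weights = [fitness(x) for x in population]
        parents = random.choices(population, weights=weights, k=len(population))
        # Variation: each offspring is a copy with possible mutation.
        population = [mutate(p) for p in parents]
    return population

# Toy environment: dark coloring (value near 1.0) is favored, as on
# soot-blackened trees in the peppered-moth example.
random.seed(0)
pop = [random.random() for _ in range(50)]            # coloring in [0, 1]
def fit(c): return 0.01 + c                            # darker is fitter
def mut(c): return min(1.0, max(0.0, c + random.gauss(0, 0.02)))
final = evolve(pop, fit, mut, generations=200)
```

After 200 generations the population's mean coloring is far above its starting mean: the environment, not any moth-on-moth battle, selected the winners.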

  33. The nature of evolution Moth coloring confers survival value (fitness)—which depends on the environment. Hence Darwin’s “natural selection,” i.e., environmental selection. The environment selects the winners. There may be multiple “winners.” All one needs is a niche, not domination. • Nature is not necessarily “red in tooth and claw.” The dark and light moths don’t compete directly with each other. • “Survival of the fittest” doesn’t mean survival of the strongest. It means survival of those that best fit the environment. • There are no moth-on-moth battles. • Nor do the dark moths attempt to convince the light moths that it’s better to be dark — or vice versa. • Moths (and their colors) are rivals, not adversaries. • It’s more like a race than a boxing match. • They are rivals with respect to their ability to survive and acquire resources from the environment.

  34. Six time scales of evolution • Biological evolution is generally slow. • Warfare: often super fast evolution. • IED tactics and counter tactics. • Social/economic/cultural systems evolve at medium speeds. • As rivals: a social system that does well for its members thrives and expands. • As adversaries: social systems sometimes compete for resources—land in the past; now other resources. • Thought: thinking through options is even faster. • Let one’s hypotheses die in one’s stead. —Karl Popper • Simulation: computer modeling of evolutionary processes is faster yet. • Markets are evolution speeded-up. • Coke and Pepsi are rivals for consumer dollars, not adversaries. • They don’t attempt to kill each other’s CEOs or to sabotage each other’s delivery trucks.

  35. Application to engineering problems: the Traveling Salesman Problem (TSP). [Figure: five cities A-E connected by edges with labeled distances.] • Find the shortest tour that is a permutation of the cities: it starts and ends at the same city and includes each city exactly once. • The obvious tour will include the sequence ACED-54 (or its reverse), with no diagonals (A-E or C-D). • The question is where to put B: ABCED-55, ACBED-57, or ACEBD-56? In this case the problem is easy to solve by inspection. In general, it's computationally explosive, since there are (n-1)! possible tours. (Why not n!? Because the starting city of a cyclic tour is arbitrary.)
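The (n-1)! count is easy to verify by brute force: fix the starting city and enumerate orderings of the rest. A small Python check (illustrative only; it counts tours rather than using the slide's distances):

```python
from itertools import permutations

def num_tours(n):
    """Fix the first city; every ordering of the remaining n-1 cities
    is a distinct tour, so there are (n-1)! of them."""
    cities = list(range(n))
    tours = [(cities[0],) + rest for rest in permutations(cities[1:])]
    return len(tours)

assert num_tours(5) == 24   # matches the next slide: 4! = 24 possible tours
```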

  36. Genetic algorithm approach. • Create a population of random tours: AEBCD-59, ACBED-57, ADCBE-59, ACDEB-71, … In this case there are only 4! = 24 possible tours, so we could examine them all; usually that's not possible. • Select one or two tours as parents, e.g., AEBCD and ACBED, ensuring that better tours are more likely to be selected. • Generate offspring using genetic operators to replace poorer elements: exchange two cities (ACDEB-71 → ACBED-57); reverse a subtour (ACBED-57 → AEBCD-59); (re)combine two tours (AEBCD-59 & ACBED-57 → AEDCB-71); possibly mutate the result (ADCBE-59 → ACBDE-70). • Repeat until good enough or there is no improvement, but beware local optima. Note that an exchange (or reversal, or mutation) solves this problem in one step: ACBED-57 → ABCED-55.
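The approach above can be sketched as runnable Python. This is my own minimal illustration, not the TSP.jar implementation: it uses only the exchange and subtour-reversal operators named on the slide, simple truncation selection instead of parent recombination, and a demo instance (six cities on a regular hexagon) whose optimal tour is the perimeter.

```python
import math
import random

def tour_length(tour, dist):
    """Total length of a cyclic tour (returns to the starting city)."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def ga_tsp(cities, dist, pop_size=30, generations=300):
    """Genetic-algorithm sketch: keep a population of random tours,
    let better (shorter) tours survive, and vary them with the
    exchange and subtour-reversal operators."""
    random.seed(1)  # deterministic for the demo
    pop = [random.sample(cities, len(cities)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        survivors = pop[:pop_size // 2]   # exploit: better tours reproduce
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = sorted(random.sample(range(len(child)), 2))
            if random.random() < 0.5:
                child[i], child[j] = child[j], child[i]     # exchange two cities
            else:
                child[i:j + 1] = reversed(child[i:j + 1])   # reverse a subtour
            children.append(child)
        pop = survivors + children        # elitism: the best tour is never lost
    return min(pop, key=lambda t: tour_length(t, dist))

# Demo: six cities on a regular hexagon (unit circumradius); each side has
# length 1, so the optimal tour is the perimeter, with length 6.
pts = {i: (math.cos(math.pi * i / 3), math.sin(math.pi * i / 3)) for i in range(6)}
dist = {a: {b: math.dist(pts[a], pts[b]) for b in pts} for a in pts}
best = ga_tsp(list(pts), dist)
```

On this tiny instance the GA reliably recovers the hexagon ordering; the point is the shape of the loop (population, biased selection, variation), not the tuning.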

  37. Try it out: TSP.jar After starting a run, double click in the display area to add a city or on a city to remove it. New cities are added to the tour next to their nearest neighbor. Stop and restart for new random cities. The number of new cities will be the same as the number of old cities. The differences between the current best and its immediate predecessor are shown by link color. New links are shown in green. Removed links are in dashed magenta. No “geographical” heuristics are used. Just the structural ones shown on the previous slide.

  38. Genetic algorithms: parameter setting/tuning. The number of variables is constant. Both the TSP and the peppered-moth examples illustrate genetic algorithms. Peppered moths: one parameter (color) to set. TSP: N variables. As a parameter-setting problem, think of each tour as consisting of N variables, each of which may contain any city number, with the additional constraint that no city may repeat. Often there are hundreds of variables (or more), or the search space is large and difficult to search for some other reason, and there is no algorithmic way to find values that optimize (maximize/minimize) an objective function. Terrile et al. (JPL), "Evolutionary Computation Applied to the Tuning of MEMS Gyroscopes," GECCO, 2005. Abstract: We propose a tuning method for MEMS gyroscopes based on evolutionary computation to efficiently increase the sensitivity of MEMS gyroscopes through tuning and, furthermore, to find the optimally tuned configuration for this state of increased sensitivity. The tuning method was tested for the second-generation JPL/Boeing post-resonator MEMS gyroscope using the measurement of the frequency response of the MEMS device in open-loop operation.

  39. Genetic programming: design. The number of variables (and the structure of the possible solution) is not fixed. The original goal was to generate software automatically; not very successful, but hence the name. It has since been applied successfully to other design and analysis problems: circuit design, lens design, and "symbolic regression." Bongard and Lipson (Cornell), "Automated reverse engineering of nonlinear dynamical systems," PNAS, 2007. Abstract: Complex nonlinear dynamics arise in many fields of science and engineering, but uncovering the underlying differential equations directly from observations poses a challenging task. The ability to symbolically model complex networked systems is key to understanding them, an open problem in many disciplines. Here we introduce for the first time a method that can automatically generate symbolic equations for a nonlinear coupled dynamical system directly from time series data. This method is applicable to any system that can be described using sets of ordinary nonlinear differential equations, and assumes that the (possibly noisy) time series of all variables are observable. …

  40. The Human-competitive awards: “Humies” Each year at the Genetic and Evolutionary Computing Conference (GECCO), prizes are awarded to systems that perform at human-competitive levels—including the previous two slides. See http://www.genetic-programming.org/hc2005/main.html An automatically created result is considered “human-competitive” if it satisfies at least one of the eight criteria below. John Koza • The result was patented as an invention in the past, is an improvement over a patented invention, or would qualify today as a patentable new invention. • The result is equal to or better than a result that was accepted as a new scientific result at the time when it was published in a peer-reviewed scientific journal. • The result is equal to or better than a result that was placed into a database or archive of results maintained by an internationally recognized panel of scientific experts. • The result is publishable in its own right as a new scientific result — independent of the fact that the result was mechanically created. • The result is equal to or better than the most recent human-created solution to a long-standing problem for which there has been a succession of increasingly better human-created solutions. • The result is equal to or better than a result that was considered an achievement in its field at the time it was first discovered. • The result solves a problem of indisputable difficulty in its field. • The result holds its own or wins a regulated competition involving human contestants (in the form of either live human players or human-written computer programs).

  41. Genetic Algorithm for Constellation Optimization (GACO). Finds optimal constellation orbits using a genetic algorithm under multiple design constraints and with multiple sensor types. For a low number of satellites, the GA arrangement is significantly better than a Walker constellation.

  42. Principles of Complex Systems: How to think like nature Organizational innovation Russ Abbott

  43. Innovative environments. Net-centricity and the GIG were inspired by the web and the internet; the goal is to bring the creativity of the web and the internet to the DoD. • Other innovative environments: market economies, biological evolution, and the scientific and technological research process. What do innovative environments have in common? How can organizations become innovative?

  44. The innovative process: exploration and exploitation. "If I were to give an award for the single best idea anyone has ever had, I'd give it to Darwin, ahead of Newton and Einstein and everyone else. In a single stroke, the idea of evolution by natural selection unifies the realm of life, meaning, and purpose with the realm of space and time, cause and effect, mechanism and physical law." —Daniel Dennett, Darwin's Dangerous Idea. Innovation, including human creativity, is always the result of an evolutionary process. • Generate new variants (e.g., ideas), typically by combining and modifying existing ones. This is a random process in nature, but random or not isn't the point; the point is to generate lots of possibilities, to explore the landscape. (The easy part!) • (Select and) exploit the good ones: allow/enable them to flourish. (The hard part!)

  45. Exploration and exploitation in nature • Evolution. • E. Coli navigation. • The immune system. • Ant and bee foraging. • Termite nest building (to come). • Building out the circulatory and nervous systems.

  46. Exploration and exploitation: like water finding a way downhill. Microbes attempting to get into your body must first get past your skin and mucous membranes, which not only pose a physical barrier but are rich in scavenger cells and IgA antibodies. Next, they must elude a series of nonspecific defenses: cells and substances that attack all invaders regardless of the epitopes they carry. These include patrolling phagocytes, granulocytes, NK cells, and complement. Infectious agents that get past these nonspecific barriers must finally confront specific weapons tailored just for them, including both antibodies and cytotoxic T cells. (From a tutorial on the immune system from the National Cancer Institute.) Quite a challenge! We are very well defended. But we still get sick! If there is a way, some will inevitably find it. (Murphy's law?) The trick is to make the inevitability work for you, not against you.

  47. Exploration, exploitation, and asymmetric warfare It is the nature of complex systems and evolutionary processes that conflicts become asymmetric.  No matter how well armored one is … there will always be chinks in the armor, … and something will inevitably find those chinks. The something that finds those chinks will by definition be asymmetric since it attacks the chinks and not the armor.

  48. Exploration and exploitation:groups and individuals • Successful group exploration typically requires multiple, loosely coordinated, i.e., autonomous, individuals. • That’s because nature is not regular; one can’t fully plan an exploration. • If one knew in advance what the landscape looked like, it wouldn’t be an exploration. • Much exploration is wasted effort. • One may hit the jackpot while the others find nothing.