
Fingerprints of complexity



Presentation Transcript


  1. Imre Kondor Collegium Budapest and Eötvös University, Budapest EUNICE Spring School on Complexity and Thought Patterns Elba, May 10-17, 2009 Fingerprints of complexity This work has been supported by the National Office for Research and Technology under grant No. KCKHA005

  2. PRELIMINARIES

  3. Simplicity • Einstein: Things should be made as simple as possible, but not simpler. • Feynman: You can always recognize truth by its beauty and simplicity. • The Dirac equation in Erice • Leonardo: Simplicity is the ultimate sophistication • Saint Exupéry: Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away

  4. Occam’s razor • Simplicity has been generally regarded as a theoretical virtue. • Aristotle, Posterior Analytics: We may assume the superiority ceteris paribus of the demonstration which derives from fewer postulates or hypotheses. • Newton: Nature is pleased with simplicity, and affects not the pomp of superfluous causes • Lavoisier: If all of chemistry can be explained in a satisfactory manner without the help of phlogiston, that is enough to render it infinitely likely that the principle does not exist, that it is a hypothetical substance, a gratuitous supposition. It is, after all, a principle of logic not to multiply entities unnecessarily. • Extremum (variational) principles in physics (Fermat, Maupertuis, Lagrange, Hamilton) • Einstein: The grand aim of all science…is to cover the greatest possible number of empirical facts by logical deductions from the smallest possible number of hypotheses or axioms • But: Jacob Burckhardt: The essence of tyranny is the denial of complexity.

  5. Occam’s broom • All these quotations revolve around reduction, or information compression • It is not clear that every single problem can be compressed to such an extent as to make it simple, or esthetically pleasing. • Compression may be due to ignorance or interest, or to a deliberate, value-laden choice • Besides, simplicity and beauty are subjective • Could this obsession with simplicity be due to the structure of human intelligence? (our very limited short-term memory?) • Does it have an evolutionary background? • Monty Python: Summarizing Proust.

  6. Some characteristic features of complex systems I. • Many, typically heterogeneous parts, or constituents. • Lack of symmetries • Strong interactions, collective effects • Confluence of scales • Nonlinearity (vs. pre-60s physics as the „apotheosis of series expansion”) • Multiattractor structure, complicated attractors, basins of attraction • Sensitivity to control parameters, initial- and boundary conditions. Long range correlations. The system cannot be split into parts („more than the sum of its parts”).

  7. Some characteristic features of complex systems II. • Small structural details are important, large number of relevant variables, irreducibility, randomness • Historicity, often only a single realisation can be observed, the experiment is impossible to repeat. • Evolution conditioned by antecedents – limits to rational choice or decision making • Mode slaving • Emergence • Adaptation, learning, self-organization, self-reproduction • Evolution conditioned by learning about the system, or learning by the system about itself, cognition, self-reflection, self-fulfilling prophecies (Santa Claus rally, Rákosi’s exclamation) • Etc.

  8. A few examples • A spin glass (perhaps) • The cell • The brain • A living being • An ecosystem • The society • The economy • The financial system

  9. A few counterexamples • The quadratic equation (despite its generating thought-provoking sets and maps) • The hydrogen atom (although it is „more than the sum of its parts”) • The ideal gas (although the temperature is an emergent notion) • Etc.

  10. INFORMATION COMPRESSION

  11. The number 0.666666… contains infinitely many digits, but it is highly symmetric, and can be compressed into 2/3. • N = 0.123456789123456789123456789… is slightly more complicated, but it can still be compressed as N = 123456789/999999999 (the repeating block divided by nine nines)
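The compression of the slide’s repeating decimal into a ratio of integers can be checked mechanically: a repeating decimal with period p equals its repeating block divided by 10^p − 1. A minimal sketch using Python’s standard-library Fraction type:

```python
from fractions import Fraction

# A repeating decimal with period p compresses to (repeating block) / (10^p - 1):
# 0.123456789123456789... = 123456789 / 999999999
N = Fraction(123456789, 10**9 - 1)

# Verify by expanding the first 27 digits via long division
digits = []
r = N
for _ in range(27):
    r *= 10
    digits.append(int(r))   # the integer part is the next decimal digit
    r -= int(r)

print("".join(map(str, digits)))   # 123456789123456789123456789
```

Fraction reduces the ratio automatically (to 13717421/111111111), but the value, and hence the digit expansion, is unchanged.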

  12. One more step: √2 – 1 = 0.414213562373… is an infinite sequence that never repeats itself. It seems completely random; yet it can be generated by the simple iteration Xn+1 = 1/(2 + Xn), which converges to √2 – 1 whenever the starting value is above −2. Here we see that an extremely simple prescription can produce a seemingly random sequence. Conversely, the apparently random sequence can be encoded into a simple recipe.
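The iteration is immediately runnable; about fifty steps already reproduce √2 − 1 to machine precision:

```python
import math

# Iterate x_{n+1} = 1/(2 + x_n); the fixed point solves x^2 + 2x - 1 = 0,
# i.e. x = sqrt(2) - 1 (the root reachable from starting values above -2).
x = 1.0                      # any start above -2 works
for _ in range(50):
    x = 1.0 / (2.0 + x)

print(abs(x - (math.sqrt(2) - 1)))   # effectively zero: converged
```

The convergence is geometric (each step shrinks the error by a factor of roughly 0.17), which is why so few iterations suffice.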

  13. A side remark on chaos The previous iteration converges fast. Other, slightly more complicated iterations like Xn+1 = c Xn(1 – Xn) can produce a lot more complicated behaviour, depending on the value of c. As c increases, the iteration undergoes an infinite sequence of bifurcations and goes over into fully chaotic behaviour.
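A short sketch of three regimes of this map (a stable fixed point, a 2-cycle past the first bifurcation, and chaos) for representative values of c:

```python
# Logistic map x_{n+1} = c * x_n * (1 - x_n): behaviour depends sharply on c.
def orbit(c, x0, n):
    x = x0
    for _ in range(n):
        x = c * x * (1.0 - x)
    return x

# c = 2.5: settles onto the stable fixed point 1 - 1/c = 0.6
print(orbit(2.5, 0.2, 1000))                       # ~0.6

# c = 3.2: past the first bifurcation, a stable 2-cycle
print(orbit(3.2, 0.2, 1000), orbit(3.2, 0.2, 1001))

# c = 3.9: chaos; two starts differing by one millionth end up far apart
a = orbit(3.9, 0.200000, 100)
b = orbit(3.9, 0.200001, 100)
print(abs(a - b))
```

The particular values of c are illustrative choices; the qualitative sequence (fixed point, period doubling, chaos) is what the slide describes.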

  14. The number π = 3.14159265… is the ratio of the circumference to the diameter of a circle. There are several simple algorithms to calculate it; one (due to Leibniz) is the following: π/4 = 1/1 – 1/3 + 1/5 – 1/7 + 1/9 – 1/11 + … The information content of something very complicated has been compressed into something reasonably straightforward. NB: the above series does not converge fast enough to allow one to calculate billions of digits of π.
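The slow convergence is easy to quantify: the error of the Leibniz partial sum after n terms is of order 1/n, so a million terms yield only about six correct digits. A quick check:

```python
import math

# Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
s = 0.0
sign = 1.0
for k in range(1_000_000):
    s += sign / (2 * k + 1)
    sign = -sign

print(4 * s, math.pi)   # about six digits agree after a million terms
```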

  15. In algorithmic information theory the complexity of a string is measured by the length of the shortest algorithm that produces the string (Kolmogorov, Chaitin). • By this measure, all the above strings are fairly simple. • Note that this measure assigns maximal complexity to randomness.
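The Kolmogorov–Chaitin complexity itself is uncomputable, but a general-purpose compressor gives a rough practical upper bound on it. A sketch using Python’s zlib (an illustration only, not the actual measure; note also that a pseudo-random string is, in the Kolmogorov sense, actually simple: generator plus seed):

```python
import random
import zlib

# Highly patterned string: compresses to almost nothing
patterned = b"123456789" * 1000            # 9000 bytes

# Pseudo-random string: the compressor finds no structure to exploit
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(9000))

print(len(zlib.compress(patterned)))   # a few dozen bytes
print(len(zlib.compress(noisy)))       # roughly 9000: no compression found
```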

  16. CELLULAR AUTOMATA

  17. Generating complex patterns from simple rules Cellular automata: Illustrations from S. Wolfram’s book

  18. Next step

  19. The 256 simplest cellular automata
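These 256 elementary automata are easy to implement: the eight bits of the rule number encode the new cell value for each of the eight possible (left, centre, right) neighbourhoods. A minimal sketch, run here for rule 90, whose single-cell start grows into the nested Sierpinski pattern:

```python
def step(cells, rule):
    """One synchronous update of an elementary cellular automaton.

    The 8 bits of `rule` (0..255) give the new cell value for each of the
    8 possible three-cell neighbourhoods; hence 256 distinct automata.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

# Rule 90 (new cell = left XOR right) grown from a single live cell:
row = [0] * 31
row[15] = 1
for _ in range(8):
    print("".join(" #"[c] for c in row))
    row = step(row, 90)
```

Rule 90 reproduces Pascal’s triangle modulo 2, which is why the nested structure appears.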

  20. Nested structure

  21. Getting complicated

  22. Even more complicated

  23. „Digital philosophy” • Wolfram’s cellular automaton #110 is a universal Turing machine. • Wolfram believes that the whole Universe is a gigantic automaton governed by some fundamental, simple rule. He is trying to guess the rule that would, running long enough, produce all the observed complexity in the world. • String theorists are not very far from this kind of delusion. (The School of Natural Sciences in Princeton.)

  24. Even if all this were true… it would not help one bit to understand the cell cycle, the mind, or the current financial mess.

  25. INCOMPLETENESS

  26. Formal axiomatic systems • Since Euclid the axiomatic method has been considered the highest form of organizing scientific information (?) • The ultimate form is Hilbert’s programme: Alphabet • Grammar • Axioms • Rules of Inference • Proof-Checking Algorithm

  27. Evident requirements • Consistency • Completeness • Decision procedure

  28. Incompleteness • Gödel, 1931: if number theory (positive integers plus addition and multiplication) is consistent, then it is incomplete. • The proof involves self-reference: Gödel constructed an assertion about integers that said of itself that it was unprovable.

  29. Uncomputable numbers • Turing, 1936: „On computable numbers, with an application to the Entscheidungsproblem” • A computable number is one for which there is an algorithm (a computer program) for calculating the digits one by one. • Computer programs are denumerable, real numbers are not. Hence, there must exist numbers that cannot be computed. In fact, most numbers are such.

  30. The halting problem Turing showed that uncomputability implies that it is impossible to decide, in general, whether a computer program will ever terminate: there is no algorithm that decides whether an arbitrary program eventually halts.
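Turing’s diagonal argument behind this result can be sketched in code. Here `halts` stands for a hypothetical total decider supplied by a sceptic, and `defeat` (a name invented for this illustration) constructs a program that any such decider must misjudge:

```python
def defeat(halts):
    """Given any claimed halting decider, build a program it gets wrong.

    `halts(p)` is a *hypothetical* total function returning True iff p()
    terminates. No such function can exist, as g demonstrates.
    """
    def g():
        if halts(g):        # if the decider says g halts...
            while True:     # ...g runs forever: the decider was wrong
                pass
        # otherwise g stops here: the decider was wrong again
    return g

# Any "decider" that answers False for g is immediately refuted: g halts.
g = defeat(lambda program: False)
g()   # returns at once, contradicting the answer False
```

The same construction refutes a decider that answers True, since g would then loop forever; either way the claimed oracle is wrong about g.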

  31. Algorithmic complexity • Chaitin, Kolmogorov: the complexity of a number, a mathematical result, a theoretical statement must be measured by the length of the shortest computer program that produces it. • Most real numbers are infinitely complex, random, structureless, in that the shortest algorithm is as long as the numbers themselves – no compression is possible. (Borges’ map)

  32. Irreducibility in mathematics • Incompleteness results show that most of mathematics is irreducible. • As such, mathematics is much closer to physics than usually believed: there are facts that can be discovered about numbers but that cannot be deduced from any finite axiomatic system. • „Experimental mathematics” • In hindsight this is more of a relief than a tragedy. • What does all this say about the structure of human thought – or the brain?

  33. TIME COMPLEXITY

  34. Hard computational problems I. • All incompleteness results are asymptotic. • In practice we never use infinite sets, real numbers, etc. Perhaps all this irreducibility is merely a theoretical issue and in actual, finite problems we can settle everything in finite time, in the worst case by exhaustive search. • This is very emphatically not the case.

  35. Hard computational problems II. • Computational problems can be classified according to how fast the running time of the algorithm that solves them grows with the size of the input. • Manageable problems produce a linear (~N), quadratic (~N²), or some other low-degree polynomial growth. • Hard problems require exponentially growing running times, and are therefore intractable.

  36. Growth of algorithm length with size

  37. Size of largest problem instance solvable in 1 hour
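Figures of this kind are easy to recompute. Assuming, purely for illustration, a machine that executes 10⁶ elementary steps per second (the published tables rest on an analogous choice of rate), one hour provides 3.6 × 10⁹ steps, and the largest solvable instance for each growth rate follows:

```python
import math

# Largest input size N solvable in one hour, assuming (hypothetically)
# 10**6 elementary steps per second, for several growth rates.
budget = 10**6 * 3600                               # steps available in 1 hour

print("N    :", budget)                             # 3,600,000,000
print("N**2 :", math.isqrt(budget))                 # 60000
print("N**3 :", int(budget ** (1 / 3)))             # 1532
print("2**N :", int(math.log2(budget)))             # 31
```

The exponential row makes the slide’s point: a machine 1000 times faster lifts the linear row by a factor of 1000, but adds only about 10 to the last row.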

  38. Hard computational problems III. • These data (taken from Garey and Johnson) show that the problem of computational complexity is not just of a practical nature, not merely a matter of technology. • If a computation takes longer than the age of the Universe, then the problem is, for all intents and purposes, intractable.

  39. SCALES

  40. Characteristic scales I. • 10⁻³³ cm: Planck length (probably the smallest length scale) • 10⁻¹⁸ cm: smallest scale described by the Standard Model • 10⁻¹² cm: size of the atomic nucleus • 10⁻⁸ cm: size of atoms • 10⁻⁵ cm: size of macromolecules

  41. Characteristic scales II. • 1 cm: macroscopic scale • 10⁵ cm: mountains • 10⁸ cm: radius of the Earth • 10¹⁵ cm: size of the Solar System • 10²⁰ cm: size of the Galaxy • 10²⁸ cm: size of the Universe A hierarchy of nested structures ranging over 60 orders of magnitude

  42. Separation of scales I. • As long as the scales are well separated, it is possible to describe phenomena on a given level of this hierarchy as if they were independent of the phenomena on lower (or higher) levels. This creates the illusion (?) of independent sciences. • For example: atomic spectra are to a large extent insensitive to what goes on in the nucleus. It is only the mass and the charge of the nucleus that really matters.

  43. Separation of scales II. • Does it make much sense to seek a consistent axiomatic description on any level (say, in thermodynamics), when we know that it is the result of averaging over smaller scales? • When scales are well separated, reduction works perfectly (reducing thermodynamics or hydrodynamics to statistical mechanics)

  44. Separation of scales III. • Averages over „microscopic” degrees of freedom enter the „macroscopic” equations as empirical input parameters. • Reduction explains these parameters, thereby compresses the information content of the theory. • How far can this program be taken? What would an ultimate theory look like?

  45. Separation of scales IV. • Cutoff: an explicit or implicit smallest scale, beyond which a theory is not valid. • If scales are widely separated, the description is insensitive to the precise value of the cutoff (renormalization invariance). • Unjustified extrapolation beyond the cutoff leads to inconsistencies (divergences in field theory).

  46. Renormalization • As we move the cutoff upwards, more and more degrees of freedom are averaged out. • A continuous series of effective theories are being generated. • New, effective degrees of freedom emerge, with renormalized interactions.

  47. OTHER COMPLEXITY FEATURES

  48. Mode slaving • Cooperative behaviour builds up macroscopic collective coordinates that appear as external fields or constraints for each individual particle. • Examples: macroscopic magnetization created by cooperating elementary magnets acts on them as an external field. • Institutions created by individual agents act as external constraints on them.

  49. Emergence • A hazy term, often used to describe situations where „the whole is not the sum of its parts”. • Nothing is the sum of its parts. • Interaction between the parts may weakly, strongly, or fundamentally alter the components. • Quantum mechanics is a very concrete and successful framework to describe emergence.
