
Black box, memory and datification

Presentation Transcript


  1. Maison de la recherche, Paris, 27 April 2016. Calcul, logique, linguistique. Black box, memory and datification: a perspective on the origins of computer science. Teresa Numerico (Roma Tre University), teresa.numerico@uniroma3.it

  2. Memory according to Alan Turing

  3. Human computers and their mechanical counterpart: Turing Machine/1 • The behaviour of the computer at any moment is determined by the symbols which he is observing, and his “state of mind” at that moment. […] • Let us imagine the operations performed by the computer to be split up into “simple operations” which are so elementary that it is not easy to imagine them further divided. • Every such operation consists of some change of the physical system consisting of the computer and his tape. […] • We may construct a machine to do the work of this computer. To each “state of mind” of the computer corresponds an “m-configuration” of the machine. Turing 1936/2004, pp. 75-77

  4. Human computers and their mechanical counterpart: TM/2 We avoid introducing the "state of mind" by considering a more physical and definite counterpart of it. It is always possible for the computer to break off from his work, to go away and forget all about it, and later come back and go on with it. If he does this he must leave a note of instructions (written in some standard form) explaining how the work is to be continued. This note is the counterpart of the "state of mind". We will suppose that the computer works in such a desultory manner that he never does more than one step at a sitting. The note of instructions must enable him to carry out one step and write the next note. Thus the state of progress of the computation at any stage is completely determined by the note of instructions and the symbols on the tape. Turing 1936/2004, 79
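(As an illustration of slides 3-4: a minimal Turing-machine sketch in Python, not Turing's own notation. The transition table plays the role of the "note of instructions", the state label that of the m-configuration, and the state of progress is fully determined by the current state and the tape symbols. The function and table names are invented for the example.)

```python
# Minimal sketch: the progress of the computation is fully determined by
# the current state (the "note of instructions" / m-configuration) and the
# symbols on the tape.  Illustrative only; the example table just writes
# 0 1 0 1 ..., so the run is bounded by max_steps.

def run(table, state, tape="", head=0, max_steps=20):
    cells = dict(enumerate(tape))            # sparse tape, blank = ' '
    for _ in range(max_steps):
        scanned = cells.get(head, ' ')
        if (state, scanned) not in table:    # no applicable rule: halt
            break
        write, move, state = table[(state, scanned)]
        cells[head] = write
        head += 1 if move == 'R' else -1
    return ''.join(cells[i] for i in sorted(cells))

# (state, scanned symbol) -> (symbol to write, head move, next state)
alternate = {
    ('b', ' '): ('0', 'R', 'c'),
    ('c', ' '): ('1', 'R', 'b'),
}

print(run(alternate, 'b'))                   # '01010101010101010101'
```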

  5. Turing’s heretical theory (1951) • The machine would incorporate a memory. This does not need very much explanation. It would simply be a list of all the statements that had been made to it or by it, and all the moves it had made and the cards it had played in its games. This would be listed in chronological order. Besides this straightforward memory there would be a number of “indexes of experiences”. Turing 1951/2004, p. 474

  6. Indexes of experiences • It might be an alphabetical index of the words that had been used giving the times at which they had been used, so that they could be looked up in the memory • Another such index might contain patterns of men or parts of a GO board that had occurred • At comparatively later stages of education the memory might be extended to include important parts of the configuration of the machine at each moment […] it would begin to remember what its thoughts had been
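(A rough sketch, in modern terms, of the memory Turing describes in slides 5-6: a chronological log of everything said or played, plus an index recording the times at which each word was used so that it can be looked up in the memory. The class and method names are mine, not Turing's.)

```python
from collections import defaultdict

class ExperienceMemory:
    """Chronological memory plus an 'index of experiences' keyed by word."""

    def __init__(self):
        self.log = []                         # statements/moves in chronological order
        self.word_index = defaultdict(list)   # word -> times at which it was used

    def record(self, statement):
        t = len(self.log)                     # the position serves as the timestamp
        self.log.append(statement)
        for word in statement.lower().split():
            self.word_index[word].append(t)
        return t

    def lookup(self, word):
        """Return the remembered statements in which the word occurred."""
        return [self.log[t] for t in self.word_index[word.lower()]]

m = ExperienceMemory()
m.record("White opens on the corner point")
m.record("Black answers on the corner point")
print(m.lookup("corner"))   # both remembered statements
```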

  7. Indexes of experiences/2 • This raises a number of problems. If some of the indications are favourable and some are unfavourable what is one to do? • The answer to this will probably differ from machine to machine and will also vary with its degree of education

  8. Wiener, Rosenblueth, Bigelow: scientific models and closed-box problems

  9. Behavior, Purpose, Teleology • “a uniform behavioristic analysis is applicable to both machines and living organisms, regardless of the complexity of the behavior” • between animals and machines “there is, therefore, a considerable overlap of the two realms of behavior” Rosenblueth, Wiener, Bigelow 1943, 4

  10. A comparison of living organisms and machines • A further comparison of living organisms and machines leads to the following inferences. The methods of study for the two groups are at present similar. Whether they should always be the same may depend on whether or not there are one or more qualitatively distinct, unique characteristics present in one group and absent in the other. Such qualitative differences have not appeared so far. • The broad classes of behavior are the same in machines and in living organisms Rosenblueth, Wiener, Bigelow 1943, 21

  11. Closed-box problems • As an introduction to the analysis of theoretical models it is appropriate to define what will be meant by a "closed box", as opposed to an "open box" problem. • There are certain problems in science in which a fixed finite number of input variables determines a fixed finite number of output variables. In these, the problem is determinate when the relations between these finite sets of variables are known. • It is possible to obtain the same output for the same input with different physical structures. If several alternative structures of this sort were enclosed in boxes whose only approach would be through the input and output terminals, it would be impossible to distinguish between these alternatives without resorting to new inputs, or outputs, or both. Rosenblueth and Wiener 1945, pp. 318-319
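(A toy illustration of the closed-box point, my own example rather than one from the 1945 paper: two internally different "boxes" that give the same output for every input tried so far cannot be distinguished through their input and output terminals alone.)

```python
# Two structurally different "boxes" with the same behaviour at the
# terminals for every input tried so far: one computes, the other looks up
# a stored table.  Only a new input (here x = 4) could tell them apart.

def box_a(x):
    return x * x                 # arithmetic structure

TABLE = {0: 0, 1: 1, 2: 4, 3: 9}

def box_b(x):
    return TABLE[x]              # table-lookup structure

for x in range(4):
    assert box_a(x) == box_b(x)  # identical input-output behaviour
print("indistinguishable from the input and output terminals alone")
```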

  12. Closed-box representations • It is obvious, therefore, that the difference between open-box and closed-box problems, although significant, is one of degree rather than of kind. All scientific problems begin as closed-box problems, i.e., only a few of the significant variables are recognized. Scientific progress consists in a progressive opening of those boxes. • Many of these small compartments may be deliberately left closed, because they are considered only functionally, but not structurally important. Rosenblueth and Wiener 1945, p. 319

  13. Computing machine as a model for living organisms • The computing machines, at least in their recent forms to which I am referring […], are purely digital. Thus I must ask you to accept this oversimplification of the system. Although I am well aware of the analogy component in living organisms, and it would be absurd to deny its importance, I shall, nevertheless, for the sake of the simpler discussion, disregard that part Von Neumann 1948/1961, 297

  14. Licklider and the library of the future

  15. Libraries of the future • It is both our hypothesis and our conviction that people can handle the major part of their interaction with the fund of knowledge better by controlling and monitoring the processing of information than by handling all the detail directly themselves Licklider 1965, p. 28

  16. Experimenter, library and laboratory • The only channels for interaction between them are the telephone, the experimenter himself, and the books he borrows from the library and examines in the laboratory • The part of the fund of knowledge that interacts with nature during an experiment is only that part that is stored inside the experimenter’s head plus the small amounts that come into his head from the books he reads… Licklider 1965, 22-23

  17. Data management and the body of knowledge • The data are collected, not only in isolation from these concurrent processes, but also in isolation from one another, and the result is a chaos of miscellaneous individual cases. • The difficulties of integrating the results of many simultaneous projects […] are at present the object of much concern Licklider 1965, p. 24

  18. Organizing information into knowledge • The raw materials or inputs to the “organizer” are alphanumeric data, geometrical patterns, pictures, […] • The outputs of the organized system are expressed in one or more of the input forms, but they are not mere reproductions or translations of particular inputs; • They are suggestions, answers to questions and made-to-order summaries of the kind that a good human assistant might prepare if he had a larger and more accurate memory and could process information faster Licklider 1965, p. 25

  19. Licklider’s proposal • In organizing knowledge, just as in acquiring knowledge, it would seem desirable to bring to bear upon the task the whole corpus, all at one time […]. This aim seems to call for direct interactions among various parts of the body of knowledge, and thus to support the requirement […] for an active or directly processible store Licklider 1965, p. 25

  20. Memory organization • Memory organization deals with the design of memory structures and systems, as distinct from structures and systems of information or knowledge. Its aim is to achieve two resonances or congruences: 1) between the memory and the information patterns that are likely to be stored in it, and 2) between the memory and the requests (e.g. questions) that are likely to be directed to it Licklider 1965, 25-26
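(A small sketch of the second congruence Licklider mentions: the same stored items organized as they arrived versus organized to match the requests likely to be directed at the memory, here an index by author. The field names and the way the records are keyed are illustrative only.)

```python
# The same fund of items stored two ways: a chronological list (congruent
# with how the items arrived) and an index by author (congruent with the
# requests likely to be directed at the memory, e.g. "what did X write?").

records = [
    {"year": 1936, "author": "Turing", "title": "On Computable Numbers"},
    {"year": 1945, "author": "Bush",   "title": "As We May Think"},
    {"year": 1950, "author": "Turing", "title": "Computing Machinery and Intelligence"},
]

chronological = sorted(records, key=lambda r: r["year"])

by_author = {}
for r in records:
    by_author.setdefault(r["author"], []).append(r)

# Answering the request is a direct lookup in the second organization,
# but a full scan of the first.
print([r["title"] for r in by_author["Turing"]])
```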

  21. The aim of procognitive systems • A basic part of the over-all aim for procognitive systems is to get the user of the fund of knowledge into something more nearly like an executive’s or commander’s position. He will still read and think and, hopefully, have insights and make discoveries, but he will not have to do all the searching […] all the transforming, nor all the testing for matching or compatibility that is involved in creative use of knowledge Licklider 1965, p. 32

  22. Bob Taylor and Vietnam reports • There were discrepancies in reporting that was coming back from Vietnam to the White House about enemy killed, […] logistics reports of various kinds • […] I talked to various people who were submitting these reports back to Washington. I got a sense of how the data was collected, how it was analyzed, and what was done with it before it was sent back to the White House, and I realized that there was no uniform data collection or reporting structure • So they built a computer center at Tan Son Nhut and had all of this data come in through there. After that the White House got a single report rather than several. That pleased them; whether the data was any more correct or not, I don't know, but at least it was more consistent. Taylor 1989, pp. 12-13

  23. Alan Kay and DynaBook • Jimmy connected his DynaBook to his class’s LIBLINK and became heir to the thought and knowledge of ages past, all perusable through the screen of his DB. It was like taking an endless voyage through a space that knew no bound. As always he had a little trouble remembering what his original purpose was. […] • He composed a simple filter for his DynaBook to aid their search… Kay 1972, A Personal Computer for Children of All Ages

  24. The influences of digital memory on science and society

  25. Artificial intelligence • Use of memory would seem to require representations, and these representations must have their effects on behavior independently of the time at which the memory representation was created. • …nonetheless, it is not plausible that there will be devices that will be widely accepted as exhibiting […] intelligence but do not rely on memory. • It is, however, not clear how this can be done without returning us to the previously discussed questions about how representations can be processed to yield intelligent outcomes. W.S. Robinson, “The Cambridge Handbook of Artificial Intelligence”, ch. 3, 2014

  26. Memory as an archive • The desire to expunge volatility, obliterate ephemerality, and neutralize time itself, so that our computers can become synonymous with archives • These desires are key to stabilizing hardware so that it can contain, regenerate, and thus reproduce what it stores Chun 2011, 139

  27. Cybernetics and the destruction of memory • There is a triple destruction of memory […] first past disciplines are destroyed: they need to be created anew from first principles. Second an individual experimenter must destroy his or her knowledge of previous experiments. Third, one result of this double destruction will be the discovery by cybernetics that memory itself is epiphenomenal. […] In cybernetics memory is destroyed so that history can be unified; in classical physics nonreversible time is destroyed so that history can be ignored. Bowker 2008, 101

  28. The eternal present of Big Data • Significantly, though, these journeys, or tracings, can always be reincorporated into a map, and every journey we take, through the storage and sites we use, can be recompiled in a map that allegedly contains the truth of our journey […] • These databases, which drive computer “mapping” / machine intelligence, become “dirty,” unreliable, when they do not actively erase information: they become flooded with old and erroneous information that dilutes the maps they produce. Deliberately making databases dirty — by providing too much or erroneous information — may be the most effective way of preserving something like privacy. Chun 2011, 93-94

  29. Firmness and flexibility • Biological memory and digital memory are not the same • Biological memory is the set of processes that guarantees the acquisition, consolidation and retrieval of information • We do not fully understand how it works, though we have various hints about its activity, particularly from the study of its malfunctions • The storage capacity of biological memory can vary even while the number of neurons stays the same, thanks to cerebral plasticity • The storage capacity of digital memory cannot grow if the number of chips stays the same • Biological memory never returns the same datum twice: memories are always reconstructed. Human memory is at the same time unreliable and effective • Digital memory is, on the contrary, reliable (until it breaks) and ineffective at selecting the right items for an intelligent retrieval

  30. The interactions between language and science: data-oriented instead of model-oriented science

  31. Rhetoric of BD/1: computers are better problem solvers than humans • It’s human nature to focus on the problems […] where human skills and ingenuity are most valuable. And it’s normal human prejudice to undervalue the problems [of] the domain where data-driven intelligence really shines. • But […] what problems can computers solve that we can’t? And how, when we put that ability together with human intelligence, can we combine the two to do more than either is capable of alone? Nielsen 2011, p. 213

  32. Rhetoric of BD/2: data-driven science • Science is no longer guided by interpretations, models and theories • Science is “data-driven”, which in the BD jargon means that there is no interpretation and no theory prior to the data, because the data supposedly make sense by themselves (Mayer-Schönberger and Cukier 2013) • But this is just rhetoric: in order to find correlations among data series you have to search for them by choosing the right machine learning algorithms, otherwise you risk that the correlations are merely random, particularly with high dimensionality
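(A minimal numerical illustration of the last point in slide 32: when many random series are compared, some pairs show strong correlations purely by chance, which is why "letting the data speak" still requires methodological choices and corrections for multiple comparisons. The data below are pure noise; the numbers are synthetic.)

```python
import random

random.seed(1)

def corr(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# 200 series of 20 points each, all pure noise: no real structure to find.
series = [[random.gauss(0, 1) for _ in range(20)] for _ in range(200)]

best_r, i, j = max(
    (abs(corr(series[a], series[b])), a, b)
    for a in range(len(series)) for b in range(a + 1, len(series))
)
print("strongest correlation found by chance: r = %.2f (series %d and %d)"
      % (best_r, i, j))
# With roughly 20,000 pairs to choose from, an |r| above 0.7 is quite
# likely, even though every series is random.
```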

  33. Body of a new machine • Today’s biological organism bears little resemblance to the traditionally maternal guarantor of vital integrity, the source of nurture and sustenance; it is no longer even the passive material substrata of classical genetics. The body of modern biology, like the DNA molecule – and also like the modern corporate or political body – has become just another part of an informational network, now machine, now message, always ready for exchange, each for the other. Keller 1995, 118

  34. Scientific hypotheses Science does not follow proofs but hypotheses and desires The popular view that scientists proceed inexorably from well-established fact to well-established fact, never being influenced by any unproved conjecture, is quite mistaken. Provided it is made clear which are proved facts and which are conjectures, no harm can result. Conjectures are of great importance since they suggest useful lines of research. Turing 1950 in Copeland 2004, 449

  35. Language and science • Ambivalences, ambiguities and approximations play a role in scientific as well as in natural language • We need to beware of the false idea that scientific language is free from obscurity and completely pure, because this conviction can produce misunderstandings • It is crucial to maintain a critical vision. We need the courage to be aware of the prescientific challenges we face when we make epistemological decisions such as: what is our object of research, and what is the method chosen to interrogate it

  36. Bibliographic references • Bowker G.C. (2008) Memory Practices in the Sciences, The MIT Press, Cambridge (Mass.) • Bush V. (1945) As We May Think, The Atlantic Monthly, July 1945 • Chun W.H. (2011) Programmed Visions, The MIT Press, Cambridge (Mass.) • Cronenberg D. (2012) Cosmopolis, Alfama Films, Kinology, Prospero Pictures, Toronto Antenna, released 25/05/2012, Canada • DeLillo D. (2003) Cosmopolis, Simon & Schuster, New York • Edwards P.N. (1997) The Closed World, The MIT Press, Cambridge (Mass.) • Keller E.F. (1991) Conversazioni con Evelyn Fox Keller. Una scienziata anomala, edited by Donini E., Eleuthera, Milano • Keller E.F. (1995) Refiguring Life, Columbia University Press, New York • Levy S. et al. (2007) The diploid genome sequence of an individual human, in PLoS Biology, 10 • Licklider J.C.R. (1965) Libraries of the Future, The MIT Press, Cambridge (Mass.) • Mayer-Schönberger V., Cukier K. (2013) Big Data: A Revolution That Will Transform How We Live, Work and Think, Houghton Mifflin Harcourt, Boston • Nielsen M. (2012) Reinventing Discovery: The New Era of Networked Science, Princeton University Press, Princeton • Nowotny H., Testa G. (2012) Geni a nudo, Codice, Torino • Somenzi V., Cordeschi R. (eds.) (1994) La filosofia degli automi, Bollati Boringhieri, Torino • Turing A.M. (2004) The Essential Turing, edited by Copeland J., Clarendon Press, Oxford • Wiener N. (1950) The Human Use of Human Beings, Houghton Mifflin, Boston • Burks A.W., Goldstine H.H., von Neumann J. (1947) Preliminary Discussion of the Logical Design of an Electronic Computing Instrument
