14. Evolution

"evolution: The gradual process by which the present diversity of plant and animal life arose from the earliest and most primitive organisms, which is believed to have been continuing for the past 3,000 million years."
Evolution is the change in the inherited traits of a population from generation to generation. These traits are the expression of genes that are copied and passed on to offspring during reproduction. Mutations in these genes can produce new or altered traits, resulting in heritable differences (genetic variation) between organisms. New traits can also come from transfer of genes between populations, as in migration, or between species, in horizontal gene transfer. Evolution occurs when these heritable differences become more common or rare in a population, either non-randomly through natural selection or randomly through genetic drift.
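The interplay of natural selection (non-random) and genetic drift (random) described above can be sketched in a toy simulation. All the parameters here (population size, fitness values, generation count) are illustrative assumptions, not biological data:

```python
import random

def simulate_generations(pop_size=100, p0=0.5, fitness_a=1.05,
                         generations=50, seed=1):
    """Toy Wright-Fisher-style model of one gene with two alleles, 'A' and 'a'.
    Selection reweights allele 'A' by its relative fitness; resampling a
    finite population each generation models genetic drift."""
    random.seed(seed)
    p = p0  # frequency of allele 'A'
    for _ in range(generations):
        # natural selection: non-random shift in frequency
        w = p * fitness_a
        p_sel = w / (w + (1 - p))
        # genetic drift: random change from sampling a finite population
        count_a = sum(1 for _ in range(pop_size) if random.random() < p_sel)
        p = count_a / pop_size
    return p

# With a strong advantage, allele 'A' tends toward fixation; with no
# advantage (fitness_a=1.0) its frequency only wanders randomly.
print(simulate_generations(fitness_a=2.0, generations=100))
```

Running with different fitness values shows the two mechanisms in the text: the advantaged allele becomes more common through selection, while a neutral allele changes frequency only through drift.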
All living things use the same codon table.
Environment or Others
Several specific mechanisms that enable "order for free", such as the robustness of genetic regulatory networks, the spontaneous self-sustaining order of chemical reactions in autocatalytic sets, and the properties of the RNA genotype-to-phenotype map (in this case, the RNA-sequence-to-RNA-shape mapping), have been cautiously incorporated into a workable theory as it applies to evolution. However, the entire program as outlined by Kauffman remains a matter of debate.
Nature Environment 1
Marine Plant Birth
1,000M years ago
400M years ago
4,600M years ago
Codon Table (Genetic Code Table)
For DNA, replace U with T (U → T).
20 kinds of amino acids
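A sketch of how the codon table works in code: translation reads an mRNA string three bases at a time and looks each triplet up in the table. Only a handful of the 64 codons are included here for illustration.

```python
# A few entries from the standard codon table (mRNA codons -> amino acids).
CODON_TABLE = {
    "AUG": "Met",  # methionine, also the start codon
    "UUU": "Phe", "GGC": "Gly", "UGG": "Trp",
    "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}

def translate(mrna):
    """Translate an mRNA string codon by codon until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "Stop":
            break
        protein.append(amino_acid)
    return protein

print(translate("AUGUUUGGCUGA"))  # ['Met', 'Phe', 'Gly']
```

Because all living things share this table, the same lookup works for any organism's mRNA; for DNA the table applies with U replaced by T.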
1938, South Africa
4,000 kinds, 1~10cm, 300km/h, 300M years ago
15. Swarm Intelligence
Swarm intelligence (SI) is an artificial intelligence technique based on the study of collective behavior in decentralized, self-organized systems. The expression "swarm intelligence" was introduced by Beni and Wang in 1989, in the context of cellular robotic systems.
SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. Although there is normally no centralized control structure dictating how individual agents should behave, local interactions between such agents often lead to the emergence of global behavior. Examples of systems like this can be found in nature, including ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.
The application of swarm principles to large numbers of robots is called swarm robotics.
Swarm intelligence deals with systems composed of many individuals that coordinate using decentralized control and self-organization. In particular, it focuses on the collective behaviors that result from the local interactions of the individuals with each other and with their environment. Examples of systems studied by swarm intelligence are colonies of ants and termites, schools of fish, flocks of birds, herds of land animals, and also some human artifacts, including some robotic systems and some computer programs for tackling optimization and data analysis problems.
The Swarmanoid project is a Future and Emerging Technologies (FET-OPEN) project funded by the European Commission.
The main scientific objective of this research project is the design, implementation and control of a novel distributed robotic system. The system will be made up of heterogeneous, dynamically connected, small autonomous robots. Collectively, these robots will form what we call a swarmanoid. The swarmanoid that we intend to build will comprise numerous (about 60) autonomous robots of three types: eye-bots, hand-bots, and foot-bots.
Swarm-based Network Management
Swarm-based Data Analysis
Swarm robotics is a new approach to the coordination of multirobot systems which consist of large numbers of relatively simple physical robots. The goal of this approach is to study the design of robots (both their physical body and their controlling behaviors) such that a desired collective behavior emerges from the inter-robot interactions and the interactions of the robots with the environment, inspired but not limited by the emergent behavior observed in social insects, called swarm intelligence. It has been discovered that a set of relatively primitive individual behaviors enhanced with communication will produce a large set of complex swarm behaviors.
Unlike distributed robotic systems in general, swarm robotics emphasizes a large number of robots, and promotes scalability, for instance, by using only local communication. Local communication is usually achieved by wireless transmission systems, using radio frequency or infrared communication.
Potential applications for swarm robotics include, on the one hand, tasks that demand extreme miniaturization (nanorobotics, microbotics), such as distributed sensing in micromachinery or the human body. On the other hand, swarm robotics is suited to tasks that demand extremely cheap designs, for instance mining or agricultural foraging. Artists are also using swarm robotic techniques to realize new forms of interactive art installation.
Both miniaturization and cost are hard constraints that emphasize simplicity of the individual team member, and thus motivate a swarm-intelligent approach to achieving meaningful behavior at the swarm level.
Further research is needed to find methodologies that allow for designing, and reliably predicting, swarm behavior given only the features of the individual swarm members. Here, video tracking is an essential tool for systematically studying swarm behavior, even though other tracking methods are available. Recently, the Bristol Robotics Laboratory developed an ultrasonic position tracking system for swarm research purposes.
In the real world, ants (initially) wander randomly, and upon finding food return to their colony while laying down pheromone trails. If other ants find such a path, they are likely not to keep traveling at random, but to instead follow the trail, returning and reinforcing it if they eventually find food (see Ant communication).
Over time, however, the pheromone trail starts to evaporate, reducing its attractive strength. The more time it takes an ant to travel down the path and back again, the more time the pheromones have to evaporate. A short path, by comparison, gets marched over faster, so its pheromone density remains high: pheromone is laid down as fast as it evaporates. Pheromone evaporation also has the advantage of avoiding convergence to a locally optimal solution. If there were no evaporation at all, the paths chosen by the first ants would tend to be excessively attractive to the following ones, and exploration of the solution space would be constrained.
Thus, when one ant finds a good (short, in other words) path from the colony to a food source, other ants are more likely to follow that path, and positive feedback eventually leaves all the ants following a single path. The idea of the ant colony algorithm is to mimic this behavior with "simulated ants" walking around the graph representing the problem to solve.
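The pheromone feedback loop described above can be sketched with just two alternative paths between colony and food. The path lengths, deposit rule (1/length), and evaporation rate below are illustrative assumptions, not a full ant colony optimization algorithm:

```python
import random

def ant_colony_two_paths(n_ants=20, n_iters=50, evaporation=0.5, seed=0):
    """Toy ant colony: ants choose between a short path (length 1) and a
    long path (length 2) in proportion to pheromone. Shorter trips deposit
    more pheromone per unit time; evaporation keeps early choices from
    locking in permanently."""
    random.seed(seed)
    lengths = {"short": 1.0, "long": 2.0}
    pheromone = {"short": 1.0, "long": 1.0}
    for _ in range(n_iters):
        deposits = {"short": 0.0, "long": 0.0}
        for _ in range(n_ants):
            total = pheromone["short"] + pheromone["long"]
            path = "short" if random.random() < pheromone["short"] / total else "long"
            deposits[path] += 1.0 / lengths[path]  # shorter path -> more pheromone
        for p in pheromone:
            pheromone[p] = (1 - evaporation) * pheromone[p] + deposits[p]
    return pheromone

ph = ant_colony_two_paths()
print(ph["short"] > ph["long"])  # positive feedback favors the short path
```

The same positive-feedback mechanism, applied to ants walking on a graph that encodes an optimization problem, is the basis of ant colony algorithms.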
A pheromone is a chemical that triggers a natural behavioural response in another member of the same species. There are alarm pheromones, food trail pheromones, sex pheromones, and many others that affect behavior or physiology. Their use among insects has been particularly well documented, although many vertebrates and plants also communicate using pheromones.
A few well-controlled scientific studies have been published suggesting the possibility of pheromones in humans. The best-studied case involves the synchronization of menstrual cycles among women based on unconscious odor cues (the so-called McClintock effect, named after the primary investigator).
Nanotechnology refers broadly to a field of applied science and technology whose unifying theme is the control of matter on the molecular level in scales smaller than 1 micrometre, normally 1 to 100 nanometers, and the fabrication of devices within that size range.
Examples of nanotechnology in modern use are the manufacture of polymers based on molecular structure, and the design of computer chip layouts based on surface science. Despite the great promise of numerous nanotechnologies such as quantum dots and nanotubes, real commercial applications have mainly used the advantages of colloidal nanoparticles in bulk form, such as suntan lotion, cosmetics, protective coatings, and stain resistant clothing.
The term "nanotechnology" was defined by Tokyo Science University professor Norio Taniguchi in a 1974 paper (N. Taniguchi, "On the Basic Concept of 'Nanotechnology'," Proc. Intl. Conf. Prod. Eng. Tokyo, Part II, Japan Society of Precision Engineering, 1974) as follows: "'Nanotechnology' mainly consists of the processing of separation, consolidation, and deformation of materials by one atom or by one molecule."
One nanometer (nm) is one billionth (10⁻⁹) of a meter. For comparison, typical carbon-carbon bond lengths, or the spacing between these atoms in a molecule, are in the range 0.12 to 0.15 nm, and a DNA double helix has a diameter of around 2 nm.
What is Nanotechnology?
A basic definition: Nanotechnology is the engineering of functional systems at the molecular scale. This covers both current work and concepts that are more advanced. In its original sense, 'nanotechnology' refers to the projected ability to construct items from the bottom up, using techniques and tools being developed today to make complete, high performance products.
With 15,342 atoms, this parallel-shaft speed reducer gear is one of the largest nanomechanical devices ever modeled in atomic detail.
1. A medical device that travels through the human body to seek out and destroy small clusters of cancerous cells before they can spread; or
2. a box no larger than a sugar cube that contains the entire contents of the Library of Congress; or
3. materials much lighter than steel that possess ten times as much strength.
— U.S. National Science Foundation
1 nm = 10⁻⁹ m
DNA width ≈ 2 nm
100 pm = 1 Å = 10⁻¹⁰ m
Hydrogen atom ≈ 100 pm = 1 Å (ångström)
New Dream Life by Nanotechnology
Molecular biology is the study of biology at a molecular level. The field overlaps with other areas of biology and chemistry, particularly genetics and biochemistry. Molecular biology chiefly concerns itself with understanding the interactions between the various systems of a cell, including the interactions between DNA, RNA and protein biosynthesis and learning how these interactions are regulated.
The periodic table of the chemical elements is a tabular method of displaying the chemical elements. Although precursors to this table exist, its invention is generally credited to Russian chemist Dmitri Mendeleev in 1869. Mendeleev intended the table to illustrate recurring ("periodic") trends in the properties of the elements. The layout of the table has been refined and extended over time, as new elements have been discovered, and new theoretical models have been developed to explain chemical behavior.
The periodic table is now ubiquitous within the academic discipline of chemistry, providing an extremely useful framework to classify, systematize and compare all the many different forms of chemical behavior. The table has also found wide application in physics, biology, engineering, and industry. The current standard table contains 117 confirmed elements as of October 16, 2006 (element 118 has been synthesized, but element 117 has not).
The atomic mass may be considered the total mass of the protons, neutrons and electrons in a single atom.
Lithium-7 atom: 3 protons, 4 neutrons & 3 electrons
17. Carbon Nanotubes
Carbon nanotubes (CNTs) are allotropes of carbon. A single-wall carbon nanotube is a one-atom-thick sheet of graphite (called graphene) rolled up into a seamless cylinder with a diameter on the order of a nanometer. This results in a nanostructure where the length-to-diameter ratio exceeds 10,000. Such cylindrical carbon molecules have novel properties that make them potentially useful in many applications in nanotechnology, electronics, optics and other fields of materials science. They exhibit extraordinary strength and unique electrical properties, and are efficient conductors of heat. Inorganic nanotubes have also been synthesized.
Nanotubes are members of the fullerene structural family, which also includes buckyballs. Whereas buckyballs are spherical in shape, a nanotube is cylindrical, with at least one end typically capped with a hemisphere of the buckyball structure. Their name is derived from their size, since the diameter of a nanotube is on the order of a few nanometers (approximately 50,000 times smaller than the width of a human hair), while they can be up to several millimeters in length. There are two main types of nanotubes: single-walled nanotubes (SWNTs) and multi-walled nanotubes (MWNTs).
The nature of the bonding of a nanotube is described by applied quantum chemistry, specifically orbital hybridization. The chemical bonding of nanotubes is composed entirely of sp2 bonds, similar to those of graphite. This bonding structure, which is stronger than the sp3 bonds found in diamond, provides the molecules with their unique strength. Nanotubes naturally align themselves into "ropes" held together by Van der Waals forces. Under high pressure, nanotubes can merge together, trading some sp2 bonds for sp3 bonds, offering the possibility of producing strong, unlimited-length wires through high-pressure nanotube linking.
Some allotropes of carbon: Diamond, graphite, lonsdaleite, buckyballs (C60, C540, C70), amorphous carbon and a carbon nanotube.
Graphite is used in pencils.
The Icosahedral Fullerene C540
An icosahedron is any polyhedron having 20 faces
Golden rectangles in an icosahedron
18.Genetic Programming (GP)
1992 John Koza, Stanford U
Lisp (List Processing) language
What is Genetic Programming?
One of the central challenges of computer science is to get a computer to do what needs to be done, without telling it how to do it. Genetic programming addresses this challenge by providing a method for automatically creating a working computer program from a high-level statement of the problem. Genetic programming achieves this goal of automatic programming (also sometimes called program synthesis or program induction) by genetically breeding a population of computer programs using the principles of Darwinian natural selection and biologically inspired operations. The operations include reproduction, crossover (sexual recombination), mutation, and architecture-altering operations patterned after gene duplication and gene deletion in nature.
Genetic programming is a domain-independent method that genetically breeds a population of computer programs to solve a problem. Specifically, genetic programming iteratively transforms a population of computer programs into a new generation of programs by applying analogs of naturally occurring genetic operations. The genetic operations include crossover (sexual recombination), mutation, reproduction, gene duplication, and gene deletion.
Genetic Algorithm (GA) : Progression form
Genetic Programming (GP) on GA: Tree form
Evolution Strategy (ES): Vector form
Evolutionary Programming (EP): Function form
Evolutionary Algorithm (EA)
GP evolves computer programs, traditionally represented in memory as tree structures. Trees can be evaluated easily in a recursive manner. Every tree node has an operator function and every terminal node has an operand, making mathematical expressions easy to evolve and evaluate. Thus GP traditionally favors programming languages that naturally embody tree structures.
The main operators used in evolutionary algorithms such as GP are crossover and mutation. Crossover is applied to an individual by simply switching one of its nodes with a node from another individual in the population. With a tree-based representation, replacing a node means replacing the whole branch. This adds greater effectiveness to the crossover operator. The expressions resulting from crossover can be very different from their initial parents.
Mutation affects an individual in the population. It can replace a whole node in the selected individual, or it can replace just the node's information.
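The tree representation and the crossover operator can be sketched briefly: expression trees as nested tuples, a recursive evaluator, and subtree crossover that swaps a whole branch. The operator set and the two parent trees below are made up for illustration.

```python
import random

OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b, "*": lambda a, b: a * b}

def evaluate(tree, env):
    """Recursively evaluate a tree: internal nodes are operator functions,
    leaves are variables (looked up in env) or numeric constants."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, env), evaluate(right, env))
    return env.get(tree, tree)

def subtrees(tree, path=()):
    """Yield (path, subtree) for every node; a path is a tuple of child indices."""
    yield path, tree
    if isinstance(tree, tuple):
        for i, child in enumerate(tree[1:], start=1):
            yield from subtrees(child, path + (i,))

def replace_at(tree, path, new):
    """Return a copy of tree with the subtree at path replaced by new."""
    if not path:
        return new
    i = path[0]
    return tree[:i] + (replace_at(tree[i], path[1:], new),) + tree[i + 1:]

def crossover(parent1, parent2, rng):
    """Replace a random subtree of parent1 with a random subtree of
    parent2, so a whole branch is swapped, as described above."""
    target_path, _ = rng.choice(list(subtrees(parent1)))
    _, donor = rng.choice(list(subtrees(parent2)))
    return replace_at(parent1, target_path, donor)

rng = random.Random(42)
a = ("+", "x", ("*", "y", 2))   # x + y*2
b = ("-", ("+", "x", 1), "y")   # (x + 1) - y
child = crossover(a, b, rng)
print(evaluate(a, {"x": 1, "y": 3}))      # 7
print(evaluate(child, {"x": 1, "y": 3}))  # child is always a valid tree
```

Mutation fits the same scheme: pick a random path and replace the subtree there with a freshly generated one, or just change the node's operator.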
Tree form → Functional form
Example of Crossover:
Example of GP: an artificial ant collects all the food within a limited energy budget.
Total 32 x 32 blocks
Pink cells show 89 pieces of food.
If_Food_Ahead(x,y) --- if food is ahead, execute x; if not, execute y
Prog2(x,y) ---------- execute x, then y
Prog3(x,y,z) --------- execute x, y, then z
RIGHT --- turn 90 degrees to the right
LEFT ---- turn 90 degrees to the left
MOVE --- move ahead one block
2. On each move, Energy = Energy - 1.
3. If the block contains food, Food = Food + 1 and erase one pink cell.
4. If Energy = 0, report the number of foods collected.
5. Return to step 2.
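A toy interpreter for this primitive set (If_Food_Ahead, Prog2, LEFT, RIGHT, MOVE) shows how an evolved tree drives the ant. The grid size, energy budget, food layout, and the hand-written program below are illustrative assumptions, not the evolved solution from the slides:

```python
GRID = 8  # toy 8x8 world instead of the 32x32 grid in the slides

class Ant:
    def __init__(self, food_cells, energy=40):
        self.x, self.y = 0, 0
        self.dx, self.dy = 1, 0  # facing east
        self.food = set(food_cells)
        self.eaten = 0
        self.energy = energy

    def ahead(self):
        return ((self.x + self.dx) % GRID, (self.y + self.dy) % GRID)

    def move(self):  # rule 2: each move costs one unit of energy
        if self.energy <= 0:
            return
        self.energy -= 1
        self.x, self.y = self.ahead()
        if (self.x, self.y) in self.food:  # rule 3: eat and erase the cell
            self.food.remove((self.x, self.y))
            self.eaten += 1

    def left(self):
        self.dx, self.dy = self.dy, -self.dx

    def right(self):
        self.dx, self.dy = -self.dy, self.dx

def run(ant, program):
    """Interpret a program tree such as
    ("PROG2", ("IF_FOOD_AHEAD", ("MOVE",), ("RIGHT",)), ("MOVE",))."""
    op = program[0]
    if op == "MOVE":
        ant.move()
    elif op == "LEFT":
        ant.left()
    elif op == "RIGHT":
        ant.right()
    elif op == "IF_FOOD_AHEAD":
        run(ant, program[1] if ant.ahead() in ant.food else program[2])
    else:  # PROG2 / PROG3: execute children in order
        for child in program[1:]:
            run(ant, child)

ant = Ant(food_cells=[(1, 0), (2, 0), (3, 0)])
program = ("PROG2", ("IF_FOOD_AHEAD", ("MOVE",), ("RIGHT",)), ("MOVE",))
while ant.energy > 0:  # rules 4-5: loop until energy runs out
    run(ant, program)
print(ant.eaten)  # 3: the ant finds all three food cells
```

In GP proper, the program tree would not be hand-written: a population of such trees is evolved, with the number of foods eaten as the fitness score.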
Example of GP: Analysis of Life Evolution
Plant → Herbivore → Carnivore
<Condition of stable evolution>
<Process of specialization>
19. Artificial Intelligence
John McCarthy (born September 4, 1927, in Boston, Massachusetts; sometimes known affectionately as Uncle John McCarthy) is a prominent computer scientist who received the Turing Award in 1971 for his major contributions to the field of Artificial Intelligence (AI). He coined the term "Artificial Intelligence" in his 1955 proposal for the 1956 Dartmouth Conference and is the inventor of the Lisp programming language.
John McCarthy
Born: September 4, 1927 (age 80 in 2007), Boston, Massachusetts, USA
Field: Computer Technology
Institutions: Massachusetts Institute of Technology; Stanford University
Alma mater: California Institute of Technology
Known for: Artificial Intelligence; Circumscription; Situation calculus; Lisp
Notable prizes: Turing Award, 1971; Benjamin Franklin Medal in Computer and Cognitive Science, 2003
McCarthy received his B.S. in Mathematics from the California Institute of Technology in 1948 and his Ph.D. in Mathematics from Princeton University in 1951. After short-term appointments at Princeton, Stanford, Dartmouth, and MIT, he became a full professor at Stanford in 1962, where he remained until his retirement at the end of 2000. He is now a Professor Emeritus. McCarthy is listed on Google Directory as one of the all-time top six people in the field of artificial intelligence.
Problems of AI:
1. No Motivation
2. No Idea
3. No New Activities
4. Need to teach first
5. No Self Learning
6. No Mind
7. No Personality
Acquisition of Self learning, Reasoning, Judgment
Lisp was invented by John McCarthy in 1958 while he was at MIT.
Lisp was originally created as a practical mathematical notation for computer programs, based on Alonzo Church's lambda calculus. It quickly became the favored programming language for artificial intelligence research. As one of the earliest programming languages, Lisp pioneered many ideas in computer science, including tree data structures, automatic storage management, dynamic typing, object-oriented programming, and the self-hosting compiler.
The name Lisp derives from "List Processor". Linked lists are one of Lisp languages' major data structures, and Lisp source code is itself made up of lists. As a result, Lisp programs can manipulate source code as a data structure, giving rise to the macro systems that allow programmers to create new syntax or even new "little languages" embedded in Lisp.
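The "code is data" idea can be sketched in Python by representing s-expressions as nested lists, the way Lisp does natively. The evaluator below is a deliberately tiny illustration, not a real Lisp:

```python
def seval(expr, env):
    """Evaluate an s-expression written as nested Python lists."""
    if isinstance(expr, str):          # symbol -> look it up
        return env[expr]
    if not isinstance(expr, list):     # number -> itself
        return expr
    op, *args = expr
    if op == "quote":                  # (quote x): return x unevaluated,
        return args[0]                 # i.e., treat the code as plain data
    values = [seval(a, env) for a in args]
    return env[op](*values)

env = {"+": lambda *xs: sum(xs), "*": lambda a, b: a * b, "x": 10}
print(seval(["+", "x", ["*", 2, 3]], env))  # (+ x (* 2 3)) -> 16
print(seval(["quote", ["+", 1, 2]], env))   # the list (+ 1 2) itself
```

Because a quoted program is just a list, an ordinary function can rewrite it before it is evaluated, which is the essence of Lisp's macro systems.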
FORTRAN → LISP (Formula Programming Language)
A precursor of LISP was FLPL (the FORTRAN List Processing Language).
SQL , commonly expanded as Structured Query Language, is a computer language designed for the retrieval and management of data in relational database management systems, database schema creation and modification, and database object access control management.
SQL is a standard interactive and programming language for getting information from and updating a database. Although SQL is both an ANSI and an ISO standard, many database products support SQL with proprietary extensions to the standard language. Queries take the form of a command language that lets you select, insert, update, find out the location of data, and so forth. There is also a programming interface.
The first version of SQL was developed at IBM by Donald D. Chamberlin and Raymond F. Boyce in the early 1970s. This version, initially called SEQUEL, was designed to manipulate and retrieve data stored in IBM's original relational database product, System R. The SQL language was later formally standardized by the American National Standards Institute (ANSI) in 1986. Subsequent versions of the SQL standard have been released as International Organization for Standardization (ISO) standards.
SQL is designed for a specific purpose: querying and managing data held in a relational database.
ANSI: SQL/PSM (SQL/Persistent Stored Modules)
IBM: SQL PL (SQL Procedural Language)
Microsoft/Sybase: T-SQL (Transact-SQL)
Oracle: PL/SQL (Procedural Language/SQL)
PostgreSQL: PL/pgSQL (Procedural Language/PostgreSQL Structured Query Language)
1. DDL : Data Definition Language
CREATE, DROP, ALTER
2. DML : Data Manipulation Language
INSERT, DELETE, UPDATE, SELECT
3. DCL : Data Control Language
GRANT, REVOKE, SET TRANSACTION, BEGIN, COMMIT, ROLLBACK, SAVEPOINT, LOCK
Three-valued logic (TRUE, FALSE, UNKNOWN)
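The three-valued logic can be demonstrated with Python's built-in sqlite3 module: comparisons with NULL evaluate to UNKNOWN, so a NULL row satisfies neither x = 1 nor x <> 1. The table and values are made up for the example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x INTEGER)")          # DDL
con.executemany("INSERT INTO t (x) VALUES (?)",    # DML
                [(1,), (2,), (None,)])

eq = con.execute("SELECT COUNT(*) FROM t WHERE x = 1").fetchone()[0]
ne = con.execute("SELECT COUNT(*) FROM t WHERE x <> 1").fetchone()[0]
nul = con.execute("SELECT COUNT(*) FROM t WHERE x IS NULL").fetchone()[0]
print(eq, ne, nul)  # 1 1 1: the NULL row matches only IS NULL
```

Note that eq + ne is 2, not 3: the NULL row is excluded from both comparisons because each evaluates to UNKNOWN, which is why SQL provides the separate IS NULL predicate.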
Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. They belong to a family of generalized linear classifiers. They can also be considered a special case of Tikhonov regularization. A special property of SVMs is that they simultaneously minimize the empirical classification error and maximize the geometric margin; hence they are also known as maximum margin classifiers.
Support Vector Machines are based on the concept of decision planes that define decision boundaries. A decision plane is one that separates a set of objects having different class memberships. A schematic example is shown in the illustration below. In this example, the objects belong either to class GREEN or class RED. The separating line defines a boundary: objects to its right are GREEN, and objects to its left are RED. Any new object (white circle) falling to the right is labeled, i.e., classified, as GREEN (or classified as RED should it fall to the left of the separating line).
The above is a classic example of a linear classifier, i.e., a classifier that separates a set of objects into their respective groups (GREEN and RED in this case) with a line. Most classification tasks, however, are not that simple, and often more complex structures are needed in order to make an optimal separation, i.e., correctly classify new objects (test cases) on the basis of the examples that are available (train cases). This situation is depicted in the illustration below. Compared to the previous schematic, it is clear that a full separation of the GREEN and RED objects would require a curve (which is more complex than a line). Classification tasks based on drawing separating lines to distinguish between objects of different class memberships are known as hyperplane classifiers. Support Vector Machines are particularly suited to handle such tasks.
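As a concrete (if simplified) example of a linear classifier, the perceptron below learns a separating line for toy 2-D points. Note that it finds some separating line, not the maximum-margin one an SVM would find; the points, labels, and learning rate are invented for illustration.

```python
def train_perceptron(points, labels, epochs=100, lr=0.1):
    """Learn a line w.x + b = 0 that separates the two classes.
    labels are +1 (GREEN) or -1 (RED)."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(points, labels):
            if y * (w[0] * x1 + w[1] * x2 + b) <= 0:  # misclassified point
                w[0] += lr * y * x1                   # nudge the boundary
                w[1] += lr * y * x2
                b += lr * y
    return w, b

def classify(w, b, p):
    return "GREEN" if w[0] * p[0] + w[1] * p[1] + b > 0 else "RED"

points = [(2, 1), (3, 2), (3, 0), (-1, 0), (-2, 1), (-3, -1)]
labels = [1, 1, 1, -1, -1, -1]
w, b = train_perceptron(points, labels)
print(classify(w, b, (4, 1)))  # GREEN: the new point falls on the GREEN side
```

An SVM would solve the same problem but choose, among all separating lines, the one with the widest margin to the nearest points on either side; for non-linearly-separable data it additionally maps the points into a higher-dimensional space.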
An ontology is a data model that represents a set of concepts within a domain and the relationships between those concepts. It is used to reason about the objects within that domain.
Ontologies are used in artificial intelligence, the semantic web, software engineering, biomedical informatics and information architecture as a form of knowledge representation about the world or some part of it.
Metadata is data about data. An item of metadata may describe an individual datum, or content item, or a collection of data including multiple content items.
Metadata (sometimes written 'meta data') is used to facilitate the understanding, use and management of data. The metadata required for effective data management varies with the type of data and context of use. In a library, where the data is the content of the titles stocked, metadata about a title would typically include a description of the content, the author, the publication date and the physical location. In the context of a camera, where the data is the photographic image, metadata would typically include the date the photograph was taken and details of the camera settings. On a portable music player such as an Apple iPod, the album names, song titles and album art embedded in the music files are used to generate the artist and song listings, and are metadata. In the context of an information system, where the data is the content of the computer files, metadata about an individual data item would typically include the name of the field and its length. Metadata about a collection of data items, a computer file, might typically include the name of the file, the type of file and the name of the data administrator.
The semantic web is an evolving extension of the World Wide Web in which web content can be expressed not only in natural language, but also in a format that can be read and used by software agents, thus permitting them to find, share and integrate information more easily. It derives from W3C director Sir Tim Berners-Lee's vision of the Web as a universal medium for data, information, and knowledge exchange.
At its core, the semantic web comprises a philosophy, a set of design principles, collaborative working groups, and a variety of enabling technologies. Some elements of the semantic web are expressed as prospective future possibilities that have yet to be implemented or realized. Other elements of the semantic web are expressed in formal specifications. Some of these include Resource Description Framework (RDF), a variety of data interchange formats (e.g. RDF/XML, N3, Turtle, N-Triples), and notations such as RDF Schema (RDFS) and the Web Ontology Language (OWL), all of which are intended to provide a formal description of concepts, terms, and relationships within a given knowledge domain.
The Web was designed as an information space, with the goal that it should be useful not only for human-human communication, but also that machines would be able to participate and help. One of the major obstacles to this has been the fact that most information on the Web is designed for human consumption, and even if it was derived from a database with well-defined meanings (in at least some terms) for its columns, the structure of the data is not evident to a robot browsing the web. Leaving aside the artificial intelligence problem of training machines to behave like people, the Semantic Web approach instead develops languages for expressing information in a machine-processable form.
Implementing the Semantic Web requires adding semantic metadata, or data that describes data, to information resources. This will allow machines to effectively process the data based on the semantic information that describes it. When there is enough semantic information associated with data, computers can make inferences about the data, i.e., understand what a data resource is and how it relates to other data.
The Semantic Web can be seen as a huge engineering solution... but it is more than that. We will find that as it becomes easier to publish data in a repurposable form, so more people will want to publish data, and there will be a knock-on or domino effect. We may find that a large number of Semantic Web applications can be used for a variety of different tasks, increasing the modularity of applications on the Web. But enough subjective reasoning... onto how this will be accomplished.
A graphical example of an OWL ontology is below.
All the detailed relationship information defined in an OWL ontology allows applications to make logical deductions. For instance, given the ontology above, a Semantic Web agent could infer that since "Goose" is a type of "DarkMeatFowl," and "DarkMeatFowl" is a subset of the class "Fowl," which is a subset of the class "EdibleThing," then "Goose" is an "EdibleThing."
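This subclass inference is a transitive-closure walk, which can be sketched directly: subclass-of relations as a dictionary, and a check that climbs the hierarchy. The class names mirror the Goose example above; a real agent would read them from an OWL document rather than hard-code them.

```python
# Subclass-of edges from the example ontology (child -> parent).
SUBCLASS_OF = {
    "Goose": "DarkMeatFowl",
    "DarkMeatFowl": "Fowl",
    "Fowl": "EdibleThing",
}

def is_a(cls, ancestor):
    """True if cls equals ancestor or is (transitively) a subclass of it."""
    while cls is not None:
        if cls == ancestor:
            return True
        cls = SUBCLASS_OF.get(cls)  # climb one level up the hierarchy
    return False

print(is_a("Goose", "EdibleThing"))  # True: Goose -> DarkMeatFowl -> Fowl -> EdibleThing
print(is_a("Fowl", "Goose"))         # False: the subclass relation is one-way
```

OWL reasoners perform this and much richer inferences (property restrictions, equivalence, disjointness) over RDF graphs instead of a plain dictionary.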
Dendral was an influential pioneer project in artificial intelligence (AI) of the 1960s, and the computer software expert system that it produced. Its primary aim was to help organic chemists in identifying unknown organic molecules, by analyzing their mass spectra and using knowledge of chemistry. It was done at Stanford University by Edward Feigenbaum, Bruce Buchanan, Joshua Lederberg, and Carl Djerassi. It began in 1965 and spans approximately half the history of AI research.
The software program Dendral is considered the first expert system because it automated the decision-making process and problem-solving behavior of organic chemists. It consists of two sub-programs, Heuristic Dendral and Meta-Dendral. It was written in Lisp, which was considered the language of AI.
Many systems were derived from Dendral, including MYCIN, MOLGEN, MACSYMA, PROSPECTOR, XCON, and STEAMER.
The name Dendral is a portmanteau of the term "Dendritic Algorithm".
Urea is an organic compound with the chemical formula (NH2)2CO.
Urea was discovered by Hilaire Rouelle in 1773. It was the first organic compound to be artificially synthesized from inorganic starting materials, in 1828 by Friedrich Wöhler, who prepared it by the reaction of potassium cyanate with ammonium sulfate. Although Wöhler was attempting to prepare ammonium cyanate, by forming urea he inadvertently discredited vitalism, the theory that the chemicals of living organisms are fundamentally different from those of inanimate matter, thus starting the discipline of organic chemistry.
< Urea >
The Turing test is a proposal for a test of a machine's capability to demonstrate intelligence. Described by Alan Turing in the 1950 paper "Computing Machinery and Intelligence", it proceeds as follows: a human judge engages in a natural-language conversation with one human and one machine, each of which tries to appear human; if the judge cannot reliably tell which is which, then the machine is said to pass the test. In order to keep the test setting simple and universal (to explicitly test the linguistic capability of the machine rather than its ability to render words into audio), the conversation is usually limited to a text-only channel such as a teletype machine, as Turing suggested, or, more recently, instant messaging.
How perfectly can a machine imitate a human?