
Artificial Cognition Systems General Cognition Engine Module; January 2013



  1. Artificial Cognition Systems General Cognition Engine Module; January 2013 Marcelo Funes-Gallanzi, Ph.D. The Goodwill Company, Ltd. Guildford, England. E-mail: mfg@thegoodwillcompany.co.uk DISTRIBUTION A: Approved for public release; distribution unlimited.

  2. Executive Summary In short, the system can be described as a first-generation general cognition engine. It is able to understand natural language, that is, to successfully acquire, store, retrieve, associate and/or process concepts, even when they are provided by unstructured sources and regardless of the grammar or wording used. It is also able to combine contextual information with user-supplied information, because the position of information in its knowledge base is given by meaning, regardless of the source, method or order of data or knowledge input. It is further able to improve upon itself and to do some basic “reasoning” using the knowledge available in its knowledge base, yielding suitable replies, statements or actionable conclusions. The scheme involves acquiring, storing, retrieving and processing an input and combining said input with knowledge from a basic-level categorization representation of knowledge, realized by representing knowledge as ideograms in a multidimensional space conditioned by abstract relations between essential words, such as those of Basic English.

  3. Background information • At the International Joint Conference on Artificial Intelligence in Pasadena, California, on 15 July 2009, the results of an analysis by a panel of experts, chaired by Eric Horvitz, were presented on the prospects for AI. • Panel members agreed that creating human-level artificial intelligence is possible in principle. However, human-level artificial intelligence and self-evolving machines were seen as long-term, abstract goals not yet ready for serious consideration. • Panel member Tom Dietterich pointed out that much of today’s AI research is not aimed at building a general human-level AI system, but rather focuses on systems which are effective at tasks in a very narrow range of application, such as mathematics. • The panel discussed at length the idea of a runaway chain reaction of machines capable of building ever-better machines. Most members were skeptical that such an AI explosion would occur in the foreseeable future, given the lack of projects today that could lead to systems capable of improving upon themselves. • Our research programme is directed precisely at building a human-level artificial cognition system capable of improving upon itself.

  4. Background information • In order to achieve the goal of developing a viable artificial cognition system, we first of all need to develop a general cognition engine (GCE), i.e. a self-evolving brain analogue that is able to acquire unstructured knowledge (e.g. books, speech, patents, etc.), store and retrieve knowledge, and constantly improve upon itself, irrespective of the field of knowledge. This module is commonly referred to as the knowledge database in current systems, which are normally trained with information, deal with structured or semi-structured knowledge and cannot self-evolve. • Possibly the most fundamental aspect of cognition is memory, which is itself broken down into three distinct tasks: acquisition, storage and retrieval; bearing in mind that it is well established that order helps memory. • Knowledge and experience are most easily transferred through language, and at the root of the concept of language lies the very definition of a word. Wittgenstein (1953) suggested that there exists a "family resemblance" which allows us to identify a particular instance as a member of a group, and following the work of Rosch (1973) we can conceive of these family resemblances as based on the fact that the human brain possesses prototypes and represents the meaning of a particular word in relation to these. Rosch found that western children share about 400 core concepts, which are used intensively while growing up to interpret meaning. Moreover, Rosch argued that there is a "natural" level of categorization that we tend to use to communicate. This level is known as "basic-level categorization". • Using the idea of meaning being defined in terms of a word's relationship to others is attractive, but it involves deriving a matrix of word relations with many more entries than the average number of neurons in a human brain, if a standard vocabulary is used. This fact leads us to two conclusions: first, that basic-level categorization is in fact likely to be used by the brain, and second, that it would be useful to find a way to represent a full vocabulary in terms of a reduced vocabulary that can act as a proxy for this set of basic-level categorizations.
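
One rough, order-of-magnitude reading of this scale argument. The vocabulary sizes and the neuron count below are assumed estimates used purely for illustration, not figures given in the presentation:

    # Order-of-magnitude comparison only; all figures are assumed estimates.
    standard_vocab = 600_000   # a full standard-English word-form list (OED-scale estimate)
    reduced_vocab = 1_000      # a Basic-English-style reduced vocabulary
    neurons = 8.6e10           # a commonly cited estimate of neurons in the human brain

    print(f"standard-vocabulary relation matrix: {standard_vocab ** 2:.1e} entries")  # ~3.6e+11
    print(f"reduced-vocabulary relation matrix:  {reduced_vocab ** 2:.1e} entries")   # ~1.0e+06
    print(f"neurons in a human brain (approx.):  {neurons:.1e}")                      # ~8.6e+10

Under these assumptions the full-vocabulary relation matrix dwarfs the neuron count, while the reduced-vocabulary matrix is easily manageable, which is the motivation for the proxy vocabulary introduced on the next slide.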

  5. Background information • Simplish is a tool that converts standard English into a reduced-vocabulary version of 1,000 words (850 basic words, 50 international words and 100 specialized words), and we propose to use this as a proxy for a set of basic-level categorizations, although other potential candidates exist. This representation therefore yields an effective means of knowledge acquisition. A reduced vocabulary also has the advantage of reducing ambiguity as a by-product of the translation process. • Using a reduced-vocabulary representation enables the mapping of language, through the use of standard multivariate techniques, to a low-dimensionality space where a multidimensional ideogram (a graphic symbol that represents an idea or concept) can be produced as an illustrative point in this subspace, much as it might be represented in the brain (using variables such as coordinates x, y, z, potential, neurotransmitter, frequency, phase, etc.). The low-dimensionality ideogram is the storage medium. These ideograms will be similar even if different words or grammar are used, because their form and position are given by the relationship of a concept/word to all other members of the basic-level categorization. • The result of this strategy is to establish a means to map semantic similarity into spatial proximity, i.e. the distance between two points (concepts) is a measure of how similar their meanings are. • Spatial proximity can then be used for information retrieval: a given query is answered via the association of ideas, as a human does, implemented here through nearest-neighbour search algorithms, a method that is well known in the art. • This strategy enables concepts to be mapped either as part of a data-driven step or as the concept-driven contextual information needed for problem-solving. We can plot and step along an evolving path and come across both types of information where relevant. • Thus, we can move from keywords and pattern-matching to concept-matching, for instance in a web search engine!
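
A minimal sketch of retrieval by spatial proximity, assuming each stored concept has already been mapped to a point in the low-dimensionality space (the vectors below are invented purely for illustration):

    import numpy as np

    # Each stored concept is a point in the low-dimensionality GCE space; a query
    # is answered by nearest-neighbour search.  Vectors are illustrative placeholders.
    knowledge = {
        "humerus": np.array([0.90, 0.10, 0.30]),
        "elbow":   np.array([0.80, 0.20, 0.35]),
        "kingdom": np.array([0.10, 0.90, 0.70]),
    }

    def nearest_concept(query_vec, base):
        """Return the stored concept whose ideogram lies closest to the query point."""
        return min(base, key=lambda name: np.linalg.norm(base[name] - query_vec))

    query = np.array([0.85, 0.15, 0.30])       # e.g. the mapped position of a user sentence
    print(nearest_concept(query, knowledge))   # -> humerus, the semantically closest entry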

  6. www.simplish.org - A standard English to Basic English tool. This approach maps STANDARD ENGLISH (100,000+ words, including 30,000 scientific terms) onto annotated BASIC ENGLISH (1,000+ words) while preserving its unique, infinite expressiveness. • Some words such as “name” have arguments (e.g. Jesus) while others do not (e.g. flat). • Idiomatic phrases, names and many places are also handled. • A personal dictionary allows adapting the translation to the user's vocabulary. • File standards: .doc, .txt, .pdf & .html. • Aimed also at readers in Asia (5m S&T graduates last year!...) & non-scientists. • Pending improvements: apostrophes, images, spaces, hyphens, etc.

  7. Order in Memory... • The use of basic-level categorization to represent the meaning of a particular word, in relation to all other words, results in a matrix in which all words can be related to each other. This matrix kernel is what confers order on the memory process; each such matrix is equivalent to describing one mind's perception, so there is none of the confusion that arises, in a corpus of data, from many authors' differing use of language, and ambiguity is reduced. Thus, the shape and position of an ideogram depend on this single broad abstract “association criterion”, with no need for specific training or ontologies (descriptions of the concepts and relationships that can exist for an agent or a community of agents). • Entries are rated from opposite (minimum) to highly related (maximum), with columns ordered by syntax and rows by a rough semantic classification into 50 semantic tags, so word order is important; unlike in, say, Latent Semantic Analysis. • In fact, various models already exist that provide automatic means of determining similarity of meaning by analysis of large text corpora, without any understanding.

  8. The matrix of basic broad abstract relations between words as proposed by Wittgenstein can be converted into a low-dimensionality representation using standard singular value decomposition methods, such as Principal Component Analysis. This subspace, conditioned by word relations, is used to display scientific words and all user data, with a position given by the meaning of user-defined data streams. • We can then display the semantic relations, converted into spatial distances, between words in a low-dimensionality space as shown (and equivalently for the case of syntax):
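A toy sketch of this reduction step, using an invented 5-word relation matrix; the words, the ratings and the choice of two retained dimensions are illustrative assumptions, not the actual kernel:

    import numpy as np

    # Toy version of the ordering kernel: a small matrix of abstract relations
    # between words, with entries running from opposite towards highly related.
    words = ["arm", "bone", "leg", "king", "kingdom"]
    relations = np.array([
        [1.0, 0.7, 0.6, 0.0, 0.0],
        [0.7, 1.0, 0.6, 0.0, 0.0],
        [0.6, 0.6, 1.0, 0.0, 0.0],
        [0.0, 0.0, 0.0, 1.0, 0.8],
        [0.0, 0.0, 0.0, 0.8, 1.0],
    ])

    # Singular value decomposition; the leading components span the
    # low-dimensionality subspace in which ideograms are displayed.
    U, s, Vt = np.linalg.svd(relations)
    k = 2                                  # keep two dimensions for this toy case
    coords = U[:, :k] * s[:k]              # each row is the position of one word

    for w, p in zip(words, coords):
        print(f"{w:8s} -> {np.round(p, 2)}")

Because the toy matrix relates the anatomical words to each other and “king”/“kingdom” to each other, their projections fall into two well-separated groups; this is the property exploited in the following slides.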

  9. Displaying knowledge in a preconditioned space... We can also display more complex concepts explained in terms of Basic English by assigning a high value to the words being used, in an illustrative labelled extra row displayed in the GCE space, such as:
Humerus - The bone of the top part of the arm in man.
In some cases, a mapping module converts words into a valid form (e.g. “worked” – “past work” & “unsafe” – “not safe”). It also deals with compound words (e.g. “outline”). More complex concepts can be displayed that use previously defined simpler concepts:
Elbow - JOINT in the arm between the HUMERUS and the ULNA.
The problem here is that in order to map the word “elbow” we also need to use the points defined by “joint” and “ulna” as well as “humerus”:
Joint - Part or structure where two bones or parts of an animal's body are so joined that they have the power of motion in relation to one another.
Ulna - The back one of the two bones of the lower front leg in animals with 4 legs, or the arm in man.
So, where we need to use previously defined points to display a new, more complex concept, the solution is simply to join them together and build trajectories (i.e. ideograms) between a number of points in the GCE. This trajectory definition is done by the mapping module, which segments phrases as required. Of course, we can build trajectories from many sentences, even multiple sources multiplexed and all updating the GCE in a parallelized scheme, and in that process extrapolate and come across relevant facts, thereby incorporating contextual data into a data-driven process.
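
A minimal sketch of the trajectory idea: a new concept defined in terms of already placed concepts becomes an ordered path (an ideogram) through their points. The coordinates are invented for illustration:

    import numpy as np

    # Previously placed concepts and their (invented) positions in GCE space.
    placed = {
        "joint":   np.array([0.50, 0.40]),
        "humerus": np.array([0.80, 0.30]),
        "ulna":    np.array([0.85, 0.25]),
    }

    def build_trajectory(segment_heads, base):
        """Join previously defined points into a trajectory for a new, more complex concept."""
        return [base[h] for h in segment_heads]

    # "Elbow - JOINT in the arm between the HUMERUS and the ULNA"
    elbow_ideogram = build_trajectory(["joint", "humerus", "ulna"], placed)
    print(np.round(np.array(elbow_ideogram), 2))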

  10. Trajectories - Ideograms A segmentation module helps to determine which fragments can be assigned to a single point and which must be broken down and displayed as a trajectory. Elbow - JOINT in the arm between the HUMERUS and the ULNA... this is really just a machine-generated multidimensional ideogram! For any segment/point in a trajectory there is a definition: [JOINT ; in the arm between the ; HUMERUS] [HUMERUS ; and the ; ULNA] Compare with Chinese characters, for example: 人 “man” + 木 “tree” = 休 “to rest”, or 日 “sun” + 月 “moon” = 明 “clear, bright”; or even the Aztec symbol for Mazatlan: mazatl “deer” + tlantli “teeth”.
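
A minimal, purely illustrative sketch of such a segmentation step (not the actual mapping module): it breaks a definition into [head ; connective ; next head] segments, using the words that already have points in the GCE space as boundaries:

    # Illustrative parser: split a definition at words that already have GCE points.
    def segment(definition, known_heads):
        words, segments, current = definition.lower().split(), [], []
        heads_seen = []
        for w in words:
            if w in known_heads and heads_seen:
                segments.append([heads_seen[-1], " ".join(current), w])
                current, heads_seen = [], heads_seen + [w]
            elif w in known_heads:
                heads_seen.append(w)
            else:
                current.append(w)
        return segments

    print(segment("joint in the arm between the humerus and the ulna",
                  {"joint", "humerus", "ulna"}))
    # -> [['joint', 'in the arm between the', 'humerus'], ['humerus', 'and the', 'ulna']]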

  11. Demonstration of computer understanding In order to check whether similar-meaning sentences are in fact displayed near one another, and whether the computer is actually able to understand language, we can try displaying four similar sentences, taken from the New Testament: LU - LUKE 23:38 And these words were put in writing over him, THIS IS THE KING OF THE JEWS. MT - MATTHEW 27:37 - And they put up over his head the statement of his crime in writing, THIS IS JESUS THE KING OF THE JEWS. MC - MARK 15:26 - And the statement of his crime was put in writing on the cross, THE KING OF THE JEWS. JH - JOHN 19:19 And Pilate put on the cross a statement in writing. The writing was: JESUS THE NAZARENE, THE KING OF THE JEWS. Notes: 1) Where words have arguments, multiple points in the same position are generated (Jesus/Pilate). 2) There is no training, just the sentence being displayed in the GCE module memory space according to meaning!

  12. In the previous graphical display we can see that this method does indeed convert semantic similarity into spatial proximity! Thus, if two phrases have the same meaning they will be mapped close to each other, even if different words or grammar are used. • In the display we can see that Matthew and Luke are closest, with Mark, who mentions the cross, some distance away, and the most dissimilar sentence, John's, lying furthest away. • It is possible to do some conventional ascending hierarchical clustering and show these relationships as below. • Grouping into clusters has the effect, more generally, of agglomerating information according to knowledge domain. Thus, information about anatomy, the New Testament, etc. will agglomerate into distinct clusters.
REPRESENTATION OF THE HIERARCHICAL CLASSIFICATION (“prueba” is an additional test sentence)
+--------+---------+---------+---------+---------+---------+---------+---------+---------+---------+
lu ---------------*----*----------------------------------------------------*------------------------*-
mt ----------------
mc ---------------------
jh --------------------------------------------------------------------------
prueba ---------------------------------------------------------------------------------------------------
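
A minimal sketch of the ascending hierarchical clustering step, with invented 2-D positions standing in for the GCE-space ideograms of the four sentences plus an extra test sentence:

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Illustrative positions only; the real positions come from the GCE mapping.
    labels = ["lu", "mt", "mc", "jh", "test"]
    points = np.array([
        [0.10, 0.12],   # Luke
        [0.11, 0.13],   # Matthew, closest to Luke
        [0.20, 0.15],   # Mark, a little further away (mentions the cross)
        [0.45, 0.30],   # John, most dissimilar of the four
        [0.90, 0.95],   # unrelated test sentence
    ])

    Z = linkage(points, method="average")      # agglomerative (ascending) clustering
    groups = fcluster(Z, t=2, criterion="maxclust")
    print(dict(zip(labels, groups)))           # gospels fall in one cluster, the test sentence in the other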

  13. Responding to a given concept - I One can obviously also do the reverse and create a dummy ideogram for a concept for which a certain response is required. This can include the firing of contextual information, updating or other actions related to a given concept. Note that the reply to a given type of concept, however stated, does not have to be a simple answer. It could have contextual data attached to the dummy ideogram, routines that have to be performed, updating, monitoring, calculations, etc., in order to give an output such as a command, action, answer or a simple statement as a response. For illustration purposes, the diagnosis of diabetes could be undertaken by uploading five common symptoms onto the knowledge base (although the system could have acquired such knowledge itself by “reading” a book, for instance, and mapping this knowledge to the correct position and form); if they are all found to be true, the system then outputs a diagnosis of suspected diabetes. In this specific case some more complicated vocabulary is also required in order to display the relevant ideograms. Dummy ideograms to serve as contextual knowledge:
[Med.] An increase in thirst or urination in a child is a sign of diabetes.
[Med.] Lethargy in a child is a sign of diabetes.
[Med.] Increased desire for food with sudden or unexplained weight loss in a child is a sign of diabetes.
[Med.] Vision changes in a child are a sign of diabetes.
[Med.] Odor of fruit to the breath in a child is a sign of diabetes.
Diagnosis [answer] Diabetes in a child has five common signs that have to be confirmed.

  14. Responding to a given concept - Diagnosis In this example, if the user enters a sentence that is semantically close to one of the symptoms (“My child is thirsty and goes to urinate all day”), whatever the specific wording or grammar, the mapping process, contextual knowledge and association modules enable the system to identify the suspected symptom and, if all other symptoms are confirmed in the patient, to confirm the diagnosis of diabetes:
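
A minimal sketch of this matching-and-confirmation flow, assuming each symptom sentence has already been mapped to a point in GCE space; the positions, threshold and labels below are invented for illustration:

    import numpy as np

    # Invented positions standing in for the five symptom ideograms.
    symptoms = {
        "thirst/urination":     np.array([0.2, 0.8]),
        "lethargy":             np.array([0.4, 0.6]),
        "hunger + weight loss": np.array([0.5, 0.7]),
        "vision changes":       np.array([0.7, 0.5]),
        "fruity breath":        np.array([0.8, 0.3]),
    }
    confirmed = set()

    def report(user_point, threshold=0.1):
        """Mark the nearest symptom as confirmed if the user's sentence maps close enough."""
        name, pos = min(symptoms.items(), key=lambda kv: np.linalg.norm(kv[1] - user_point))
        if np.linalg.norm(pos - user_point) < threshold:
            confirmed.add(name)
        if confirmed == set(symptoms):
            return "Suspected diagnosis: diabetes (all five signs confirmed)."
        return f"Noted '{name}'. {len(confirmed)}/5 signs confirmed so far."

    # "My child is thirsty and goes to urinate all day", mapped near the first symptom:
    print(report(np.array([0.22, 0.79])))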

  15. Specialist knowledge & vocabulary In order to test the viability of this approach for scientific material, we took as an example some knowledge about anatomy (260 concepts). Consider the following three phrases: • Joint in the arm between the humerus and the ulna. • Outgrowth of bone at the top end of the ulna, forming the point of the elbow, to which the muscle pulling the lower arm straight is fixed. • A rounded expansion at the end of a bone which goes into the hollow end of another bone, forming a joint with limited power of motion. These three concepts (the definitions of the elbow, olecranon and condyle respectively) lie very close to each other in the GCE space. Thus, unstructured knowledge can be acquired, stored in a form and position related to its meaning, and retrieved, with similar semantic units being stored in a similar shape and position regardless of the specific wording or grammar used, and without any kind of further training or association of ideas as in an ontology, for example; i.e. the system is able to “understand” the meaning of language and stores knowledge logically, accordingly. In other words, this GCE is able to acquire, store and retrieve these three concepts correctly and to identify that they are closely related. If more information is input, it can be displayed in the correct semantic position, and where many equivalent sentences are input, the GCE can fuse trajectories together as equivalent if their ideograms are similarly shaped and close.
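
A minimal sketch of the fusion test just described: two trajectories are treated as equivalent, and merged, when they have the same number of segments and their corresponding points lie close together. The coordinates and the threshold are illustrative assumptions:

    import numpy as np

    def fuse_if_equivalent(traj_a, traj_b, threshold=0.15):
        """Fuse two ideograms into one if they are similarly shaped and close."""
        a, b = np.asarray(traj_a), np.asarray(traj_b)
        if a.shape != b.shape:
            return None                        # differently shaped ideograms: not fused
        if np.linalg.norm(a - b, axis=1).mean() < threshold:
            return (a + b) / 2                 # fused trajectory replaces both entries
        return None

    elbow_v1 = [[0.50, 0.40], [0.80, 0.30], [0.85, 0.25]]
    elbow_v2 = [[0.52, 0.41], [0.79, 0.32], [0.86, 0.24]]   # same concept, different wording
    print(fuse_if_equivalent(elbow_v1, elbow_v2))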

  16. Specialist knowledge & vocabulary

  17. Comparison of information (I) As an example of the capability to compare incoming information, four paragraphs were compared and, by clustering analysis in the GCE space, those whose information was corroborated by one or more other sources were highlighted. The example chosen, at random, was the descriptions of the arrival of Christ in Jerusalem in the New Testament (Matthew 21 (1-11), Mark 11 (1-11), Luke 19 (28-40) & John 12 (12-19)), with no other text used either for comparison or to train the engine:

  18. Comparison of information (II) The Gospel of Matthew is the most corroborated, except for one fragment of text. Mark and Luke increasingly differ, while only one segment of what John reports is corroborated. Based on the corroboration of specific concepts by the other Gospels, we can conclude that the most reliable account is Matthew's, followed by Mark's and Luke's, and lastly John's, which is the least reliable; a view consistent with the order in which the Gospels are generally believed to have been written.
Matthew 21 (1-11) And when they were near Jerusalem, and had come to Beth-phage, to the Mountain of Olives, Jesus sent two disciples, Saying to them, Go into the little town in front of you, and straight away you will see an ass with a cord round her neck, and a young one with her; let them loose and come with them to me. And if anyone says anything to you, you will say, The Lord has need of them; and straight away he will send them. Now this took place so that these words of the prophet might come true, Say to the daughter of Zion, See, your King comes to you, gentle and seated on an ass, and on a young ass. And the disciples went and did as Jesus had given them orders, And got the ass and the young one, and put their clothing on them, and he took his seat on it. And all the people put their clothing down in the way; and others got branches from the trees, and put them down in the way. And those who went before him, and those who came after, gave loud cries, saying, Glory to the Son of David: A blessing on him who comes in the name of the Lord: Glory in the highest. And when he came into Jerusalem, all the town was moved, saying, Who is this? And the people said, This is the prophet Jesus, from Nazareth of Galilee.
Mark 11 (1-11) And when they came near to Jerusalem, to Beth-phage and Bethany, at the Mountain of Olives, he sent two of his disciples, And said to them, Go into the little town opposite: and when you come to it, you will see a young ass with a cord round his neck, on which no man has ever been seated; let him loose, and come back with him. And if anyone says to you, Why are you doing this? say, The Lord has need of him and will send him back straight away. And they went away and saw a young ass by the door outside in the open street; and they were getting him loose. And some of those who were there said to them, What are you doing, taking the ass? And they said to them the words which Jesus had said; and they let them go. And they took the young ass to Jesus, and put their clothing on him, and he got on his back. And a great number put down their clothing in the way; and others put down branches which they had taken from the fields. And those who went in front, and those who came after, were crying, Glory: A blessing on him who comes in the name of the Lord: A blessing on the coming kingdom of our father David: Glory in the highest. And he went into Jerusalem into the Temple; and after looking round about on all things, it being now evening, he went out to Bethany with the twelve.
Luke 19 (28-40) And when he had said this, he went on in front of them, going up to Jerusalem. And it came about that when he got near Beth-phage and Bethany by the mountain which is named the Mountain of Olives, he sent two of the disciples, Saying, Go into the little town in front of you, and on going in you will see a young ass fixed with a cord, on which no man has ever been seated; let him loose and take him. And if anyone says to you, Why are you taking him? say, The Lord has need of him.
And those whom he sent went away, and it was as he said. And when they were getting the young ass, the owners of it said to them, Why are you taking the young ass? And they said, The Lord has need of him. And they took him to Jesus, and they put their clothing on the ass, and Jesus got on to him. And while he went on his way they put their clothing down on the road in front of him. And when he came near the foot of the Mountain of Olives, all the disciples with loud voices gave praise to God with joy, because of all the great works which they had seen; Saying, A blessing on the King who comes in the name of the Lord; peace in heaven and glory in the highest. And some of the Pharisees among the people said to him, Master, make your disciples be quiet. And he said in answer, I say to you, if these men keep quiet, the very stones will be crying out.
John 12 (12-19) The day after, a great number of people who were there for the feast, when they had the news that Jesus was coming to Jerusalem, Took branches of palm-trees and went out to him, crying, A blessing on him who comes in the name of the Lord, the King of Israel! And Jesus saw a young ass and took his seat on it; as the Writings say, Have no fear, daughter of Zion: see your King is coming, seated on a young ass. (These things were not clear to his disciples at first: but when Jesus had been lifted up into his glory, then it came to their minds that these things in the Writings were about him and that they had been done to him.) Now the people who were with him when his voice came to Lazarus in the place of the dead, and gave him life again, had been talking about it. And that was the reason the people went out to him, because it had come to their ears that he had done this sign. Then the Pharisees said one to another, You see, you are unable to do anything: the world has gone after him.

  19. Memory and general knowledge We are also able to upload, in free-text form, general knowledge and memory such as a human brain would possess, so that the system can respond adequately and, for example, pass the Turing test. We uploaded a general knowledge and basic memory file into the GCE of our conversational agent Rachael Repp (www.rachaelrepp.org), where the source file can be found. It contains a little about history, about Rachael's house and also her family. Question/imperative/statement forms as well as anaphora & cataphora resolution are implemented in the conversational agent and are in the process of being ported to the GCE. Of course, the nearest concept to a user enquiry/comment could in fact be quite far away... hence the need for a substantial memory/knowledge base (vocabulary, books, memory) so that responses are closely related to the input. Many simultaneous sources can be used to update the knowledge base, each being displayed in a position dependent on its meaning, so that contextually important information will immediately be displayed to a user looking at a specific subject (i.e. a sphere of interest in space). We can also create dummy ideograms to give a reply or command in response to a certain concept; these can be embodied in a point, a trajectory or even a group of interconnected ideograms, and could have arguments, such as speed, bearing, etc.
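
A minimal sketch of such a dummy ideogram: a point placed in GCE space purely so that an attached reply or command fires when a user input maps close to it. The position, threshold and attached action here are invented for illustration:

    import numpy as np

    # A dummy ideogram: an (invented) position plus an attached action with arguments.
    dummy = {
        "position": np.array([0.3, 0.7, 0.2]),
        "action": lambda speed, bearing: f"Setting course: speed {speed} kt, bearing {bearing} deg",
    }

    def respond(user_point, speed, bearing, threshold=0.1):
        """Fire the attached command if the user input lands near the dummy ideogram."""
        if np.linalg.norm(user_point - dummy["position"]) < threshold:
            return dummy["action"](speed, bearing)
        return "No matching response ideogram near this concept."

    print(respond(np.array([0.32, 0.69, 0.21]), speed=12, bearing=270))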

  20. Logic capabilities: word problems We have implemented a logic function for equivalence based on a minimum distance between concepts, below which we define two concepts as being equivalent; for similarity we define a slightly larger range of distances over which we consider two concepts to be similar. There is another form of logic that is easy to implement using the well-known STUDENT algorithm: word problems. To test this capability, we stated “In my university, the number of professors is not enough for the number of students” and found, using a standard nearest-neighbour algorithm (the means of retrieval), that the nearest sentence in the memory space was “the number of students is 50 times the number of professors”. Then, by extracting all intersecting/interacting trajectories (i.e. a contextual information search), it is possible to derive other information such as “the number of professors plus the number of students is 2040” and other contextual information, which allows the system to respond to the question “what is the number of professors in your university?” with the correct answer: 40 professors. Of course, closely related knowledge could be nearby yet not actually intersect a set of trajectories already identified as being of interest, so it is possible to identify, by nearest-neighbour search, the closest piece of knowledge and then the trajectories intersecting that concept, in an iterative fashion, in order to widen the relevant knowledge base and enable the resolution of a given query or other input. It is this methodology for identifying relevant knowledge that is a key novelty. Once all relevant knowledge is available, there exist many methods in the prior art to resolve queries.
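
A minimal sketch of the final algebra step, once the two relevant sentences have been retrieved and translated into equations; that translation is assumed here, as in the classic STUDENT approach, and SymPy is used only for illustration:

    from sympy import symbols, Eq, solve

    professors, students = symbols("professors students")
    equations = [
        Eq(students, 50 * professors),      # "the number of students is 50 times the number of professors"
        Eq(professors + students, 2040),    # "the number of professors plus the number of students is 2040"
    ]
    print(solve(equations, [professors, students]))   # -> {professors: 40, students: 2000}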

  21. Logic capabilities: word problems Apart from implementing the many already-known methods, current work centres on developing more advanced and versatile expert-system architectures that can use the contextual-information and nearest-neighbour search functions to answer more sophisticated questions.

  22. GCE Overview • As initially configured, the GCE is a general system that only has a large vocabulary (i.e. basic abstract concepts) and can then be uploaded with information on any subject, as well as a more specialized vocabulary, i.e. a memory that can self-improve as it is fed with unstructured knowledge. • The GCE currently uses Simplish to acquire knowledge and has been loaded with a scientific dictionary of 30,000 common scientific concepts, as well as a few books (Tolstoy, Kafka, Machiavelli, etc.) and some general knowledge, chosen to demonstrate the technology in a conversational agent. • The uploaded data in the engine can be fused together where equivalent concepts are found and used to recalculate the original ordering kernel, thus altering its memory structure and making it better able to deal with the specific material being uploaded, i.e. a self-improving system. • The GCE is able to successfully associate concepts regardless of the grammar or wording used, and it is also able to combine contextual information and user-supplied data, thanks to the fact that the position of information is given by its meaning, regardless of the source, method or order of data input. • Currently, the GCE is able to identify equivalence and similarity between concepts, perform clustering and nearest-neighbour search, find contextual information, and carry out simple word algebra. • Many improvements remain to be implemented but, in its current state, it is to our knowledge the most advanced realization of a general cognition engine to date.

  23. A practical application: Rachael Repp bot • A first step towards an artificial cognition system is a system capable of intelligent human-machine interaction. • Rachael is a conversational agent that can be contacted at www.rachaelrepp.org • She has memory as previously described, including some recollection of her personal details, history, science and knowledge of a few books. • She can also work out algebra and logic problems (STUDENT), and perform clustering and syntactic analysis. • She can understand standard English of 100,000 words using Simplish. • Rachael can also make some simple semantic associations. • She’ll be competing for the Loebner Prize in 2013. • She has a multimedia interface built in Blender.

  24. Current Work • Endow Rachael with more extensive memory and knowledge. • Extend her logic capability. • Improve general cognition engine. • Improvements to the Simplish tool. • Expansion of Rachael's vocabulary. • Improvements to cognition capabilities to go beyond memory.

  25. Potential short-term applications • All-source intelligence analysis (can accommodate contextual data and multiple-source trajectories), allowing actionable conclusions to be reached. • Human-machine interaction. • Analysis of large volumes of Internet data. • Semantic search engines • Data mining • Databases • Games • Bio-informatics • Expert systems • Virtual assistants

  26. Initial application chosen for licensing • Databases have been chosen as the first commercial application area in which we shall seek to license our technology. • Our technology is protected by UK patent application No. GB1301143.2, filed on 22 January 2013. • The main players in this field are: SAP, Oracle, SAS, Microsoft and IBM. • We are approaching these companies and will choose the most attractive economic proposal, subject to our strategic limitations (licensing only, preferably non-exclusive, product- and niche-specific). • Our initial internal valuation of this licence is based on an estimate of the impact of our technology on the sales of the above companies, in a market we estimate at 24 billion dollars with a penetration potential for this technology of the order of 20%, factoring in a 10% royalty, a 5% interest rate and a contract duration of 10 years. • The auction process will be that of “best and final offer” between interested parties.

  27. Who we are • The Goodwill Company Ltd. was established as a private limited company on the 12th of September 2000 in Guildford, England where it is registered at Companies House with No. 4070363. The company is owned by 75% British capital and 25% Mexican capital, with offices both in the UK and in Guadalajara, Mexico. • Our principal line of business is doing research and development both for our own projects and for third parties, mainly in the aerospace, electronics, IT and defense industries. • Past customers include Technicolor Inc., Hitachi GST, Alstom Power PLC, BAE Systems PLC, Rolls-Royce PLC, Mexico City's International Airport Authority (ASA) and more recently, for an artificial intelligence project, the Information and Intelligence Warfare Directorate of the U.S. Army (I2WD-CERDEC). • Current R&D programmes include an entirely autonomous solar-powered micro-helicopter, hybrid fuel cells, an Android basic English language learning app using advanced AI techniques, an online automatic English simplifying tool, geographical information systems, an artificial cognition systems programme, a night-vision advanced camera for defense applications and 3D reconstruction from video, among others.

  28. Thank You!! www.thegoodwillcompany.co.uk

  29. References • Deerwester, S. et al., “Indexing by latent semantic analysis”, J. Am. Soc. Inf. Sci., vol. 41, pp. 391, 1990. • Landauer, T.K. et al., “Introduction to latent semantic analysis”, Discourse Process., vol. 25, pp. 259, 1998. • Lund, K. & Burgess, C., “Producing high-dimensional semantic spaces from lexical co-occurrence”, Behav. Res. Methods Instrum. Comput., vol. 28, pp. 203, 1996. • Hofmann, T., “Probabilistic latent semantic analysis”, Proc. Uncertainty in Artificial Intelligence, UAI’99, Stockholm, 1999. • Blei, D.M. et al., “Latent Dirichlet allocation”, J. Mach. Learn. Res., vol. 3, pp. 993, 2003. • Griffiths, T.L. & Steyvers, M., “Word association spaces for predicting semantic similarity effects in episodic memory”, to appear in Experimental Cognitive Psychology and its Applications: Festschrift in Honour of L. Bourne, W. Kintsch and T. Landauer, ed. A. Healy. • Landauer, T.K. & Dumais, S.T., “A solution to Plato’s problem: the latent semantic analysis theory of the acquisition, induction, and representation of knowledge”, Psychol. Rev., vol. 104, pp. 211, 1997.

  30. References • Landauer, T.K., “On the computational basis of learning and cognition: arguments from LSA”, Psychology of Learning and Motivation, vol. 41, ed. B.H. Ross (New York: Academic), pp. 43, 2002. • Landauer, T.K. et al., “Learning human-like knowledge by singular value decomposition: a progress report”, Advances in Neural Information Processing Systems, vol. 10 (Cambridge, MA: MIT Press), pp. 45, 1998. • Davies, J., Studer, R. & Warren, P. (eds.), “Semantic Web Technologies”, Wiley, 2006. • Jurafsky, D. & Martin, J.H., “Speech and Language Processing”, Prentice Hall, 2000. • Katona, G., “Organizing and Memorizing”, New York: Columbia University Press, 1940. • Moorhouse, A.C., “The Triumph of the Alphabet”, Henry Schuman, New York, 1953. • Reisberg, D., “Cognition”, Third Edition, W.W. Norton & Co., 2006. • Rosch, Eleanor, “Principles of categorization”, in E. Rosch & B.B. Lloyd (eds.), Cognition and Categorization, pp. 27-48, Hillsdale, NJ: Erlbaum, 1978. • Wittgenstein, L., “Philosophical Investigations” (G.E.M. Anscombe, trans.), Oxford, England: Blackwell, 1953.
