
UAST and Evolving Systems of Systems in the Age of the Black Swan






Presentation Transcript


  1. UAST and Evolving Systems of Systems in the Age of the Black Swan, Part 2: On Detecting Aberrant Behavior. www.parshift.com/Files/PsiDocs/Pap090901IteaJ-PathsForPeerBehaviorMonitoringAmongUAS.pdf www.parshift.com/Files/PsiDocs/Pap091201IteaJ-MethodsForPeerBehaviorMonitoringAmongUas.pdf “There is no difficulty, in principle, in developing synthetic organisms as complex and as intelligent as we please. But we must notice two fundamental qualifications; first, their intelligence will be an adaptation to, and a specialization towards, their particular environment, with no implication of validity for any other environment such as ours; and secondly, their intelligence will be directed towards keeping their own essential variables within limits. They will be fundamentally selfish.” Principles of the Self-Organizing System, W. Ross Ashby, 1962. Based on a presentation at the UAST Tutorial Session, ITEA LVC Conference, 12 Jan 2009, El Paso, TX. UAST: Unmanned Autonomous Systems Test. (Speaker note: also L3; Art Brooks did a Master's paper here.)

  2. Domain Independent Principles Can Inform UAST ConOps [Context diagram: Class 1 and Class 2 (federated?) systems under test sit within a testing enterprise (UAST, UASoS, testing systems), itself embedded in an environment (an ecology) of politics, technology, government procedures, military procedures, military reality, competitors, and enemy systems.]

  3. Problem and Observation • Self-organizing systems of systems are too complex to test beyond “minimal” functionality and “apparent” rationality. • Autonomous self-organizing entities have a willful mind of their own. • Unpredictable emergent behavior will occur in unpredictable situations. • Emergent behavior is necessary and desirable (when appropriate). • Inevitable: sub-system failure, command failure, enemy possession. • UAS will work together as flocks, swarms, packs, and teams. • Even human social systems exhibit unintended “lethal” consequences. • In biological social systems, members monitor and enforce behavior bounds. • Could UAS have built-in socially attentive monitoring (SAM) on mission? • Could UAST employ SAM proxies for monitoring antisocial UAS? • Challenges: 1) “learning” the behavior patterns to monitor; 2) technology for monitoring complex dynamic patterns in real time; 3) decisive counter-consequence action.

  4. Survey on Lethality and Autonomous Systems: Responsibility for Lethal Errors by Responsible Party. The soldier was found to be the most responsible party, and robots the least. Lilia Moshkina, Ronald C. Arkin, Lethality and Autonomous Systems: Survey Design and Results, Technical Report GIT-GVU-07-16, Mobile Robot Laboratory, College of Computing, Georgia Institute of Technology, p. 30, 2007. www.cc.gatech.edu/ai/robot-lab/online-publications/MoshkinaArkinTechReport2008.pdf

  5. • Applicability of ethical categories is ranked from more concrete and specific to more general and subjective. • Lilia Moshkina, Ronald C. Arkin, Lethality and Autonomous Systems: Survey Design and Results, Technical Report GIT-GVU-07-16, Mobile Robot Laboratory, College of Computing, Georgia Institute of Technology, p. 29, 2007. www.cc.gatech.edu/ai/robot-lab/online-publications/MoshkinaArkinTechReport2008.pdf

  6. The Four (originally Three) Laws of Robotics (Isaac Asimov) 0) A robot may not harm humanity, or, by inaction, allow humanity to come to harm (added later). 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2) A robot must obey orders given it by human beings except where such orders would conflict with the First Law. 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. This cover of I, Robot illustrates the story "Runaround", the first to list all Three Laws of Robotics (Asimov, 1942).

  7. Self-Organizing Inevitability • Isaac Asimov's three laws of robotics were developed to allow UxVs to coexist with humans, under values held dear by humans (imposed on robots). • These were not weapon systems. Asimov’s robots existed in a peaceful social environment. Ours are birthing into a community of warfighters, with enemies, cyber warfare, great destructive capabilities, human confusion, and a code of war. • Ashby notes that a self-organizing system by definition behaves selfishly, and warns that its behaviors may be at odds with its creators'. • So – can we afford to build truly self-organizing systems? • A foolish question. We will do that regardless of the possible dangers, just as we opened the door to atomic energy, biohazards, organism creation, nanotechnology, and financial meltdown. • Can a cruise missile on a mission be hacked and turned to the enemy’s bidding? Perhaps we can say that it hasn’t occurred yet. Can a cruise missile get sick or confused, and hit something it shouldn’t? That’s already happened. • The issue is not “has it happened”. The issue is “can it happen”. • We cannot test away bad things from happening, so we had better be vigilant for signs of imminence, and have actionable options when the time has come.

  8. Four Selfish (Potential) Guiding Principles (for synthetics) • Protection of permission to exist (civilians, public assets) • Protection of mission • Protection of self • Protection of others of like kind • A safety mechanism based on principles, for we can never itemize all of the situational patterns and the appropriate response to each.

  9.–11. [Image slides: artwork credited to Arturo Medina; the third adds "… and here’s the Cat’s Cradle".]

  12. wip.warnerbros.com/marchofthepenguins/ • Aberrant behavior arising in a stable social system is detected and opposed. • Example: A female penguin attempting to steal a replacement egg for the one she lost is prevented from doing so by others.

  13. Ganging Up on Aberrant Behavior T. Monnin, F.L.W. Ratnieks, G.R. Jones, R. Beard, Pretender punishment induced by chemical signaling in a queenless ant, Nature, v. 419, 5 Sep 2002. • Queenless ponerine ants have no queen caste. All females are workers who can potentially mate and reproduce. A single “gamergate” emerges, by virtue of alpha rank in a near-linear dominance hierarchy of about 3–5 high-ranking workers. Usually the beta replaces the gamergate if she dies. A high-ranker can enhance her inclusive fitness by overthrowing the gamergate, rather than waiting for her to die naturally. • (a) To end coup behavior, the gamergate (left) approaches the pretender, usually from behind or from the side, briefly rubs her sting against the pretender depositing a chemical signal, then runs away, leaving subsequent discipline to others. • (b) One to six low-ranking workers bite and hold the appendages of the pretender for up to 3–4 days, with workers taking turns. Immobilization can last several days, and typically results in the pretender losing her high rank. It is not clear why punishment causes loss of rank, but it is probably a combination of the stress caused by immobilization and being prevented from performing dominance behaviours. Occasionally the immobilized individual is killed outright. http://lasi.group.shef.ac.uk/pdf/mrjbnature2002.pdf

  14. Promising Things to Leverage Social pattern monitoring • Relationships (Gal Kaminka, Ph.D. dissertation) • Trajectories (Stephen Intille, Ph.D. dissertation) • Emergence (Sviatoslav Braynov, repurposed algorithm concepts) Technology and Knowledge • Human expertise (Gary Klein, Philip Ross, Herb Simon) • Biological feedforward hierarchies (Thomas Serre, Ph.D. dissertation) • Parallel pattern processor (Curt Harris, VLSI architecture)

  15. Accuracy: Decentralized Beats Centralized Monitoring • “We explore socially-attentive algorithms for detecting teamwork failures under various conditions of uncertainty, resulting from the necessity of selectivity. • We analytically show that despite the presence of uncertainty about the actual state of monitored agents, a centralized active monitoring scheme can guarantee failure detection that is either sound and incomplete, or complete and unsound. • [centralized: no false positives (sound) or no false negatives (complete), but not both] • However, this requires monitoring all agents in a team, and reasoning about multiple hypotheses as to their actual state. • We then show that active distributed teamwork monitoring results in sound and complete detection capabilities, despite using a much simpler algorithm. By exploiting the agents’ local states, which are not available to the centralized algorithm, the distributed algorithm: (a) uses only a single, possibly incorrect hypothesis of the actual state of monitored agents, and (b) involves monitoring only key agents in a team, not necessarily all team-members (thus allowing even greater selectivity).” From: Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, 2000, p. 6. www.isi.edu/soar/galk/Publications/diss-final.ps.gz.

  16. Execution Monitoring in Multi-Agent Environments Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, www.isi.edu/soar/galk/Publications/diss-final.ps.gz. A key goal of monitoring other agents: • Detect violations of the relationships the agent is involved in • Compare expected relationships to those actually maintained • Diagnose violations, leading to recovery Motivation for relationship failure-detection: • Covers a large class of failures • Critical for robust performance of the entire team Relationship models specify how agents’ states are related: • Formation model specifies relative velocities, distances • Teamwork model specifies that team plans are jointly executed • Many others: coordination, mutual exclusion, etc. Agent Modeling: • Infer agents’ states from observed actions via plan-recognition • Monitor agents and attributes specified by relationship models [Diagram labels: enemy; attacker incorrectly flying with scout; attacker correctly waiting for scout report; scout looking for enemy.]
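The formation model mentioned on this slide constrains relative distances and velocities between teammates. A minimal sketch of such a relationship check, in Python, with illustrative (not Kaminka's) function names, bounds, and positions:

```python
import math

# Hedged sketch of a relationship-model check: a formation model bounds the
# separation and velocity mismatch between two agents; exceeding either bound
# flags a possible teamwork failure. Thresholds here are invented examples.

def formation_violation(pos_a, pos_b, vel_a, vel_b,
                        max_separation=50.0, max_speed_delta=5.0):
    """Return a list of violated formation constraints (empty if none)."""
    violations = []
    separation = math.dist(pos_a, pos_b)
    if separation > max_separation:
        violations.append(f"separation {separation:.1f} > {max_separation}")
    speed_delta = math.dist(vel_a, vel_b)
    if speed_delta > max_speed_delta:
        violations.append(f"velocity mismatch {speed_delta:.1f} > {max_speed_delta}")
    return violations

# Scout holding position while the attacker incorrectly races ahead:
print(formation_violation((0, 0), (80, 0), (0, 0), (12, 0)))
```

A real monitor would first infer each agent's state via plan recognition, as the slide notes; the range check above is only the final comparison step.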

  17. Identifying Football Play Patterns from Real Game Films Visual Recognition of Multi-Agent Action, Stephen Sean Intille, Ph.D. Thesis, MIT, 1999. http://web.media.mit.edu/~intille/papers-files/thesis.pdf • The task of recognizing American football plays was selected to investigate the general problem of multi-agent action recognition. A p51curl play: it doesn’t happen like the chalkboard, but is still recognizable. Chalkboard patterns a receiver can run. This work indicates one method for monitoring multi-agent performance according to plan.

  18. Maybe Even… Detecting Emergent Behaviors in Process Sviatoslav Braynov, Murtuza Jadliwala, Detecting Malicious Groups of Agents, The First IEEE Symposium on Multi-Agent Security and Survivability, 2004. • “In this paper, we studied coordinated attacks and the problem of detecting malicious networks of attackers. The paper proposed a formal method and an algorithm for detecting action interference between users. The output of the algorithm is a coordination graph which includes the maximal malicious group of attackers, including not only the executers of an attack but also their assistants. The paper also proposed a formal metric on coordination graphs that helps differentiate central from peripheral attackers.” • “Because the methods proposed in the paper allow for detecting interference between perfectly legal actions, they can be used for detecting attacks at their early stages of preparation. For example, coordination graphs can show all agents and activities directly or indirectly related to suspicious users.” • ------------------------- conjecture begging investigation ------------------------- • This work focused on identifying the members of a group of “perpetrators” among a group of “benigns”, based on their cooperative behaviors in causing an event. It is applied in both forensic analysis and in predictive trend spotting. • It may be a methodology for identifying the conditions of specific emergent behavior after the fact – for “learning” new patterns of future use. • It may also provide an early warning mechanism for detecting emergent aberrant team behavior, rather than aberrant UAS behavior.
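The coordination-graph idea above can be sketched in a few lines: draw an edge between two users whenever their (individually legal) actions interfere, then take connected components as candidate coordinated groups. The `interferes` predicate and the toy "shared resource" data below are stand-ins; the paper derives interference formally.

```python
from itertools import combinations

# Hedged sketch of a coordination graph: nodes are users, edges mark action
# interference, and connected components are candidate coordinated groups.

def coordination_graph(actions, interferes):
    graph = {user: set() for user in actions}
    for a, b in combinations(actions, 2):
        if interferes(actions[a], actions[b]):
            graph[a].add(b)
            graph[b].add(a)
    return graph

def connected_groups(graph):
    seen, groups = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            node = stack.pop()
            if node not in group:
                group.add(node)
                stack.extend(graph[node] - group)
        seen |= group
        groups.append(group)
    return groups

# Toy example: actions touching the same resource "interfere".
acts = {"u1": {"r1"}, "u2": {"r1", "r2"}, "u3": {"r2"}, "u4": {"r9"}}
overlap = lambda x, y: bool(x & y)
print(connected_groups(coordination_graph(acts, overlap)))
# u1-u2-u3 form one candidate group via pairwise interference; u4 is isolated.
```

Note how u1 and u3 land in the same group without interfering directly, which is the "assistants, not just executers" point in the quoted abstract.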

  19. The RPD (Recognition-Primed Decision) model offers an account of situation awareness. It presents several aspects of situation awareness that emerge once a person recognizes a situation: the relevant cues that need to be monitored, the plausible goals to pursue and actions to consider, and the expectancies. Another aspect of situation awareness is the leverage points. When an expert describes a situation to someone else, he or she may highlight these leverage points as the central aspects of the dynamics of the situation. • Experts see inside events and objects. They have mental models of how tasks are supposed to be performed, teams are supposed to coordinate, and equipment is supposed to function. This model lets them know what to expect and lets them notice when the expectancies are violated. These two aspects of expertise are based, in part, on the experts’ mental models. Gary Klein (1998), Sources of Power: How People Make Decisions, 2nd MIT Press paperback edition, Cambridge, MA, p. 152.

  20. 'Field Sense’ Gretzky-Style • Jennifer Kahn, Wayne Gretzky-Style 'Field Sense' May Be Teachable, Wired Magazine, May 22, 2007. • www.wired.com/science/discoveries/magazine/15-06/ff_mindgames# Five seconds of the 1984 hockey game between the Edmonton Oilers and the Minnesota North Stars: The star of this sequence is Wayne Gretzky, widely considered the greatest hockey player of all time. In the footage, Gretzky, barreling down the ice at full speed, draws the attention of two defenders. As they converge on what everyone assumes will be a shot on goal, Gretzky abruptly fires the puck backward, without looking, to a teammate racing up the opposite wing. The pass is timed so perfectly that the receiver doesn't even break stride. "Magic," Vint says reverently. A researcher with the US Olympic Committee, he collects moments like this. Vint is a connoisseur of what coaches call field sense or "vision," and he makes a habit of deconstructing psychic plays: analyzing the steals of Larry Bird and parsing Joe Montana's uncanny ability to calculate the movements of every person on the field.

  21. The Stuff of Expertise • Research indicates that human expertise (extreme domain-specific sense-making) is primarily a matter of meaningful pattern quantity – not better genes. • According to an interview with Nobel Prize winner Herb Simon (Ross 1998), people considered truly expert in a domain (e.g. chess masters, medical diagnosticians) are thought unable to achieve that level until they’ve accumulated some 200,000 to a million meaningful patterns, requiring some 20,000 hours of purposeful, focused pattern development. • The accuracy of their sense-making is a function of the breadth and depth of their pattern catalog. • In biological entities, the accumulation of large expert-level pattern quantities does not manifest as slower recognition time. • All patterns seem to be considered simultaneously for decisive action. There is no search-and-evaluation activity evident. • On the contrary, automated systems, regardless of how they obtain and represent learned reference patterns, execute time-consuming sequential steps to sort through pattern libraries and perform statistical feature mathematics. • This is the nature of the computing mechanisms and recognition algorithms employed in their service. Philip Ross (1998), “Flash of Genius,” an interview with Herbert Simon, Forbes, November 16, pp. 98–104, www.forbes.com//forbes/1998/1116/6211098a.html. Also: Philip Ross, The Expert Mind, Scientific American, July 2006.

  22. Reverse Engineering the Brain • Rapid visual categorization • Visual input can be classified very rapidly… around 120 msec following image onset… At this speed, it is no surprise that subjects often respond without having consciously seen the image; consciousness of the image may come later or not at all. • Dual-task and dual-presentation paradigms support the idea that such discriminations can occur in the near-absence of focal, spatial attention, implying that purely feed-forward networks can support complex visual decision-making in the absence of both attention and consciousness. • This has now been formally shown in the context of a purely feed-forward computational model of the primate’s ventral visual system (Serre et al., 2007). www.scholarpedia.org/article/Attention_and_consciousness/processing_without_attention_and_consciousness www.technologyreview.com/printer_friendly_article.aspx?id=17111

  23. Explaining Rapid Categorization. Thomas Serre, Aude Oliva, Tomaso Poggio. http://cbcl.mit.edu/seminars-workshops/workshops/serre-slides.pdf

  24. The Monitoring Selectivity Problem: Unacceptable Accuracy Compromise • “A key problem emerges when monitoring multiple agents: a monitoring agent must be selective in its monitoring activities (both raw observations and processing), since bandwidth and computational limitations prohibit the agent from monitoring all other agents to full extent, all the time. • However, selectivity in monitoring activities leads to uncertainty about monitored agents’ states, which can lead to degraded monitoring performance. We call this challenging problem the Monitoring Selectivity Problem: Monitoring multiple agents requires overhead that hurts performance; but at the same time, minimization of the monitoring overhead can lead to monitoring uncertainty that also hurts performance. • Key questions remain open: • What are the bounds of selectivity that still facilitate effective monitoring? • How can monitoring accuracy be maintained in the face of limited knowledge of other agents’ states? • How can monitoring be carried out efficiently for on-line deployment? • This dissertation begins to address the monitoring selectivity problem in teams by investigating requirements for effective monitoring in two monitoring tasks: detecting failures in maintaining relationships, and determining the state of a distributed team (for both failure detection and visualization).” From: Gal A. Kaminka, Execution Monitoring in Multi-Agent Environments, Ph.D. Dissertation, USC, 2000, pp. 3-4. www.isi.edu/soar/galk/Publications/diss-final.ps.gz.

  25. Processor Recognition Speed Independent of Pattern Quantity and Complexity • Comparison shows the pattern processor’s flat, constant-speed recognition vs. a typical computational alternative; the example was chosen for ready availability. [Chart: nanoseconds per packet (0–4000) vs. number of rules employed (0–600), for 8 million real packets run on a 3.06 GHz Intel Xeon processor; traces for the Snort 2.6 packet-header interpreter, the interpreter replaced with native code, and the pattern processor’s comparative speed (unbounded).] Snort chart source: Alok Tongaonkar, Sreenaath Vasudevan, R. Sekar, Fast Packet Classification for Snort by Native Compilation of Rules, Proceedings of the 22nd Large Installation System Administration Conference (LISA '08), USENIX, Nov 9–14, 2008. www.usenix.org/events/lisa08/tech/full_papers/tongaonkar/tongaonkar_html/index.html Processor info source: Rick Dove, Pattern Recognition without Tradeoffs: Scalable Accuracy with No Impact on Speed, to appear in Proceedings of the Cybersecurity Applications & Technology Conference for Homeland Security, IEEE, April 2009. www.kennentech.com/Pubs/2009-PatternRecognitionWithoutTradeoffs-6Page.pdf

  26. Reconfigurable Pattern Processor: Reusable Cells Reconfigurable in a Scalable Architecture • Independent detection cell: content-addressable by the current input byte. • If active, and satisfied with the current byte, a cell can activate other designated cells, including itself (cell-satisfaction activation and output pointers). • Up to 256 possible features can be “satisfied” by all so-designated byte values. • Individual detection cells are configured into feature-cell machines by linking activation pointers (adjacent-cell pointers not depicted here). • An unbounded number of feature cells, configured as feature-cell machines, can extend indefinitely across multiple processors. • All active cells have simultaneous access to the current data-stream byte.
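A minimal software emulation of the detection-cell idea on this slide: each cell is satisfied by a set of byte values and, when satisfied, activates the cells its pointers designate. The hardware examines all active cells in parallel per byte; the sweep loop below stands in for that. The cell wiring and the keep-entry-cells-active choice are illustrative assumptions, not the actual VLSI design.

```python
# Hedged sketch of detection cells and activation pointers. A cell is
# "satisfied" when the current data-stream byte is in its satisfying set;
# satisfied cells emit their output label (if any) and activate the cells
# their pointers designate.

class Cell:
    def __init__(self, satisfying_bytes, activates, output=None):
        self.satisfying = set(satisfying_bytes)
        self.activates = activates      # indices of cells to activate next
        self.output = output            # label emitted when satisfied, if any

def run(cells, entry, stream):
    active, outputs = set(entry), []
    for byte in stream:
        satisfied = [i for i in active if byte in cells[i].satisfying]
        next_active = set()
        for i in satisfied:
            if cells[i].output:
                outputs.append(cells[i].output)
            next_active.update(cells[i].activates)
        # Assumption: entry cells stay active so a match can start at any offset.
        next_active |= set(entry)
        active = next_active
    return outputs

# A three-cell feature-cell machine recognizing the byte sequence b"UAS":
cells = [Cell(b"U", [1]), Cell(b"A", [2]), Cell(b"S", [], output="UAS-match")]
print(run(cells, {0}, b"XXUAS"))   # ['UAS-match']
```

The point of the architecture is that adding more cell machines does not slow this per-byte step down in hardware, which is what the flat curve on the previous slide illustrates.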

  27. Simple Example: Pattern Classification Method Suitable for Many Syntactic, Attributed-Grammar, and Statistical Approaches [Partial conceptual architecture stack, layered: ½ million detection cells; configured FCMs (FCM-1 … FCM-n); FCM activation pointers; output transform pointers; multiple threshold down counters.] • Output transforms include logical union (output register P), logical intersection (output register S), threshold counters (output register T), and reinitialization (output register R). • Very simple weighted-feature example: FCMs feed class down counters through weighted output pointers (e.g. weight = 3 to counter 1 / Class-1, weight = 2 to counter 2 / Class-2, and counters 3 and 4 for Class-3 and Class-4); a class classification output occurs for any down counter reaching zero. • Additional transforms provide sub-pattern combination logic. • Feature-cell machines, as depicted, could represent sub-patterns or “chunked” features shared by multiple pattern classes. Padded FCM-7 and FCM-n increase feature weight with multiple down counts. On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission, www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
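The weighted-feature classification step on this slide can be sketched directly: each class has a down counter initialized to a threshold, each firing FCM decrements the counters it points to by its weight, and a counter reaching zero emits its class. The wiring, weights, and thresholds below are invented for illustration.

```python
# Hedged sketch of threshold down-counter classification. `wiring` maps each
# FCM to (class, weight) pairs; `thresholds` gives each class's initial count.

def classify(fcm_firings, wiring, thresholds):
    counters = dict(thresholds)         # fresh down counters per run
    emitted = []
    for fcm in fcm_firings:
        for klass, weight in wiring.get(fcm, []):
            if klass in counters:
                counters[klass] -= weight
                if counters[klass] <= 0:
                    emitted.append(klass)
                    del counters[klass]  # emit each class at most once
    return emitted

wiring = {
    "FCM-1": [("Class-1", 3)],                       # weight-3 feature
    "FCM-2": [("Class-1", 2), ("Class-2", 2)],       # shared "chunked" feature
    "FCM-3": [("Class-2", 1)],
}
thresholds = {"Class-1": 5, "Class-2": 3}
print(classify(["FCM-1", "FCM-2"], wiring, thresholds))  # ['Class-1']
```

FCM-2 feeding two counters mirrors the slide's point that one feature-cell machine can be shared by multiple pattern classes.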

  28. Value-Based Feature Example • A reference pattern example for behavior verification of a mobile object: is it traveling within the planned space/time envelope? • Uses GPS position data – latitude, longitude, altitude – each tested against an acceptable range of values (256 distance values; minimum separation; relative or absolute; linear, log, or other scale). • An FCM is configured to classify failure (F) or success (S). On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission, www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
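The behavior-verification feature on this slide reduces to testing each GPS field against an acceptable range for the current mission phase. A plain range check stands in for the cell machine here; the envelope values are invented for illustration.

```python
# Hedged sketch of the space/time-envelope check an FCM would perform.
# Envelope values are illustrative assumptions, not from the slide.

ENVELOPE = {                     # acceptable (min, max) per field
    "lat": (33.90, 34.10),
    "lon": (-107.60, -107.40),
    "alt": (1200.0, 1800.0),     # metres
}

def verify_position(fix, envelope=ENVELOPE):
    """Return 'S' (success) if every field is in range, else 'F' (failure)."""
    for field, (lo, hi) in envelope.items():
        if not (lo <= fix[field] <= hi):
            return "F"
    return "S"

print(verify_position({"lat": 34.0, "lon": -107.5, "alt": 1500.0}))  # S
print(verify_position({"lat": 34.0, "lon": -107.5, "alt": 2500.0}))  # F
```

A time-varying envelope would simply swap in a different (min, max) table per mission segment, which is what the next slide's header-selected pattern sets do.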

  29. Example: Monitoring Complex Multi-Agent Behaviors • Packetized data can use multi-part headers to activate the appropriate reference pattern sets for different times. • E.g. UAS ID 001.002 on Task ID 003.018 activates FCM-49, while the same UAS on Task ID 003.002 activates FCM-50; each tests LAT/LON/ALT and outputs failure (F) or success (S). On detecting and classifying aberrant behavior in unmanned autonomous systems under test and on mission, www.kennentech.com/Pubs/2009-OnDetectingAndClassifyingAberrantBehaviorInUAS.pdf
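The multi-part-header dispatch on this slide can be sketched as a lookup: the (UAS ID, Task ID) pair selects which reference pattern set checks the payload, so one stream can carry many vehicles on many tasks. The IDs follow the slide's example; the envelopes are invented.

```python
# Hedged sketch of header-selected reference pattern sets. Altitude-only
# envelopes stand in for the full LAT/LON/ALT feature-cell machines.

def in_envelope(fix, env):
    return all(lo <= fix[k] <= hi for k, (lo, hi) in env.items())

# Pattern sets keyed by (UAS ID, Task ID), standing in for FCM-49 / FCM-50:
PATTERN_SETS = {
    ("001.002", "003.018"): {"alt": (500.0, 900.0)},
    ("001.002", "003.002"): {"alt": (1200.0, 1800.0)},
}

def check_packet(packet):
    env = PATTERN_SETS.get((packet["uas_id"], packet["task_id"]))
    if env is None:
        return "F"               # unknown vehicle/task combination
    return "S" if in_envelope(packet["fix"], env) else "F"

pkt = {"uas_id": "001.002", "task_id": "003.002", "fix": {"alt": 1500.0}}
print(check_packet(pkt))   # S
```

In the pattern processor the "lookup" is itself done by cells matching the header bytes, so dispatch and payload checking happen in the same per-byte sweep.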

  30. Nature has sufficient, but not necessarily optimal, systems. One example: Hybrid Adaptation Could Improve on Natural Systems.
Evolutionary (applies to populations): • Outcome: produces new design features. • Time for one loop to execute: period between generations – generally slow compared to the timescale of actions. • Parallelism of processing through interaction: highly parallel – every member of the population is a simultaneous experiment ‘evaluating’ the fitness of one set of variations. • Context sensitivity: in retrospect only – through some variations turning out to be fitter in the context than others. • Alignment of fitness and selection mechanism: 100%.
Learning (applies to individuals): • Outcome: improves use of a fixed design. • Time for one loop: period for one action (sense-process-decide-act) loop, plus the associated learning (observe action consequences – process – make changes) loop. • Parallelism: serial – an individual system or organism experiments with one strategy at a time. • Context sensitivity: in anticipation – i.e. before choice of action or response – as well as in retrospect through feedback from consequences of action. • Alignment: highly variable.
Hybrid or Augmented (applies to either): • Outcome: may be able to do both, or do either better. • Loop time: could be accelerated. • Parallelism: could use the learning mechanism to create directed evolution, and evolutionary strategies to improve learning; could also parallelize learning through either parallel processing in a single individual, or through networking a population of learning systems. • Context sensitivity: could be extended to influence design choices as well as action choices. • Alignment: could be improved in learning systems by developing better proxies for fitness to drive selection.
Grisogono, A.M. “The Implications of Complex Adaptive Systems Theory for C2.” Proceedings of the 2006 Command and Control Research and Technology Symposium, 2006, www.dodccrp.org/events/2006_CCRTS/html/papers/202.pdf

  31. Related Implications and Points • T&E cannot be limited to pre-deployment – it must be an ongoing, never-ending activity built into the SoS operating methods. • LVC – Put the tester into the environment – total VR immersion – as a player with intervention capability (the ultimate driving machine). Humans will “see” experientially and recognize things in real time that forensics and remote data analysis will not recognize. • These things we build are not children that we can watch and guide and correct. They need a sense of ethics and principles that informs unforeseen situational response. • The biological “expertise” pattern-recognition capability needs to exist both in the testing environment and on board. We are building intelligent, willful entities that carry weapons.

  32. Status Q1 2010 • Kaminka’s Socially Attentive Monitoring examples are modeled. • Intille’s trajectory-recognition modeling was started; another approach is work in progress. • Serre’s feedforward-hierarchy image recognition, Level 1, is modeled. • These algorithm models reside with others in a wiki investigating collaborative parallel-algorithm development. • A processor emulator/compiler exists for algorithm modeling. • One defense contractor is already working on a classified project. • VLSI availability ETA Q1 2012. • ~128,000 feature cells expected for first-generation modules. • Chips can be combined for unbounded scalability. • Pursuit of interesting problems to attack with this new capability… • x Inc: Collision avoidance in cluttered airspace. • PSI Inc: Distributed anomaly detection, and hierarchical sensemaking. • OntoLogic LLC: Secure software code verification. • This work was supported in part by U.S. Department of Homeland Security award NBCHC070016.

  33. Aberrant behavior will not be tolerated!
