Agent animation: capabilities, issues, and trends

  1. Agent animation: capabilities, issues, and trends. Paolo Petta, Austrian Research Institute for Artificial Intelligence, Vienna

  2. Introduction • Computer animation developments • Geometry • Resolution, detail • Model-driven dynamics • Ambient physics modeling, Behavioural modeling • Control • Interactivity, communication techniques, autonomy, learning • Population • Multiple actors, distributed systems

  3. Typical Applications • Synthetic characters, virtual humans, visualisation/simulation • Design choices • “Sparse” top-down models vs. “complete” bottom-up models • Application requirements • deep-and-narrow vs. broad-and-shallow

  4. Research topics spanning Artificial Intelligence, Robotics, User Interface, Animation, Geometric Modelling, Physics, and Image Synthesis (diagram): user interfaces for emotion control, actor behaviour/emotion control, vision-based animation, path planning, kinematics, dynamics, walking models, object grasping, behavioural animation, spatial relationships, shape transformation, collision detection, facial animation, cloth animation, muscle models, collision responses, finite-element deformations, face design, hair, skin texture

  5. IMPROV (MRL, NYU) • Artistic and commercial applications • Animated staging • Choreography • Interactive multi-user environments • ... • Surface model of mood & emotions • Productivity tool • API for “laypersons” (educators, historians, social scientists)

  6. IMPROV • Microlevel: • Procedural animation • Accurate modeling of single actions and all permissible transitions • Statistically controlled parameter randomization for variability and consistency
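The “statistically controlled parameter randomization” above can be illustrated with a small sketch. This is not the IMPROV implementation; it only shows the general idea of perturbing animation parameters with bounded, smoothly varying noise so that repeated actions look similar but never identical (the parameter names and ranges are invented for illustration).

```python
import math
import random

def randomized_gesture(base_angles, amplitude=0.05, smoothness=8, seed=None):
    """Return a perturbed copy of a joint-angle trajectory.

    base_angles : list of floats (radians), the hand-authored key values
    amplitude   : maximum deviation from the base trajectory
    smoothness  : number of frames over which the noise varies slowly
    """
    rng = random.Random(seed)
    # Pick a few random control offsets and interpolate between them,
    # so consecutive frames stay consistent instead of jittering.
    controls = [rng.uniform(-amplitude, amplitude)
                for _ in range(len(base_angles) // smoothness + 2)]
    out = []
    for i, angle in enumerate(base_angles):
        t = i / smoothness
        k = int(t)
        frac = t - k
        # Cosine interpolation between neighbouring control offsets.
        w = (1 - math.cos(math.pi * frac)) / 2
        offset = controls[k] * (1 - w) + controls[k + 1] * w
        out.append(angle + offset)
    return out

# Two playbacks of the same wave gesture differ slightly but stay coherent.
wave = [0.6 * math.sin(0.2 * i) for i in range(60)]
take1 = randomized_gesture(wave, seed=1)
take2 = randomized_gesture(wave, seed=2)
```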

  7. IMPROV • Microlevel: • Behavioural layering • Scripts are classified in a hierarchy according to level of behaviour • User-defined connections between layers define the effective heterarchy • Action selection: deterministic linear scripts or stochastic selection from alternatives • Exclusion of pursuit of conflicting goals at same level • Parallelism across the hierarchy
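A minimal sketch of the layering idea, assuming nothing about IMPROV's actual script format: each layer holds alternative scripts with weights, at most one script per layer runs at a time (so conflicting goals at the same level exclude each other), and all layers advance in parallel.

```python
import random

class Layer:
    """One behaviour level: weighted alternatives, one active script at a time."""
    def __init__(self, name, alternatives):
        # alternatives: {script_name: (weight, list_of_actions)}
        self.name = name
        self.alternatives = alternatives
        self.active = None          # name of the running script
        self.step = 0

    def select(self, rng):
        names = list(self.alternatives)
        weights = [self.alternatives[n][0] for n in names]
        self.active = rng.choices(names, weights=weights)[0]
        self.step = 0

    def tick(self, rng):
        if self.active is None:
            self.select(rng)        # stochastic selection from alternatives
        actions = self.alternatives[self.active][1]
        action = actions[self.step]
        self.step += 1
        if self.step == len(actions):   # script finished, pick again next tick
            self.active = None
        return f"{self.name}: {action}"

# Hypothetical layers; higher layers change slowly, lower layers animate continuously.
rng = random.Random(0)
layers = [
    Layer("stance",  {"stand": (3, ["shift weight", "sway"]),
                      "lean":  (1, ["lean on wall"])}),
    Layer("gesture", {"wave":  (1, ["raise arm", "wave", "lower arm"]),
                      "nod":   (2, ["nod"])}),
]
for frame in range(4):
    print([layer.tick(rng) for layer in layers])   # all layers run in parallel
```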

  8. IMPROV • Macrolevel: • Blackboard architecture connecting Characters (attributes + scripts), Avatars, a Story agent (“director”), and a Stage Manager

  9. IMPROV • Macrolevel: • Behaviour layers spanning across groups of agents for coordinated action • Distributed environment modeling: “Inverse Causality” (=> MOO) • information about interactions is attached to objects • characters are “contaminated” by use (new/updated state variables: competence learning)

  10. Edge of Intention (Oz, CMU) • Interactive drama • Believable autonomous characters • Goal-directed • Emotional (folk theory of emotions, OCC) • Simple appearance, emphasis on behaviours (-> internal processing) • Interaction modes • Moving/gesturing, “talking” (typing)

  11. TOK architecture • Microlevel • Hap • Goal-oriented reactive action engine • Static plan library • Action behaviours • Emotion behaviours • Sensing behaviours • Sensing of low-level actions of other Woggles • Action blending
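As a rough illustration of what a goal-oriented reactive action engine with a static plan library does (this is not Hap's actual algorithm or syntax, just the general shape): goals are expanded through a fixed library of plans whose preconditions are tested against current sensing, and the engine re-selects on every tick so it stays reactive.

```python
# A toy goal-directed reactive loop in the spirit of a plan-library engine.
# Plan names, goals, and world fields are invented for illustration.
PLAN_LIBRARY = {
    "be_safe":   [({"threat": True},  ["crouch", "hide"]),
                  ({"threat": False}, ["relax"])],
    "socialize": [({"friend_near": True},  ["approach", "greet"]),
                  ({"friend_near": False}, ["look_around"])],
}

def matches(precondition, world):
    return all(world.get(k) == v for k, v in precondition.items())

def choose_actions(active_goals, world):
    """Pick the first applicable plan for each goal, most important goal first."""
    actions = []
    for goal in active_goals:
        for precondition, steps in PLAN_LIBRARY[goal]:
            if matches(precondition, world):
                actions.append((goal, steps[0]))   # only the next step; re-select each tick
                break
    return actions

world = {"threat": False, "friend_near": True}
print(choose_actions(["be_safe", "socialize"], world))
# -> [('be_safe', 'relax'), ('socialize', 'approach')]
```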

  12. TOK architecture • Microlevel • Em • Model of emotional and social aspects • Explicit state variables for beliefs and standards of performance • Variables are influenced by comparison of current goal states with events and perceived actions (thresholding)
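A hedged sketch of the thresholding idea described above (the emotion names, intensities, and threshold are illustrative, not the Em model's actual parameters): events are compared with current goals, the result accumulates in explicit state variables, and an emotion only becomes visible once its variable crosses a threshold.

```python
# Illustrative accumulate-and-threshold emotion variables.
THRESHOLD = 1.0

class EmotionState:
    def __init__(self):
        self.variables = {"joy": 0.0, "distress": 0.0}

    def appraise(self, goal_important, goal_succeeded, weight=0.6):
        """Compare an event's outcome with a current goal and update the variables."""
        if not goal_important:
            return
        target = "joy" if goal_succeeded else "distress"
        self.variables[target] += weight
        # The opposite emotion decays when the other is reinforced.
        other = "distress" if goal_succeeded else "joy"
        self.variables[other] = max(0.0, self.variables[other] - weight / 2)

    def expressed(self):
        """Only emotions above threshold are passed on as behavioural features."""
        return [name for name, v in self.variables.items() if v >= THRESHOLD]

em = EmotionState()
em.appraise(goal_important=True, goal_succeeded=True)
print(em.expressed())        # [] - still below threshold after one success
em.appraise(goal_important=True, goal_succeeded=True)
print(em.expressed())        # ['joy'] - crossed the threshold
```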

  13. TOK architecture • Microlevel • Behavioural features • Mapping of emotional state to overt behaviour • Manifestation of “personality” • Tight integration of Hap and Em • No need for arbitration

  14. TOK architecture (diagram): Em (standards, attitudes, emotions) passes behaviour features and raw emotions to Hap (goals, behaviours), which reports goal successes, failures and creation back to Em; both issue sense/language queries to the sensory routines and integrated sense model, which connect to the world.

  15. TOK architecture • Macrolevel: • Fixed plan library encodes all possible communications/interactions

  16. ALIVE (MIT Media Lab) • Entertainment • Magic mirror metaphor • Unencumbered immersive environment

  17. ALIVE • Microlevel: • Hamsterdam • Behaviour system for action selection • Based on ethological model • Sensory inputs via release mechanism • Loose hierarchy of behaviour groups • “Avalanche effect” for persistent selection • Inhibited behaviours can issue secondary and meta commands • Motor skills layer for coordination of motions • Geometry layer for animation rendering
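To make the ethological selection scheme more concrete, here is a minimal sketch under assumed names and numbers (not Hamsterdam's actual formulation): each behaviour's value combines a releasing stimulus with an internal motivation, and a level-of-interest bonus for the current winner produces the persistent, "avalanche"-like commitment mentioned above.

```python
# Toy ethological action selection: value = releasing stimulus * internal variable,
# with a persistence bonus for the currently active behaviour.
PERSISTENCE_BONUS = 0.3   # crude stand-in for "level of interest" / avalanche effect

behaviours = {
    # name: (internal motivation variable, sensory releasing stimulus)
    "eat":     (0.8, 0.0),   # hungry, but no food in sight yet
    "explore": (0.4, 1.0),
    "flee":    (0.1, 0.2),
}

def select(behaviours, active=None):
    scores = {}
    for name, (motivation, stimulus) in behaviours.items():
        score = motivation * stimulus
        if name == active:
            score += PERSISTENCE_BONUS      # keep doing what we're doing
        scores[name] = score
    winner = max(scores, key=scores.get)
    # Inhibited behaviours could still issue secondary commands here (e.g. gaze).
    return winner, scores

active = None
for step, food_visible in enumerate([0.0, 0.6, 1.0]):
    motivation, _ = behaviours["eat"]
    behaviours["eat"] = (motivation, food_visible)
    active, scores = select(behaviours, active)
    print(step, active, {k: round(v, 2) for k, v in scores.items()})
# Exploring persists for a while despite rising hunger, then eating takes over.
```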

  18. ALIVE behaviour system (diagram): the external world feeds a sensory system and releasing mechanism; together with goals/motivations (internal variables) and a level-of-interest term these determine each behaviour, with inhibition between behaviours and motor commands as output.

  19. ALIVE

  20. ALIVE • Levels of control: • Motivations via variables of single behaviours • “You are hungry” • Directions via motor skills • “Go to that tree” • Tasks via sensory, release, and behaviour systems • “Wag your tail”

  21. ALIVE • Increased situatedness • Synthetic vision • For navigation • Generic interface • Plasticity: • reinforcement learning (conditioning)

  22. ALIVE • Macrolevel: • Totally distributed control

  23. Virtual Humans (Miralab/EPFL) • Goal • Simulation of existing people • Real-time animation of virtual humans that are realistic and recognizable • Inclusion of synthetic sensing capabilities allows simulation of (seemingly) complex capabilities, e.g. real-time tennis

  24. Virtual Humans • Issues requiring compromising • Surface modeling • Deformation • Skeletal animation • Locomotion • Grasping • Facial animation • Shadows • Clothes • Skin • Hair

  25. Virtual Humans • Methodology • Modeling: • Prototype-based • Head and hand sculpting • Layered body definition: Skeleton, Volume, Skin • Animation: • Skeleton motion: captured, played back, or computed • Body deformation for realistic rendering of joints • Detailed hand and facial animation

  26. Virtual Humans • Synthetic sensing as a main information channel between virtual environment and digital actor (since ca. 1990) • Synthetic audition, vision, and tactile sensing • Differs fundamentally from robotic sensing: direct access to semantic information

  27. Virtual Humans • Example: synthetic vision • Environment is perceived from a field-of-view that is rendered from the actor’s point of view • Access to pixel attributes: color, distance, index to semantic information • Simple case: color coding of objects => perception of color = recognition of object • Object attributes are retrieved directly from the simulation
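The color-coding trick is easy to sketch without any rendering library (the scene contents and the false-color buffer below are invented stand-ins): each object is rendered with a unique flat color into an off-screen view from the actor's eyes, so reading a pixel back immediately identifies the object and gives access to its simulation-side attributes.

```python
# False-color synthetic vision: pixel color is an index into the simulation's objects.
scene = {
    1: {"name": "tree", "graspable": False},
    2: {"name": "ball", "graspable": True},
}

# Stand-in for an off-screen render from the actor's point of view:
# each pixel stores (object_id, distance); 0 means background.
view = [
    [(0, 0.0), (1, 6.5), (1, 6.4)],
    [(0, 0.0), (2, 2.1), (1, 6.3)],
]

def visible_objects(view):
    """Collect every object seen in the rendered field of view, with nearest distance."""
    seen = {}
    for row in view:
        for obj_id, dist in row:
            if obj_id == 0:
                continue
            if obj_id not in seen or dist < seen[obj_id]:
                seen[obj_id] = dist
    # Semantic attributes come straight from the simulation, not from image analysis.
    return [(scene[i]["name"], d, scene[i]["graspable"]) for i, d in seen.items()]

print(visible_objects(view))
# [('tree', 6.3, False), ('ball', 2.1, True)]
```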

  28. Virtual Humans • Navigation: • Path planning & obstacle avoidance • Global navigation: • Based on prelearned model • Determines the global navigation goal • Local navigation • Purely indexical, based on sensing => No need for model of environment => No need for current position • Three modules: • synthetic vision, controller, performer

  29. Virtual Humans • Navigation controller: • Regularly invokes vision to retrieve updated state of environment • Creates temporary local goals if an obstacle “up front” • Local goals are determined by obstacle-specific Displacement local automata
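A compact sketch of the local navigation loop under invented geometry (this does not reproduce the actual displacement local automata, only the pattern of sensing, then inserting a temporary local goal to steer around an obstacle "up front").

```python
import math

def step_towards(pos, goal, speed=1.0):
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy)
    if d <= speed:
        return goal
    return (pos[0] + speed * dx / d, pos[1] + speed * dy / d)

def blocking_obstacle(pos, goal, obstacles, lookahead=3.0, clearance=0.5):
    """Crude stand-in for the vision query: is an obstacle close to the line of travel?"""
    gx, gy = goal[0] - pos[0], goal[1] - pos[1]
    glen = math.hypot(gx, gy) or 1.0
    ux, uy = gx / glen, gy / glen
    for ox, oy, r in obstacles:
        vx, vy = ox - pos[0], oy - pos[1]
        along = vx * ux + vy * uy            # how far ahead along the travel direction
        across = abs(vx * uy - vy * ux)      # how far off the travel line
        if 0 < along < lookahead + r and across < r + clearance:
            return (ox, oy, r)
    return None

def navigate(start, goal, obstacles, max_steps=40):
    pos, local_goal, path = start, None, [start]
    for _ in range(max_steps):
        if pos == goal:
            break
        if local_goal is None:
            obstacle = blocking_obstacle(pos, goal, obstacles)
            if obstacle:
                ox, oy, r = obstacle
                local_goal = (ox, oy + r + 1.5)   # temporary goal to sidestep the obstacle
        target = goal if local_goal is None else local_goal
        pos = step_towards(pos, target)
        if local_goal and pos == local_goal:
            local_goal = None                     # detour done, resume the global goal
        path.append(pos)
    return path

print(navigate((0.0, 0.0), (10.0, 0.0), obstacles=[(5.0, 0.0, 1.0)]))
```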

  30. Virtual Humans • Interaction with the environment: Smart Objects • Each modeled object includes detailed solutions for each possible interaction with the object • Objects are modeled according to situated decomposition

  31. Virtual Humans • Smart Objects include: • Description of moving parts, physical properties, semantic index (purpose and design intent) • Information for each possible interaction: position of interaction part, position and gesture information for the actor (capacity limits!) • Object behaviours with state variables (=> actor state info) • Triggered agent behaviours
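To illustrate what a smart-object description might bundle together (the fields and the door example are assumptions for the sketch, not the actual Smart Object format): geometry and physics data, a semantic index, per-interaction information for the actor, and object state that drives triggered behaviours.

```python
# Illustrative smart-object description: the object carries its own interaction knowledge.
door = {
    "semantic_index": {"purpose": "passage", "design_intent": "swing door"},
    "moving_parts": {"leaf": {"axis": "hinge", "range_deg": (0, 110)}},
    "state": {"open": False, "capacity": 1},       # at most one actor interacting
    "interactions": {
        "open": {
            "actor_position": (0.6, 0.0),           # where the actor should stand
            "actor_gesture": "reach_and_pull",       # gesture to play
            "effect": lambda state: state.update(open=True),
        },
        "pass_through": {
            "requires": lambda state: state["open"],
            "actor_position": (0.0, 0.0),
            "actor_gesture": "walk_forward",
            "effect": lambda state: None,
        },
    },
}

def perform(obj, interaction_name):
    """Let an actor carry out one of the interactions the object itself describes."""
    info = obj["interactions"][interaction_name]
    if "requires" in info and not info["requires"](obj["state"]):
        return f"cannot {interaction_name}: precondition not met"
    # The animation layer would use actor_position / actor_gesture here.
    info["effect"](obj["state"])
    return f"{interaction_name}: stand at {info['actor_position']}, play {info['actor_gesture']}"

print(perform(door, "pass_through"))   # blocked: the door is still closed
print(perform(door, "open"))
print(perform(door, "pass_through"))   # now allowed
```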

  32. Virtual Humans • Example: virtual tennis • Actor model based on stack machine of state automata • Actor state can change according to currently active automaton and sensorial input
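A small sketch of driving an actor with a stack of state automata, with invented states for the tennis example (the actual game automata sequence is the subject of slide 34): the automaton on top of the stack interprets the current sensory input; it can push a sub-automaton for a nested activity and pop itself when done.

```python
# Toy stack machine of state automata for a tennis-playing actor.
# State names and transitions are invented for illustration.
class WaitForBall:
    def step(self, actor, ball_incoming):
        if ball_incoming:
            actor.stack.append(PlayStroke())   # push a sub-automaton
            return "track ball"
        return "idle at ready position"

class PlayStroke:
    def __init__(self):
        self.phase = 0
    def step(self, actor, ball_incoming):
        phases = ["move to ball", "swing", "recover"]
        action = phases[self.phase]
        self.phase += 1
        if self.phase == len(phases):
            actor.stack.pop()                  # stroke finished, resume waiting
        return action

class Actor:
    def __init__(self):
        self.stack = [WaitForBall()]
    def step(self, ball_incoming):
        # The automaton on top of the stack handles the current sensory input.
        return self.stack[-1].step(self, ball_incoming)

player = Actor()
for ball in [False, True, False, False, False, False]:
    print(player.step(ball))
```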

  33. Virtual Humans: architecture of behaviour control (diagram)

  34. Virtual Humans: tennis game automata sequence (diagram)

  35. JACK (UPenn) • Ergonomic environment analysis • Workplace assessment • Product evaluation • Device interfaces • Logistics

  36. JACK • Microlevel: • Biomechanically correct model • Synthetic sensors for high-level behaviours • Three-level architecture realising “truly situated” low-level behaviour

  37. JACK • Microlevel (diagram): PaT-Nets provide object-specific and generic symbolic reasoning capabilities on top of control systems and stimulus-response modules (perceptual and motor) implementing behaviours, with learned sense-control-act loop parameters

  38. JACK • Macrolevel • Taskable virtual agent • Global intentions and expectations of all characters are statically captured (explicitly anticipated) • Parallel Transition networks
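A rough sketch of what a parallel transition network amounts to, using invented node names rather than the actual PaT-Net notation: each network is a finite-state machine whose nodes invoke actions and whose transitions fire on conditions, and several networks run in parallel over the same simulation state.

```python
# Toy parallel transition networks: condition-driven state machines ticked together.
class TransitionNet:
    def __init__(self, name, start, transitions):
        # transitions: {state: [(condition(world), next_state, action), ...]}
        self.name = name
        self.state = start
        self.transitions = transitions

    def tick(self, world):
        for condition, nxt, action in self.transitions.get(self.state, []):
            if condition(world):
                self.state = nxt
                return f"{self.name}: {action}"
        return f"{self.name}: stay in {self.state}"

# Two hypothetical nets for a maintenance task, run in parallel over shared world state.
walk_net = TransitionNet("walk", "at_station", {
    "at_station": [(lambda w: w["task_assigned"], "walking", "walk to valve")],
    "walking":    [(lambda w: w["at_valve"], "arrived", "stop")],
})
hand_net = TransitionNet("hand", "idle", {
    "idle":    [(lambda w: w["at_valve"], "turning", "grasp and turn valve")],
    "turning": [(lambda w: w["valve_closed"], "done", "release")],
})

world = {"task_assigned": True, "at_valve": False, "valve_closed": False}
nets = [walk_net, hand_net]
for step in range(3):
    print([net.tick(world) for net in nets])
    if step == 0:
        world["at_valve"] = True      # the walk completes, enabling the hand net
```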

  39. JACK • Macrolevel: PaT-Net (diagram)

  40. Topics for Discussion • “Completeness” of modeling • “True” agent characteristics (Wooldridge & Jennings) • Autonomy • Social abilities • Reactivity • Pro-activeness

  41. Topics for Discussion • The “TLA Debate” • Situatedness/synthetic sensing • Variability/adaptiveness/plasticity • Believability

  42. Modelling completeness • “Sparse” models • Abstract, “top down” • Based on explicit, reified design elements • Bridging/obviating of full detail by careful selection of modeled elements • Broader coverage at differing resolution • Believability/impression over fidelity • (Bound to) Lose in the long run?

  43. Modelling completeness • “Complete” models • Situated, “bottom up” • Depend on balanced design (including environment & coupling) • Limited coverage/complexity • Allow for flexible action-selection • Fidelity over believability/impression • Win in the long run?

  44. Autonomy (McFarland/Boesser) • Automaton: state-dependent behaviour • Autonomous agent: self-controlling, motivated • Motivation: reversible internal processes that are responsible for changes in behaviour • Multiple goals/actions are the rule! => concurrency, transitioning • Insights on own skills & conditions of applicability

  45. Social abilities • “Deep” agent modeling • Of the self: BDI and variants • Of others (recursively) • Of the society • Coordination • Communication • Generation & understanding of facial expressions, postures, gestures, task execution, text/speech, … • (social) Emotions (including display rules)

  46. Social abilities • From Action Selection to Action Expression • Sign management: context-dependent behaviour semantics • What should an agent do at any point in order to best communicate its goals and activities? • Goal: increase comprehensibility of behaviour

  47. Believability • Quality vs. correctness • Self-motivation • pursuit of multiple simultaneous goals • => entails requirement of broad capabilities • Personality/Emotion • Plasticity/change over time • Situatedness • social skills • affordances

  48. And then... • Methodologies for assembly of architectures with understandable/predictable (motivated, goal-directed, …) behaviour • Agent control systems • Persistency, plasticity • Agent animation as simulation
