
An Investigation Into the Use of Synthetic Vision for NPC’s / Agents in Computer Games



Presentation Transcript


  1. An Investigation Into the Use of Synthetic Vision for NPC’s / Agents in Computer Games • Paper by Sebastian Enrique • Presented by Adam Karkkainen

  2. Table of Contents • Introduction • Problem Statement • Synthetic Vision Model • Brain Module • Known Problems • Extended Behavior with Dynamic Reactions • Conclusions • Future Work

  3. Introduction • Today, 3D computer games feed their non-player characters (NPCs) information taken directly from the internal game database. • Using synthetic vision together with more complex brain modules could improve gameplay and make for better, more realistic NPCs.

  4. How computer vision issues are addressed • Depth perception – pixel depth is used to determine the actual position of objects in the agent’s field of view by inverting the modeling and projection transforms. • Object recognition – object function or identity is provided directly as part of the system (via color IDs) rather than inferred from the image. • Motion determination – the motion of object pixels is encoded into a dedicated synthetic vision viewport.
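
The first bullet is the only step that needs real computation: recovering a world position from a pixel and its depth value. A minimal sketch of that unprojection, assuming OpenGL-style conventions (depth buffer in [0, 1], NDC in [-1, 1]) and a combined modelview-projection matrix available to the agent; the name unproject and its signature are illustrative, not from the paper:

    import numpy as np

    def unproject(px, py, depth, width, height, mvp):
        """Recover the world-space position seen at pixel (px, py) from its
        depth-buffer value by inverting the combined modelview-projection
        transform mvp (a 4x4 matrix)."""
        # Map pixel coordinates and depth back to normalized device coordinates
        ndc = np.array([
            2.0 * (px + 0.5) / width - 1.0,
            2.0 * (py + 0.5) / height - 1.0,
            2.0 * depth - 1.0,
            1.0,
        ])
        # Invert the transform and undo the perspective divide
        world = np.linalg.inv(mvp) @ ndc
        return world[:3] / world[3]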

  5. Problem Statement • Create a visual system for a virtual character that lives in a 3D virtual world. • The system uses the scene rendered from the character’s point of view.

  6. Synthetic Vision Model • The AI system should eventually be able to do the following with the model: • Obstacle avoidance. • Low-level navigation. • Fast object recognition. • Fast dynamic object detection.

  7. Synthetic Vision Model (cont.) • To do this, the synthetic vision approach uses two viewports: • Static information viewport. • Dynamic information viewport.

  8. Static Information Viewport • Floor: any polygon from the level geometry whose normalized normal has a Z component greater than or equal to 0.8. • Ceiling: any polygon from the level geometry whose normalized normal has a Z component less than or equal to -0.8. • Wall: every polygon from the level geometry that fits neither of the two previous cases.
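
These three rules translate directly into a small classifier over polygon normals. A minimal sketch, assuming unit-length normals and using the thresholds from the slide; the names SurfaceClass and classify_polygon are illustrative:

    from enum import Enum

    class SurfaceClass(Enum):
        FLOOR = 1
        CEILING = 2
        WALL = 3

    def classify_polygon(normal):
        """Classify a level-geometry polygon from its normalized surface normal,
        using the Z-component thresholds given on the slide."""
        nz = normal[2]
        if nz >= 0.8:
            return SurfaceClass.FLOOR    # facing mostly upward
        if nz <= -0.8:
            return SurfaceClass.CEILING  # facing mostly downward
        return SurfaceClass.WALL         # everything else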

  9. Objects and Color IDs Mapping Table with Level Geometry

  10. Dynamic Information Viewport • Represents instantaneous movement information for the objects seen in the static viewport. • A fully static object is drawn with the color (0.0, 0.5, 1.0). • Moving objects are drawn with colors that vary with their movement direction and velocity.
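
A sketch of how instantaneous velocity could be folded into a pixel color for the dynamic viewport. Only the static color (0.0, 0.5, 1.0) comes from the slide; the channel mapping and the max_speed parameter below are assumptions for illustration, not the paper’s exact encoding:

    def motion_color(vx, vy, max_speed=5.0):
        """Encode an object's instantaneous velocity (vx, vy) as an RGB color.
        A stationary object maps to (0.0, 0.5, 1.0), as stated on the slide;
        the way direction and speed perturb the channels is illustrative."""
        sx = max(-1.0, min(1.0, vx / max_speed))  # lateral speed in [-1, 1]
        sy = max(-1.0, min(1.0, vy / max_speed))  # forward/backward speed in [-1, 1]
        r = abs(sx)          # red grows with lateral speed (0.0 when static)
        g = 0.5 + 0.5 * sy   # green shifts with approach/retreat (0.5 when static)
        b = 1.0              # blue kept constant
        return (r, g, b)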

  11. Brain Module • Takes the synthetic vision output as input, processes it, and acts on it. • Very basic AI; only enough to serve as a proof of concept for synthetic vision. • If health or ammo drops below a certain threshold, the Actor searches for pickups. • Health and ammo drop periodically to facilitate testing.

  12. NPC Behavior • Walk around • Looking for Health • Looking for Ammo • Looking for any Pick-up • Looking Quickly for Weapon • Looking Quickly for Health
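
Putting slides 11 and 12 together, the brain’s behavior selection can be pictured as a simple threshold rule. The threshold values and the choose_behavior helper below are placeholders; the slides only say that a behavior switch happens when health or ammo falls below some threshold:

    from enum import Enum, auto

    class Behavior(Enum):
        WALK_AROUND = auto()
        LOOK_FOR_HEALTH = auto()
        LOOK_FOR_AMMO = auto()
        LOOK_FOR_ANY_PICKUP = auto()

    def choose_behavior(health, ammo, health_threshold=30, ammo_threshold=10):
        """Pick the NPC's next behavior from its current health and ammo levels."""
        if health < health_threshold and ammo < ammo_threshold:
            return Behavior.LOOK_FOR_ANY_PICKUP
        if health < health_threshold:
            return Behavior.LOOK_FOR_HEALTH
        if ammo < ammo_threshold:
            return Behavior.LOOK_FOR_AMMO
        return Behavior.WALK_AROUND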

  13. Walking Around Algorithm • If a central free way exists, it is taken. • Otherwise, try the right-most free way in the left half of the viewport. • Then try the left-most free way in the right half of the viewport. • The algorithm fails when a thin corridor or a blocking wall is detected, i.e. when: • there is no central free way, and • in the row closest to the viewport bottom that contains at least one non-floor pixel, the two such pixels nearest the left and right sides of the searching rectangle are at the same distance from those sides. • On failure, randomly turn to the left or right and try again.
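
A rough sketch of the free-way test, scanning the static viewport for vertical bands made entirely of floor pixels. The band width, the scan region, and the fallback order are simplifications of the slide’s description; choose_free_way and floor_color are illustrative names:

    def choose_free_way(static_viewport, floor_color):
        """Return the left edge of a walkable band of the static viewport,
        or None when no free way exists (the Actor then turns randomly)."""
        height = len(static_viewport)
        width = len(static_viewport[0])
        band = width // 8  # width of a candidate free way (illustrative)

        def is_free(x0):
            # A band is free when every pixel in the lower half of the viewport is floor
            return all(static_viewport[y][x] == floor_color
                       for y in range(height // 2, height)
                       for x in range(x0, x0 + band))

        center = (width - band) // 2
        if is_free(center):                              # 1. central free way
            return center
        for x0 in range(width // 2 - band, -1, -1):      # 2. right-most free way, left half
            if is_free(x0):
                return x0
        for x0 in range(width // 2, width - band + 1):   # 3. left-most free way, right half
            if is_free(x0):
                return x0
        return None                                      # 4. fail: turn randomly and retry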

  14. Example Free Way Tests

  15. Looking for Pickups Algorithm • If a given percentage of the current path has not yet been walked, the algorithm returns. • Create a list, objectlist, of the wanted-class power-ups seen in the static viewport at that moment. • If no wanted-class power-ups are seen, use Walk Around. • If at least one is found, take the closest one from objectlist, p. • Find the floor pixel d located on a straight vertical line below p and with approximately the same depth value. • If d does not exist, use Walk Around. • If it exists, set d as the new destination coordinates.
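
The pickup search reduces to a scan over the static viewport plus a depth comparison. In this sketch the color-ID lookup, the depth tolerance, and the helper names are assumptions for illustration (y grows downward, as in image coordinates):

    def look_for_pickup(static_viewport, depth_buffer, wanted_ids, floor_color):
        """Find the closest visible power-up of the wanted class, then a floor
        pixel directly below it at roughly the same depth, to use as the new
        destination. Returns None when the Actor should fall back to Walk Around."""
        height, width = len(static_viewport), len(static_viewport[0])

        # objectlist: all pixels whose color ID marks a wanted-class power-up
        objectlist = [(x, y) for y in range(height) for x in range(width)
                      if static_viewport[y][x] in wanted_ids]
        if not objectlist:
            return None

        # Closest candidate pixel p, by depth
        px, py = min(objectlist, key=lambda pix: depth_buffer[pix[1]][pix[0]])
        p_depth = depth_buffer[py][px]

        # Scan straight down from p for a floor pixel at roughly the same depth
        for y in range(py + 1, height):
            if (static_viewport[y][px] == floor_color and
                    abs(depth_buffer[y][px] - p_depth) < 0.05 * p_depth):
                return (px, y)   # destination pixel d
        return None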

  16. Known Problems • Higher Floor Problem • Perspective Problem • Looking-For Problem

  17. Higher Floor Problem • Floor heights are not checked. • Consider a box that is too high to climb onto but short enough that its upper face is seen. • The static viewport still represents that face as floor, so the Actor may try to walk onto it.

  18. Perspective Problem • The free-way algorithm has difficulty with corridors because of their trapezoidal shape under perspective projection. • It also has difficulty with columns that are close to each other.

  19. Looking-For Problem • When looking for an object, the Actor does not take obstacles or its own width into account.

  20. Extended Behavior with Dynamic Reactions • Reactive, rule-based AI that uses the information provided in the dynamic viewport. • Three states for the Actor: • Intercept • Avoid • Don’t Worry

  21. The Three States • Intercept: at least one enemy is approaching, and the Actor’s health or weapon is above the upper thresholds. • Avoid: at least one enemy is approaching, and the Actor’s health or weapon is below the upper thresholds. • Don’t Worry: there are no enemies, or they are all moving away from the Actor.
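
These rules map onto a small state-selection function. The upper-threshold values below are placeholders, since the slide only speaks of "upper thresholds":

    from enum import Enum, auto

    class ReactionState(Enum):
        INTERCEPT = auto()
        AVOID = auto()
        DONT_WORRY = auto()

    def reaction_state(enemy_approaching, health, weapon,
                       health_upper=75, weapon_upper=50):
        """Choose the Actor's dynamic-reaction state from the slide's rules."""
        if not enemy_approaching:
            return ReactionState.DONT_WORRY  # no enemies, or all moving away
        if health > health_upper or weapon > weapon_upper:
            return ReactionState.INTERCEPT   # strong enough to engage
        return ReactionState.AVOID           # enemy approaching but too weak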

  22. Conclusions • Synthetic vision provides more manageable information than pure computer vision systems. • It avoids giving the NPC unrestricted access to the game database. • A simple rule-based AI was built to demonstrate the model in a computer game.

  23. Future Work • Enhancements to Synthetic Vision • Infrared vision, heat sensing (like the Predator) • Lighting effects • Adding noise to the vision representation • Enhancements to the Brain • Memory • Learning • Interaction between Agents • Personality, based on the previous three enhancements

  24. Questions? • Paper at: http://www1.cs.columbia.edu/~senrique/files/thesis_english.pdf
