Underlying Technologies Part Two: Software



  1. Underlying Technologies Part Two: Software • Mark Green, School of Creative Media

  2. Introduction • Software not as easy as hardware: • wide range of software techniques, hard to classify like hardware • several components that need to work together, hard to know where to start • wide range of hardware configurations, not as simple as 2D software

  3. Hardware Configurations • In 2D have a standard hardware configuration: • input: keyboard and mouse • output: single 2D display • with 3D can have many configurations: • HMD • projection • single screen

  4. Hardware Configurations • Want to produce an application once, not once for every possible hardware configuration • software needs to be more adaptable, change based on hardware configuration • complicates the development of support software

  5. Range of Software Techniques • Want our software to be very efficient: reduce latency, high update rates • some applications can be quite large, need to efficiently organize data • all of this complicates VR software, too many things to consider, hard to know where to start

  6. Components • What are the main components of a VR application? • 3D Objects: geometry and appearance, but may also want sound and force • Behavior: the objects need to be able to do things, move and react • Interaction: users want to interact with the application, manipulate the objects

  7. 3D Objects • Need object geometry, the object's shape, basis for everything else, called the model • polygons used for geometry, sometimes restricted to triangles • different from animation, which uses free-form surfaces based on sophisticated math • need speed, so restricted to polygons

  8. 3D Objects • Where does geometry come from? • Really depends on the application • Could use a text editor to enter all the polygon vertices, some people actually do this! • Could use a program, for example OpenGL, works for small models
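Generating geometry in code, as the slide suggests, can be sketched for a small model. This is an illustrative helper (the name `make_box` and the index layout are made up for this example), producing the vertices and triangles of a box the way a small OpenGL-style program might:

```python
# Sketch: generating a small polygonal model in code rather than by hand.
# make_box is a hypothetical helper; it returns vertices and triangles
# (as vertex-index triples), two triangles per face of the box.

def make_box(w, h, d):
    """Return the 8 vertices and 12 triangles of an axis-aligned box."""
    x, y, z = w / 2, h / 2, d / 2
    # Vertex index encodes the sign of each axis: bit order (sx, sy, sz).
    verts = [(sx * x, sy * y, sz * z)
             for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)]
    faces = [(0, 1, 3), (0, 3, 2),   # x = -1 face
             (4, 6, 7), (4, 7, 5),   # x = +1 face
             (0, 4, 5), (0, 5, 1),   # y = -1 face
             (2, 3, 7), (2, 7, 6),   # y = +1 face
             (0, 2, 6), (0, 6, 4),   # z = -1 face
             (1, 5, 7), (1, 7, 3)]   # z = +1 face
    return verts, faces

verts, faces = make_box(2.0, 2.0, 2.0)
```

Even a model this small shows why hand-entering vertices in a text editor is painful, and why programs work only for simple, regular shapes.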

  9. 3D Objects • Use a 3D modeling or animation program • for non-programmers this is the easiest way, but it takes time to develop modeling skills • also many different programs and file formats • want a modeler that does a good job of polygons, not all modelers are good at this

  10. 3D Objects • Another source of objects is scientific and engineering computations • can be easy to convert to polygons, already have position data • other types of data can also be converted into geometry, but this can be more difficult

  11. 3D Objects • Also need to consider appearance: • colour of the object • how it reflects light • transparency • texture • can be done with modeler, or later in the VR program

  12. Behavior • How should objects behave? • What happens when the user hits an object? • What happens when an object hits another object? • Can objects move around the environment? • Each object could have a range of behaviors, react differently to different events in the environment

  13. Behavior • Behavior is harder than modeling • animation programs can be useful, but not always • animation is quite different: • animator is in complete control, knows what’s happening all of the time • in VR the user is in control, can interrupt or mess up any animation

  14. Behavior • Short animations (less than 5 seconds) can be useful, basic motion units • other types of behaviors must be programmed or scripted • more flexible, can respond to the events that occur in the environment • easier to combine, objects can do two things at same time

  15. Interaction • Users want to interact with the environment • pick up objects and move them around • very different from 2D interaction • much more freedom, more direct interaction • still exploring the design space, not stable like 2D interaction • still working on standard techniques

  16. Application Structure • look at application structure • provides a framework for discussing various software technologies • divide an application into various components, and then look at the components individually

  17. Application Structure • [diagram: Input Devices → Input Processing → Model ← Application Processing; Model → Model Traversal → Output Devices]

  18. Application Structure • Model: representation of objects in the environment, geometry and behavior • Traversal: convert the model into graphical, sound, force, etc output • Input Processing: determine user’s intentions, act on other part of application • application processing: non-VR parts of the application

  19. Interaction Loop • Logically the program consists of a loop that samples the user, performs computations and traverses the model • [diagram: Input Processing → Computation → Model Traversal, repeated each frame]
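The loop above can be sketched directly; the three stage functions here are placeholders standing in for real input polling, behavior computation, and rendering:

```python
# A minimal sketch of the logical interaction loop: sample input,
# compute, traverse the model — once per frame.

def interaction_loop(sample_input, compute, traverse_model, frames):
    for _ in range(frames):
        events = sample_input()   # input processing: poll trackers, buttons
        compute(events)           # computation: run behaviors, update model
        traverse_model()          # model traversal: generate the output

# Demo with recording stand-ins for the three stages.
calls = []
interaction_loop(lambda: calls.append("input"),
                 lambda e: calls.append("compute"),
                 lambda: calls.append("traverse"),
                 frames=2)
```

In a real system the loop rate is tied to the update rate discussed earlier, so each stage has a strict time budget.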

  20. Model • Contains the information required to display the environment: • geometry, sound, force • behavior • the graphical part is the most developed, so concentrate on it • try to position sound and force within this model

  21. Geometry • This is what we know the best • need to have a graphical representation of objects in the environment: • accurate shape representation • ease of modeling • efficient display • integrates with behavior

  22. Scene Graph • Main technique for structuring the model • based on hierarchical structure, divide the object into parts or components • simplifies the modeling task, work on one part at a time • easy to modify the individual parts • add behaviors, sound, force, etc to the model

  23. Scene Graph • [diagram: a car node with five children — a Body node and four Wheel nodes]

  24. Scene Graph • Individual units are called nodes: • shapes: polygons, meshes, cubes, etc • transformations: position the nodes in space • material: colour and texture of objects • grouping: collecting nodes together as a single object • sounds • behavior
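A minimal sketch of grouping and transformation nodes, using the car example from the earlier slide. For brevity the transform is translation-only and the class names are illustrative, not any particular scene graph API:

```python
# Sketch: a tiny scene graph with grouping and (translation-only)
# transformation nodes; traversal accumulates transforms down the tree.

class Node:
    def __init__(self, name, translate=(0.0, 0.0, 0.0)):
        self.name = name
        self.translate = translate
        self.children = []

    def add(self, child):
        self.children.append(child)
        return child

def world_positions(node, origin=(0.0, 0.0, 0.0), out=None):
    """Traverse the graph, composing each node's transform with its parent's."""
    out = {} if out is None else out
    pos = tuple(o + t for o, t in zip(origin, node.translate))
    out[node.name] = pos
    for child in node.children:
        world_positions(child, pos, out)
    return out

# The car example: moving the car node moves the body and all four wheels.
car = Node("car", translate=(10.0, 0.0, 0.0))
car.add(Node("body"))
car.add(Node("wheel_fl", translate=(1.5, -0.5, 1.0)))
positions = world_positions(car)
```

This is what makes the hierarchy convenient: editing one transformation node repositions an entire subtree.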

  25. Scene Graph • Many different scene graph architectures, will look at one in more detail later • differences: • scene graph for whole VE vs. one per object • types of nodes in the scene graph • ease of modification, static vs. dynamic

  26. Behavior • Harder to deal with than geometry • simple motions aren’t too bad, but much harder to get sophisticated behavior • the general solution now is to write code, okay for programmers • would like to have a higher level approach for non-programmers

  27. Behavior • Problem: want objects to respond to events in the environment • can have some motions that are simple animations, but most of the motions need some knowledge of the environment • example: an object moving through the environment must be aware of other objects so it doesn’t walk through them

  28. Behavior • Some simple motions produced by animating transformation nodes • animation variables used to control transformation parameters, example: rotation or translation • could import animations, use some form of keyframing package to produce the motion
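Driving a transformation parameter from animation variables can be sketched as keyframe sampling; the keyframe times and rotation values below are made-up example data:

```python
# Sketch: sampling a keyframed animation variable (here a rotation angle
# in degrees) by linear interpolation between keyframes.

def sample_keyframes(keys, t):
    """keys: sorted list of (time, value) pairs; return the value at time t."""
    if t <= keys[0][0]:
        return keys[0][1]          # clamp before the first key
    if t >= keys[-1][0]:
        return keys[-1][1]         # clamp after the last key
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return v0 + f * (v1 - v0)

# Rotate from 0 to 90 degrees over one second, then hold.
rotation_keys = [(0.0, 0.0), (1.0, 90.0), (2.0, 90.0)]
angle = sample_keyframes(rotation_keys, 0.5)
```

Each frame, the interaction loop would sample the variable at the current time and write it into the corresponding transformation node.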

  29. Behavior • Simple motions could be triggered by events in the environment • example: collision detection, if an object is moving through the environment and a collision detected it changes direction • hard to come up with good trigger conditions, a few obvious ones, but not much after that
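The collision-as-trigger idea can be sketched in one dimension; the interval-overlap test stands in for real collision detection, and the "change direction" response is the triggered motion:

```python
# Sketch: a collision-triggered behavior — an object moving along one
# axis reverses direction when it would enter an obstacle's interval.

def step(pos, vel, obstacles, dt=1.0, radius=0.5):
    """Advance one frame; obstacles is a list of (lo, hi) intervals."""
    new_pos = pos + vel * dt
    for lo, hi in obstacles:
        if lo - radius < new_pos < hi + radius:   # collision detected
            return pos, -vel                      # trigger: reverse direction
    return new_pos, vel

# An object heading toward an obstacle spanning [2, 3].
pos, vel = 0.0, 1.0
pos, vel = step(pos, vel, obstacles=[(2.0, 3.0)])   # moves freely
pos, vel = step(pos, vel, obstacles=[(2.0, 3.0)])   # would collide, reverses
```

This shows the slide's point: the trigger condition (would-collide) is easy, but inventing further useful trigger conditions is the hard part.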

  30. Behavior • Another approach is to use a general motion model • best example of this is physics, try to simulate real physics in the environment • this gives a number of natural motions, and objects respond to the environment • works well in some environments, but has some problems
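The simplest instance of a physics motion model is integrating gravity on a point mass; this sketch uses explicit (semi-implicit) Euler integration with illustrative constants:

```python
# Sketch: a minimal physics motion model — Euler integration of a point
# mass under gravity, stepped once per frame.

GRAVITY = -9.8  # m/s^2, acting along the vertical axis

def euler_step(pos, vel, dt):
    vel = vel + GRAVITY * dt   # update velocity from acceleration
    pos = pos + vel * dt       # update position from velocity
    return pos, vel

# Drop an object from a height of 10 m, simulated in 0.1 s steps.
pos, vel = 10.0, 0.0
for _ in range(10):
    pos, vel = euler_step(pos, vel, 0.1)
```

Even this tiny model hints at the problems on the next slide: real objects need full rigid-body state, and the step size trades accuracy against computation time.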

  31. Behavior • One problem is the complexity of the mathematics, often need to simplify • computations can be a problem, particularly for complex objects • hard to control, need to know the forces and torques that produce the desired motions, can be very hard to determine

  32. Behavior • Some attempts to produce general motion controllers • maybe the eventual solution, but nothing much now • use of scripting languages, can add some program control to the scene graph, but not full programming

  33. Model Traversal • The process of going through the model and generating the information to be displayed • this is part software and part hardware, look through the entire process • hardware parts have implications for how we build models and the graphics techniques used

  34. A Simple Model • A simplified model of the display process, explains hardware performance • [diagram: Model → Traverse → Geometry → Pixel → Screen]

  35. Traverse • Traverse the model, determine objects to be drawn, send to graphics hardware • usually combination software/hardware, depends on CPU and bus speed • early systems were hardware, didn’t scale well • many software techniques for culling models
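One software culling technique from the traversal stage can be sketched as a bounding-volume test; this simplified version only rejects objects behind the viewer (a stand-in for full view-frustum culling, with made-up scene data):

```python
# Sketch: a traversal-stage cull — skip objects whose bounding sphere
# lies entirely behind the eye, so they are never sent to the hardware.

def cull(objects, eye_z):
    """objects: list of (name, center_z, radius); keep those in front."""
    return [name for name, center_z, radius in objects
            if center_z + radius > eye_z]

scene = [("near_cube", 5.0, 1.0),
         ("behind_us", -3.0, 1.0)]
visible = cull(scene, eye_z=0.0)
```

Culling like this is why traversal is part software: the CPU can cheaply discard whole subtrees of the model before the geometry stage ever sees them.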

  36. Geometry • Geometrical computations on polygons: transformations and lighting • floating point intensive • divide polygons into fragments, screen-aligned trapezoids • time proportional to number of polygons and vertices
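The per-vertex work of the geometry stage can be sketched as a 4×4 matrix applied to homogeneous vertices; this is a pure-Python stand-in for what the hardware does:

```python
# Sketch: the geometry stage's transformation step — a 4x4 matrix
# (row-major) applied to a vertex in homogeneous coordinates.

def transform(matrix, vertex):
    x, y, z = vertex
    v = (x, y, z, 1.0)                       # homogeneous coordinate w = 1
    out = [sum(matrix[r][c] * v[c] for c in range(4)) for r in range(4)]
    return tuple(out[:3])

# A translation by (1, 2, 3) expressed as a 4x4 matrix.
T = [[1.0, 0.0, 0.0, 1.0],
     [0.0, 1.0, 0.0, 2.0],
     [0.0, 0.0, 1.0, 3.0],
     [0.0, 0.0, 0.0, 1.0]]
moved = transform(T, (0.0, 0.0, 0.0))
```

This is the floating-point-intensive part the slide mentions: every vertex of every polygon goes through at least one such multiply each frame, which is why time is proportional to vertex count.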

  37. Pixel • Fill fragments, colour interpolation, texture mapping, transparency, hidden surface • all the per pixel computations, time depends on number of pixels, also colour depth on low end displays • scalable operations, can add more processors for more speed

  38. Design Considerations • Any of the stages could block, depend on display mix • lots of small polygons cause traversal and geometry stages to block • large polygons cause pixel stage to block • can use buffers to reduce blocking • a careful balancing process

  39. Design Considerations • CPU/Image Generator trade-off • cheap boards just do pixel stage, use CPU for everything else: • scales with CPU speed • large polygons and texture mapping • moving geometry onto board increases performance, trend in low cost displays

  40. PC Hardware Evolution • Start with CPU doing most of the work • Graphics board: • image memory • fill and hidden surface • texture mapping • graphics speed determined by CPU, limited assistance from graphics card

  41. Graphics Card Memory • Memory used for three things: • image store • hidden surface (z buffer) • texture maps • texture can be stored in main memory with AGP, but this isn’t most efficient • better to have texture memory on board

  42. Image Memory • Amount depends on image size • double buffer, two copies of image memory • front buffer: image displayed on screen • back buffer: where the next image is constructed • can construct next image while the current image is displayed, better image quality and faster display
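The front/back buffer arrangement can be sketched with two plain arrays; swapping is just exchanging the two references, which is why the displayed image never appears half-drawn:

```python
# Sketch: double buffering — construct the next frame in the back
# buffer while the front buffer is on screen, then swap them.

class DoubleBuffer:
    def __init__(self, size):
        self.front = [0] * size   # image currently displayed
        self.back = [0] * size    # image under construction

    def swap(self):
        """Present the back buffer; reuse the old front buffer for drawing."""
        self.front, self.back = self.back, self.front

fb = DoubleBuffer(4)
fb.back[0] = 255   # draw the next frame off-screen
fb.swap()          # present it in one step
```

Real hardware swaps by changing which memory the display scans out, typically synchronized to the vertical retrace, but the logic is the same.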

  43. Z Buffer • Used for hidden surface removal • z buffer: one value for each pixel, distance from eye to object drawn at that pixel • when drawing a pixel, compare depth of pixel to z buffer • if closer draw pixel and update z buffer • otherwise, ignore the pixel
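The per-pixel depth test just described can be sketched directly, here for a single row of pixels with the z buffer initialized to infinity (meaning nothing drawn yet):

```python
# Sketch: the z-buffer hidden surface test — draw a pixel only if its
# depth is closer than what is already stored there.

def draw_pixel(color_buf, z_buf, x, depth, color):
    if depth < z_buf[x]:       # closer than the current contents
        z_buf[x] = depth       # update the stored depth
        color_buf[x] = color   # and draw the pixel
    # otherwise the pixel is hidden: ignore it

WIDTH = 4
color_buf = [None] * WIDTH
z_buf = [float("inf")] * WIDTH
draw_pixel(color_buf, z_buf, 0, depth=5.0, color="red")
draw_pixel(color_buf, z_buf, 0, depth=2.0, color="blue")    # closer, wins
draw_pixel(color_buf, z_buf, 0, depth=9.0, color="green")   # hidden, ignored
```

Note that polygons can be drawn in any order and the nearest surface still ends up on screen, which is exactly what makes the z buffer attractive for hardware.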

  44. Graphics Acceleration • Next step: move pixel operations to graphics card • fill and z buffer 3D triangles • add smooth shading and texture mapping • CPU does traversal and geometry processing

  45. Graphics Acceleration • Next step: move geometry processing to graphics card • CPU traverses model, send graphics primitives to display card • all transformations and lighting done on graphics card • less dependence on CPU

  46. Current Trends • Pixel processing (Geforce 2): a program that processes each pixel, control lighting and other effects • support for multiple textures, etc • Vertex processing (Geforce 3): a program processes each vertex, can change geometry at display time • real-time deformations and IKA

  47. Current Trends • Move to programming all aspects of the graphics card (3DLabs VP series) • Also making programming more sophisticated, closer to CPU • Floating point textures and image memory (ATI and 3DLabs VP series) • Higher dynamic range -> better image quality, better for programming

  48. Input Processing • Users need to interact with the environment • they have a set of input devices, have position and orientation information • need to translate this into their intentions • Interaction Technique (IT): basic unit of interaction, converts user input into something the application understands
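A concrete IT can be sketched as a function from raw device state to an application-level intention; here a hypothetical "grab the nearest object within reach" selection technique, with made-up thresholds and scene data:

```python
# Sketch of an interaction technique (IT): convert wand position plus a
# button press into a "select this object" intention.

def select_object(wand_pos, button_pressed, objects, reach=1.0):
    """objects: dict of name -> (x, y, z); return selected name or None."""
    if not button_pressed:
        return None                      # no intention expressed yet

    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    # Candidates are objects within arm's reach of the wand.
    candidates = [(dist(wand_pos, p), name)
                  for name, p in objects.items()
                  if dist(wand_pos, p) <= reach]
    return min(candidates)[1] if candidates else None

objs = {"cube": (0.0, 0.0, 0.0), "sphere": (5.0, 0.0, 0.0)}
picked = select_object((0.2, 0.0, 0.0), True, objs)
```

The point of the IT abstraction is that the application only sees "the user selected the cube"; the thresholds, distance tests, and device details stay inside the technique.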

  49. Input Processing • Each IT addresses a particular interaction task, something that the user wants to do • look at interaction tasks first, then talk a little bit about ITs for them • interaction tasks divide into two groups: • application independent: required by many different applications • application dependent

  50. Interaction Tasks • Mainly look at application independent interaction tasks • the main ones are: • navigation • selection • manipulation • combination
