
HCI 530 : Seminar (HCI)


Presentation Transcript


  1. HCI 530 : Seminar (HCI) Interaction

  2. HCI 530: Seminar (HCI) • Input Devices • Motion Capture • (Data Gloves) • Touch Screens

  3. Interaction Motion Capture Motion capture, motion tracking, or mocap are terms used to describe the process of recording movement and translating that movement onto a digital model. It is used in military, entertainment, sports, and medical applications. In filmmaking it refers to recording actions of human actors, and using that information to animate digital character models in 2D or 3D computer animation. When it includes face, fingers and captures subtle expressions, it is often referred to as performance capture.

  4. Interaction Motion Capture (Tintin)

  5. Interaction Motion Capture (Tintin)

  6. Interaction Motion Capture (Tintin)

  7. Interaction Motion Capture Motion capture is defined as "The creation of a 3D representation of a live performance." This is in contrast to animation created 'by hand' through a process known as keyframing. Motion capture (a.k.a. mocap) was once considered a fairly controversial tool for creating animation: in the early days, the effort required to 'clean up' motion capture data often took as long as creating the animation from scratch. Thanks to work by the manufacturers of motion capture systems as well as numerous software developers, motion capture has become a practical tool for generating animation. Software tools for working with motion-captured data have evolved to the point where animators can edit and blend takes from multiple capture sessions and mix them with keyframed animation, allowing fine control over the style and quality of the final output, for anything from realistic to 'cartoony' motion. (Understanding Motion Capture for Computer Animation and Video Games, Alberto Menache)

  8. Interaction • Motion Capture – Advantages • Motion capture offers several advantages over traditional computer animation of a 3D model: • Rapid, even real-time, results can be obtained. In entertainment applications this can reduce the costs of keyframe-based animation. For example: Hand Over • The amount of work does not vary with the complexity or length of the performance to the same degree as with traditional techniques. This allows many tests to be done with different styles or deliveries. • Complex movement and realistic physical interactions, such as secondary motions, weight and exchange of forces, can be recreated in a physically accurate manner. • The amount of animation data that can be produced within a given time is extremely large compared to traditional animation techniques. This contributes both to cost effectiveness and to meeting production deadlines. • The potential for free software and third-party solutions further reduces costs.

  9. Interaction • Motion Capture – Disadvantages (1) • Specific hardware and special programs are required to obtain and process the data. • The cost of the software, equipment and personnel required can be prohibitive for small productions. • The capture system may have specific requirements for the space it is operated in, depending on camera field of view or magnetic distortion. • When problems occur, it is often easier to reshoot the scene than to try to manipulate the data. Only a few systems allow real-time viewing of the data to decide whether a take needs to be redone. • The initial results are limited to what can be performed within the capture volume without extra editing of the data.

  10. Interaction • Motion Capture – Disadvantages (2) • Movement that does not follow the laws of physics generally cannot be captured. • Traditional animation techniques, such as added emphasis on anticipation and follow-through, secondary motion, or manipulating the shape of the character, as with squash-and-stretch animation, must be added later. • If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, oversized hands, these may intersect the character's body if the human performer is not careful with their physical motion.

  11. Interaction Motion Capture – Optical Optical systems use data captured from image sensors to triangulate the 3D position of a subject from two or more cameras calibrated to provide overlapping projections. Data acquisition is traditionally implemented using special markers attached to an actor; however, more recent systems can generate accurate data by tracking surface features identified dynamically for each particular subject. Tracking a larger number of performers or expanding the capture area is accomplished by adding more cameras. These systems produce data with three degrees of freedom for each marker, so rotational information must be inferred from the relative orientation of three or more markers; for instance, shoulder, elbow and wrist markers provide the angle of the elbow.
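As a minimal illustration of that last point, the sketch below (not from the slides) infers a joint angle from three reconstructed marker positions; the coordinates are invented, and a real pipeline would add filtering and explicit axis conventions.

    import numpy as np

    def joint_angle(parent, joint, child):
        """Infer a joint angle (degrees) from three 3-DOF marker positions.

        Optical systems report only position per marker, so the rotation at a
        joint (e.g. the elbow) is derived from the segment vectors formed by
        neighbouring markers (shoulder->elbow and wrist->elbow).
        """
        v1 = np.asarray(parent, dtype=float) - np.asarray(joint, dtype=float)
        v2 = np.asarray(child, dtype=float) - np.asarray(joint, dtype=float)
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

    # Invented marker positions (metres, capture-volume coordinates)
    shoulder = (0.00, 1.40, 0.00)
    elbow = (0.30, 1.40, 0.00)
    wrist = (0.30, 1.15, 0.10)
    print(f"elbow angle: {joint_angle(shoulder, elbow, wrist):.1f} degrees")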

  12. Interaction Motion Capture – Optical

  13. Interaction Motion Capture – Optical • Passive markers – (slide images: a dancer wearing a suit used in an optical motion capture system; several markers placed at specific points on an actor's face during facial optical motion capture) • Active marker – Active optical systems triangulate positions by illuminating one LED at a time very quickly, or multiple LEDs with software that identifies them by their relative positions. • Time-modulated active marker – Active marker systems can be refined further by strobing one marker on at a time, or by tracking multiple markers over time and modulating the amplitude or pulse width to provide a marker ID (a simple strobing sketch follows this slide). • Semi-passive imperceptible marker – The traditional approach based on high-speed cameras can be reversed: specially built multi-LED IR projectors optically encode the capture space. • Markerless – Emerging techniques and research in computer vision are leading to rapid development of markerless motion capture.
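As a toy illustration of the strobed active-marker idea (not part of the original slides), suppose one LED is lit per camera frame in a fixed, repeating order and that the bright point in each frame has already been detected; the marker ID then follows from the frame index. The function and data below are hypothetical.

    def label_strobed_detections(frames, num_markers):
        """Assign marker IDs when active markers are strobed one at a time.

        `frames` is a list of per-frame detections (one bright point per frame,
        or None if nothing was seen).  Because only one LED is lit per frame in
        a fixed, repeating order, the marker ID is the frame index modulo the
        marker count.  Returns {marker_id: [(frame_index, point), ...]}.
        """
        tracks = {i: [] for i in range(num_markers)}
        for frame_idx, point in enumerate(frames):
            if point is not None:
                tracks[frame_idx % num_markers].append((frame_idx, point))
        return tracks

    # Three hypothetical markers strobed in order; detections are (x, y) pixels
    frames = [(10, 12), (55, 40), (90, 33), (11, 13), None, (91, 34)]
    print(label_strobed_detections(frames, num_markers=3))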

  14. Interaction Motion Capture – Non-Optical • Inertial systems – Inertial motion capture technology is based on miniature inertial sensors, biomechanical models and sensor fusion algorithms (a simple fusion sketch follows this slide). • Mechanical motion – Mechanical motion capture systems directly track body joint angles and are often referred to as exoskeleton motion capture systems, because of the way the sensors are attached to the body. • Magnetic systems – Magnetic systems calculate position and orientation from the relative magnetic flux of three orthogonal coils on both the transmitter and each receiver.
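The "sensor fusion" mentioned for inertial systems can be pictured with a minimal complementary filter for a single tilt angle; real inertial suits run full 3D orientation filters per body segment, and the sample rate and values below are invented.

    def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
        """Fuse gyroscope and accelerometer estimates of one tilt angle.

        The integrated gyroscope rate gives smooth short-term motion; the
        accelerometer-derived angle corrects long-term drift.  Real inertial
        mocap suits run full 3D orientation filters per body segment, but the
        blending idea is the same.
        """
        angle = accel_angles[0]
        estimates = []
        for rate, acc_angle in zip(gyro_rates, accel_angles):
            angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
            estimates.append(angle)
        return estimates

    # Invented 100 Hz samples: gyro rate in deg/s, accelerometer tilt in deg
    gyro_rates = [0.0, 5.0, 10.0, 10.0, 5.0, 0.0]
    accel_angles = [0.0, 0.1, 0.2, 0.3, 0.35, 0.4]
    print(complementary_filter(gyro_rates, accel_angles, dt=0.01))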

  15. Interaction Motion Capture – Bullet Time Bullet time is a special effect referring to a digitally enhanced simulation of variable-speed photography (slow motion, time-lapse, etc.) used in films, broadcast advertisements, and video games. It is characterized both by its extreme transformation of time (slow enough to show normally imperceptible and unfilmable events, such as flying bullets) and of space (the camera angle, the audience's point of view, moves around the scene at normal speed while events are slowed).

  16. Interaction Motion Capture – Bullet Time The first movie to use the bullet time technique was Blade in 1998, where the bullets were computer-generated and digitally added. However, the term bullet time itself is a registered trademark of Warner Bros., the distributor of The Matrix; it was formerly a trademark of 3D Realms, producer of the Max Payne games. The effect is almost impossible to achieve with conventional slow motion, as the physical camera would have to move impossibly fast; the concept implies that only a "virtual camera," often illustrated within the confines of a computer-generated environment such as a virtual world or virtual reality, would be capable of "filming" bullet-time moments. Technical and historical variations of this effect have been referred to as time slicing, view morphing, slow-mo, temps mort and virtual cinematography.

  17. Interaction Motion Capture – Bullet Time The bullet time effect was originally achieved photographically by a set of still cameras surrounding the subject. These arrays are usually triggered all at once or sequentially. Single frames taken from each of the still cameras are then arranged and displayed consecutively to produce an orbiting viewpoint of an action frozen in time or shown in hyper-slow motion. This technique suggests the limitless perspectives and variable frame rates possible with a virtual camera. However, if the still-array process is done with real cameras, it is often limited to assigned paths.

  18. Interaction Motion Capture – Bullet Time In The Matrix, the camera path was pre-designed using computer-generated visualizations as a guide. Cameras were arranged, behind a green or blue screen, on a track and aligned through a laser targeting system, forming a complex curve through space. The cameras were then triggered at extremely close intervals, so the action continued to unfold, in extreme slow-motion, while the viewpoint moved. Additionally, the individual frames were scanned for computer processing. Using sophisticated interpolation software, extra frames could be inserted to slow down the action further and improve the fluidity of the movement (especially the frame rate of the images); frames could also be dropped to speed up the action. This approach provides greater flexibility than a purely photographic one. The same effect can also be produced using pure CGI, motion capture and universal capture. It is thought that the opening sequence from the late 1960s Speed Racer cartoons partially inspired the Wachowski Brothers to incorporate the bullet time effect into The Matrix.
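The frame-insertion idea can be sketched with a far simpler stand-in than the interpolation software described above: a plain linear cross-fade between consecutive frames. Production tools typically use motion-aware (e.g. optical-flow) interpolation instead, and the tiny arrays below are only for illustration.

    import numpy as np

    def insert_blended_frames(frames, n_extra):
        """Slow an image sequence by inserting n_extra cross-faded frames
        between each pair of originals (a stand-in for real interpolation)."""
        out = []
        for a, b in zip(frames[:-1], frames[1:]):
            out.append(a)
            for k in range(1, n_extra + 1):
                t = k / (n_extra + 1)
                out.append(((1 - t) * a + t * b).astype(a.dtype))
        out.append(frames[-1])
        return out

    # Two tiny grayscale "frames"; one blended frame is inserted between them
    f0 = np.zeros((2, 2), dtype=np.uint8)
    f1 = np.full((2, 2), 200, dtype=np.uint8)
    slowed = insert_blended_frames([f0, f1], n_extra=1)
    print(len(slowed), slowed[1])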

  19. Interaction Motion Capture http://www.metamotion.com/

  20. Interaction Motion Capture – Data Gloves A data glove is a glove-like input device for virtual reality environments. Various sensor technologies are used to capture physical data such as the bending of the fingers. Often a motion tracker, such as a magnetic or inertial tracking device, is attached to capture the global position and rotation of the glove. These movements are then interpreted by the software that accompanies the glove, so any one movement can mean any number of things. Gestures can then be categorized into useful information, such as recognizing sign language or other symbolic functions.
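To make "categorized into useful information" concrete, here is a small hypothetical sketch that maps one frame of normalized finger-bend readings to the nearest gesture template; the gesture names, template values and threshold are all invented, and real glove software adds per-user calibration and temporal smoothing before this step.

    import math

    # Hypothetical template poses: normalised bend values (0 = straight,
    # 1 = fully bent) for thumb, index, middle, ring and little fingers.
    GESTURE_TEMPLATES = {
        "open_hand": [0.0, 0.0, 0.0, 0.0, 0.0],
        "fist":      [0.9, 0.9, 0.9, 0.9, 0.9],
        "point":     [0.8, 0.1, 0.9, 0.9, 0.9],
        "thumbs_up": [0.0, 0.9, 0.9, 0.9, 0.9],
    }

    def classify_gesture(bend_readings, max_distance=0.6):
        """Map one frame of finger-bend readings to the nearest template.

        Returns the gesture name, or None if nothing is close enough.
        """
        best_name, best_dist = None, float("inf")
        for name, template in GESTURE_TEMPLATES.items():
            dist = math.dist(bend_readings, template)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= max_distance else None

    print(classify_gesture([0.75, 0.15, 0.85, 0.90, 0.88]))  # -> "point"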

  21. Interaction Motion Capture – Data Gloves Expensive high-end wired gloves can also provide haptic feedback, which is a simulation of the sense of touch. This allows a wired glove to also be used as an output device. Traditionally, wired gloves have only been available at a huge cost, with the finger bend sensors and the tracking device having to be bought separately. Wired gloves are often called "datagloves" or "cybergloves", but these two terms are trademarks, belonging to Sun Microsystems (which acquired the patent portfolio of VPL Research Inc. in February 1998) and Immersion Corporation (which acquired Virtual Technologies, Inc. and its patent portfolio in September 2000) respectively. An alternative to wired gloves is to use a camera and computer vision to track the 3D pose and trajectory of the hand, at the cost of tactile feedback.

  22. Interaction Motion Capture – Data Gloves

  23. Interaction Motion Capture – Data Gloves One of the first wired gloves available to home users was the Nintendo Power Glove, released in 1987 and designed as a gaming glove for the Nintendo Entertainment System. It had a crude tracker and finger bend sensors, plus buttons on the back. It was followed by the CyberGlove, created by Virtual Technologies, Inc. in 1990. In addition to the CyberGlove, Immersion Corp. also developed three other data glove products: the CyberTouch, which vibrates each individual finger of the glove when that finger touches an object in virtual reality; the CyberGrasp, which simulates squeezing and touching of solid as well as spongy objects; and the CyberForce device, which does all of the above and also measures the precise motion of the user's entire arm. In 2002, the P5 Glove was released; in normal applications it worked as a two-dimensional mouse, and a few computer games were specially adapted to provide "3D" support for it. Following the P5 Glove is the 5th Glove, a data glove and flexor strip kit (5th Glove DFK) sold by Fifth Dimension Technologies. The package uses flexible optical-bending sensing to track hand and arm movement.

  24. Interaction Motion Capture – Data Gloves

  25. Interaction Jaron Lanier
