
HCI 530 : Seminar (HCI)


Presentation Transcript


  1. HCI 530 : Seminar (HCI) Interaction

  2. HCI 530: Seminar (HCI) • Input Devices • Mice • Keyboards • Scanners • Joysticks • Position Sensors • Special Devices

  3. HCI 530: Seminar (HCI) • Input Devices • Mice • Keyboards • Scanners • Joysticks • Position Sensors • Special Devices

  4. Interaction Input Devices Recent gains in the performance of 3D graphics hardware and rendering systems have not been matched by a corresponding improvement in our knowledge of how to interact with the virtual environments we create; there is therefore a need to examine interaction techniques further if we are to improve the overall quality of our interactive 3D systems. This lecture examines some of the interaction techniques which have been developed for object manipulation, navigation and application control in 3D virtual environments. The use of both mouse-based techniques and 3D input devices is considered, along with the role of feedback and some aspects of tools and widgets. Hand, C., A Survey of 3D Interaction Techniques, Computer Graphics Forum (the journal of the Eurographics Association), Volume 16, Number 5, pp. 269–281, 1997.

  5. Interaction Input Devices Although many interactive computer graphics systems are now able to render high-quality shaded 3D models in real time, there remains a problem of how to interact with virtual environments in a natural and error-free manner. This lecture begins to look at some of the many techniques which have been used to perform 3D tasks such as object manipulation and navigation.

  6. Interaction Input Devices Virtual environments may be presented to users via many different configurations of computer system. Even the simplest desktop set-up, with a standard monitor and mouse, is capable of presenting interactive 3D graphics to some extent. In domains such as CAD, visualisation or entertainment we commonly find the desktop system being extended through the use of 3D joysticks or sometimes stereoscopic displays, using shutter glasses for example.

  7. Interaction Input Devices More traditional virtual reality systems may use six degrees-of-freedom (6-DOF) tracking devices to measure the position and orientation of a pointing device and a head-mounted display (HMD), which allows the user’s viewpoint to change interactively as the head is moved. An alternative to the “encumbering” technology of the HMD is to use one or more projection displays to create a CAVE, possibly with tracking being performed using video cameras.

  8. Interaction Input Devices The 2D techniques of the “desktop metaphor”, such as pull-down menus and dialogue boxes, are inappropriate for a large class of applications, particularly where an HMD is worn (since the keyboard and mouse cannot be seen) or where a glove or 6D pointing device is being used. Although the system configuration used (especially the number of degrees of freedom of the input devices) does have an impact on the interaction techniques which are feasible, the main aim of this lecture is to provide an overview of the techniques we can implement in software to make use of whatever input devices are available, rather than discussing the best configurations for given tasks.

  9. Interaction Input Devices The design of the human-computer interface should be informed not only by a knowledge of the capabilities of the human sensorimotor system, but also by the way in which we conceptualise 3D tasks. By the time we reach adulthood we have perfected many manipulation and navigation tasks to the point where we can perform them without conscious attention. It is this level of naturalness and transparency which virtual environments seek to attain — the interface almost becomes invisible when we can manipulate the virtual objects as if they were really there. Thus the user can focus more on the task, becoming completely engaged with the virtual world and feeling as if they are interacting with the objects directly, with no intermediary. We need to discuss interaction techniques with regard to their ability to provide natural or direct interaction, as well as considering the role of feedback in general for each kind of interaction task.

  10. Interaction Implicit vs Explicit Knowledge Induced vs Deduced

  11. Interaction - Tasks So what things do we do in Virtual Worlds? What Tasks?

  12. Interaction - Tasks • So what things do we do in Virtual Worlds? • Most of these domains have at least three tasks in common: • object manipulation • viewpoint manipulation • application control.

  13. Interaction - Tasks • So what things do we do in Virtual Worlds? • Most of these domains have at least three tasks in common: • object manipulation • viewpoint manipulation • application control. • The term “viewpoint manipulation” is used here rather than “viewpoint movement” to avoid excluding the control of parameters such as Field of View and Zoom Factor. • In general all three of these sub-tasks will be performed as part of a larger task. • The relationship between object manipulation and viewpoint manipulation is an interesting one, since some systems treat the latter as the manipulation of an object which represents the camera or a “virtual eye”. • Similarly, application control in some systems is performed by manipulating objects (tools or widgets). However, for now we will treat the three tasks separately.

  14. Interaction - Tasks Object Manipulation A typical task performed in a 3D virtual environment will include the direct manipulation of graphical objects in that environment: What Functions?

  15. Interaction - Tasks Object Manipulation A typical task performed in a 3D virtual environment will include the direct manipulation of graphical objects in that environment: selecting,

  16. Interaction - Tasks Object Manipulation A typical task performed in a 3D virtual environment will include the direct manipulation of graphical objects in that environment: selecting, scaling, rotating, translating,

  17. Interaction - Tasks Object Manipulation A typical task performed in a 3D virtual environment will include the direct manipulation of graphical objects in that environment: selecting, scaling, rotating, translating, creating, deleting, editing and so on. Some of these techniques correspond directly to actions we perform in the real world (e.g. translation, rotation), while others would be impossible in real life but are required for the sake of working on the computer-based application (e.g. scaling, deletion). We can identify two distinct phases in the development of 3D interaction techniques: the evolution of techniques based on the use of the 2D mouse, and the new ideas generated by the introduction of true 3D input devices.
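
Most of the manipulations listed above reduce, in software, to composing homogeneous transformation matrices and applying them to an object's vertices. Below is a minimal sketch of that idea in Python with NumPy; the helper names and the column-vector convention are illustrative assumptions, not something prescribed by the lecture.

```python
import numpy as np

def translate(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scale(sx, sy, sz):
    """4x4 homogeneous scaling matrix."""
    return np.diag([sx, sy, sz, 1.0])

def rotate_z(theta):
    """4x4 rotation about the Z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    m = np.eye(4)
    m[:2, :2] = [[c, -s], [s, c]]
    return m

# Compose right-to-left: scale first, then rotate, then translate.
transform = translate(1, 0, 0) @ rotate_z(np.pi / 2) @ scale(2, 2, 2)
vertex = np.array([1.0, 0.0, 0.0, 1.0])   # homogeneous point
print(transform @ vertex)                  # the manipulated vertex, ~(1, 2, 0)
```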

  18. Interaction - Tasks Viewpoint Manipulation Viewpoint manipulation encompasses the tasks of navigating in the virtual environment by movement of the point-of-view (i.e. the camera or “virtual eye”), as well as controlling the viewing parameters such as Zoom Factor and Field of View. Unfortunately it seems that the design of a viewpoint manipulation technique is often seen as relatively unimportant, and typically little thought is allocated to implementing anything other than a “flying metaphor”.

  19. Interaction - Tasks Viewpoint Manipulation Viewpoint manipulation encompasses the tasks of navigating in the virtual environment by movement of the point-of-view (i.e. the camera or “virtual eye”), as well as controlling the viewing parameters such as Zoom Factor and Field of View. Unfortunately it seems that the design of a viewpoint manipulation technique is often seen as relatively unimportant, and typically little thought is allocated to implementing anything other than a “flying metaphor”. “nobody walks in VR — they all fly” – Blanchard, C., VPL Unconstrained 3D flying is an easy technique to program but is not a natural way of moving around (except perhaps for trained pilots) — if naturalness is one of our aims then alternative techniques must be found. For example, Fellner and Jucknath used a ray-casting technique to intelligently select an appropriate navigation method (walk, fly, fall or step) based on the distance to the closest obstacle, along the lines of the sketch below.
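
A minimal sketch of that style of distance-based mode selection, assuming a ray cast has already found the nearest obstacle. The thresholds, the RayHit fields, and the decision order are illustrative assumptions rather than the published technique.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class NavMode(Enum):
    WALK = auto()   # move along the ground
    STEP = auto()   # climb a small obstacle
    FLY = auto()    # unconstrained flight
    FALL = auto()   # nothing underneath: drop

@dataclass
class RayHit:
    distance: float       # distance to the closest obstacle along the ray
    below_viewer: bool    # True if the hit lies beneath the viewpoint

def select_nav_mode(hit: Optional[RayHit]) -> NavMode:
    """Choose a navigation mode from the result of a ray cast."""
    if hit is None:
        return NavMode.FALL                        # no surface found at all
    if hit.below_viewer and hit.distance < 2.0:
        return NavMode.WALK                        # ground is close underfoot
    if not hit.below_viewer and hit.distance < 0.5:
        return NavMode.STEP                        # low obstacle directly ahead
    return NavMode.FLY                             # open space: fall back to flying

# e.g. ground 1.7 units below the viewpoint -> WALK
print(select_nav_mode(RayHit(distance=1.7, below_viewer=True)))
```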

  20. Interaction - Tasks Application Control The term Application Control describes communication between user and system which is not part of the virtual environment. Changing operating parameters, dealing with error conditions, accessing on-line help and changing mode are all examples of this kind of task. This aspect of the 3D interface is the least reported of the three under discussion here. Perhaps one reason for this is that it is often possible to produce a usable system by carrying over the application control techniques used in 2D interfaces, such as buttons and menus, and implementing them on top of (or despite) the 3D environment.

  21. Interaction - Tasks Application Control One danger here is that by “converting” the task from 2D to 3D it will become much more difficult to perform. For example, it is not uncommon for a system to implement a 3D menu floating in space, so that to choose from the menu the user must make the 3D cursor intersect the appropriate menu choice. Not only does this change a one-dimensional task (choosing from a list) into a three-dimensional one, but it also increases the possibility of making errors — if the cursor is not at the correct depth then the menu isn’t even activated. The sketch below makes this concrete.
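
A minimal sketch of the hit test behind such a floating menu; the MenuItem layout, cursor coordinates, and z_tol parameter are illustrative assumptions. With z_tol = 0 the pick fails whenever the cursor is off the menu's depth, which is exactly the error mode described above; a generous tolerance collapses the task back towards the 1D choice it replaced.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MenuItem:
    label: str
    x0: float    # screen-plane extent of the item
    y0: float
    x1: float
    y1: float
    z: float     # depth at which the menu floats

def pick(items, cx, cy, cz, z_tol=0.0) -> Optional[MenuItem]:
    """Return the menu item under the 3D cursor (cx, cy, cz), or None."""
    for item in items:
        inside_2d = item.x0 <= cx <= item.x1 and item.y0 <= cy <= item.y1
        at_depth = abs(cz - item.z) <= z_tol   # the troublesome extra dimension
        if inside_2d and at_depth:
            return item
    return None

menu = [MenuItem("Open", 0, 0, 1, 1, z=5.0), MenuItem("Quit", 0, 1, 1, 2, z=5.0)]
print(pick(menu, 0.5, 0.5, 4.8))            # None: cursor 0.2 off in depth
print(pick(menu, 0.5, 0.5, 4.8, z_tol=0.5)) # Open: tolerant depth test succeeds
```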

  22. Interaction - Overview History / Overview http://news.zdnet.com/2422-13569_22-153347.html

  23. Interaction - Devices Devices Specific graphic input devices fall into two broad categories. Firstly there are devices for user interaction with graphics systems, e.g. mice, keyboards, trackballs. Secondly there are devices that produce graphics data from the real world, e.g. cameras, sensors, scanners.

  24. HCI 530: Seminar (HCI) • Input Devices • Mice • Keyboards • Scanners • Joysticks • Position Sensors • Special Devices

  25. Interaction - Devices Mice The mouse was invented by Douglas Engelbart in 1963, who at the time was working at the Stanford Research Institute, a think tank sponsored by Stanford University. The mouse was originally referred to as an “X-Y Position Indicator for a Display System”. Xerox later applied the mouse to its revolutionary Alto computer system in 1973. It first saw wider use with the Apple Lisa computer.

  26. Interaction - Devices

  27. Interaction - Devices

  28. Interaction - Devices

  29. Interaction - Devices Mice When the mouse hit the scene attached to the Apple Macintosh in 1984, it was an immediate success and helped to completely redefine the way we use computers. There was something about it that felt very natural. Compared to a graphics tablet, mice are extremely inexpensive and they take up very little desk space. In the PC world, mice took longer to gain ground, mainly because of a lack of support in the operating system. Once Windows 3.1 made Graphical User Interfaces (GUIs) a standard, the mouse became the PC-human interface of choice very quickly.

  30. Interaction - Devices Mice • Mechanical mice – require a flat surface; the distance and speed of the rollers inside the mouse determine how far the cursor moves on screen, depending on the software configuration. • Optical-mechanical mice – a hybrid in which a ball rolls wheels inside the mouse; each wheel carries a circle of holes or notches which interrupt the light of an LED as the wheel spins, and a sensor reads the resulting pulses. This design is much more accurate than the purely mechanical mouse. • Optical mice – older optical mice required a special mouse pad printed with a grid pattern; a sensor inside the mouse determined movement by reading the grid as the mouse passed over it, illuminated by an LED or sometimes a laser. Recent optical mice no longer share these disadvantages and can be used on almost any surface.

  31. Mice: How They Work The main goal of any mouse is to translate the motion of your hand into signals that the computer can use. Almost all optomechanical mice today do the translation using five components:

  32. Mice: How They Work A ball inside the mouse touches the desktop and rolls when the mouse moves.

  33. Mice: How They Work Two rollers inside the mouse touch the ball. One of the rollers is oriented so that it detects motion in the X direction, and the other is oriented 90 degrees to the first roller so it detects motion in the Y direction. When the ball rotates, one or both of these rollers rotate as well.

  34. Mice: How They Work The rollers each connect to a shaft, and the shaft spins a disk with holes in it. When a roller rolls, its shaft and disk spin.

  35. Mice: How They Work On either side of the disk there is an infrared LED and an infrared sensor. The holes in the disk break the beam of light coming from the LED so that the infrared sensor sees pulses of light. The rate of the pulsing is directly related to the speed of the mouse and the distance it travels.

  36. Mice: How They Work An on-board processor chip reads the pulses from the infrared sensors and turns them into binary data that the computer can understand. The chip sends the binary data to the computer through the mouse's cord.
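
A toy sketch of that final decoding step: counting encoder pulses per axis and turning them into the signed (dx, dy) movement report the host receives. The counts-per-unit value, direction handling, and report format are illustrative assumptions, not a real mouse protocol.

```python
def pulses_to_delta(pulse_count: int, direction: int, counts_per_unit: int = 4) -> int:
    """Convert raw encoder pulses on one axis into a signed movement delta.

    direction is +1 or -1, as sensed from the order of the pulse edges."""
    return direction * (pulse_count // counts_per_unit)

def make_report(x_pulses: int, x_dir: int, y_pulses: int, y_dir: int) -> tuple[int, int]:
    """Bundle both axes into the (dx, dy) pair sent up the mouse's cord."""
    dx = pulses_to_delta(x_pulses, x_dir)
    dy = pulses_to_delta(y_pulses, y_dir)
    return dx, dy

# e.g. 12 pulses rightwards and 4 pulses upwards -> (3, 1)
print(make_report(12, +1, 4, +1))
```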

  37. Mice: How They Work

  38. Mice: Optical Mice With advances in mouse technology, it appears that the venerable wheeled mouse is in danger of extinction. The now-preferred device for pointing and clicking is the optical mouse. Developed by Agilent Technologies and introduced to the world in late 1999, the optical mouse actually uses a tiny camera to take 1,500 pictures every second.

  39. Mice: Optical Mice Able to work on almost any surface, the mouse has a small, red light-emitting diode (LED) that bounces light off that surface onto a complementary metal-oxide semiconductor (CMOS) sensor. The CMOS sensor sends each image to a digital signal processor (DSP) for analysis. The DSP, operating at 18 MIPS (million instructions per second), is able to detect patterns in the images and see how those patterns have moved since the previous image.

  40. Mice: Optical Mice Based on the change in patterns over a sequence of images, the DSP determines how far the mouse has moved and sends the corresponding coordinates to the computer. The computer moves the cursor on the screen based on the coordinates received from the mouse. This happens hundreds of times each second, making the cursor appear to move very smoothly.
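
A toy sketch of the pattern-tracking idea just described: estimate the shift between two sensor frames by searching a small window of offsets for the lowest sum of squared differences. Real sensor firmware is far more sophisticated and heavily optimised; the window size and frame format here are illustrative assumptions.

```python
import numpy as np

def estimate_shift(prev: np.ndarray, curr: np.ndarray, max_shift: int = 3) -> tuple[int, int]:
    """Return the (dx, dy) offset that best explains curr as a shifted prev."""
    best, best_err = (0, 0), float("inf")
    h, w = prev.shape
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping regions of the two frames under this candidate shift.
            a = prev[max(0, -dy): h - max(0, dy), max(0, -dx): w - max(0, dx)]
            b = curr[max(0, dy): h - max(0, -dy), max(0, dx): w - max(0, -dx)]
            err = float(np.mean((a.astype(float) - b.astype(float)) ** 2))
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

# e.g. a surface image that slid one pixel right and one pixel down
rng = np.random.default_rng(0)
frame = rng.integers(0, 255, (18, 18))
prev, curr = frame[1:-1, 1:-1], frame[:-2, :-2]
print(estimate_shift(prev, curr))   # -> (1, 1)
```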

  41. Mice: Optical Mice Optical mice have several benefits over wheeled mice: • No moving parts means less wear and a lower chance of failure. • There's no way for dirt to get inside the mouse and interfere with the tracking sensors. • Increased tracking resolution means smoother response. • They don't require a special surface, such as a mouse pad. • The cameras incorporated into these mice do have certain limitations, however. • The main one is the type of surface the mouse is used on: glass, mirrors, and some 3D mouse pads can all cause problems.

  42. Interaction - Devices 3D – Mice http://www.3dconnexion.com/index.php?id=96 http://www.3dconnexion.com/

  43. Interaction - Devices 3D – Mice

  44. Interaction - Devices 3D – Mice http://www.gizmag.com/go/2066/ http://www.gizmag.com/go/7293/

  45. Interaction - Devices 3D – Mice http://www.youtube.com/watch?v=1WuH7ezv_Gs

  46. HCI 530: Seminar (HCI) • Input Devices • Mice • Keyboards • Scanners • Joysticks • Position Sensors • Special Devices

  47. Interaction - Devices Keyboards The computer keyboard is an interface on several levels, though the basic concept as initiated with the typewriter is easier to pigeonhole. It is a system for transforming literate thoughts into standard representations of letters: handwriting without the interpersonal variance and potential information loss. The typewriter started out as a system for producing legible, professional-quality one-off output by hand (as opposed to the typeset printing press). In this sense it was an interface between human and paper, and also a middleman in the recording of thoughts, symbols, and characters. Additionally, its typewritten output was a further interface between people: one's words represented unambiguously to another party minutes, days, or years later. As computers have grown in popularity, the interface has become more complex. Keyboards are now an interface between analog human thinking and digital computer operation, storage, and transmission. It's still a typewriter, but it outputs for the world instead of one person, and the audience has grown to include machines.

  48. Interaction - Devices Keyboards While the keyboard is a versatile human-to-computer interface, it is not the most versatile that exists. Mice, touchscreens, computer vision, and speech recognition all contribute to the "sensorium" of a computer, and each has its own benefits and drawbacks, and environments and audiences for which it is more or less relevant. Suffice it to say that the keyboard is a basic, multifunctional, and simple interface. Plug it into a computer capable of interpreting the signals sent by a keyboard and you have a programmable interface with shades of sense. For instance, one's typematic habits can be used fairly accurately as a form of biometric identification, as sketched below.
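
A minimal sketch of that keystroke-dynamics idea: represent a user by the mean dwell time (how long keys are held) and flight time (the gap between keys) of a typing sample, then identify by the nearest enrolled profile. The feature choice, event format, and nearest-profile rule are illustrative assumptions only; practical systems use richer features and proper classifiers.

```python
import math

def features(events):
    """events: list of (key, press_time, release_time) tuples in seconds."""
    dwell = [release - press for _, press, release in events]
    flight = [events[i + 1][1] - events[i][2] for i in range(len(events) - 1)]
    return (sum(dwell) / len(dwell), sum(flight) / max(1, len(flight)))

def identify(sample, profiles):
    """Return the enrolled user whose stored feature vector is closest."""
    f = features(sample)
    return min(profiles, key=lambda user: math.dist(f, profiles[user]))

profiles = {"alice": (0.11, 0.08), "bob": (0.19, 0.22)}   # enrolled mean features
sample = [("h", 0.00, 0.10), ("i", 0.18, 0.29)]           # (key, press, release)
print(identify(sample, profiles))                          # -> "alice"
```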

  49. Interaction - Devices Keyboards

  50. Interaction - Devices Keyboards
