ARD Presentation, January 2012
BodyPointer
Team Members: Miri Peretz, Moshe Unger
Advisors: Prof. Yuval Elovici, Dr. Rami Puzis
Web Site: http://thebodypointer.wordpress.com/
Vision
Physical impairments affect people's abilities and make it hard for them to control a computer. Thus, muscle-computer interfaces (MuCI) and brain-computer interfaces (BCI) are an ideal solution for those whose physical impairments prevent them from using a computer.
BodyPointer will be based on the MindDesktop project, which uses the Emotiv EPOC headset to detect signals from facial movements and thoughts.
In addition, BodyPointer will interact with a body-sensor system called ProComp Infiniti, which covers a full range of objective physiological signals used in biofeedback. With these two devices, the system can measure bodily and facial signals and capture data in real time.
The system will cooperate with two input devices simultaneously: the ProComp Infiniti and the EPOC neuroheadset.
When the user performs an action in order to operate the computer, the devices send signals to the system core.
As the core receives signals from these devices, it maps each one, according to its type as defined in the user profile, to a Windows action.
The BodyPointer core will control the BodyPointer UI component, which will show the user the output: the result of the Windows action on the screen.
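A minimal sketch of how the core might perform this mapping, written in C++ against the Win32 API. The Signal and Action names and the profile table are illustrative assumptions, not the project's actual types; only SendInput is a real Win32 call, and the vendor SDK code that would invoke OnSignal is omitted.

```cpp
#include <windows.h>
#include <map>

// Hypothetical signal types the devices might report to the core.
enum class Signal { FacialGesture, MuscleTension, Thought };

// Hypothetical Windows actions a profile can bind a signal to.
enum class Action { LeftClick, RightClick };

// Simulate a left mouse click through the Win32 SendInput API.
static void DoLeftClick() {
    INPUT inputs[2] = {};
    inputs[0].type = INPUT_MOUSE;
    inputs[0].mi.dwFlags = MOUSEEVENTF_LEFTDOWN;
    inputs[1].type = INPUT_MOUSE;
    inputs[1].mi.dwFlags = MOUSEEVENTF_LEFTUP;
    SendInput(2, inputs, sizeof(INPUT));
}

// The user profile: which signal type triggers which Windows action.
static std::map<Signal, Action> g_profile = {
    { Signal::FacialGesture, Action::LeftClick },
};

// Called by the core whenever one of the devices reports a signal.
void OnSignal(Signal s) {
    auto it = g_profile.find(s);
    if (it == g_profile.end()) return;  // signal not bound in this profile
    if (it->second == Action::LeftClick) DoLeftClick();
    // ... other actions would be dispatched here
}
```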
BodyPointer UI will manage two main applications: a special pointing device and a special keyboard.
Special pointing device
The pointing device gives the user an alternative Windows desktop for executing PC applications.
The user can navigate this screen using three actions: "left", "backward", and "click".
Special keyboard
This keyboard is a special interface that contains the same keys as a physical keyboard, plus several additional buttons that improve the system's usability.
The user can operate this keyboard with three actions: "choose arrow direction", "apply arrow direction", and "click".
The user profile supports two kinds of definitions: defining body actions and defining system actions.
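One simple way to store such definitions is a plain-text profile that binds each body action to a system action; the sketch below assumes a hypothetical "body=system" line format and action names, not the project's actual schema.

```cpp
#include <fstream>
#include <map>
#include <sstream>
#include <string>

// Reads lines such as "raise_brow=click" into a lookup table that the
// core can consult when a signal arrives (names are hypothetical).
std::map<std::string, std::string> LoadProfile(const std::string& path) {
    std::map<std::string, std::string> bindings;
    std::ifstream in(path);
    std::string line;
    while (std::getline(in, line)) {
        std::istringstream fields(line);
        std::string body, system;
        if (std::getline(fields, body, '=') && std::getline(fields, system))
            bindings[body] = system;  // e.g. bindings["raise_brow"] == "click"
    }
    return bindings;
}
```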
The devices used by the system work only with the Windows OS, so our software will be supported only on Windows.
In order to handle signals, we need to use several APIs: the Thought Technology SDK, the Emotiv SDK, and the Win32 API.
Thus, this processing will be implemented in C++, a low-level language.
To keep the UI simple, we chose to work with Flex.
The software requires special hardware that was purchased by DT and Ben-Gurion University for our project.
The system will be tested with several types of users, including people with physical impairments; therefore, the actual data will come from them.