You may have heard of virtual keyboards controlled by thought, brain-powered wheelchairs, and neuroprosthetic limbs. But powering these machines can be downright tiring, a fact that prevents the technology from being of much use to people with disabilities, among others. Professor José del R. Millán and his team at the Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland have a solution: engineer the system so that it learns about its user, allows for periods of rest, and even permits multitasking.
In a typical brain-computer interface (BCI) set-up, users can send one of three commands: left, right, or no-command. No-command is the static state between left and right and is necessary for a brain-powered wheelchair to continue going straight, for example, or to stay put in front of a specific target. But it turns out that no-command is very taxing to maintain and requires extreme concentration. After about an hour, most users are spent. Not much help if you need to maneuver that wheelchair through an airport.
In an ongoing study demonstrated by Millán and doctoral student Michele Tavella at the AAAS 2011 Annual Meeting in Washington, D.C., the scientists hook volunteers up to a BCI and ask them to read, speak, or read aloud while delivering as many left and right commands as possible or delivering a no-command. By using statistical analysis programmed by the scientists, Millán's BCI can distinguish between left and right commands and learn when each subject is sending one of these versus a no-command. In other words, the machine learns to read the subject's mental intention. The result is that users can mentally relax and also execute secondary tasks while controlling the BCI.
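The article does not describe the team's actual statistical method, but the idea of learning each user's command signatures can be illustrated with a minimal sketch: a Gaussian maximum-likelihood classifier that separates "left", "right", and "no-command" states from a single synthetic EEG-like feature. The feature values, class means, and classifier here are all hypothetical stand-ins, not the EPFL system.

```python
import math
import random
import statistics

random.seed(0)

# Hypothetical per-state means of a single EEG band-power feature.
MEANS = {"left": -1.0, "right": 1.0, "no-command": 0.0}
SIGMA = 0.3  # assumed within-state spread

def sample(state, n=200):
    """Draw n synthetic feature values for a given mental state."""
    return [random.gauss(MEANS[state], SIGMA) for _ in range(n)]

# "Training": estimate a mean and stdev per class from labelled samples,
# mimicking how a BCI calibrates on each individual user.
train = {state: sample(state) for state in MEANS}
params = {s: (statistics.mean(v), statistics.stdev(v))
          for s, v in train.items()}

def log_likelihood(x, mu, sd):
    # Gaussian log-likelihood, dropping constants shared by all classes.
    return -((x - mu) ** 2) / (2 * sd * sd) - math.log(sd)

def classify(x):
    """Pick the class whose fitted Gaussian assigns x the highest likelihood."""
    return max(params, key=lambda s: log_likelihood(x, *params[s]))
```

With well-separated class means, `classify(-1.0)` returns `"left"`, `classify(1.0)` returns `"right"`, and values near zero fall to `"no-command"`; a real system would use many channels and more robust models, but the calibrate-then-decode loop is the same shape.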
The so-called Shared Control approach to facilitating human-robot interactions employs image sensors and image processing to avoid obstacles. According to Millán, however, Shared Control isn't enough to let an operator rest or concentrate on more than one command.
Contact: Michael Mitchell
Ecole Polytechnique Fédérale de Lausanne