The primary objective of MUSIIC is the development of a multimodally controlled intelligent assistive robot for operation in an unstructured environment.
We describe a method and system that integrates human-computer interaction with reactive planning to operate a telerobot as an assistive device. The system is intended to operate in an unstructured environment, rather than in a structured workcell, allowing the user considerable freedom and flexibility in control and ease of operation. Our approach is based on the assumption that while the user's world is unstructured, the objects within it are reasonably predictable. We reflect this assumption by providing a means of determining a superquadric shape representation of the scene, together with an object-oriented knowledge base and reactive planner that superimpose information about common objects in the world. A multimodal user interface interprets deictic gesture and speech inputs to identify the objects in the workspace that are of interest to the user. The multimodal interface performs a critical disambiguation function by binding spoken words to locations in the physical workspace. The spoken input is also used to eliminate the need for general-purpose object recognition, via a hierarchical object-oriented representation scheme. The result is an instructible telerobot that integrates speech and deictic-gesture control with a knowledge-driven reactive planner and a stereo-vision system that acquires the workspace model.
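The disambiguation step described above, binding a spoken object label to the location indicated by a deictic gesture, can be sketched as follows. This is a minimal illustrative assumption of how such binding might work, not the actual MUSIIC implementation; the names `SceneObject` and `bind_reference`, the search radius, and the example scene are all hypothetical.

```python
from dataclasses import dataclass
from math import dist

@dataclass
class SceneObject:
    label: str       # category drawn from the object knowledge base
    position: tuple  # centroid recovered by the stereo-vision system

def bind_reference(spoken_label, gesture_point, scene, radius=0.15):
    """Return the scene object whose knowledge-base label matches the
    spoken word and whose centroid lies nearest the pointed-at location.
    The radius limits how far a gesture may be from a candidate object."""
    candidates = [obj for obj in scene
                  if obj.label == spoken_label
                  and dist(obj.position, gesture_point) <= radius]
    return min(candidates,
               key=lambda obj: dist(obj.position, gesture_point),
               default=None)

# Hypothetical workspace with two cups and a book (units in metres):
scene = [SceneObject("cup", (0.40, 0.10, 0.02)),
         SceneObject("cup", (0.90, 0.55, 0.02)),
         SceneObject("book", (0.42, 0.12, 0.00))]

# "Pick up that cup" while pointing near (0.45, 0.12): the spoken
# label alone is ambiguous (two cups), and the gesture alone could
# mean the cup or the book; together they select a unique object.
target = bind_reference("cup", (0.45, 0.12, 0.02), scene)
```

Here speech constrains *what kind* of object is meant while the gesture constrains *where*, so neither modality needs to resolve the reference by itself.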
A project of the Rehabilitation Robotics research area of the Applied Science and Engineering Laboratories, a joint program of the duPont Hospital for Children and the University of Delaware.
Last Updated: March 10, 1996 by Zunaid Kazi <firstname.lastname@example.org>