We describe a method and system that integrate human-computer interaction with reactive planning to operate a telerobot for use as an assistive device. The system is intended to operate in an unstructured environment rather than in a structured workcell, allowing the user considerable freedom and flexibility in control and ease of operation. Our approach rests on the assumption that while the user's world is unstructured, the objects within it are reasonably predictable. We reflect this by providing a means of determining a superquadric shape representation of the scene, together with an object-oriented knowledge base and reactive planner that superimpose information about common objects in the world. A multimodal user interface interprets deictic gesture and speech inputs with the objective of identifying the portion of the scene that is of interest to the user. The multimodal interface performs a critical disambiguation function by binding the spoken words to a locus in the physical workspace. The spoken input also obviates the need for general-purpose object recognition: 3-D shape information is instead augmented by the user's spoken words, which may also invoke the appropriate inheritance of object properties through the adopted hierarchical object-oriented representation scheme. The underlying planning mechanism yields a reactive, intelligent, and "instructible" telerobot. We thus describe an approach to an intelligent assistive telerobotic system (MUSIIC) for unstructured environments: speech-deictic gesture control integrated with a knowledge-driven reactive planner and a stereo-vision system.
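For concreteness, superquadrics admit a compact implicit description. A standard inside-outside function (following Barr's formulation; the abstract itself does not commit to a particular parameterization) is

$$F(x, y, z) = \left[\left(\frac{x}{a_1}\right)^{2/\varepsilon_2} + \left(\frac{y}{a_2}\right)^{2/\varepsilon_2}\right]^{\varepsilon_2/\varepsilon_1} + \left(\frac{z}{a_3}\right)^{2/\varepsilon_1}$$

where $a_1, a_2, a_3$ are the extents along the principal axes and $\varepsilon_1, \varepsilon_2$ control the squareness of the cross-sections; $F = 1$ holds on the surface, $F < 1$ inside, and $F > 1$ outside, so the shape parameters (together with pose) can in principle be recovered from stereo range data by minimizing a fit error over them.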
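To make the inheritance mechanism concrete, the following Python sketch shows one way a spoken noun could select a node in a hierarchical object-oriented knowledge base and pull down inherited manipulation properties. The class hierarchy, attribute names, and lexicon are illustrative assumptions, not the system's actual schema.

    # Hypothetical object hierarchy: a spoken noun selects a class, and
    # task-relevant properties (graspability, fragility, gripper approach)
    # are inherited down the hierarchy.

    class WorldObject:
        graspable = False
        fragile = False
        approach = "top"          # default approach direction for the gripper

    class Container(WorldObject):
        graspable = True
        upright = True            # keep contents level while moving

    class Cup(Container):
        fragile = True
        approach = "side"         # grasp the body or handle from the side

    # Assumed lexicon mapping recognized words to knowledge-base classes.
    LEXICON = {"cup": Cup, "bowl": Container, "block": WorldObject}

    def properties_for(word):
        """Map a spoken noun to the inherited properties of its class."""
        cls = LEXICON.get(word, WorldObject)
        return {"graspable": cls.graspable,
                "fragile": cls.fragile,
                "approach": cls.approach}

    print(properties_for("cup"))
    # {'graspable': True, 'fragile': True, 'approach': 'side'}

In this arrangement the spoken word, rather than a general-purpose recognizer, supplies the object's identity; the vision system need only supply its shape and pose.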
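Similarly, the disambiguation step that binds a spoken word to a locus in the workspace might be sketched as follows, assuming the deictic gesture has already been resolved to a 3-D point and the stereo-vision module has produced superquadric primitives with known centroids. The data layout, selection rule, and distance threshold are hypothetical.

    # Hypothetical binding step: choose the scene primitive whose centroid
    # lies nearest the pointed-at locus, and attach the spoken label to it.

    import math

    def bind(word, locus, superquadrics, max_dist=0.10):
        """Bind a spoken word to the scene primitive nearest the locus.

        word          -- recognized noun from the speech channel
        locus         -- (x, y, z) point, in metres, indicated by the gesture
        superquadrics -- list of dicts with a 'centroid' entry from stereo vision
        """
        best, best_d = None, max_dist
        for sq in superquadrics:
            d = math.dist(locus, sq["centroid"])
            if d < best_d:
                best, best_d = sq, d
        if best is not None:
            best["label"] = word      # the spoken word names the shape
        return best

    scene = [{"centroid": (0.42, 0.10, 0.05)},
             {"centroid": (0.15, 0.33, 0.04)}]
    target = bind("cup", (0.40, 0.12, 0.05), scene)
    print(target)   # the first primitive, now labeled "cup"

The labeled primitive, enriched with the inherited properties above, is then what the reactive planner manipulates.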