Multimodal Controlled Intelligent Telerobot for People with Disabilities

This paper reports on the current status of the Multimodal User Supervised Interface and Intelligent Control (MUSIIC) project, which is working toward an intelligent assistive telemanipulative system for people with motor disabilities. Our MUSIIC strategy overcomes the limitations of previous approaches by integrating a multimodal RUI (Robot User Interface) with a semi-autonomous reactive planner, allowing users with severe motor disabilities to manipulate objects in an unstructured domain. The multimodal user interface is a speech- and deictic (pointing) gesture-based control that guides the operation of a semi-autonomous planner controlling the assistive telerobot. MUSIIC uses a stereo vision system to determine the three-dimensional shape, pose, and color of objects and surfaces in the environment, together with an object-oriented knowledge base and planning system that superimposes information about common objects onto the three-dimensional world. This approach allows users to identify objects and tasks via a multimodal user interface that interprets their deictic gestures and restricted, natural-language-like speech input. The multimodal interface eliminates the need for general-purpose object recognition by binding the user's speech and gesture input to a locus in the domain of interest. The underlying knowledge-driven planner combines information obtained from the user, the stereo vision mechanism, and the knowledge bases to adapt previously learned plans to perform new tasks and to manipulate objects newly introduced into the workspace. The result is a flexible and intelligent telemanipulative system that functions as an assistive robot for people with motor disabilities.
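
To make the binding idea concrete, the following is a minimal illustrative sketch, not code from the MUSIIC system, of how a deictic gesture locus and a restricted spoken command might be resolved against a small object knowledge base. The class names, the gesture-to-3D-point resolution, and the distance threshold are all assumptions introduced here for illustration.

```python
"""Illustrative sketch (not MUSIIC code): binding a spoken command and a
deictic (pointing) gesture to an object locus in a small knowledge base."""

from dataclasses import dataclass
import math


@dataclass
class WorldObject:
    name: str        # label stored in the object-oriented knowledge base
    position: tuple  # (x, y, z) as estimated by a stereo vision system
    color: str


def nearest_object(objects, locus, max_dist=0.15):
    """Resolve a gesture locus (3-D point) to the closest known object,
    rejecting matches farther than an assumed threshold (in meters)."""
    def dist(obj):
        return math.dist(obj.position, locus)
    candidate = min(objects, key=dist, default=None)
    if candidate is not None and dist(candidate) <= max_dist:
        return candidate
    return None


def interpret(command, objects, locus):
    """Combine restricted speech (e.g. 'pick up that') with the deictic
    gesture to select a target, avoiding general-purpose object recognition."""
    target = nearest_object(objects, locus)
    if target is None:
        return "No object near the indicated locus."
    verb = command.split()[0].lower()  # e.g. 'pick', 'move', 'pour'
    return f"Plan task '{verb}' on {target.color} {target.name} at {target.position}"


if __name__ == "__main__":
    scene = [
        WorldObject("cup", (0.42, 0.10, 0.05), "red"),
        WorldObject("book", (0.10, 0.35, 0.02), "blue"),
    ]
    # Hypothetical gesture resolution: the pointing ray meets the table here.
    print(interpret("pick up that", scene, (0.40, 0.12, 0.05)))
```

In this sketch the speech input supplies only the task verb while the gesture supplies the object locus; a knowledge-driven planner would then adapt a stored plan for that verb to the selected object's shape and pose.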