
Re: Two GRASP talks Thursday, November 14, 12:00pm and 2:00pm




Thank you for forwarding me the GRASP talk announcement. I had not
come across Graefe's work in my prior searches, and I was especially
concerned that his "object-oriented vision" concept predates our own
object-oriented knowledge-based approach. I finally tracked down a
paper of his, and it appears that we are using the same words to
describe different approaches. While my object-oriented approach
refers to the standard (in computer science) notion of a hierarchical
structure, he uses the term to mean physical objects themselves.

His main research has focused on visual navigation of obstacles for
an intelligent robot vehicle. As far as I understand, his basic
premise is that the world is structured rather than being a
monolithic entity, so only a few objects in the environment affect
the control of a robot at any instant. His vision processing schema
therefore identifies objects on the road, categorizes them by
relevancy (whether or not they constitute obstacles), and builds a
knowledge base of those objects that may constitute obstacles for
navigation.
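
To pin the idea down for myself, here is a rough sketch in Python of
how I picture that processing loop. All of the names and thresholds
below are my own invention for illustration, not anything taken from
his paper:

    from dataclasses import dataclass

    @dataclass
    class SceneObject:
        label: str            # e.g. "vehicle", "lane marking"
        distance_m: float     # distance ahead along the planned path
        lateral_m: float      # lateral offset from the path centerline

    def is_obstacle(obj, corridor_halfwidth_m=1.5, horizon_m=50.0):
        """Relevancy test: only objects inside the driving corridor and
        within the lookahead horizon affect control at this instant."""
        return (abs(obj.lateral_m) < corridor_halfwidth_m
                and 0.0 < obj.distance_m < horizon_m)

    def vision_cycle(detections, knowledge_base):
        """One perception cycle: categorize the detected physical objects
        and retain only the potential obstacles in the knowledge base."""
        # e.g. vision_cycle([SceneObject("vehicle", 22.0, 0.4)], [])
        for obj in detections:
            if is_obstacle(obj):
                knowledge_base.append(obj)
        return knowledge_base
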
The bulk of their work (at the Institute of Measurement Science,
Federal Armed Forces University, Munich) has gone toward the
development of an obstacle detection vision system. While his work
does not adversely affect the novel parts of our research
(thankfully, since I am in dissertation mode), some of the ideas and
approaches might be useful for our own vision system. I have
forwarded the announcement to Shoupu and am making him a copy of the
paper I have access to.

His second talk may have greater importance for MUSIIC's vision
system. Currently, our vision system is static in space and needs
calibration. On a mobile wheelchair, however, some of these
simplifying assumptions are no longer valid, and his ideas may hold
some merit for MUSIIC. I intend to go to the GRASP Lab and, if
possible, talk to the people giving the presentations.

In message <199611121544.KAA10791@tesla.asel.udel.edu>, you wrote:
>
>----- Begin Included Message -----
>
>>From owner-robotics Mon Nov 11 14:26:26 1996
>From: cahn@grip.cis.upenn.edu
>Posted-Date: Mon, 11 Nov 1996 14:21:57 -0500
>To: graspees@grip.cis.upenn.edu, grasplunchers@grip.cis.upenn.edu,
>        cis-faculty@central.cis.upenn.edu
>Subject: Two GRASP talks Thursday, November 14, 12:00pm and 2:00pm
>Sender: owner-robotics
>Precedence: bulk
>Reply-To: robotics
>Content-Length: 4316
>X-Lines: 105
>
>
>
>Both talks take place in the large conference room (318C) of the GRASP Lab.
>The visitors are available in the afternoon; if you would like to meet with
>them, please send me email.
>
>-Ulf
>
>
>---------------------------------------------------------------------------
>
>
>12:00pm
>
>Object-Oriented Vision for a Behavior-Based Robot
>
>Rainer Bischoff, Volker Graefe, Klaus Peter Wershofen
>
>Institute of Measurement Science, Federal Armed Forces University Munich
>85577 Neubiberg, Germany; Phone: +49 89 6004 3589; Fax: +49 89 6004 3074
>e-mail: Rainer.Bischoff@UniBw-Muenchen.de
>
>
>Abstract:
>
>As one realization from the class of behavior-based robot architectures, a
>specific concept of situation-oriented behavior-based navigation was
>proposed by [Wershofen, Graefe 1992].  Its main characteristic is that the
>selection of the behaviors to be executed at each moment is based on
>continuous recognition and evaluation of the dynamically changing situation
>in which the robot finds itself.  An important prerequisite for such an
>approach is timely and comprehensive perception of the robot's dynamically
>changing environment.  Object-oriented vision, as proposed by [Graefe 1989]
>and successfully applied, e.g., in freeway traffic scenes [Graefe 1992], is
>a particularly well-suited sensing modality for robot control.
>
>Our work has concentrated on modeling the physical objects that are relevant
>for indoor navigation, i.e., walls, intersections of corridors, and
>landmarks.  In the interest of efficiency, these models include only those
>features necessary to allow the robot to reliably recognize different
>situations in real time.  According to the concept of object-oriented
>vision, recognizing such objects is largely reduced to a knowledge-based
>verification of objects or features that may be expected to be visible in
>the current situation.  The following results have been achieved:
>
>By using its vision system and a knowledge base in the form of an
>attributed topological map, the robot could orient itself and navigate
>autonomously in a known environment.
>
>In an unknown environment the robot was able to build, by means of
>supervised learning, an attributed topological map as a basis for
>subsequent autonomous navigation.
>
>The experiments could be performed both under unmodified artificial light
>and under natural light shining through the glass walls of the building.
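
[A note from me, not part of the announcement: I read the "attributed
topological map" as a graph of places whose nodes carry the visual
features the robot expects to verify there, so localization becomes
expectation checking rather than general scene analysis. A minimal
sketch under that assumption, with all names my own:]

    # Nodes are places (corridors, junctions); edges are traversable
    # connections.  Each node lists the visual attributes (landmarks)
    # that the vision system should expect to verify there.
    topo_map = {
        "corridor_A": {"neighbors": ["junction_1"],
                       "expected": {"door_17", "fire_extinguisher"}},
        "junction_1": {"neighbors": ["corridor_A", "corridor_B"],
                       "expected": {"left_corner", "overhead_sign"}},
        "corridor_B": {"neighbors": ["junction_1"],
                       "expected": {"door_23"}},
    }

    def localize(hypothesis, verified):
        """Expectation-based recognition: keep the current place hypothesis
        while its expected features verify; otherwise test the neighbors."""
        if topo_map[hypothesis]["expected"] & verified:
            return hypothesis
        for nb in topo_map[hypothesis]["neighbors"]:
            if topo_map[nb]["expected"] & verified:
                return nb
        return None   # lost: fall back to an exploratory behavior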
>
>
>---------------------------------------------------------------------------
>
>
>2:00pm
>
>Manipulator Control by Calibration-Free Stereo Vision
>
>Karl Vollmann and Minh Chinh Nguyen
>
>Institute of Measurement Science, Federal Armed Forces University Munich
>85577 Neubiberg, Germany; Phone: +49 89 6004 3343; Fax: +49 89 6004 3074
>e-mail: Karl.Vollmann@UniBW-Muenchen.de
>
>
>Abstract:
>
>Conventional stereo vision methods for grasping objects require repeated
>calibration of the manipulator and the vision system.  To avoid this
>protracted and thus expensive calibration procedure, [Graefe, Ta 1995]
>proposed an approach to robust, adaptive and calibration-free manipulator
>control.  In their implementation the objects to be grasped were limited to
>cylindrical objects with a vertical axis of symmetry.
>
>This paper will describe an extension of the previous method.  It is still
>based on the concept of "object- and behaviour-oriented stereo vision" and,
>contrary to conventional stereo vision methods, it uses an uncalibrated
>camera system and allows a direct transition from image coordinates to the
>motion control commands of a robot.
>
>The newer version is, however, able to manipulate elongated objects in
>arbitrary orientations, in addition to flat cylindrical objects.  An
>additional degree of freedom of the robot is used to accommodate the
>arbitrary object orientation.
>
>The following results were achieved in real-world experiments with an arm
>whose characteristics were completely unknown to the system and with
>completely uncalibrated cameras:
>
>An elongated object placed anywhere in the part of the robot's 3-D
>workspace observed by the cameras was located and grasped.
>
>Operation of the robot continued without degradation even after the viewing
>direction of the cameras was arbitrarily changed in a way unknown to the
>system.
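
[Again a note from me: the announcement does not say how the image-to-
motion mapping is obtained. One standard way to get calibration-free
control of this kind is to estimate an image Jacobian online from the
robot's own motions and servo directly on image-space error; the
Jacobian can be seeded by a few small exploratory joint motions. The
sketch below shows that idea, which is purely my assumption about the
mechanism, not necessarily their method:]

    import numpy as np

    def servo_step(J, q, err, gain=0.1):
        """Map image-space feature error directly to a joint-space motion
        through the current Jacobian estimate (no camera or arm model)."""
        dq = -gain * np.linalg.pinv(J) @ err
        return q + dq, dq

    def broyden_update(J, dq, dfeat, alpha=0.5):
        """Refine the Jacobian estimate from the image motion actually
        observed after the last joint motion (secant/Broyden update).
        Re-estimating like this also absorbs changes in camera viewing
        direction, which may explain the robustness result above."""
        denom = float(dq @ dq)
        if denom > 1e-12:
            J = J + alpha * np.outer(dfeat - J @ dq, dq) / denom
        return J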
>
>
>---------------------------------------------------------------------------
>
>
>----- End Included Message -----
>
 Regards,
 - Zunaid

---------------------------------------------------------------------
Zunaid Kazi                                        kazi@asel.udel.edu
AI & Robotics                         http://www.asel.udel.edu/~kazi/
                           http://www.asel.udel.edu/~kazi/bangladesh/
---------------------------------------------------------------------