Image-Based Autodocking Without Calibration
University of Illinois
Pennsylvania State University
Oak Ridge National Laboratory
The calibration requirements for visual servoing can make it difficult to apply in many real-world situations. One approach to image-based visual servoing without calibration is to dynamically estimate the image Jacobian and use it as the basis for control. However, with the normal motion of a robot toward the goal, the estimate of the image Jacobian deteriorates over time. We propose the use of additional "exploratory motion" to considerably improve the estimation of the image Jacobian, and we study the role of such exploratory motion in a visual servoing task. Simulations and experiments with a 6-DOF robot are used to verify the practical feasibility of the proposed approach.
1 Introduction
Sensor-based control can play an important role in autonomous robotics. Computer vision, in particular, can help provide a flexible feedback mechanism for overcoming uncertainties. There has been growing interest in visual servo control in recent years, partly because of decreasing hardware costs and advances in computer vision.
Visual servo control can be classified, based on the feedback representation used, as either position based or image based. An image-based servoing system [2, 3] observes how differential changes in robot configuration space relate to differential changes in image feature space, and then uses this relationship to control the robot motion to achieve the goal. The matrix that captures the relationship between the differential changes in the robot joints and the image features is called the image Jacobian. Note that image features here refer to any measurable image parameters that can be used for control, for example, the position, size, distance, or surface area of objects in the image.
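When the feature-measurement process is available as a black box, one simple way to obtain such a matrix (a sketch for illustration, not the method of this paper; the callback name `measure_features` is hypothetical) is finite differencing, perturbing each joint in turn and recording the change in each image feature:

```python
def estimate_image_jacobian(measure_features, q, delta=1e-4):
    # Finite-difference estimate of the m x n image Jacobian:
    # perturb each joint in turn and observe the change in features.
    # `measure_features(q)` is a hypothetical callback returning the
    # list of m image features at joint configuration q.
    f0 = measure_features(q)
    m, n = len(f0), len(q)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        q_pert = list(q)
        q_pert[j] += delta
        f_pert = measure_features(q_pert)
        for i in range(m):
            J[i][j] = (f_pert[i] - f0[i]) / delta
    return J
```

Each perturbation requires an actual robot motion and a new image measurement, which is why estimating the Jacobian this way at every step is expensive in practice.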
If the robot and camera are completely calibrated, then the image Jacobian can be computed at each robot configuration and becomes a basis for visual servo control. However, the process of calibration can be tedious and error prone, and in some situations may be infeasible. Thus it is desirable to devise control techniques that avoid calibration. The image-based approach is appealing because, by selecting the right set of image features, it may be possible to carry out a task without calibration.
In this paper we follow the approach of dynamically estimating the image Jacobian at each step. The estimated image Jacobian then forms the basis of visual control. However, in following this approach, the estimate of the image Jacobian deteriorates because the update is only in the goal direction. To alleviate this, we introduce the idea of exploratory motion to improve the estimate of the image Jacobian, and we consider the issues and trade-offs involved in using exploratory motions. The study was completed by conducting experiments on a 6-DOF robot and simulations using a computer model of the same robot. The results establish the utility of exploratory motion for visual servoing without calibration.
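A common way to maintain such a dynamic estimate (a standard Broyden-style rank-one correction, sketched here for illustration; not necessarily the exact update rule used in this paper) is to correct the current Jacobian estimate by the discrepancy between the predicted and observed feature change after each step:

```python
def broyden_update(J, dq, df):
    # Broyden rank-one correction of an estimated image Jacobian:
    #   J <- J + (df - J*dq) * dq^T / (dq^T * dq)
    # J: m x n matrix as a list of rows, dq: joint step taken,
    # df: observed change in image features over that step.
    denom = sum(x * x for x in dq)
    if denom < 1e-12:              # negligible motion: no information
        return J
    m, n = len(J), len(dq)
    # residual between observed and predicted feature change
    resid = [df[i] - sum(J[i][j] * dq[j] for j in range(n))
             for i in range(m)]
    return [[J[i][j] + resid[i] * dq[j] / denom for j in range(n)]
            for i in range(m)]
```

A rank-one update only corrects the estimate along the direction dq, so motion purely toward the goal leaves the other directions of the Jacobian stale; an exploratory step simply feeds this update a motion direction other than the goal direction.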
2 Servoing Scheme
2.1 The image Jacobian
Assume a robot manipulator with n joints and n degrees of freedom. Assume that a camera is mounted on the end-effector of the robot and that the servoing task is defined in terms of m image features. Let q = [q_1, ..., q_n]^T be the n-dimensional vector that represents a point in the robot configuration space. Let r = [r_1, ..., r_p]^T be the p-dimensional vector that represents the position of the end-effector in a Cartesian coordinate system. Let f = [f_1, ..., f_m]^T be the m-dimensional vector that represents a point in image feature space. The relation between the joint velocity of the robot, \dot{q} = [\dot{q}_1, ..., \dot{q}_n]^T, and the corresponding velocity in task space, \dot{r} = [\dot{r}_1, ..., \dot{r}_p]^T, is captured in terms of the robot Jacobian, J, as

    \dot{r} = J \dot{q}, \quad J = \partial r / \partial q.
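As a concrete instance of \dot{r} = J \dot{q}, consider the robot Jacobian of a planar two-link arm (a standard textbook example with illustrative link lengths, not a robot from this paper):

```python
import math

def planar_2link_jacobian(q, l1=1.0, l2=1.0):
    # Robot Jacobian J = dr/dq for a planar 2-link arm with link
    # lengths l1, l2, mapping joint velocities [q1_dot, q2_dot] to
    # end-effector Cartesian velocity [x_dot, y_dot].
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]
```

Here n = p = 2, so J is square; in general J is p x n and need not be invertible.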
A change in the end-effector position results in a change in the image parameters. A perspective projection model can be used to capture this dependence. Thus the feature velocities \dot{f} = [\dot{f}_1, ..., \dot{f}_m]^T are related to the task-space velocities as follows:

    \dot{f} = J_r \dot{r},

where J_r is the m x p matrix of the local Jacobian at r.
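Chaining the two relations gives \dot{f} = J_r \dot{r} = J_r J \dot{q}, so the matrix mapping joint velocities directly to feature velocities is the m x n product of the two Jacobians. A minimal sketch of this composition (plain-Python helpers, for illustration only):

```python
def matmul(A, B):
    # Plain-Python matrix product, sufficient for the small
    # Jacobians in this example.
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def full_image_jacobian(Jr, J):
    # Chain rule: f_dot = Jr * r_dot = Jr * (J * q_dot), so the
    # combined image Jacobian is Jr (m x p) times J (p x n).
    return matmul(Jr, J)
```

It is this combined m x n matrix that the uncalibrated approach must estimate, since computing J_r and J separately would require the camera and robot calibration the method is trying to avoid.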
Sutanto, H.; Sharma, R. & Varma, V. Image based autodocking without calibration, article, March 1, 1997; Tennessee. (https://digital.library.unt.edu/ark:/67531/metadc677442/m1/4/: accessed April 21, 2019), University of North Texas Libraries, Digital Library, https://digital.library.unt.edu; crediting UNT Libraries Government Documents Department.