One way to increase the flexibility of industrial robots in manipulation tasks is to integrate additional sensors into the control system. Cameras are an example of such sensors, and in recent years there has been increasing interest in vision-based control. However, most manipulation tasks cannot be solved by position control alone, because of the risk of excessive contact forces. It is therefore attractive to combine vision-based position control with force feedback. In this thesis, we present a method for combining direct force control and visual servoing in the presence of unknown planar surfaces. The control algorithm consists of a force feedback control loop with a vision-based reference trajectory as a feed-forward signal. The vision system is based on a constrained image-based visual servoing algorithm, using an explicit 3D reconstruction of the planar constraint surface. We show how calibration data computed by a simple but efficient camera calibration method can be combined with force and position data to improve the reconstruction and the reference trajectories. The chosen task involves force-controlled drawing on an unknown surface: the robot grasps a pen using visual servoing, and uses the pen to draw lines between a number of points on a whiteboard, while the force controller keeps the contact force constant during the drawing. The method is validated through experiments carried out on a 6-degree-of-freedom ABB Industrial Robot 2000.
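The abstract mentions an explicit 3D reconstruction of the planar constraint surface from force and position data. As a minimal illustration of the idea (not the thesis's actual estimator, whose plane parameterization p1, p2, p4 is defined in the full text), a plane can be fitted in the least-squares sense to a set of 3D contact points recorded while the pen is on the surface, using the SVD of the centered point cloud:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit n.x + d = 0 from an (N, 3) array of
    3-D contact points: the plane normal is the right singular vector
    of the centered point cloud with the smallest singular value."""
    points = np.asarray(points, dtype=float)
    centroid = points.mean(axis=0)
    # SVD of the centered points; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]          # direction of least variance = plane normal
    d = -normal @ centroid   # offset so that n.x + d = 0 on the plane
    return normal, d

# Hypothetical example: noisy contact points sampled from the plane z = 1.
rng = np.random.default_rng(0)
xy = rng.uniform(-0.5, 0.5, size=(200, 2))
z = 1.0 + rng.normal(scale=1e-3, size=200)
pts = np.column_stack([xy, z])
n, d = fit_plane(pts)
```

The recovered normal is defined only up to sign; in a hybrid force/vision controller of the kind described, this normal would define the force-controlled direction, with the vision-based trajectory acting in the surface tangent plane.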
17 Figures and Tables
Figure 2.4 Force controller structure.
Figure 3.1 Errors in estimated intrinsic parameters for camera 1, before (dashed) and after (solid line) the final optimization step.
Figure 3.2 Errors in estimated intrinsic parameters for camera 2, before (dashed) and after (solid line) the final optimization step.
Figure 3.3 Errors in Tc1c2, translation t and Euler angles θ, before (dashed) and after (solid line) the final optimization step.
Figure 3.4 Errors in Tc1b, translation t and Euler angles θ, before (dashed) and after (solid line) the final optimization step.
Figure 3.5 Errors in Ttn, translation t and Euler angles θ, before (dashed) and after (solid line) the final optimization step.
Figure 3.6 Image space trajectories, camera 1 and 2.
Figure 3.7 Image space feature positions and references, camera 1 and 2.
Figure 3.8 Joint space trajectories.
Figure 3.9 Cartesian position/orientation of the end-effector.
Figure 3.10 Pen trajectory, camera 1.
Figure 3.11 Pen trajectory, camera 2.
Figure 3.12 Pen trajectory, Cartesian space.
Figure 3.13 3DOF velocity screw.
Figure 3.14 Measured force F and reference Fr = 2 N.
Figure 3.15 Plane parameters p1, p2 and p4.
Figure B.1 Transformation between frames.