Solution to the Hand-Eye Calibration in the Manner of Absolute Orientation Problem

Cengiz DENİZ [email protected] Ford Otosan, Body Construction Engineering Department, Golcuk Plant, 41670, Golcuk, Kocaeli, Turkey
Mustafa ÇAKIR [email protected] University of Kocaeli, Electronics & Communication Engineering Department, Umuttepe, Kocaeli, Turkey

Abstract

Purpose – This study aims to develop a simple hand-eye calibration method that can be easily applied with different objective functions. Practical applications show that the method meets real expectations.

Design/methodology/approach – Absolute orientation, known as the determination of the rotation between two sampled sets of points, can be used to define the relationship between the sensor and the manipulator position. Data from three different postures are sufficient for this solution. The hand-eye calibration is solved with closed-form absolute orientation equations. Instead of processing all samples together, all possible solutions are generated and the one that best satisfies a given objective function is chosen.

Findings – The proposed method is more flexible than the methods presented in the literature so far because, before the final result is served, a selection can be made among the candidate solutions to obtain the answer that meets the desired optimization criteria. The mathematical error expression defined by the calibration equations may not be valid in practice, especially when systematic distortions are present. The simulations show that the solution which yields the least mathematical error may produce incorrect, incompatible results under practical demands.

Research limitations/implications – The results of the calibration performed with the proposed method are compared with the reference methods in the literature.
When back-projection reduction, which corresponds to point repeatability, is benchmarked, the proposed approach is the most successful of all the compared methods. With confidence in its robustness, it was decided to perform tooling-sensor calibrations with the recommended method in the Robotic Non-Destructive Testing (NDT) station in the Ford-OTOSAN Kocaeli Plant Body Shop Department.

Originality/value – In this study, a straightforward hand-eye calibration method based on a closed-form solution is presented. The proposed method is derived with quaternion algebra and gives more accurate and convenient results than the existing solutions. This solution has not been presented in the literature as a standalone hand-eye calibration method, although some researchers have hinted at related formulations. The proposed method goes through all minimal solution sets, and the final result is chosen after evaluating the solution set against an arbitrary objective. At this stage, outliers can optionally be excluded if more accuracy is desired.

Keywords Hand-eye calibration, AX=XB, Absolute Orientation, Registration.

1. Introduction

In most robotic processes, programming with the manual point-teaching method has lost its importance today. To program robots automatically instead, sensors that provide position information must be placed in the working space. Dense position information can be obtained from image processing systems instead of the single-point measurements obtained with ordinary sensors. The increasing trend in image processing systems has also increased the variety of robot applications. This is seen in many applications, primarily robotic pick-and-place and robotic non-destructive inspection (Bogue, 2011).
In (Connolly, 2008) it is stated that robots equipped with image processing devices will not only detect position information but will also take on active roles in artificial intelligence applications in the future. From a single point of view, the sensors may not capture all the points of interest in the working space. By changing the position and orientation of the sensor, measurements can be made in areas that are otherwise inaccessible or occluded. Position information obtained with reference to different postures must then be transferred to a common reference frame. The process of transferring the acquired data to a common coordinate system is known as registration. The different postures of the measurement sensors must be determined relative to each other so that this transfer can be performed: the rotations and translations between the different measurement stops must be found from the corresponding locations observed at the different postures. Registration is an important step both in obtaining an overall volume structure from three-dimensional sensor data and in combining images taken at different angles into a larger, wide-angle panoramic image. In the manufacturing industry, transferring the information from three-dimensional sensors to the robot reference frame is a necessary step for the dissemination and efficiency of robotic stations. Determining the positional relationship between the sensor and the robot end-effector is called hand-eye calibration. For this operation, the equation described in (1) must be solved. In robotics, $A_{4\times4}$, $B_{4\times4}$ and $X_{4\times4}$ in this equation are homogeneous coordinate transformation matrices. For postures $i$ and $j$, $A_{ij}$ is the relationship that defines the manipulator motion, and $B_{ij}$ is the relationship that defines the change of the fixed point detected in the sensor frame.
The starting point for hand-eye calibration is the determination of the rotation and translation between two corresponding data sets. This process, referred to as absolute orientation, is used to obtain the extrinsic camera parameters in camera calibration and to generate the $B_{ij}$ matrix in hand-eye calibration. Closed-form solutions are available with orthonormal matrix (Horn, et al., 1988) and quaternion (Horn, 1987) representations. In these publications, the scale, translation and rotation required for registration are clearly explained. The determination of the scale factor without calculating the rotation parameters, and the simplifications possible when the scale term is not needed or the data set has spatial features such as coplanarity, are also valuable contributions of those papers. In (Lorusso, et al., 1995), a comparison was made among algorithms for estimating the 3D rigid transformation, including the above-mentioned methods. Hand-eye calibration was first described in (Shiu and Ahmad, 1989), where it was shown that the equation can be solved if data from at least three different postures are known. (Tsai and Lenz, 1989) reduced the operational complexity by introducing significant simplifications to the previous method. The rotation part of equation (1) can be solved without considering the translation terms; from this observation, separable solutions have been proposed that solve the rotation matrix first and the translation afterwards. One of the first efficient methods was introduced in (Tsai, 1989), where the authors utilized the angle-axis notation. In (Chou and Kamel, 1991), the set of linear equations obtained for calibration with quaternion algebra was solved by singular value decomposition (SVD). In (Park and Martin, 1994), a simplified closed-form solution based on Lie algebra is presented using the properties of the rotation matrix. (Horaud and Dornaika, 1995) proposed a different rotation decomposition and solved the problem with the quaternion representation.
(Liang and Mao, 2008) presented a new system of linear equations, based on the Kronecker product, that reduces noise sensitivity. Simultaneous solutions, as well as separable solutions with different rotation representations, are common in the literature. (Lu and Chou, 1995) and (Daniilidis, 1999) proposed methods that solve rotation and translation at the same time, reducing the propagation of the rotation error into the translation solution. The basic methods summarized above are analyzed in (Shah, Eastman and Hong, 2012). Code examples for these methods are available at:
http://math.loyola.edu/~mili/Calibration/
https://github.com/christianwengert/calib_toolbox_addon
http://lazax.com/www.cs.columbia.edu/~laza/html/Stewart/matlab/
The codes at these addresses are used to compare the performance of the method we propose. Separable, simultaneous and iterative approaches to the solution of the homogeneous transformation matrix $X$ are the basic families of methods presented by researchers. Separable solutions split the rotation and translation terms and solve the rotational part first, then the translation. This approach provides a simple and fast solution. However, the rotation obtained in the first stage must be used when solving the translation; the error accumulated in the rotation solution therefore propagates into the position information. By weighting the two sub-problems, it has been shown that iterative methods can produce more accurate solutions; however, the initialization criteria then increase the complexity of the solution process. As noted at the beginning of (Condurache and Burlacu, 2016), convergence of iterative methods to the optimum is not guaranteed, because it depends on the choice of inputs and initial values.
(Khan, et al., 2011) and similar studies have shown that visual sensor data can be used independently of the manipulator kinematics, without calibration. Relating the sensor frame to the manipulator base frame, however, requires the calibration of both the robot and the sensor. (Wang, et al., 2015) pointed out that robot, hand-eye and camera calibrations can also be performed simultaneously using a stereo camera. Researchers continue to work on increasing the precision of hand-eye calibration. To increase accuracy, (Hu and Chang, 2012, 2013) recommend the use of line lasers and structured light together with, or instead of, calibration plates. The manipulator cannot be positioned at very different postures when it must keep the calibration plate in its field of view; Hu and Chang's method is also useful for situations where the camera cannot see the robot tip. In addition, the selection of the most consistent set of measurements was proposed in (Schmidt and Niemann, 2008). The selection of camera view angles to obtain the data set at specific stops for higher accuracy is addressed in (Motai and Kosaka, 2008). With these preferences the noisy data set is improved somewhat, but the noise caused by the manipulator is still effective. Proposals on how to balance the optimization between rotation and translation noise are presented in (Franzzek, 2013). (Pan, et al., 2014) is an example of work on correcting accuracy degraded by noise. They pointed out that noise and measurement errors strongly influence the accuracy of closed-form solutions, and they added a final rectification step to the calibration process, noting that the homogeneous transformation matrix $X$ calculated from noisy data takes incorrect values.
The rectification matrix can be calculated by comparing the actual values with the coordinate values computed from the erroneous calibration; accuracy is then improved by multiplying $X$ with this matrix. In the method presented in this article, the fundamental tool is absolute orientation. Without any new contribution to it, the formulation of (Horn, et al., 1988) is used both for the determination of the matrix $B_{ij}$ and of $X$. With the advantage of using a planar calibration board, when the number of reference points is reduced to three, the proposed method leads its alternatives in accuracy, simplicity and speed. With this idea, similarly to random sample consensus (RANSAC), it is recommended to produce a large number of candidate solutions from minimal samples of three postures. At the last stage, the most appropriate candidate is selected. The preferred criterion is to keep the re-projection error to a minimum, and the proposed method is the best solution for this criterion. With this criterion the rotation and translation closest to the ground truth may not be found, but the consistency desired in many industrial applications will be ensured. This view is supported by the analysis results: the comparative studies show that the proposed method gives the best results against the other methods in the literature for almost all criteria. The rest of the article is organized as follows. The methods for hand-eye calibration are elaborated with formulas in Section 2. In Section 3 the proposed method is introduced, and in the last section comparative results of analyses made with synthetic and real data are evaluated. Researchers interested in this subject can access the sample sets and codes used in this study at http://ehm.kocaeli.edu.tr/dersnotlari_data/?dir=mcakir/MAK_HAND.

2. Hand-Eye Calibration

Calibration is not a mandatory prerequisite for using data obtained in the sensor frame.
As in the literature (Khan et al., 2011), there are studies of visual sensor applications that are independent of the robot kinematics and need no calibration. In industrial applications where reliability is very important, however, the relation between the sensor frame and the manipulator flange frame is determined by solving equation (1):

$$AX = XB$$
$$\begin{bmatrix} R_A & t_A \\ 0 & 1 \end{bmatrix}\begin{bmatrix} R_X & t_X \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_X & t_X \\ 0 & 1 \end{bmatrix}\begin{bmatrix} R_B & t_B \\ 0 & 1 \end{bmatrix}$$
$$\begin{bmatrix} R_A R_X & R_A t_X + t_A \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} R_X R_B & R_X t_B + t_X \\ 0 & 1 \end{bmatrix} \tag{1}$$

Figure 1 Transformations from base through relative point.

In hand-eye calibration, the appropriate value of the homogeneous transformation $X$ in equation (1), which describes the sensor posture, is sought. The first solution to consider is the optimization $X = \min_X \left( \sum_{i=1}^{n} \| A_i X - X B_i \| \right)$, which can be completed easily; simultaneous approaches often solve this equation iteratively within a single cost function. The rotation and translation parts of equation (1) are

$$R_A R_X = R_X R_B, \qquad R_A t_X + t_A = R_X t_B + t_X \tag{2}$$

which is another way of treating the rotation independently by separating it. (Tsai and Lenz, 1989) rearranged the first part of (2) as $R_A = R_X R_B R_X^T$ and, representing the rotation matrices by their unit eigenvectors, showed that the difference of the vectors $\vec n_A, \vec n_B$ is orthogonal both to $\vec n_X$ and to their sum,

$$0 = (\vec n_A - \vec n_B) \cdot \vec n_X, \qquad 0 = (\vec n_A - \vec n_B) \cdot (\vec n_A + \vec n_B) \tag{3}$$

and completed the solution with

$$(\vec n_A + \vec n_B) \times \vec n = (\vec n_A - \vec n_B), \qquad \vec n = \vec n_X \tan(\theta_X) \tag{4}$$

Using the quaternion representation

$$Q = q_0 + q_1 \hat i + q_2 \hat j + q_3 \hat k = S + \vec V \tag{5}$$

(Pan, et al., 2014) developed a closed-form solution for $R_X$ by evaluating (3) and (4) in the same way:

$$Q_A * Q_X - Q_X * Q_B = 0 \tag{6}$$

$$\begin{pmatrix} S_A - S_B & -(\vec V_A - \vec V_B)^T \\ (\vec V_A - \vec V_B) & [(\vec V_A + \vec V_B)]_\times + (S_A - S_B) I_3 \end{pmatrix} Q_X = 0 \tag{7}$$

Equation (6) is the basis of Chou's method, which uses the quaternion expression instead of the matrix representation.
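The linear system (7) can be stacked over several posture pairs and solved for $Q_X$ by SVD, as in the Chou-Kamel line of methods. The following is a minimal sketch in Python, not the authors' code; `pair_matrix` and `solve_rotation` are hypothetical helper names, and quaternions are stored as `[w, x, y, z]`.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def qconj(q):
    """Quaternion conjugate."""
    return np.array([q[0], -q[1], -q[2], -q[3]])

def skew(v):
    """Cross-product matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def pair_matrix(qA, qB):
    """4x4 coefficient block of Q_A*Q_X - Q_X*Q_B = 0, i.e. eq. (7)."""
    sA, vA = qA[0], qA[1:]
    sB, vB = qB[0], qB[1:]
    M = np.zeros((4, 4))
    M[0, 0] = sA - sB
    M[0, 1:] = -(vA - vB)
    M[1:, 0] = vA - vB
    M[1:, 1:] = skew(vA + vB) + (sA - sB) * np.eye(3)
    return M

def solve_rotation(qA_list, qB_list):
    """Stack the pair matrices; the right singular vector belonging to the
    smallest singular value is the least-squares quaternion Q_X."""
    M = np.vstack([pair_matrix(a, b) for a, b in zip(qA_list, qB_list)])
    _, _, Vt = np.linalg.svd(M)
    qX = Vt[-1]
    return qX / np.linalg.norm(qX)
```

The recovered quaternion is defined up to sign, as usual for unit-quaternion rotations.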
(Malti and Barreto, 2013) found the $Q_X$ expression by solving, with SVD, equation (7) obtained by expanding the quaternion products in a similar way. The translation is then found from

$$(R_A - I)\, t_X = R_X t_B - t_A \tag{8}$$

which is the linear equation that almost all separable methods use.

2.1 Calibration Expressions based on Quaternion Representation

The first step is to write the forward kinematic model of the manipulator, before transferring the robot tip position, and any other position measured relative to the flange, to the defined reference coordinates. Before introducing the proposed method, it is useful to present the kinematic relations of the 6-axis robot manipulator on which we performed the tests.

$$Q = RtoQ(\vec N, \theta) = \cos\!\left(\tfrac{\theta}{2}\right) + \sin\!\left(\tfrac{\theta}{2}\right) \cdot \frac{\vec N}{\|\vec N\|} \tag{9}$$

$$\tilde P = rot(Q, P) = Q * P * \bar Q \tag{10}$$

The following notation is used: $D_i$, displacements (link lengths) between the coordinate centers; $N_i$, pivot axes pointing in the direction of rotation; $\theta_i$, joint angles. $D_0, N_0$ associate the robot base with the reference coordinate system, and $D_7, N_7$ are defined for the tool coordinate frame. Throughout the calculations, the base and reference coordinate systems are kept equal by taking $\theta_0 = 0$. In the rotation calculations, the quaternion form written in (5) is preferred.

$$Q_{(0)} = RtoQ(N_{(0)}, \theta_{(0)}), \qquad Q_{(i)} = Q_{(i-1)} * RtoQ(N_{(i)}, \theta_{(i)}) \tag{11}$$

$$P_{(0)} = rot(Q_{(0)}, D_{(0)}), \qquad P_{(i)} = P_{(i-1)} + rot(Q_{(i)}, D_{(i)}) \tag{12}$$

The composite transformations can be calculated with (11) after the quaternion representation is obtained from the angle-axis representation by (9). The rotation of the point $P$ by $Q$ is calculated by (10), where $\tilde P$ is the new position and $\bar Q$ is the conjugate of the quaternion. By operating the recursions (11) and (12), the position and orientation of any kinematic chain of interest are reached. Nearly all position measuring devices present measurements relative to a fixed reference point, which is the center of their sensors.
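The recursions (9)-(12) translate directly into code. The following Python sketch mirrors the functions named in the text (`RtoQ`, `rot`); the two-link test chain used to exercise it is a hypothetical example, not the 6-axis robot of the study.

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def RtoQ(N, theta):
    """Axis-angle to unit quaternion, eq. (9)."""
    n = np.asarray(N, float)
    n = n / np.linalg.norm(n)
    return np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * n))

def rot(Q, P):
    """Rotate point P by quaternion Q, eq. (10): Q * P * conj(Q)."""
    Pq = np.concatenate(([0.0], np.asarray(P, float)))
    Qc = np.array([Q[0], -Q[1], -Q[2], -Q[3]])
    return qmul(qmul(Q, Pq), Qc)[1:]

def forward_kinematics(D, N, theta):
    """Composite rotation (11) and position (12) along the kinematic chain."""
    Q = RtoQ(N[0], theta[0])
    P = rot(Q, D[0])
    for i in range(1, len(D)):
        Q = qmul(Q, RtoQ(N[i], theta[i]))
        P = P + rot(Q, D[i])
    return Q, P
```

For a planar two-link chain with unit links about the z axis, rotating the first joint by +90° and the second by -90° places the tip at (1, 1, 0) with identity orientation, which is a convenient sanity check.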
It is an important requirement in many industrial applications to fix such measurement devices on a moving structure, such as a manipulator, and to transfer their data to the reference frame. The coordinate system of the measurement device attached to the tool_0 frame is shown in Figure 1. The aim is to represent the position of a measured point relative to the base frame. The measurement setup presents $^{cam}P_n^j$ values relative to the camera reference frame: the lower index $n$ is the point number, the upper-right index $j$ is the posture at which the measurement is made, and the upper-left index $cam$ indicates the reference frame of the measurement.

$$^{base}P_n = {^{base}P^j_{tool\_0}} + rot({^{base}Q^j_{tool\_0}}, D_X) + rot({^{base}Q^j_{cam}}, {^{cam}P^j_n}) \tag{13}$$

The $^{base}P_n$ value is calculated from the measurement result $^{cam}P_n^j$ at posture $j$. In addition to the robot configuration information, $D_X$ and $^{base}Q^j_{cam}$ must be known for this calculation. The rotation of the measuring system is calculated by

$$^{base}Q^j_{cam} = {^{base}Q^j_{tool\_0}} * Q_X \tag{14}$$

$D_X = {^{tool\_0}P_{cam\_org}}$, shown in Figure 1, is the translation of the camera coordinate center from the robot flange, tool_0, and $Q_X = {^{tool\_0}Q_{cam}}$ is the rotation of the camera coordinates relative to the tool_0 coordinate system. The calculation of $D_X, Q_X$ is the task of the hand-eye calibration functions. $^{base}Q^j_{cam}$, the rotation of the camera frame relative to the base reference frame, is shown in Figure 1 and can also be obtained through equation (14). Here $Q^{ij}_{HES}$ is the rotation that must be applied to move from posture $i$ to posture $j$. When $Q_A = {^{base}\bar Q^i_{tool\_0}} * {^{base}Q^j_{tool\_0}}$ and $Q_B = Q^{ij}_{HES}$ are declared,

$$^{base}Q^j_{cam} = {^{base}Q^i_{cam}} * Q^{ij}_{HES} \;\Rightarrow\; {^{base}Q^j_{tool\_0}} * Q_X = {^{base}Q^i_{tool\_0}} * Q_X * Q^{ij}_{HES} \;\Rightarrow\; Q_A * Q_X = Q_X * Q_B \tag{15}$$

can be written. This is the rotational part of equation (1), widely known in the literature as $AX = XB$ and usually evaluated in matrix form. 2.2.
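Equations (13) and (14) are the workhorse that moves a camera measurement into the base frame. A minimal sketch under our notation (quaternions as `[w, x, y, z]`; `cam_point_to_base` is a hypothetical helper name):

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as [w, x, y, z]."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def rot(Q, P):
    """Rotate point P by quaternion Q: Q * P * conj(Q), eq. (10)."""
    Pq = np.concatenate(([0.0], np.asarray(P, float)))
    Qc = np.array([Q[0], -Q[1], -Q[2], -Q[3]])
    return qmul(qmul(Q, Pq), Qc)[1:]

def cam_point_to_base(P_tool0, Q_tool0, Q_X, D_X, P_cam):
    """Eq. (13)-(14): flange position, plus the hand-eye offset rotated by the
    flange orientation, plus the camera point rotated into the base frame."""
    Q_cam = qmul(Q_tool0, Q_X)                       # eq. (14)
    return (np.asarray(P_tool0, float)
            + rot(Q_tool0, D_X)
            + rot(Q_cam, P_cam))                     # eq. (13)
```

With identity rotations the expression collapses to a plain sum of the three translations, which makes the composition easy to verify by hand.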
Optimization Preferences and Metrics

$$\delta R_{3\times3} = R_A R_X - R_X R_B, \qquad \delta t_{3\times1} = (R_A t_X + t_A) - (R_X t_B + t_X)$$
$$E = AX - XB = \begin{Vmatrix} \delta R & \delta t \\ 0 & 0 \end{Vmatrix} \tag{16}$$

Almost all methods in the literature are designed to reduce the error term in (16). With $E_{m\times n}$ representing the difference between two data sets, the Frobenius norm (the Euclidean norm, also referred to as $L_2$), defined by $\|E\| = \sqrt{\sum_{i=1}^{m}\sum_{j=1}^{n} |e_{ij}|^2}$, is used not only for distances but for all kinds of numerical comparisons. Evaluating all elements of $E$ with the same weight, when its entries are expressed in different units, can lead to misleading conclusions. To define a posture, the position and orientation must be known with respect to the reference coordinate frame, and to evaluate two or more postures relative to each other, it is necessary to measure how close they are. The positional evaluation between postures $P_r, P_s$ is usually carried out with a distance metric,

$$d(t_{r:}, t_{s:}) = \left( \sum_{k=1}^{dim} |t_{rk} - t_{sk}|^p \right)^{1/p} \tag{17}$$

This function, known as the $L_p$ norm, can be used to control the weight of large outliers and the complexity by selecting $p = 1, 2, \ldots, \infty$. When orientation is the subject, choosing the metric is considerably harder. The rotation matrix and the unit quaternion are the preferred representations for robotic applications. In (Huynh, 2009), the resolution, linearity and processing complexity of rotation metrics are analyzed. The metrics

$$d_{geod.}(R_r, R_s) = \| \log(R_r^{-1} R_s) \|, \qquad d_{hyper.}(R_r, R_s) = \| \log(R_r) - \log(R_s) \|, \qquad d_{frob.}(R_r, R_s) = \| R_r - R_s \| \tag{18}$$

are based on the rotation matrix. When $\theta = \cos^{-1}(|q_r \cdot q_s|)$ is defined,

$$d_1(q_r, q_s) = 1 - |q_r \cdot q_s|, \qquad d_2(q_r, q_s) = \min\{\|q_r - q_s\|, \|q_r + q_s\|\} = \sqrt{2(1 - |q_r \cdot q_s|)}, \qquad d_{geod.}(q_r, q_s) = 2\theta \tag{19}$$

are the metrics produced from the quaternion representation, defined on the intervals $d_1 \in [0,1]$, $d_2 \in [0, \sqrt 2]$, $d_{geod} \in [0, \pi]$.
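The quaternion metrics in (19) are each a one-liner; the Python sketch below (hypothetical function names, quaternions as `[w, x, y, z]`) also exercises the identity $d_2 = \sqrt{2\,d_1}$ stated in (19).

```python
import numpy as np

def d1(qr, qs):
    """1 - |q_r . q_s|, in [0, 1]; insensitive to the q vs -q ambiguity."""
    return 1.0 - abs(float(np.dot(qr, qs)))

def d2(qr, qs):
    """min(||q_r - q_s||, ||q_r + q_s||), in [0, sqrt(2)]."""
    qr, qs = np.asarray(qr, float), np.asarray(qs, float)
    return min(np.linalg.norm(qr - qs), np.linalg.norm(qr + qs))

def d_geod(qr, qs):
    """Geodesic distance 2*theta, theta = acos(|q_r . q_s|), in [0, pi]."""
    return 2.0 * np.arccos(min(1.0, abs(float(np.dot(qr, qs)))))
```

Note that all three treat $q$ and $-q$ as the same rotation, which is exactly the behavior wanted when comparing calibration candidates.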
The $d_{geod}$ above is equivalent to the geodesic metric given for the rotation matrix (Huynh, 2009). As in the iterative methods, a trade-off between rotation and translation errors can be sought with

$$J = \delta R + \lambda\, \delta t \tag{20}$$

In practice, the noise contributions in the matrices $A, B$ create a number of local minima in the search space. $A_{ij}$ is the matrix describing the relationship between different postures of the robot. In modern industrial robots, the measurements behind this matrix can be made very precisely; beyond measurement accuracy, however, the matrix is negatively affected by the limited rigidity of the supposedly rigid body, which can be regarded as a systematic error source. The matrix $B_{ij}$ defines the relationship between two different poses of the sensor relative to the reference point. It contains much more noise than the measurements made on the robot kinematics and can include grossly false values due to erroneous detection. Hand-eye calibration performance can be improved by detecting and discarding the outliers among these highly erroneous matrices. The advantage of having multiple sample points is appreciated when determining $B_{ij}$: the samples can be filtered, and if they are normally distributed, the calculations can be made with the lowest error. This does not apply to $A_{ij}$; for a typical 6-axis industrial robot, a faulty reading of a single joint angle makes the entire matrix useless. The metrics (18) and (19) may be misleading in the presence of noise: the result closest to the true $R_X, t_X$ may not rank first under these metrics, and the opposite can also occur. What must be decided is what kind of application is intended. If, as in many applications, the sensor measurement results at different postures must be consistent with each other, it is more appropriate to use projections instead of the metrics given above.
$$^{base}\bar P_n = \frac{1}{N} \sum_{i=1}^{N} {^{base}P^i_n} \tag{21}$$

$$d_{org}(R, t) = \frac{1}{N} \sum_{i=1}^{N} \left\| {^{base}\bar P_n} - {^{base}P^i_n} \right\| \tag{22}$$

$$d_{back}(R, t) = \frac{2}{N(N-1)} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \left\| {^{base}P^i_n} - {^{base}P^j_n} \right\| \tag{23}$$

$^{base}P_n$, the representation of the fixed point with reference to the base frame, can be calculated by forward kinematics using the $R_X, t_X$ values. The center (mean) of the points, $^{base}\bar P_n$, is calculated by averaging as in (21). The distance of each projected point from this center, as in (22), can then be regarded as a metric. Alternatively, the measurement made at a given posture can be projected to the other postures with the kinematic relations, and the resulting error can be measured with (23).

3. Proposed Method

The method we propose is based on expressing equation (1) as $Q_A = Q_X * Q_B * \bar Q_X$. Since the scalar parts of the quaternions $Q_A, Q_B$ are equal and the norms of their vector parts are the same, the expression $\vec V_A = Q_X * \vec V_B * \bar Q_X$ is the well-known quaternion rotation of a vector. From there, $Q_X$ can be found using the method presented in (Horn, 1987) as the rotation that brings the vector $\vec V_B$ onto the vector $\vec V_A$. In this article, it is recommended that, for each triple selected out of all postures, a candidate solution $Q_X$ be acquired first, and then the corresponding $D_X$. Among the $(Q_X, D_X)$ nominees, the one that best fits either (22) or (23) is approved as the final result.

3.1. Preparatory work

Carrying the calibration process out with all the noisy sensed points would inflate the processing load through the size of the matrices. Since the propagation of the error cannot be controlled in the subsequent steps, it is recommended that these points be refined first. This process starts by fitting a plane equation to the measurement points provided by the camera. Projecting each point to the nearest point of the fitted plane clearly reduces its deviation from the true value.
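The selection metrics (22) and (23) reduce to a few NumPy lines. A sketch with hypothetical function names, operating on the $N$ projections of one fixed point stacked as an $(N, 3)$ array:

```python
import numpy as np

def d_org(P):
    """Eq. (21)-(22): mean distance of the N projections of a fixed point
    from their centroid. P is an (N, 3) array."""
    P = np.asarray(P, float)
    c = P.mean(axis=0)                       # centroid, eq. (21)
    return float(np.linalg.norm(P - c, axis=1).mean())

def d_back(P):
    """Eq. (23): mean pairwise distance between the N projections."""
    P = np.asarray(P, float)
    N = len(P)
    total = 0.0
    for i in range(N - 1):
        total += np.linalg.norm(P[i + 1:] - P[i], axis=1).sum()
    return 2.0 * total / (N * (N - 1))
```

Both metrics vanish exactly when all projections coincide, which is the practical goal of the calibration.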
In the proposed method, the plane equation is first obtained from the points known to lie on the calibration plate. Three points are then selected on this plane: one at the center of the plane, obtained by averaging all the points, and the other two along the long and short edges of the calibration plate.

3.2. Obtaining the Rotation Definition

With three points, the simplest definition in (Horn, 1987) becomes applicable for finding the rotation, and coplanarity is guaranteed by construction. To remove the effect of the translation, the centered and normalized vectors

$$r_{l,i} = \frac{r_{l,i} - \bar r_l}{\| r_{l,i} - \bar r_l \|}, \qquad r_{r,i} = \frac{r_{r,i} - \bar r_r}{\| r_{r,i} - \bar r_r \|} \tag{24}$$

are used, where $r_{l,i}, r_{r,i}$ denote the $i$-th point in the left and right sets and $\bar r_l, \bar r_r$ their centroids (Horn, 1987). The rotation can be found using the corresponding points. The normal vectors of the planes defined by the three-point left and right clusters are calculated as $\vec n_l = r_{l,2} \times r_{l,1}$ and $\vec n_r = r_{r,2} \times r_{r,1}$. The angle between the normals,

$$\theta_a = \arccos(\vec n_l \cdot \vec n_r) = \arcsin(\| \vec n_l \times \vec n_r \|) \tag{25}$$

can then be calculated. After the intersection line $\vec n_a = \vec n_l \times \vec n_r$ of the planes has been detected, the planes can be overlapped with the rotation $Q_a = RtoQ(\vec n_a, \theta_a)$. The vectors $\tilde r_{l,i} = rot(Q_a, r_{l,i})$ and $r_{r,i}$ are now on the same plane, and finding the angle within the plane is straightforward. With

$$C = \sum_{i=1}^{3} (r_{r,i} \cdot \tilde r_{l,i}), \qquad S = \left( \sum_{i=1}^{3} (r_{r,i} \times \tilde r_{l,i}) \right) \cdot \vec n_r \tag{26}$$

defined, the angle of rotation in the plane is found from

$$\theta_p = \arccos\!\left( \pm C / \sqrt{S^2 + C^2} \right) = \arcsin\!\left( \pm S / \sqrt{S^2 + C^2} \right) \tag{27}$$

In this case the in-plane rotation is $Q_p = RtoQ(\vec n_r, \theta_p)$. The rotation relationship between $r_{l,i}$ and $r_{r,i}$, satisfying $r_{r,i} = rot(Q_{rl}, r_{l,i})$, is then

$$Q_{rl} = Q_p * Q_a \tag{28}$$

If needed, the translation is

$$t = \bar r_r - rot(Q_{rl}, \bar r_l) \tag{29}$$

This derivation and its details can be found in (Horn, 1987).
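The whole of Section 3.2 condenses into a short routine. The sketch below follows equations (24)-(29) under our own sign conventions: `arctan2` replaces the $\pm$ forms of (27), and the helper names (`three_point_orientation` etc.) are ours, not from any library.

```python
import numpy as np

def qmul(p, q):
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return np.array([pw*qw - px*qx - py*qy - pz*qz,
                     pw*qx + px*qw + py*qz - pz*qy,
                     pw*qy - px*qz + py*qw + pz*qx,
                     pw*qz + px*qy - py*qx + pz*qw])

def RtoQ(N, theta):
    n = np.asarray(N, float)
    n = n / np.linalg.norm(n)
    return np.concatenate(([np.cos(theta / 2.0)], np.sin(theta / 2.0) * n))

def rot(Q, P):
    Pq = np.concatenate(([0.0], np.asarray(P, float)))
    Qc = np.array([Q[0], -Q[1], -Q[2], -Q[3]])
    return qmul(qmul(Q, Pq), Qc)[1:]

def three_point_orientation(left, right):
    """Rotation Q and translation t with right_i ~ rot(Q, left_i) + t,
    following eqs. (24)-(29) for three coplanar corresponding points."""
    left, right = np.asarray(left, float), np.asarray(right, float)
    cl, cr = left.mean(axis=0), right.mean(axis=0)
    rl = left - cl
    rl /= np.linalg.norm(rl, axis=1, keepdims=True)          # eq. (24)
    rr = right - cr
    rr /= np.linalg.norm(rr, axis=1, keepdims=True)
    nl = np.cross(rl[1], rl[0]); nl /= np.linalg.norm(nl)    # plane normals
    nr = np.cross(rr[1], rr[0]); nr /= np.linalg.norm(nr)
    axis = np.cross(nl, nr)                                  # intersection line
    if np.linalg.norm(axis) < 1e-12:                         # planes already aligned
        Qa = np.array([1.0, 0.0, 0.0, 0.0])
    else:
        theta_a = np.arctan2(np.linalg.norm(axis), float(np.dot(nl, nr)))  # eq. (25)
        Qa = RtoQ(axis, theta_a)
    rlt = np.array([rot(Qa, v) for v in rl])                 # left set moved onto right plane
    C = sum(float(np.dot(rr[i], rlt[i])) for i in range(3))  # eq. (26)
    S = float(np.dot(sum(np.cross(rlt[i], rr[i]) for i in range(3)), nr))
    theta_p = np.arctan2(S, C)                               # eq. (27), sign-resolved
    Qrl = qmul(RtoQ(nr, theta_p), Qa)                        # eq. (28)
    t = cr - rot(Qrl, cl)                                    # eq. (29)
    return Qrl, t
```

For noise-free data the recovery is exact, since any rotation taking the left plane to the right plane factors into the plane-aligning rotation $Q_a$ followed by an in-plane rotation about $\vec n_r$.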
3.3. Solution of the AX=XB Equation

$$Q_A = Q_X * Q_B * \bar Q_X = Q_X * S_B * \bar Q_X + Q_X * \vec V_B * \bar Q_X = S_B + Q_X * \vec V_B * \bar Q_X \tag{30}$$

From equation (30) it is clear that $S_A = S_B$ and $\vec V_A = Q_X * \vec V_B * \bar Q_X$ must hold. With a function $Q_X = absROT(V_B, V_A)$ representing the absolute orientation solution, $Q_X$ can be obtained; then $^{base}Q^j_{cam}$ in (14) and $rot({^{base}Q^j_{cam}}, {^{cam}P^j_n})$ in (13) can be calculated. The quaternion $^{base}Q^j_{tool\_0}$ can be converted to the $3\times3$ rotation matrix $A_j$, so that $A_j \cdot D_X = rot({^{base}Q^j_{tool\_0}}, D_X)$ and equation (13) can be rearranged as

$$B_j = {^{base}P_n} - {^{base}P^j_{tool\_0}} - rot({^{base}Q^j_{cam}}, {^{cam}P^j_n}) = rot({^{base}Q^j_{tool\_0}}, D_X) = A_j \cdot D_X \tag{31}$$

However, because $^{base}P_n$ is not known, the definitions $A_{ij} = A_i - A_j$, $B_{ij} = B_i - B_j$ are made, and $D_X$ can easily be calculated from

$$B_{ij} = A_{ij} \cdot D_X \tag{32}$$

The flow of the presented method is given in Algorithm 1.

Algorithm 1. Hand-eye calibration from the outset
1. Compact sensed points: $P^i = {^{cam}\{P_0, P_1, P_2\}^i_n}$
2. Acquire the relation between postures, for $i \to 0,\ldots,n$, $j \to 0,\ldots,n$:
   $Q_A^{i,j} = ({^{base}\bar Q^i_{tool\_0}}) * ({^{base}Q^j_{tool\_0}})$
   $Q_B^{i,j} = absROT(P^i, P^j)$
3. Estimate $Q_X$, for $i \to 0,\ldots,(n-2)$, $j \to i+1,\ldots,(n-1)$, $k \to j+1,\ldots,n$:
   $V_A = \{\vec V_A^{i,j}, \vec V_A^{i,k}, \vec V_A^{j,k}\}$, $V_B = \{\vec V_B^{i,j}, \vec V_B^{i,k}, \vec V_B^{j,k}\}$
   $Q_X = absROT(V_B, V_A)$
4. Calculate $^{base}Q^i_{cam}$, $\hat P^i$, for $i \to 0,\ldots,n$:
   $^{base}Q^i_{cam} = ({^{base}Q^i_{tool\_0}}) * Q_X \to {^{base}R^i_{cam}}$
   $\hat P^i = {^{base}P^i_{tool\_0}} + rot({^{base}Q^i_{cam}}, P^i)$
5. Estimate $D_X$, for $i \to 0,\ldots,(n-2)$, $j \to i+1,\ldots,(n-1)$, $k \to j+1,\ldots,n$:
   $A = \{R^j - R^i;\; R^k - R^i;\; R^k - R^j\}$, $B = \{\hat P^i - \hat P^j;\; \hat P^i - \hat P^k;\; \hat P^j - \hat P^k\}$
   $D_X = A^{-1} \cdot B$
6. Select $(Q_X, D_X)$.

4. Practice

In this study, an 8×13 checkerboard was used as the reference plate. Artificial sensor data for a known ground-truth $Q_X, D_X$ was obtained using the joint-angle values of the 30 different postures at which we positioned the robot in the real sensor test.
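Step 5 of Algorithm 1 inverts a stacked system built from posture triples; with more postures the same relation (32) can be solved in one least-squares pass over all pairs. A sketch (hypothetical helper name; `Rs[i]` is the rotation matrix applied to $D_X$ at posture $i$ as in eq. (31), `Phats[i]` the projected fixed point from step 4):

```python
import numpy as np

def solve_DX(Rs, Phats):
    """Least-squares D_X from eq. (32) over all posture pairs.
    Since base_P_n = Phats[i] + Rs[i] @ D_X for every posture i,
    subtracting pairs gives (Rs[j] - Rs[i]) @ D_X = Phats[i] - Phats[j]."""
    A_rows, b_rows = [], []
    n = len(Rs)
    for i in range(n):
        for j in range(i + 1, n):
            A_rows.append(Rs[j] - Rs[i])
            b_rows.append(Phats[i] - Phats[j])
    A = np.vstack(A_rows)
    b = np.concatenate(b_rows)
    DX, *_ = np.linalg.lstsq(A, b, rcond=None)
    return DX
```

A single pair gives a rank-deficient system (the difference of two rotations annihilates their common rotation axis), so at least three postures with distinct rotation axes are needed, matching the triple structure of Algorithm 1.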
$\theta_{noise} = N(0, \sigma_{rob})$, normally distributed with zero mean, is added to the manipulator joint angles in the range $\sigma_{rob} = 0, \ldots, 0.4°$. For the stereo camera data, $\vec P_{noise} = N(0, \sigma_{cam}) \cdot \vec P_U / \|\vec P_U\|$ is added, with the magnitude governed by $\sigma_{cam} = 0, \ldots, 20\,mm$, where $\vec P_U = \{U[-1,1]\hat x,\; U[-1,1]\hat y,\; U[-1,1]\hat z\}$ follows a uniform distribution.

The labels used for the algorithms compared in the graphs of the following sections are as follows. TRUTH reflects the results obtained with the real, undistorted $Q_X, D_X$ values. MCR prioritizes the rotational error $\delta R_{3\times3}$; MCT prioritizes the translational error $\delta t_{3\times1}$; MCB uses the metric $d_{back}(R,t)$ in (23); MCO uses the metric $d_{org}(R,t)$ in (22); MCY also uses (22) but with the true, undistorted $^{base}P_n$ instead of the centroid $^{base}\bar P_n$. These are variants of the proposed method in which only the selection criterion at the sixth step of Algorithm 1 is changed. The MC label is based on the generalized least-squares optimization of (Horn, 1987), in which all postures, rather than three, are included; in MC there is thus no selection step after $Q_X = absROT(V_B, V_A)$. The practical tests were done with an ABB IRB 1400 series industrial robot and a PtGrey Bumblebee XB3 stereo camera.

Figure 2 Hand-eye Calibration Setup

4-1. Test with Synthetic Data

One of the difficulties of hand-eye calibration is that there is no ground-truth data with which to demonstrate success; optimization can only be performed between perceived and computed quantities. The use of synthetic data is therefore necessary when evaluating the performance of the proposed algorithms: performance at different noise levels can be tested with different sample counts. Figure 3 presents the characteristics of the synthetic data used in this study.

Figure 3 Noise penetration into different indexes

The goal in hand-eye calibration is to detect $Q_X, D_X$ as close as possible to the true rotation and translation of the sensor.
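The noise model above can be reproduced in a few lines. The sketch below (hypothetical function names, fixed seed) mirrors it: Gaussian perturbation of the joint angles, and a Gaussian-magnitude displacement along a uniformly drawn direction for the camera points.

```python
import numpy as np

rng = np.random.default_rng(7)

def noisy_joint_angles(theta, sigma_rob):
    """Add zero-mean normal noise of std sigma_rob (degrees) to each joint."""
    return np.asarray(theta, float) + rng.normal(0.0, sigma_rob, size=np.shape(theta))

def noisy_camera_point(P, sigma_cam):
    """Displace a 3-D point along the random direction P_U/||P_U||
    by a N(0, sigma_cam) magnitude (mm)."""
    u = rng.uniform(-1.0, 1.0, size=3)
    u = u / np.linalg.norm(u)
    return np.asarray(P, float) + rng.normal(0.0, sigma_cam) * u
```

Setting either sigma to zero recovers the noise-free data exactly, which matches the $\sigma = 0$ endpoints of the test ranges.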
However, when measurement and detection errors are added, adverse situations are encountered in practical applications. Neither the position information detected by the sensor nor the angle values used in the kinematic calculations may follow the expected zero-mean normal distribution. When systematic errors, such as rigid-body flexibility, lie behind the random error sources, the kinematic calculations are likely to be inconsistent even if the true $Q_X, D_X$ values are used. In Figure 3, the graphs on the left show the noise effect on the manipulator side, while the graphs on the right show the noise effect on the points whose positions are measured by the camera. There is no prospect of filtering the $Q^0, P^0$ data in any way: a statistically significant number of repeated measurements of the same posture would be required for filtering, which would mean taking hundreds of records. In modern industrial robots, the angle reading error is far lower than what is shown here; however, at some postures, and in manipulators carrying different tooling, there will be differences between the calculated and actual positions due to the limited rigidity of the body. On the right side, the average of a total of 104 evaluation points on the reference plate is presented under the ORG heading. Although the added synthetic noise is produced from a zero-mean normal distribution, the sample means differ from zero because, even with many points included, the samples are statistically incomplete. In this study, only one data set was recorded for each posture, and a total of 30 different postures were collected. If the graphs are examined carefully, it can be seen that a large number of samples lie well outside the standard deviation range.
Noise on the manipulator side is a problem for all the algorithms presented in the literature, as well as for the algorithm proposed in this article. These data, designed to stress-test algorithm performance, may therefore be misleading: the dominant noise source in the practical application comes from camera perception. The results obtained for σ_rob = 0°, σ_cam = 0,…,20 mm should also be evaluated separately in this respect. The Euclidean distance calculated by (17) is used as the performance index in Figure 4. The vertical axis represents different noise levels. For the graphs on the left side, the line labeled 20 indicates noisy data sets with σ_rob = 0.4°, σ_cam = 20 mm, and with σ_rob = 0.4°, σ_cam = 0 mm for the right ones. In each condition, only the numerical results for the three best values are written, and the numbers written for the rotations are multiplied by 1000. Separable hand-eye calibration methods seem to give good results for δR_3x3 in equation (16) because they are primarily optimized for rotation. The simultaneous methods presented in (Lu and Chou, 1995) and (Daniilidis, 1999) have the best performance against δt_3x1, as expected; this can be seen from the graphs under the title Compare_AXXB-T. Although the optimizations are performed for δR_3x3 and δt_3x1, the difference from the values calculated with the real, error-free Q_X, D_X, which serve as the reference for the synthetic data, is presented under the labels Compare_ORGQ and Compare_ORGD. In Figure 5, tests are presented with synthetic data that are closer to what is encountered in practice. The variants MC_R and MC_T are included to show that the proposed method can also serve the optimization criteria used in the literature if demanded. The fact that the solutions using the real, undistorted values fall behind the other methods on the errors δR_3x3, δt_3x1 leads to the questioning of these metrics. If performance evaluation is done as in Figure 4 and Figure 5, it is necessary to compare the solutions with Q_X, D_X.
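Since equations (16)-(17) are not reproduced in this excerpt, the following Python sketch uses common stand-ins for the two standard metrics: the Frobenius (chordal) distance for the rotational error δR_3x3 and the Euclidean norm for the translational error δt_3x1. Whether these are exactly the paper's formulas is an assumption:

```python
import numpy as np

def rotation_error(R_est, R_true):
    """Chordal rotation error ||R_est - R_true||_F, a common stand-in
    for the paper's delta_R metric (eq. (16) is not reproduced here)."""
    return np.linalg.norm(np.asarray(R_est) - np.asarray(R_true), ord="fro")

def translation_error(t_est, t_true):
    """Euclidean translation error ||t_est - t_true|| (cf. delta_t, eq. (17))."""
    return np.linalg.norm(np.asarray(t_est, float) - np.asarray(t_true, float))
```

For example, a 180° rotation about z against the identity gives a chordal error of sqrt(8), the maximum for a pure z-axis rotation.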
From the graphs under the Compare_ORGQ, Compare_ORGD labels, it is clear that MC_B, which selects with (23), is the most successful algorithm.
Figure 4 Comparison of the calibration algorithms with standard metrics
Figure 5 Comparison with standard metrics for practice-like synthetic data, σ_rob = 0°, σ_cam = 20 mm
4-2. Genuine Execution
The expected result of solving equation (1) is obtaining the rotation and translation values Q_X, D_X that form part of the homogeneous transformation matrix X_4x4. Although the hand-eye calibration is obtained by solving this equation, in practice the goal is to transfer the measurements made at different robot stops into the reference coordinates. For this, the measurements made at different positions for the same observation point must yield the same values without deviations. It can be seen in Figure 6 that, when working with noisy measurement values, this result cannot be achieved even with the real, error-free Q_X, D_X values. The charts labeled Compare_YPY show the difference between the real, undistorted ^{base}P_n and the calculated projections. The TRUTH tag was obtained by using the error-free Q_X, D_X values in the kinematic relations. In this case, the metrics and their suitability for the goal should be questioned. The practical goal in calibration is to ensure that the projections of the measurements of the same point coincide; the metric to be used is therefore the reduction of the distance calculated by equation (22) or (23). To evaluate the performance of the algorithm, the Compare_YPY, TRUTH and MC_Y calculations can be made, but the corresponding data are not available in real cases; these tests are synthetic constructions for performance evaluation only. The position of the reference plate is unknown at the beginning. Therefore, in practice it is inevitable to use (22) or (23), instead of the real, undistorted ^{base}P_n, to make the projection least faulty.
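The repeatability idea behind (22)/(23) — the same physical point, measured from several postures, should map to a single point in base coordinates — can be sketched as below. The exact formulas of (22)/(23) are not reproduced in this excerpt; the spread-around-centroid measure is our illustrative choice:

```python
import numpy as np

def backprojection_spread(poses, cam_points, R_x, t_x):
    """Map one physical point, measured in camera coordinates at n robot
    postures, into the base frame and return the mean distance of the
    projections to their centroid.  `poses` is a list of (R_i, t_i)
    flange poses; (R_x, t_x) is the hand-eye transform.  This mirrors
    the repeatability idea behind eqs. (22)/(23), not their exact form."""
    base_pts = []
    for (R_i, t_i), p in zip(poses, cam_points):
        # camera -> flange -> base
        base_pts.append(R_i @ (R_x @ np.asarray(p, float) + t_x) + t_i)
    base_pts = np.array(base_pts)
    centroid = base_pts.mean(axis=0)
    return float(np.mean(np.linalg.norm(base_pts - centroid, axis=1)))
```

With noise-free data and the true hand-eye transform the spread is zero; under noise, minimizing this quantity corresponds to the point-repeatability objective argued for in the text.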
The metrics (16)-(19) may not match the practical purposes, even if they have a mathematical meaning.
Figure 6 Optimization based on projective geometry
Figure 6 and Figure 7 show the results of the projection-based optimization. In Figure 4 and Figure 5, the method we propose yielded better results than the other literature methods based on the same objective function, although in some data sets other methods were observed to be the best. In Figure 7, the test results are evaluated with the noise distribution that is more likely in practice. The clear superiority of the proposed method over the others in these tests is obvious.
Figure 7 Comparison with projective metrics for practice-like synthetic data, σ_rob = 0°, σ_cam = 20 mm
In Figure 7 and Figure 8, performance is presented for different noise levels on the camera perception only. The point to note in these graphs is the high noise immunity of the proposed algorithm. The other algorithms in the literature are very sensitive to data sets in which outlier measurements may be numerous, whereas the results of the proposed method are stable and its outlier immunity is high. The results vary consistently with the standard deviation of the noise distribution.
Figure 8 Benchmarking with different noise levels
Figure 9 shows the comparison results for the same σ_rob = 0°, σ_cam = 10 mm. The graphs on the right side are obtained from the Bumblebee XB3 stereo camera, with a 24 cm baseline configuration, connected to the ABB IRB 1400 manipulator. This chart confirms the previous considerations: in practice, the noise from the camera is dominant over that from the manipulator, and the results are almost compatible with the synthetic data set. When this graph is evaluated, it is seen that equations (22) and (23) give approximately the same result; (22) is more useful because of its lower processing complexity.
Figure 9 Comparison of the algorithms with different metrics
5.
Conclusion
The equation P̃_3x1 = R_3x3 · P_3x1, or equivalently P̃ = Q * P * Q̄ in quaternion form, is a well-known expression giving the point P̃_3x1 when the point P_3x1 is rotated by the rotation matrix R_3x3 or the quaternion Q. For known P̃_3x1 and P_3x1, the rotation R_3x3 or Q can easily be calculated either from a linear equation system or from the geometric construction in (Horn, et al., 1988). At first sight R_A = R_X R_B R_X^T may not draw attention, but when equation (15) is rearranged as Q_A = Q_X * Q_B * Q̄_X it is obvious that the right-hand side can be evaluated as the quaternion rotation in equation (10). From this point of view, it has been shown in this article that hand-eye calibration can be treated as an absolute orientation problem. The outputs of this study were compared with other methods by processing synthetic data sets and actual measurement results. It is recommended to calculate all candidate solutions from triples of postures, and then to choose the final solution by selecting the most appropriate one from the resulting set. With this approach, the disturbing domination of outliers over the solution is reduced, and reliability and noise immunity are increased. Although evaluating the rotation and translation errors of the homogeneous transformation matrix is a mathematically valid approach, it may not serve the purpose of hand-eye calibration. From the results of hand-eye calibration, it is expected that there should be no differences between the overlapping parts of two different recordings when measurements taken at different stops are expressed in the reference coordinates. The objective function that achieves this demand is the reduction of the distance between the projections of the same measurement points. When assessed with this criterion, the graphs show that the results of the method proposed in this article are clearly superior to those of the compared methods.
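The reduction described above — reading Q_A = Q_X * Q_B * Q̄_X as an absolute orientation problem, since the rotation axes then satisfy a_i = R_X b_i — can be illustrated with Horn's closed-form quaternion solution. This is a generic sketch of (Horn, 1987)'s method over corresponding direction vectors, not the paper's Algorithm 1:

```python
import numpy as np

def abs_rot(src, dst):
    """Horn's closed-form absolute orientation (rotation only): return the
    unit quaternion q = (w, x, y, z) with dst_i ~ q * src_i * q_conj.
    src/dst are (n, 3) arrays of corresponding direction vectors, e.g. the
    rotation axes of the B and A motions in the hand-eye reduction."""
    S = np.zeros((3, 3))
    for l, r in zip(np.asarray(src, float), np.asarray(dst, float)):
        S += np.outer(l, r)                # accumulate correlation matrix
    Sxx, Sxy, Sxz = S[0]
    Syx, Syy, Syz = S[1]
    Szx, Szy, Szz = S[2]
    # Horn's symmetric 4x4 matrix; its top eigenvector is the rotation.
    N = np.array([
        [Sxx + Syy + Szz, Syz - Szy,        Szx - Sxz,        Sxy - Syx],
        [Syz - Szy,       Sxx - Syy - Szz,  Sxy + Syx,        Szx + Sxz],
        [Szx - Sxz,       Sxy + Syx,       -Sxx + Syy - Szz,  Syz + Szy],
        [Sxy - Syx,       Szx + Sxz,        Syz + Szy,       -Sxx - Syy + Szz]])
    w, v = np.linalg.eigh(N)               # symmetric: eigh is stable
    q = v[:, np.argmax(w)]                 # eigenvector of largest eigenvalue
    return q if q[0] >= 0 else -q          # fix the quaternion sign
```

For instance, feeding in the basis vectors and their images under a 90° rotation about z recovers the quaternion (cos 45°, 0, 0, sin 45°).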
References
Robert Bogue (2011), "Imaging technology opens up new robotic applications", Industrial Robot: An International Journal, Vol. 38, Iss. 4, pp. 343-348.
Jack C. K. Chou, M. Kamel (1991), "Finding the Position and Orientation of a Sensor on a Robot Manipulator Using Quaternions", The International Journal of Robotics Research, Vol. 10, No. 3, pp. 240-254.
D. Condurache, A. Burlacu (2016), "Orthogonal dual tensor method for solving the AX=XB sensor calibration problem", Mechanism and Machine Theory, Vol. 104, pp. 382-404.
Christine Connolly (2008), "Artificial intelligence and robotic hand-eye coordination", Industrial Robot: An International Journal, Vol. 35, Iss. 6, pp. 496-503.
Konstantinos Daniilidis (1999), "Hand-Eye Calibration Using Dual Quaternions", The International Journal of Robotics Research, Vol. 18, No. 3, pp. 286-298.
Radu Horaud, Fadi Dornaika (1995), "Hand-Eye Calibration", The International Journal of Robotics Research, Vol. 14, No. 3, pp. 195-210.
Marek Franaszek (2013), "Registration of Six Degrees of Freedom Data with Proper Handling of Positional and Rotational Noise", Journal of Research of the National Institute of Standards and Technology, Vol. 118, pp. 280-291.
Berthold K. P. Horn (1987), "Closed-form solution of absolute orientation using unit quaternions", Journal of the Optical Society of America A, Vol. 4, No. 4, pp. 629-642.
Berthold K. P. Horn, Hugh M. Hilden, Shahriar Negahdaripour (1988), "Closed-form solution of absolute orientation using orthonormal matrices", Journal of the Optical Society of America A, Vol. 5, pp. 1127-1135.
Jwu-Sheng Hu, Yung-Jung Chang (2013), "Automatic Calibration of Hand-Eye-Workspace and Camera Using Hand-Mounted Line Laser", IEEE/ASME Transactions on Mechatronics, Vol. 18, No. 6, pp. 1778-1786.
Jwu-Sheng Hu, Yung-Jung Chang (2012), "Eye-hand-workspace calibration using laser pointer projection on plane surface", Industrial Robot: An International Journal, Vol. 39, Iss. 2, pp. 197-207.
Du Q. Huynh (2009), "Metrics for 3D Rotations: Comparison and Analysis", Journal of Mathematical Imaging and Vision, Vol. 35, No. 2, pp. 155-164.
Umer Khan, Ibrar Jan, Naeem Iqbal, Jian Dai (2011), "Uncalibrated eye-in-hand visual servoing: an LMI approach", Industrial Robot: An International Journal, Vol. 38, Iss. 2, pp. 130-138.
Rong-hua Liang, Jian-fei Mao (2008), "Hand-Eye Calibration with a New Linear Decomposition Algorithm", Journal of Zhejiang University Science A, Vol. 9, No. 10, pp. 1363-1368.
A. Lorusso, D. W. Eggert, R. B. Fisher (1995), "A Comparison of Four Algorithms for Estimating 3-D Rigid Transformations", Proceedings of the 1995 British Machine Vision Conference (BMVC '95), Vol. 1, pp. 237-246 (doi:10.5244/C.9.24).
Ting-Cherng Lu, Jack C. K. Chou (1995), "Eight-space Quaternion Approach for Robotic Hand-Eye Calibration", IEEE International Conference on Systems, Man and Cybernetics.
Abed Malti, Joao Pedro Barreto (2013), "Hand-eye and radial distortion calibration for rigid endoscopes", The International Journal of Medical Robotics and Computer Assisted Surgery, Vol. 9, pp. 441-454.
Yuichi Motai, Akio Kosaka (2008), "Hand-Eye Calibration Applied to Viewpoint Selection for Robotic Vision", IEEE Transactions on Industrial Electronics, Vol. 55, No. 10, pp. 3731-3741.
Hui Pan, Lina Wang, Shiyin Qin (2014), "A closed-form solution to eye-to-hand calibration towards visual grasping", Industrial Robot: An International Journal, Vol. 41, Iss. 6, pp. 567-574.
Frank C. Park, Bryan J. Martin (1994), "Robot Sensor Calibration: Solving AX=XB on the Euclidean Group", IEEE Transactions on Robotics and Automation, Vol. 10, No. 5, pp. 717-721.
Mili Shah, Roger D. Eastman, Tsai Hong (2012), "An Overview of Robot-Sensor Calibration Methods for Evaluation of Perception Systems", 2012 Performance Metrics for Intelligent Systems Workshop, College Park, MD, March 20-22.
Yiu Cheng Shiu, Shaheen Ahmad (1989), "Calibration of Wrist-Mounted Robotic Sensors by Solving Homogeneous Transform Equations of the Form AX=XB", IEEE Transactions on Robotics and Automation, Vol. 5, No. 1, pp. 16-29.
Jochen Schmidt, Heinrich Niemann (2008), "Data Selection for Hand-Eye Calibration: A Vector Quantization Approach", The International Journal of Robotics Research, Vol. 27, No. 9, pp. 1027-1053.
Roger Y. Tsai, Reimar K. Lenz (1989), "A New Technique for Fully Autonomous and Efficient 3D Robotics Hand/Eye Calibration", IEEE Transactions on Robotics and Automation, Vol. 5, No. 3, pp. 345-358.
Haixia Wang, Xiao Lu, Zhanyi Hu, Yuxia Li (2015), "A vision-based fully-automatic calibration method for hand-eye serial robot", Industrial Robot: An International Journal, Vol. 42, Iss. 1, pp. 64-73.