2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2016), Kongresshaus Baden-Baden, Germany, Sep. 19-21, 2016

Towards force sensing based on instrument-tissue interaction

Christoph Otte, Jens Beringhoff, Sarah Latus, Sven-Thomas Antoni, Omer Rajput and Alexander Schlaefer

Abstract— The missing haptic feedback in minimally invasive and robotic surgery has prompted the development of a number of approaches to estimate the force acting on the instruments. Modifications of the instrument can be costly, fragile, and harder to sterilize. We propose a method to estimate the forces from the tissue deformation, hence working with multiple instruments and avoiding any modification to their design. Using optical coherence tomography to obtain precise deformation estimates, we have studied the deformations for different instrument trajectories and mechanical tissue properties. Surface deformation profiles for three different soft tissue phantoms and the resulting forces were monitored. Our results show a systematic and consistent relationship between deformation and interaction force. Different tissue elasticities result in different but consistent deformation-force mappings. For a series of independent measurements the root-mean-square error between estimated and measured force was below 3 mN. The results indicate that it is possible to estimate the force acting between tissue and instrument based on the deformation caused by the instrument. Given that in robotic surgery the pose of the instrument head is known and hence the respective tissue deformation caused by the instrument can be measured in a well-defined relative position, the method allows for force estimation without any changes to the instruments.
I. INTRODUCTION

Minimally invasive and robot assisted surgery promise improvements with respect to the treatment outcome and efficiency of surgery while reducing side effects [1]. One current limitation is the missing haptic feedback to the surgeon [2]. Typical telerobotic systems provide insufficient support for the integration of force measuring instruments [3]. One approach is to measure at the instrument shaft, which is more easily implemented and does not require special instruments. Yet, estimating the actual force at the tip is rather difficult; e.g., despite a mechanically complex design an accuracy of 0.5 N has been reported [4], [5]. In contrast, measuring forces directly at the instrument head is still difficult to realize, especially for microsurgical instruments and considering the need for sterilization. As an alternative, we propose to estimate the force from the tissue deformation caused by the instrument. This could be realized without actual contact between sensor and tissue and thus without modifying the instrument itself.

Surface reconstruction

Online surface tracking is an ongoing challenge in minimally invasive surgery and has previously been used to navigate the instrument with respect to anatomical and functional data obtained in preoperative CT or MRI scans [15], [16], [17]. Common approaches based on vision-based 3D reconstruction algorithms require artificial markers, prominent anatomical features or structured light patterns, at the expense of considerable computational cost [18], [19]. Time-of-flight (TOF) cameras provide depth information directly, but their spatial resolution is limited to a few millimeters. We propose optical coherence tomography (OCT) as a sensor to measure the tissue surface [20]. OCT is a near-infrared, A-scan based imaging modality that measures the time of flight of light by interferometry. In contrast to commonly used surface tracking techniques, OCT can measure the deformation of tissue in the range of several micrometers, not only on but also directly below the tissue surface.

Force estimation from surface deformation

Generally, moving the instrument to displace or deform tissue results in some force acting on both the instrument itself and the tissue [6]. This force in turn causes deformation of tissue and instrument. Although the deformation of the instrument is typically small, it is the underlying principle of measurement for common force sensors, e.g., based on strain gauges or fiber Bragg gratings [7], [8], [9]. These sensors relate a signal vector ~z corresponding to a small deformation of a known material to a load vector ~m, where the respective mapping C is typically obtained by careful calibration of each individual sensor [10]. In contrast, the principle underlying the proposed method is a systematic and stable relationship between the 3D deformation Ψ : ℜ3 → ℜ3 in the local neighborhood of a surgical instrument and the resulting force ~F, where Λ provides the respective mapping. The tissue surface ~S and therefore Ψ can be measured during surgery. Fig. 1 illustrates this approach, denoting a surface area deformed by the motion of the surgical instrument. While a dependency between force and surface deformation is rather obvious, obtaining the actual function is still difficult. Tool-tissue interaction models utilizing finite element analysis and heuristic approaches have previously been proposed to obtain a mapping Λ implicitly and explicitly [11], [12], [13]. While FEM analysis is computationally expensive, heuristic approaches are often very specific in terms of the underlying models and parameters and are thus limited due to several simplifications and assumptions [14]. Clearly, the proposed method relies on a highly accurate tracking of the surface deformation in the proximity of the instrument.

All authors are with the Institute of Medical Technology, Hamburg University of Technology, 21073 Hamburg, Germany. [email protected]

Fig. 1. An illustration of an instrument (solid line) and an OCT scan monitoring the tissue deformation at the instrument head. If the pose of the instrument is known, e.g., for robotic surgery, the scan area can be placed in the proximity of the instrument head such that the deformation caused by the instrument can be measured. The left figure shows an initial shape S of the surface, while the center and right figures illustrate the deformed tissue surfaces S' and S'' and the reaction force F (blue).

This motivates the question whether OCT can be utilized to measure the tissue deformation due to the instrument and whether the complex relationship between tissue deformation and interaction forces can be efficiently learned by means of a regression. To illustrate the proposed approach we investigate whether neural network regression can be used to predict the interaction force between tissue and instrument for different tissue mechanical properties and trajectories. In particular, we want to address the following questions:
1) How reproducible is the relationship between surface deformation and interaction force?
2) Is it possible to describe this relationship using machine learning?
3) Which features are particularly suitable to describe the relationship?
4) To what extent does the approach depend on tissue mechanical properties?

II. METHODS

To reconstruct the interaction force we need to obtain the function Λ which maps the surface deformation Ψ to the force vector ~F. The measured tissue surface can be described by the mapping ~S(x, y) = (x, y, f(x, y)) : ℜ2 → ℜ3, where x and y are the lateral coordinates in the OCT scan field and f(x, y) is the respective depth location of the measured tissue surface. The scan field is discretized into a finite number of elements, where the physical size of each element is determined by the OCT scan pattern. We assume that the surface deformation changes both the surface height and the respective surface normal vector ~n. Given that ~S0 and ~Si describe the tissue surface before and after deformation Ψi, we calculate the differential surface as ∆~Si = ~Si − ~S0 and the differential normal vector ∆~ni = ~ni − ~n0. In the following we use ∆~Si and ∆~ni as surrogates for the deformation. Assuming that a point on the surface moves mainly in z-direction, we write the differential surface as a scalar field ∆S(x, y).

We utilize a generalized regression neural network (GRNN) to obtain the mapping Λ. GRNN are radial basis function networks that consist of a single pattern and summation layer [21], [22]. In contrast to other neural network implementations they work with a small number of training samples, while always approaching the global minimum of the error function. However, GRNN suffer from irrelevant input data and thus require careful preprocessing and feature selection [21].
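As an illustration of this regression step, the following minimal Python sketch implements a GRNN in its standard Nadaraya-Watson form, i.e., a pattern layer that stores the training samples and a summation layer that normalizes the kernel-weighted targets. This is not the implementation used in the experiments; the array shapes, the placeholder data and the chosen smoothing parameter (spread) are assumptions for illustration only.

import numpy as np

class GRNN:
    """Generalized regression neural network with a Gaussian pattern layer."""
    def __init__(self, spread=1.0):
        self.spread = spread  # single smoothing parameter of the pattern layer

    def fit(self, X, y):
        # The pattern layer simply stores the training samples; no iterative training.
        self.X = np.asarray(X, dtype=float)
        self.y = np.asarray(y, dtype=float)
        return self

    def predict(self, Xq):
        Xq = np.atleast_2d(np.asarray(Xq, dtype=float))
        # Squared Euclidean distances between queries and all stored patterns.
        d2 = ((Xq[:, None, :] - self.X[None, :, :]) ** 2).sum(axis=-1)
        w = np.exp(-d2 / (2.0 * self.spread ** 2))                   # pattern layer activations
        return (w @ self.y) / np.clip(w.sum(axis=1), 1e-12, None)    # summation layer

# Placeholder usage: map deformation feature vectors to a scalar force value.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 16))   # assumed feature vectors
forces = rng.normal(size=100)           # assumed force magnitudes in mN
model = GRNN(spread=1.0).fit(features[:75], forces[:75])
estimate = model.predict(features[75:])

Because the pattern layer only memorizes the training samples, the spread is the single quantity that controls how strongly the prediction is smoothed across neighboring samples, which is why the parameter search described in Sec. II-C focuses on it.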
A. Data Acquisition

Fig. 2 depicts the experimental setup. A hexapod system (H-820, PI) moves the 3D-printed instrument with respect to the pivot point to deform the surface of the phantom. The forces are measured with a force sensor (Nano43, ATI) below the gelatin phantom. The OCT scanner (Telesto, Thorlabs) measures the deformation of the probe. The whole system is calibrated using the tracking camera.

Cylindrically shaped soft tissue phantoms made of gelatin were used to evaluate the proposed method. Three different elastic moduli were realized by varying the gelatin concentration of the samples and were measured independently. The optical scattering properties were enhanced by adding 1 g/L of TiO2 powder to the gelatin. The samples had a diameter of 5 cm and a height of 2 cm. The Young's moduli were 18.1 kPa, 38.0 kPa and 82.7 kPa, respectively. Two phantoms were made for each modulus and were frequently exchanged during the experiments to avoid temperature-dependent tissue changes.

The instrument tip was initially placed at a random position on the sample surface, denoted as pose P0. The hexapod was programmed to drive the instrument deeper into the surface by moving it to 10 subsequent poses Pi with respect to the initial pose. At each step the instrument was moved by 0.1 mm in robot coordinates, as shown in Fig. 3. After returning to the initial position the instrument was moved to the next random surface location. This procedure was repeated several times considering different trajectories as indicated in Fig. 3 and Table I. While downwards describes a movement along the negative z-axis, forward, left and right are not purely along the x and y directions, but tilted by an angle α about the x-axis and y-axis, respectively.

For each pose Pi an OCT volume scan was obtained and the force data was acquired. The dimension of the complete OCT scan was 128x128x512 voxels with a voxel size of 64 µm in the lateral and 5.19 µm in the axial direction.

Fig. 2. Experimental setup showing the hexapod (a), the force-torque sensor mounted between hexapod and tool (b), the OCT scanner (c), a trocar in a simulated abdominal wall (d, only in the photograph), the gelatin phantom on top of the second force-torque sensor (e) and the tracking camera (g, only in the schematic).

Fig. 3. Left: Instrument placed on the surface of the gelatin phantom. The vector denotes the movement direction with respect to the robot base frame. Right: Tissue surface before (b) and after indentation (c).

TABLE I
Each set consists of 10 individual poses. Experiments were repeated 3 times with varying elastic moduli of 18.1 kPa, 38 kPa and 82.7 kPa.

Trajectory                Number of sets
Forward (5°, 10°, 15°)    5
Left                      10
Right                     10
Downwards                 10

B. Data Processing

The data processing can be divided into three steps: instrument segmentation, surface segmentation, and feature extraction. For each sequence the instrument was automatically segmented in the reference pose P0 without deforming the tissue and then localized in the subsequent poses using a normalized cross-correlation approach. For the automatic segmentation we first calculated a maximum intensity projection in z-direction and applied a Gaussian filter kernel (size 5 px, σ = 0.5). We used a contour based segmentation algorithm to segment the instrument tip and conducted a PCA to find the orientation of the instrument tip in the x-y plane. A region of interest (ROI) with a size of 60x60 px was extracted in front of the instrument as indicated in Fig. 3. Within the ROI, we applied a histogram based surface segmentation. The segmentation was smoothed using a 2D Gaussian filter (size 11 px, σ = 1).

C. Experimental Validation

For the experimental validation the data sets are split into training and test data. We consider two different stratification techniques. Firstly, we use repeated random sub-sampling cross validation (RRSV) to study the reproducibility and stability of the proposed method. Secondly, we use leave-one-out cross validation (LOOCV) to study the generalization capabilities of the neural network regression.

GRNN training is mainly influenced by a single parameter, namely the spread. To find the optimal spread for the given input data, a global parameter estimation considering all input features and data sets was performed. The data was subdivided into training and test sets with a ratio of 3:1 using RRSV. We varied the spread from 0.2 to 5 in steps of 0.2 and repeated the experiment 1000 times. For the later experiments we chose the best spread over all repetitions.

To investigate to what extent the proposed input features contribute to the force estimation, we used three different feature vector configurations including only the absolute indentation (A), only the differential surface normals (B), and both together (C). Again, training and testing were performed using RRSV. The generalization capabilities were evaluated with respect to unknown trajectories as well as unknown mechanical properties based on LOOCV.
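As a rough sketch of the deformation features used in configurations A-C, the following Python code computes the absolute indentation ∆S and the differential surface normals ∆n from two segmented height maps of the ROI. It is not the authors' code; the normal computation via finite differences, the grid spacing (set to the 64 µm lateral element size), and the synthetic indentation used in the example are assumptions for illustration.

import numpy as np

def surface_normals(f, dx=64e-6, dy=64e-6):
    """Unit surface normals of S(x, y) = (x, y, f(x, y)) on a regular grid."""
    dfdy, dfdx = np.gradient(f, dy, dx)  # partial derivatives of the height map
    n = np.stack((-dfdx, -dfdy, np.ones_like(f)), axis=-1)
    return n / np.linalg.norm(n, axis=-1, keepdims=True)

def deformation_features(f0, fi, dx=64e-6, dy=64e-6):
    """Absolute indentation and differential normals, flattened into one feature vector."""
    delta_s = fi - f0                                               # height change per element
    delta_n = surface_normals(fi, dx, dy) - surface_normals(f0, dx, dy)
    return np.concatenate((delta_s.ravel(), delta_n.ravel()))

# Example with synthetic height maps of a 60x60 region of interest:
# an undeformed reference surface and a Gaussian-shaped indentation of about 0.2 mm.
xx, yy = np.meshgrid(np.arange(60) - 30, np.arange(60) - 30)
f0 = np.zeros((60, 60))
fi = f0 - 2e-4 * np.exp(-(xx ** 2 + yy ** 2) / 50.0)
feature_vector = deformation_features(f0, fi)

In this form the feature vector can be fed directly to the GRNN sketched in Sec. II, using either the indentation part, the normal part, or both, mirroring configurations A, B and C.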
III. RESULTS

Fig. 4 shows the measured reference force vector for three different trajectories. Note that the force is given with respect to the force sensor base frame, not with respect to the instrument movement direction. The xy-projections of the corresponding differential surface normals for left and right indentation are also shown in Fig. 4. Clearly, the depicted patterns appear diametrical for the two directions.

Fig. 4. The images show the xy-projection of the differential normal vectors for an indentation direction to the left (top) and to the right (bottom).

The results of the parameter estimation using RRSV for all data sets are shown in Fig. 6. The black curve shows the median force error depending on the spread parameter, the blue shading indicates the 25% and 75% quantiles, and the maximum and minimum values are shown in green. The median value approaches its minimum at a spread of 1.6, which was used as the spread parameter for all subsequent experiments.

Force estimation results utilizing repeated random sub-sampling validation considering all data sets but different feature vector configurations are shown in Fig. 7. The median RMSE between estimated and reference force is 2.66 mN for the normal vectors, 5.34 mN for the indentation and 2.97 mN for both combined.

Fig. 5. Characteristic of the regression model. The green line shows the ideal characteristic, while the predicted force is indicated by blue circles. For high force values fewer measurements are available.

Fig. 9 shows the result of repeated random sub-sampling validation considering data sets with similar elastic properties. The error is smallest for the soft gelatin and increases with Young's modulus. The median RMSE is 1.92 mN for soft, 2.26 mN for medium, and 2.69 mN for hard gelatin.

The upper diagram in Fig. 8 shows the result of the leave-one-trajectory-out validation. The error is smallest if trajectories from the center are chosen as validation set, which corresponds to an interpolation problem. The error is highest if the deformation in the validation set is larger than the deformation in the training sets. The smallest error obtained here is 1.66 mN, while the largest error is 6.98 mN. The lower diagram in Fig. 8 shows the result of the leave-one-modulus-out validation. Similar to the upper diagram, the error is higher for extrapolation than for interpolation. The median RMSE is 6.54 mN for 18.1 kPa, 5.04 mN for 38.0 kPa and 6.77 mN for 82.7 kPa.

IV. DISCUSSION

Primarily, the results indicate a systematic and stable relationship between the tissue deformation and the corresponding interaction force. The pattern of the differential normal vectors in Fig. 4 supports the assumption of a unique surface deformation depending on the direction of motion. Clearly, the surface normal vectors contribute more to the force estimation than the absolute surface indentation, as shown in Fig. 7. Using both features does not further improve the results. One explanation could be a redundancy of information in the two features; in general, GRNN suffer from inputs that are irrelevant [21]. Indeed, soft tissue phantoms with well defined mechanical properties cannot be assumed in a clinical scenario, but the results indicate that the proposed relationship could be learned for a wide range of different mechanical properties. This corresponds to the findings of other groups relating robot manipulation data to tissue mechanical properties [12]. However, these results should be interpreted with care. Since overfitting the model is a
realistic problem in regression, considerably more data sets are necessary to further investigate the generalization capabilities of the proposed approach.

Given that in robotic surgery the pose of the instrument head is known and hence the respective tissue deformation caused by the instrument can be measured in a well-defined relative position, the proposed method allows for force estimation without any changes to the instruments themselves. Further steps involve the investigation of multiple instrument geometries and a larger number of different movement trajectories. Also, scenarios with heterogeneous tissue elastic properties will be addressed in future experiments.

Fig. 6. Global parameter estimation: the black line shows the median RMSE with respect to the spread parameter of the GRNN. The region between the 25% and 75% quantiles and the extreme values are highlighted in blue and green, respectively.

Fig. 7. Force estimation results utilizing repeated random sub-sampling validation considering all data sets but different feature vector configurations (median RMSE: normals 2.66 mN, indentation 5.34 mN, both 2.97 mN).

Fig. 8. Leave-one-out cross validation considering unknown trajectories (top) and unknown tissue mechanical properties (bottom).

Fig. 9. Result of repeated random sub-sampling validation considering data sets with similar elastic properties only (median RMSE: soft 18.1 kPa 1.92 mN, medium 38 kPa 2.26 mN, hard 82.7 kPa 2.68 mN).

REFERENCES

[1] M. Diana and J. Marescaux, “Robotic surgery,” Br J Surg, vol. 102, no. 2, pp. e15–e28, Jan 2015.
[2] C. R. Wagner, N. Stylopoulos, P. G. Jackson, and R. D. Howe, “The benefit of force feedback in surgery: Examination of blunt dissection,” Presence-Teleop Virt, vol. 16, no. 3, pp. 252–262, 2007.
[3] A. M. Okamura, “Haptic feedback in robot-assisted minimally invasive surgery,” Curr Opin Urol, vol. 19, no. 1, pp. 102–107, Jan 2009.
[4] J. J. v. d. Dobbelsteen, R. A. Lee, M. v. Noorden, and J. Dankelman, “Indirect measurement of pinch and pull forces at the shaft of laparoscopic graspers,” Med Biol Eng Comput, vol. 50, no. 3, pp. 215–221, Mar 2012.
[5] S. Shimachi, S. Hirunyanitiwatna, Y. Fujiwara, A. Hashimoto, and Y. Hakozaki, “Adapter for contact force sensing of the da Vinci robot,” Int J Med Robot, vol. 4, no. 2, pp. 121–130, Jun 2008.
[6] S. Giannarou, M. Ye, G. Gras, K. Leibrandt, H. J. Marcus, and G.-Z. Yang, “Vision-based deformation recovery for intraoperative force estimation of tool-tissue interaction for neurosurgery,” Int J Comput Assist Radiol Surg, Mar 2016.
[7] X. He, M. Balicki, P. Gehlbach, J. Handa, R. Taylor, and I. Iordachita, “A novel dual force sensing instrument with cooperative robotic assistant for vitreoretinal surgery,” in IEEE Int Conf Robot Autom, Dec 2013, pp. 213–218.
[8] R. Haslinger, P. Leyendecker, and U. Seibold, “A fiberoptic force-torque-sensor for minimally invasive robotic surgery,” in IEEE Int Conf Robot Autom, 2013, pp. 4390–4395.
[9] K. M. Kennedy, S. Es’haghian, L. Chin, R. A. McLaughlin, D. D. Sampson, and B. F. Kennedy, “Optical palpation: optical coherence tomography-based tactile imaging using a compliant sensor,” Opt Lett, vol. 39, no. 10, pp. 3014–3017, May 2014.
[10] D. Braun and H. Wörn, “Techniques for robotic force sensor calibration,” in Proc. of the 13th Int. Workshop on Computer Science and Information Technologies (CSIT’2011), 2011, pp. 218–223.
[11] Y.-J. Lim, D. Deo, T. P. Singh, D. B. Jones, and S. De, “In situ measurement and modeling of biomechanical response of human cadaveric soft tissues for physics-based surgical simulation,” Surg Endosc, vol. 23, no. 6, pp. 1298–1307, Jun 2009.
[12] P. Boonvisut, R. Jackson, and M. C. Çavuşoğlu, “Estimation of soft tissue mechanical parameters from robotic manipulation data,” in IEEE Int Conf Robot Autom, Dec 2012, pp. 4667–4674.
[13] A. Takács, P. Galambos, I. J. Rudas, and T. Haidegger, “Nonlinear soft tissue models and force control for medical cyber-physical systems,” in IEEE Int Conf on Systems, Man, and Cybernetics (SMC 2015), Hong Kong, 2015.
[14] Á. Takács, I. J. Rudas, and T. Haidegger, “Surface deformation and reaction force estimation of liver tissue based on a novel nonlinear mass-spring-damper viscoelastic model,” Med Biol Eng Comput, Dec 2015.
[15] B. Lin, Y. Sun, X. Qian, D. Goldgof, R. Gitlin, and Y. You, “Video-based 3D reconstruction, laparoscope localization and deformation recovery for abdominal minimally invasive surgery: a survey,” Int J Med Robotics Comput Assist Surg, 2015.
[16] L. Maier-Hein, P. Mountney, A. Bartoli, H. Elhawary, D. Elson, A. Groch, A. Kolb, M. Rodrigues, J. Sorger, S. Speidel, and D. Stoyanov, “Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery,” Med Image Anal, vol. 17, no. 8, pp. 974–996, Dec 2013.
[17] A. Schoob, D. Kundrat, L. A. Kahrs, and T. Ortmaier, “Comparative study on surface reconstruction accuracy of stereo imaging devices for microsurgery,” Int J Comput Assist Radiol Surg, vol. 11, no. 1, pp. 145–156, 2016.
[18] E. Wild, D. Teber, D. Schmid, T. Simpfendörfer, M. Müller, A.-C. Baranski, H. Kenngott, K. Kopka, and L. Maier-Hein, “Robust augmented reality guidance with fluorescent markers in laparoscopic surgery,” Int J Comput Assist Radiol Surg, pp. 1–9, 2016.
[19] J. Lin, N. T. Clancy, and D. S. Elson, “An endoscopic structured light system using multispectral detection,” Int J Comput Assist Radiol Surg, vol. 10, no. 12, pp. 1941–1950, Dec 2015.
[20] W. Drexler and J. G. Fujimoto, Eds., Optical Coherence Tomography – Technology and Applications. Springer, 2015.
[21] S. Ren and L. Gao, “Resolve of multicomponent mixtures using voltammetry and a hybrid artificial neural network method,” in Second Int Conf on Artificial Intelligence and Computational Intelligence (AICI 2011), H. Deng et al., Eds., Taiyuan, China, Sep 2011.
[22] T. Anwar, Y. M. Aung, and A. A. Jumaily, “The estimation of knee joint angle based on generalized regression neural network (GRNN),” in 2015 IEEE Int Symp on Robotics and Intelligent Sensors (IRIS), Oct 2015, pp. 208–213.