Towards learning robots in surgery: First experience with a cognition-guided camera-robot in laparoscopy
Published: April 21, 2016
Background: Surgery currently suffers from limited personnel resources, and camera guidance in laparoscopic surgery is frequently performed by inexperienced members of the surgical staff. Both may negatively affect the quality of surgery and patient outcome. Research into camera guiding robots (CGR) therefore began over 15 years ago. Until recently, however, these CGRs were guided by speech, gestures or joysticks, which increases the surgeon’s mental workload, and they did not adapt flexibly to the surgeon’s needs. The aim of this study was to develop a learning robot that would enable CGRs to provide flexible, trained, human-like assistance.
Materials and methods: A cognitive software architecture was used for the learning robot. It perceived its environment (endoscopic video, positions of the surgical instruments), interpreted the camera guidance quality (CGQ) using a knowledge base containing experience from previous interventions, and moved the CGR accordingly. In addition, the surgeon could correct the camera guidance by repositioning the robot manually (“hands-on mode”) or remotely via a tablet computer or smartphone.
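The perceive–interpret–act cycle described above can be sketched as follows. This is an illustrative toy model only: the abstract does not specify the software’s API, so all names (`Perception`, `classify_cgq`, `control_step`) and the distance-based threshold rule are hypothetical stand-ins for the actual trained classifiers.

```python
from dataclasses import dataclass


@dataclass
class Perception:
    """Simplified scene state: distance of instrument tips from image center."""
    instrument_distance: float


def classify_cgq(p: Perception) -> str:
    """Hypothetical stand-in for the trained CGQ classifier (see below).

    Rates the current camera guidance quality as good, medium or poor.
    The thresholds are invented for illustration.
    """
    if p.instrument_distance < 0.2:
        return "good"
    if p.instrument_distance < 0.5:
        return "medium"
    return "poor"


def control_step(p: Perception) -> str:
    """One cycle of the loop: perceive, interpret CGQ, act on the robot."""
    cgq = classify_cgq(p)
    return "hold" if cgq == "good" else "recenter"
```

In the real system the classification step draws on the knowledge base of previous interventions, and the action step moves the physical camera robot rather than returning a string.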
For training the robot, n=20 surgeries (laparoscopic rectal resection) were performed in the OpenHELP phantom with human camera guidance. The endoscopic video as well as the optically tracked instrument positions were recorded, and the CGQ was annotated as good, medium or poor at each time point. Based on this experience, CGQ classifiers were trained with machine learning algorithms (random forests) and used to guide two different robots (Lightweight Robot IV, KUKA AG, Augsburg, Germany, and ViKY, TRUMPF Medizin Systeme GmbH + Co. KG, Saalfeld, Germany) for n=5 surgeries.
Again, CGQ classifiers were trained on the experience from the n=5 surgeries with the ViKY robot; based on these, camera guidance could be performed with the KUKA robot. Finally, the experience from the different trials with the KUKA robot (n=10) was combined and used for learning, and another n=1 surgery was performed with the KUKA robot (Figure 1 [Fig. 1]).
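The training step described above can be sketched with scikit-learn’s `RandomForestClassifier`, assuming feature vectors derived from tracked instrument positions and the annotated good/medium/poor labels. The abstract does not name the authors’ actual toolchain or feature design, so the synthetic data and the six-dimensional feature layout here are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical feature matrix: one row per annotated video time point,
# e.g. 3D positions of two instrument tips relative to the endoscope.
X = rng.normal(size=(300, 6))
# Annotated camera guidance quality (CGQ) at each time point.
y = rng.choice(["good", "medium", "poor"], size=300)

# Train the random-forest CGQ classifier on the recorded experience.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# At runtime the robot queries the classifier for the current scene
# and repositions the camera when the predicted CGQ is not "good".
current_scene = rng.normal(size=(1, 6))
prediction = clf.predict(current_scene)[0]
```

Transferring the trained classifiers between the ViKY and KUKA platforms, as reported above, only requires that both robots supply the same feature representation to the classifier.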
Results: The robot moved the camera to the optimal position in real time based on the CGQ classifiers. Surgeries could be performed on two robotic platforms. With additional learning, the CGQ increased from 27.04%/66.99%/5.96% (percentage of good/medium/poor) during manual camera guidance to 43.18%/49.88%/6.94% in the first five robotic experiments and to 56.19%/40.99%/2.82% in the last experiment. After training, the robot also learned new movements such as zooming into the small pelvis during mobilization of the rectum.
Conclusion: We developed the first learning camera guiding robot in laparoscopic surgery. The robot could be trained on procedures performed by humans and could learn from procedures it performed on its own. Additionally, we showed that the experience obtained with one robot could also be used with a different robot, so in the future the robot could learn from different surgeons in different hospitals. Based on this, we plan to teach the robot other types of surgical interventions.
Acknowledgements: This work was carried out with the support of the German Research Foundation (DFG) as part of projects I05 and A01 in the SFB/TRR 125 Cognition-Guided Surgery and with the support of the Medical School of Heidelberg University through a Physician-Scientist Fellowship for Martin Wagner.