
Project CISOBOT Collaborative

Autonomous robot for Minimally Invasive Surgery


    Funding Organization: Junta de Andalucía. Proyectos de Excelencia

    Reference: P07-TEP-2897

    Participants: Universidad de Málaga

    Period: From 01/01/2008 to 31/12/2012

    Principal Investigator: Víctor Fernando Muñoz Martínez

 

 

System Abilities

 

  • Configurability / Mechatronic Configuration (Level 1): Start-up configuration. The configuration files or the mechatronic configuration can be altered by the user prior to each task in order to customise the robot system in advance of each cycle of operation.
  • Interaction / Human-Robot (Level 5): Task sequence control. The system is able to execute sub-tasks autonomously. On completion of a sub-task, user interaction is required to select the next sub-task, resulting in a sequence of actions that make up a completed task.
  • Interaction / Human-Robot Feedback (Level 2): Vision data feedback. The system feeds back visual information about the state of the operating environment around the robot, based on data captured locally at the robot. The user must interpret this imagery to assess the state of the robot or its environment.
  • Interaction / Human-Robot Safety (Level 1): Basic safety. The robot operates with a basic level of safety appropriate to the task. Maintaining safe operation may depend on the operator being able to stop operation or continuously enable the operating cycle. The maintenance of this level of safety does not depend on software.
  • Dependability / Dependability (Level 2): Fails safe. The robot design includes fail-safe mechanisms that halt the operation of the robot and place it into a safe mode when failures are detected, including any failures caused by in-field updates. Dependability is reduced to the ability to fail safely in a proportion of failure modes; fail-safe dependability relies on being able to detect failure.
  • Motion / Unconstrained (Level 4): Position-constrained path motion. The robot carries out predefined moves in sequence, where each motion is controlled to ensure position and/or speed goals are satisfied within some error bound.
  • Manipulation / Grasping (Level 1): Simple pick and place. The robot is able to grasp an object at a known, pre-defined location using a single predefined grasp action, then move or orient the object and finally un-grasp it. The robot may also use its motion ability to move the object in a particular pattern or to a particular location. Grasping uses open-loop control.
  • Manipulation / Holding (Level 1): Simple holding of a known object. The robot retains the object as long as no external perturbation of the object occurs.
  • Manipulation / Handling (Level 1): Simple release. The robot is able to release an object at a known, pre-defined location, but the resulting orientation of the object is unknown. The object should not be released prematurely.
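For reference, the ability profile above can be captured as plain data, for example to document or machine-check capability levels against a scenario's requirements. This is only an illustrative sketch; the dictionary layout and the meets() helper are assumptions, not part of the project:

    # Sketch: the ability profile above captured as plain data, e.g. for
    # documenting or machine-checking the system's capability levels.
    ABILITY_PROFILE = {
        ("Configurability", "Mechatronic Configuration"): 1,
        ("Interaction", "Human-Robot"): 5,
        ("Interaction", "Human-Robot Feedback"): 2,
        ("Interaction", "Human-Robot Safety"): 1,
        ("Dependability", "Dependability"): 2,
        ("Motion", "Unconstrained"): 4,
        ("Manipulation", "Grasping"): 1,
        ("Manipulation", "Holding"): 1,
        ("Manipulation", "Handling"): 1,
    }

    def meets(requirements: dict) -> bool:
        """True if the profile reaches every required level."""
        return all(ABILITY_PROFILE.get(k, 0) >= v
                   for k, v in requirements.items())

    # Example: does the system reach level 2 feedback and level 4 motion?
    print(meets({("Interaction", "Human-Robot Feedback"): 2,
                 ("Motion", "Unconstrained"): 4}))   # True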

 

 

Abstract

 

This project continues the team's most recent line of research in solo-surgery within the framework of minimally invasive surgery. This line is devoted to the complete substitution of the human assistant by a robot during the intervention, so that the robot helps with the surgical procedure in direct interaction with the surgeon. Such a system must be versatile both in how the surgeon communicates with the machine and in the kinds of movements it performs, so that the surgeon feels as comfortable as with a human assistant. The robot must therefore decide autonomously when and how to assist with a specific surgical manoeuvre so that it is performed effectively. The point is not that the surgeon continuously issues orders, but that the robotic system is autonomously capable of interpreting the surgeon and acting accordingly.

Achieving this requires an interface that covers all the interaction needed between the robot and the human, based on data collected from different information sources. This interface will therefore be multimodal, and it will use cognitive techniques for autonomous decision-making about the robot's next action. These decisions will be carried out by a motion planner that, guided by the information supplied by the interface, produces the required displacements of the surgical tools handled by the robot. Finally, a robotic assistant demonstrator will be built, integrating the developments carried out throughout the project and demonstrating their operating capacity through in-vitro experiments.
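As a rough illustration of the intended architecture, the following sketch shows how a multimodal decision loop of this kind could be organised. All names and rules here (Observation, decide_action, the gesture labels) are illustrative assumptions, not the project's actual software:

    # Illustrative sketch (not the project's actual code): a multimodal
    # interface fuses several input sources and lets the robot decide its
    # next assistance action autonomously.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Observation:
        voice_command: Optional[str]    # e.g. "hold", "cut"
        gesture: Optional[str]          # recognised surgical gesture
        tool_tip_xyz: Optional[tuple]   # 3D tool-tip estimate from the image

    def decide_action(obs: Observation) -> Optional[str]:
        """Hypothetical decision rule: an explicit order wins; otherwise
        the recognised manoeuvre triggers the matching assistance action."""
        if obs.voice_command:
            return obs.voice_command
        if obs.gesture == "needle_pass":        # assumed gesture label
            return "present_suture_thread"
        if obs.gesture == "knot_tying":
            return "tension_thread"
        return None                             # no autonomous move

    obs = Observation(voice_command=None, gesture="needle_pass",
                      tool_tip_xyz=(0.1, 0.0, 0.2))
    print(decide_action(obs))                   # -> "present_suture_thread"

The motion planner would then translate the chosen action into the corresponding displacements of the surgical tools.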

 

 

Proposed goals and achievements

 

1. Design and construction of a complete kinematic and motorized structure of a robotic system for solo-surgery

The aim is for a single surgeon to perform laparoscopic interventions without an assistant. The structure must provide the kinematic functionality appropriate for carrying out the surgical manoeuvres of the tools. Likewise, the kinematic design must be adapted to the spatial requirements of the surgical area and must not obstruct the surgeon's movements.

Achievements:

  • Second version of the CISOBOT prototype, based on a study of configurations that considers ergonomics in the operating room.
  • Kinematic calibration of the robot arms, together with an estimation of their dynamic behaviour (a generic sketch of the underlying kinematic model follows this list).
  • Dynamic stability system for the structure that prevents tipping caused by movement of the centre of gravity.
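For context, kinematic calibration refines the parameters of a kinematic model until it matches the real arm. The sketch below shows generic Denavit-Hartenberg forward kinematics, the kind of model such a calibration adjusts; the parameter values are placeholders, not CISOBOT's geometry:

    # Generic Denavit-Hartenberg forward kinematics, the kind of model a
    # kinematic calibration refines.  Parameter values are placeholders.
    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Homogeneous transform for one joint from its DH parameters."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    def forward_kinematics(joint_angles, dh_params):
        """Chain the per-joint transforms; returns the tool-flange pose."""
        T = np.eye(4)
        for theta, (d, a, alpha) in zip(joint_angles, dh_params):
            T = T @ dh_transform(theta, d, a, alpha)
        return T

    # Example: a 3-joint placeholder arm (d, a, alpha per link).
    dh = [(0.30, 0.0, np.pi / 2), (0.0, 0.25, 0.0), (0.0, 0.20, 0.0)]
    pose = forward_kinematics([0.1, -0.4, 0.7], dh)
    print(pose[:3, 3])   # end-effector position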

2. Design of a multimodal interface based on voice commands, surgical gestures recognition and the image supplied by the laparoscopic camera

Given the complexity of robotic assistant systems in surgery, different input sources for the surgeon's orders are required in order to increase the efficacy of the human-machine interaction. In this case, the robot is expected to use specific data inputs autonomously in order to make certain motion decisions.

Achievements:

  • Recognition system for surgical manoeuvres based on neural networks, using the combined information of two 3D motion-tracking sensors. In this way the system tolerates occlusions that occur when an object, or the surgeon, comes between one sensor and the surgical tools (a sketch of this sensor-fusion idea follows this list).
  • Recognition of the surgical tools within the laparoscopic image and estimation of the three-dimensional position of the tool tip.
  • Recovery from logic faults, such as the accidental occlusion of a tool, which would otherwise affect the recognition of surgical manoeuvres.
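The sensor-fusion idea behind the first achievement can be sketched as follows: features from the two trackers are concatenated so that one sensor can compensate when the other's line of sight is blocked, and a neural-network classifier is trained on the fused features. The sketch uses scikit-learn and random stand-in data; the feature sizes and class count are assumptions, not the project's actual setup:

    # Sketch of the manoeuvre recogniser idea: feature-level fusion of two
    # 3D motion-tracking sensors, then a neural-network classifier.
    # Random stand-in data; real features and labels are not shown here.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_samples, n_feat = 200, 12               # assumed per-sensor feature size
    sensor_a = rng.normal(size=(n_samples, n_feat))
    sensor_b = rng.normal(size=(n_samples, n_feat))
    X = np.hstack([sensor_a, sensor_b])       # simple feature-level fusion
    y = rng.integers(0, 3, size=n_samples)    # e.g. 3 manoeuvre classes

    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    clf.fit(X, y)
    print(clf.predict(X[:5]))                 # recognised manoeuvre labels

Because each sensor contributes its own view of the tools, the classifier can still discriminate manoeuvres when one view is degraded by shadowing.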

3. Execution of surgical scenarios and robot assistance in surgical manoeuvres

Identification of specific situations in which the robot can assist the surgeon through autonomous movements, driven by the developed multimodal interface and based on the motion planners for the surgical tools developed in previous robotic prototypes.

Achievements:

  • Mathematical modelling of the suture procedure following the Rosen model, identifying the interaction between robot and human, the actions carried out by the robot and by the surgeon, and the specific role of the developed multimodal interface (a schematic sketch of this task decomposition follows this list).
  • Integration of the multimodal interface functions with the motion-planning functions for the surgical tools in the robot's motion control system.
  • Training of the robot's gesture recognition system on the manoeuvres performed in suture procedures.
  • Validation of the autonomous motion system through experiments on the suture scenario.
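The Rosen-style decomposition can be pictured as a simple state machine in which each recognised surgeon gesture completes a step and selects the robot's assistance action. The following sketch is purely illustrative; the state, gesture, and action names are assumptions, not the project's actual model:

    # Hypothetical finite-state sketch of a suture sequence in the spirit
    # of a Rosen-style task decomposition: each recognised surgeon gesture
    # advances the task and selects the robot's assistance action.
    SUTURE_STEPS = {
        # state:        (gesture completing it, robot action,      next state)
        "approach":     ("tissue_reached",      "hold_camera_view", "needle_pass"),
        "needle_pass":  ("needle_through",      "present_thread",   "knot_tying"),
        "knot_tying":   ("knot_done",           "tension_thread",   "done"),
    }

    def step(state: str, gesture: str):
        """Advance the task when the expected gesture is recognised."""
        expected, robot_action, nxt = SUTURE_STEPS[state]
        if gesture == expected:
            return nxt, robot_action
        return state, None       # wait; no autonomous move yet

    s = "approach"
    for g in ["tissue_reached", "needle_through", "knot_done"]:
        s, action = step(s, g)
        print(g, "->", action, "| next:", s)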

 

 

Main Results

 

  • Gesture Recognition
  • CISOBOT: Autonomous Guidance

 

 

 
