In this work, we present SuFIA, the first framework for natural language-guided augmented dexterity for robotic surgical assistants. SuFIA combines the strong reasoning capabilities of large language models (LLMs) with perception modules to perform high-level planning and low-level control of a robot for surgical sub-task execution. This enables a learning-free approach to surgical augmented dexterity without any in-context examples or motion primitives. SuFIA adopts a human-in-the-loop paradigm, restoring control to the surgeon when information is insufficient and thereby mitigating unexpected errors in mission-critical tasks. We evaluate SuFIA on four surgical sub-tasks in a simulation environment and two sub-tasks on a physical surgical robotic platform in the lab, demonstrating its ability to perform common surgical sub-tasks through supervised autonomous operation under challenging physical and workspace conditions.
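The human-in-the-loop behavior described above can be sketched as a simple decision loop: attempt to plan from the language request and the perceived scene, execute only when the plan is complete, and otherwise return control to the surgeon. This is a minimal illustrative sketch; all names (`Plan`, `plan_subtask`, `execute`) are hypothetical and are not SuFIA's actual API.

```python
# Illustrative human-in-the-loop execution loop.
# All function and class names are hypothetical, not SuFIA's actual API.
from dataclasses import dataclass, field

@dataclass
class Plan:
    steps: list = field(default_factory=list)  # ordered low-level actions
    complete: bool = False                     # True if enough information to act

def plan_subtask(request: str, scene: dict) -> Plan:
    # Stand-in for the LLM + perception planning step: here we succeed
    # only when the needed object appears in the scene description.
    if "needle" in scene:
        return Plan(steps=["approach", "grasp", "retract"], complete=True)
    return Plan()

def execute(request: str, scene: dict) -> str:
    plan = plan_subtask(request, scene)
    if not plan.complete:
        # Insufficient information: hand control back to the surgeon.
        return "control returned to surgeon"
    for step in plan.steps:
        pass  # a robot controller would execute each low-level action here
    return "subtask completed"
```

The key design point is that the fallback path is checked before any motion is commanded, so the robot never acts on an incomplete plan.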
In this task, the robot hands over a suture needle from one arm to the other. SuFIA queries a language model to interpret the surgeon's request, plan the handover motion, and communicate its intentions to the surgeon. It then directly devises and executes the low-level robot actions and trajectories for the handover. The robot can also adapt the handover motion to the surgeon's stated preferences.
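One way to picture the handover trajectory is as a shared meeting pose between the two grippers, biased by a surgeon preference parameter. The sketch below is purely illustrative; the function name, the scalar `preference` parameter, and the simple straight-line geometry are assumptions, not the paper's actual trajectory planner.

```python
# Hypothetical sketch of a bimanual needle handover as a shared meeting pose.
# Names and geometry are illustrative, not SuFIA's actual implementation.
def handover_waypoints(giver_pose, receiver_pose, preference=0.5):
    """Compute a meeting point between the two gripper positions.

    `preference` in [0, 1] biases the meeting point toward the receiving
    arm, modeling a surgeon-adjustable handover location.
    """
    meet = tuple(g + preference * (r - g)
                 for g, r in zip(giver_pose, receiver_pose))
    return {
        # Receiver grasps the needle before the giver releases it.
        "receiver": [("move", meet), ("close_gripper", None)],
        "giver": [("move", meet), ("open_gripper", None)],
    }
```

Ordering matters here: the receiving gripper closes on the needle before the giving gripper opens, so the needle is never unsupported.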
In this task, a spring clamp assembly holds a soft vessel phantom at two points. The dVRK arm must grasp the vessel rim at a third point facing the robot and dilate the vessel by pulling backward.
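The grasp-and-pull motion above can be written down as a short sequence of Cartesian waypoints: approach the rim, close the gripper, then retract in a straight line. The function name, offsets, and the convention that negative x points toward the robot are all assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch of the vessel-dilation motion as Cartesian waypoints.
# All distances and the axis convention are illustrative assumptions.
def dilation_waypoints(rim_point, approach_offset=0.01,
                       pull_distance=0.02, n_steps=5):
    """Approach the vessel rim, close the gripper, then pull straight back."""
    x, y, z = rim_point
    waypoints = [
        ("move", (x, y, z + approach_offset)),  # pre-grasp above the rim
        ("move", (x, y, z)),                    # contact the rim
        ("close_gripper", None),
    ]
    # Retract backward in small increments (negative x is "toward the robot").
    for i in range(1, n_steps + 1):
        waypoints.append(("move", (x - pull_distance * i / n_steps, y, z)))
    return waypoints
```

Breaking the retraction into increments lets a controller monitor tissue tension at each step rather than commanding a single long pull.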