Robot successfully performs surgery without human intervention

A robot that was trained on videos of surgeries successfully performed a lengthy phase of a gallbladder removal without human intervention.

Operating for the first time on a lifelike patient, the robot responded to and learned from the team's voice commands during the procedure, much as a novice surgeon would with a mentor.

It performed consistently across trials with the expertise of a skilled human surgeon, even during the unexpected scenarios typical of real-life medical emergencies.

The federally funded work, led by researchers from Johns Hopkins University, is an advancement in surgical robotics, producing robots that combine mechanical precision with human-like adaptability and understanding.

“This advancement moves us from robots that can execute specific surgical tasks to robots that truly understand surgical procedures,” said Axel Krieger, a medical roboticist at Johns Hopkins. “This is a critical distinction that brings us significantly closer to clinically viable autonomous surgical systems that can work in the messy, unpredictable reality of actual patient care.”

Using machine learning to train the robot

This builds on work from 2022, when Krieger’s Smart Tissue Autonomous Robot (STAR) performed the first autonomous robotic surgery on a live animal: a laparoscopic surgery on a pig.

That robot, however, required specially marked tissue, operated in a highly controlled environment, and followed a rigid, predetermined surgical plan.

The new system is different. According to Krieger, it “is like teaching a robot to navigate any road, in any condition, responding intelligently to whatever it encounters”.

The hierarchical surgical robot transformer (SRT-H) performs surgery, adapts to individual anatomical features in real time, makes decisions on the fly, and self-corrects when situations don’t go as expected.

It is built with the same machine learning architecture that powers ChatGPT. It is also interactive, able to respond to spoken commands and corrections, learning from this feedback.
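The hierarchical design described above can be illustrated with a minimal, hypothetical sketch (not the actual SRT-H code): a high-level policy chooses the next surgical subtask as a language instruction, a low-level policy turns that instruction into motion, and a spoken correction from the team overrides the high-level plan. All class names, subtasks, and commands here are illustrative assumptions.

```python
# Hypothetical sketch of a hierarchical language-conditioned controller.
# In SRT-H the high level would be a transformer reading endoscope images;
# here it is a stand-in that steps through a fixed subtask list.

SUBTASKS = ["grab gallbladder infundibulum", "place clip", "cut duct"]


class HighLevelPolicy:
    """Picks the next subtask as a language instruction."""

    def __init__(self):
        self.step = 0

    def next_instruction(self, correction=None):
        # A human correction (e.g. "move left") takes priority over the
        # plan, mirroring how the system responds to voice feedback.
        if correction:
            return correction
        instruction = SUBTASKS[self.step % len(SUBTASKS)]
        self.step += 1
        return instruction


class LowLevelPolicy:
    """Maps an instruction to a motion command (stand-in for a learned model)."""

    def act(self, instruction):
        return f"executing: {instruction}"


def control_loop(corrections):
    """Run one high-level/low-level cycle per time step."""
    high, low = HighLevelPolicy(), LowLevelPolicy()
    return [low.act(high.next_instruction(c)) for c in corrections]


# No correction on steps 1 and 3; a verbal correction on step 2.
print(control_loop([None, "move left", None]))
```

The key design point is the separation of concerns: the high level reasons about *which* step comes next (and can be interrupted by language), while the low level only worries about *how* to execute the current instruction.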

“This work represents a major leap from prior efforts because it tackles some of the fundamental barriers to deploying autonomous surgical robots in the real world,” said lead author Ji Woong ‘Brian’ Kim, a former postdoctoral researcher at Johns Hopkins who’s now with Stanford University. “Our work shows that AI models can be made reliable enough for surgical autonomy—something that once felt far-off but is now demonstrably viable.”

Last year Krieger’s team used the system to train a robot to carry out three foundational surgical tasks: manipulating a needle, lifting body tissue, and suturing. Those tasks took only a few seconds each.

The gallbladder removal procedure is notably more complex and is made up of a minutes-long string of 17 tasks. As part of the surgery, the robot had to identify certain ducts and arteries and grab them precisely, strategically place clips, and sever parts with scissors.

It learned the gallbladder procedure by watching videos of Johns Hopkins surgeons performing it on pig cadavers. The researchers reinforced the visual training with captions describing the tasks. After watching the videos, the robot performed the surgery with 100% accuracy.
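The captioned-video training idea can be sketched as a data-preparation step for imitation learning: each demonstration frame is paired with the caption of the task being performed, so a model trained on the triples learns both which step it is in and what action to take. The field names and sample data below are illustrative assumptions, not from the actual study.

```python
# Hypothetical sketch: flatten captioned surgical demonstrations into
# (observation, caption, action) training triples for imitation learning.

def build_dataset(demos):
    """Return one (observation, caption, action) triple per frame."""
    samples = []
    for demo in demos:
        for frame in demo["frames"]:
            samples.append((frame["obs"], demo["caption"], frame["action"]))
    return samples


# Two tiny illustrative demonstrations, one frame each.
demos = [
    {"caption": "clip the cystic duct",
     "frames": [{"obs": "img_001", "action": "close clip applier"}]},
    {"caption": "cut the cystic duct",
     "frames": [{"obs": "img_002", "action": "close scissors"}]},
]

dataset = build_dataset(demos)
print(len(dataset))  # 2 training triples
```

Conditioning on the caption is what lets the same low-level model serve many subtasks: the language label disambiguates visually similar frames that belong to different steps of the procedure.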

Although the robot took longer to perform the work than a human surgeon would, the results were comparable to those of an expert.

“Just as surgical residents often master different parts of an operation at different rates, this work illustrates the promise of developing autonomous robotic systems in a similarly modular and progressive manner,” said Jeff Jopling, a Johns Hopkins surgeon and co-author.

The robot performed flawlessly across anatomical conditions that weren’t uniform and during unexpected detours – such as when the researchers changed the robot’s starting position and when they added blood-like dyes that changed the appearance of the gallbladder and surrounding tissues.

“To me it really shows that it’s possible to perform complex surgical procedures autonomously,” said Krieger. “This is a proof of concept that it’s possible and this imitation learning framework can automate such [a] complex procedure with such a high degree of robustness.”

Next, the team plans to train and test the system on more types of surgeries and to expand its capabilities toward performing a complete autonomous operation.
