A University of Oklahoma researcher and an Oklahoma City-based OU Health surgeon had a vision: use AI to visualize 3D CT data superimposed and anatomically aligned during surgery. The mission was to make each surgery safer and more efficient.
“Compared to a pilot flying a plane, or even an everyday Google Maps user commuting to work, surgeons have their instruments grouped behind them, hanging on the wall,” said Mohammad Abdul Mukit, an MS student in Electrical and Computer Engineering at the University of Oklahoma and a graduate fellow and research assistant. His research focuses on computer vision applications, extended reality, and AI in medical surgery.
“The Google Maps user or pilot receives constant, real-time updates on where they are, what to do next, and other vital data that helps them make split-second decisions,” he explained. “They don’t have to plan the trip for days or memorize every turn and detail of every landmark on the way. They just do it.”
Surgeons, on the other hand, must do rigorous surgical planning, memorize the specific details of each unique case, and know all the steps needed to ensure the safest surgery possible. They then carry out complex procedures lasting several hours, with no guidance systems, monitoring devices, or head-mounted displays to help them.
“They have to feel their way to their goal and expect everything to come out as they had planned,” Mukit said. “Through our research, we aim to change this process forever. We are developing Google Maps for Surgery.”
To make this vision a reality, Mukit and OU Health plastic and reconstructive surgeon Dr. Christian El Amm have been working together since 2019. The journey, however, began in 2018, when El Amm started collaborating with energy technology company Baker Hughes.
Baker Hughes specializes in using augmented reality/mixed reality and computed tomography (CT) scans to create 3D reconstructions of rock specimens. For geologists and oil and gas companies, this visualization is extremely useful, helping them plan and execute drilling operations efficiently.
“When you change the way you see the world, you change the world you see.”
Mohammad Abdul Mukit, University of Oklahoma
This technology caught El Amm’s attention. He imagined that, combined with AI, it could allow him to visualize 3D CT data superimposed and anatomically aligned during surgery. It could also let him see the reconstruction steps he had planned, during the surgery itself, without ever losing sight of the patient.
However, several key challenges had to be addressed to achieve a prototype mixed reality system ready for use in surgery.
MEETING THE CHALLENGE
“During the one-year collaboration, the Baker Hughes team created solutions for challenges that until then had been unresolved,” Mukit recalled. “They implemented a client/server system. The server, a high-end PC equipped with RGB-D cameras, would do all the computer vision work to estimate the six-degree-of-freedom (6-DoF) pose of the patient’s head.
“It would then stream the CT scan data to the client device, a Microsoft HoloLens 1, for anatomically aligned display,” he continued. “Baker Hughes developed a proprietary compression algorithm that allowed them to transmit a large volume of CT scan data, and integrated a proprietary AI engine to do the pose estimation.”
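The article does not disclose Baker Hughes’ implementation, but the anatomically aligned display it describes rests on a standard idea: the estimated 6-DoF head pose is a rigid transform (rotation plus translation) that maps CT-space coordinates into the tracked head’s frame. A minimal sketch, with illustrative names not taken from the actual system:

```python
import numpy as np

def pose_to_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

def align_ct_points(ct_points, head_pose):
    """Map CT-space points (N x 3, meters) into the tracked head's frame."""
    homogeneous = np.hstack([ct_points, np.ones((len(ct_points), 1))])
    return (head_pose @ homogeneous.T).T[:, :3]

# Example: head rotated 90 degrees about the vertical axis, shifted 10 cm along z.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
pose = pose_to_matrix(R, np.array([0.0, 0.0, 0.1]))
points = np.array([[0.02, 0.0, 0.0]])  # a CT landmark 2 cm along x
aligned = align_ct_points(points, pose)  # lands at (0, 0.02, 0.1) in the head frame
```

In the real system this transform would be re-estimated every frame, so the CT overlay tracks the patient’s head as it moves.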
It was a complex engineering project carried out in a very short time. After completing this prototype, the team better understood the limitations of this configuration and the need for a better system.
“The prototype system was a bit impractical for a surgical environment, but it was essential for understanding our needs,” Mukit said. “First, the system could not estimate the position of the head in surgical settings, where most of the patient’s body except the head is covered with drapes. Second, the system required time-consuming camera calibration steps every time we exited the app.
“This was a problem because, in our experience, surgeons only accept devices that work from the very first moment,” he continued. “They don’t have time to fiddle with technology while concentrating on life-altering procedures. We also deeply felt the need for voice control of the system. This is essential in surgery: surgeons will always have their hands busy.”
Surgeons cannot contaminate their hands by touching a computer to adjust the system, nor remove the headset to recalibrate it. The team realized that a new, more practical and seamless system was essential.
“I started working on building a better system from scratch in 2019, once the official collaboration with Baker Hughes ended,” Mukit said. “Since then, we’ve moved most of the essential tasks to the edge: the head-mounted display itself. We’ve also taken advantage of CT scan data to train and deploy machine learning models that are more robust at estimating the position of the head than before.
“We have developed ‘marker-free tracking,’ which allows computed tomography or other images to be superimposed using artificial intelligence instead of cumbersome physical markers to guide the way,” he added. “We have also eliminated the need for any manual camera calibration.”
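Mukit does not describe the algorithm behind marker-free tracking, but a common recipe fits his description: a learned model detects anatomical landmarks in the camera image, and the rigid transform aligning the matching CT landmarks to them is solved in closed form with the Kabsch algorithm. A sketch under that assumption (function and variable names are hypothetical):

```python
import numpy as np

def estimate_head_pose(ct_landmarks, camera_landmarks):
    """Kabsch algorithm: least-squares rigid transform from CT frame to camera frame."""
    p_centroid = ct_landmarks.mean(axis=0)
    q_centroid = camera_landmarks.mean(axis=0)
    H = (ct_landmarks - p_centroid).T @ (camera_landmarks - q_centroid)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    t = q_centroid - R @ p_centroid
    return R, t

# Synthetic check: "detected" camera landmarks are the CT landmarks under a
# known rotation about the vertical axis plus a shift; the estimate recovers both.
rng = np.random.default_rng(0)
ct_pts = rng.normal(size=(6, 3))
angle = 0.5
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.3])
cam_pts = ct_pts @ R_true.T + t_true
R_est, t_est = estimate_head_pose(ct_pts, cam_pts)
```

Because the landmarks are anatomical rather than glued-on fiducials, this style of registration needs no markers and no per-session calibration, matching the workflow the team describes.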
Finally, they added voice commands. All of these changes, Mukit said, made the applications and the system click with surgeons.
“Because of their convenience and usefulness, the applications were very well received by OU Medicine surgeons,” he noted. “Suddenly, ideas, feature requests, and consultations from different medical experts were pouring in. That’s when I realized we had something really special on our hands, and that we had only scratched the surface. We started developing these features for each unique type of surgery.”
Gradually, the system was enriched with a variety of useful features, leading to unique innovations, he added.
El Amm has begun using the device during surgical cases to improve the safety and efficiency of complex reconstructions. Many of his patients come to him for craniofacial reconstruction after a traumatic injury; others have congenital deformities.
So far, he has used the device in several cases, including reconstructing a patient’s ear. The system took a mirror image of the patient’s intact ear and superimposed it on the opposite side, allowing El Amm to accurately position the reconstructed ear. In the past, he would cut out a template of the ear and aim for precision with the naked eye.
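The mirroring step can be illustrated with basic mesh geometry. Assuming the head model is centered so the midsagittal plane is x = 0 (an assumption for this sketch; the real system’s conventions aren’t given), the intact ear’s surface is reflected by negating x, and each triangle’s winding is reversed so its normal still points outward:

```python
import numpy as np

def mirror_across_sagittal(vertices, faces):
    """Reflect a triangle mesh across the x = 0 (midsagittal) plane.

    Reflection inverts orientation, so each face's vertex order is
    reversed to keep surface normals pointing outward.
    """
    mirrored_vertices = vertices * np.array([-1.0, 1.0, 1.0])
    flipped_faces = faces[:, ::-1].copy()
    return mirrored_vertices, flipped_faces

# A single triangle on the patient's right side (x > 0) ...
verts = np.array([[0.07, 0.00, 0.00],
                  [0.07, 0.01, 0.00],
                  [0.07, 0.00, 0.01]])
tris = np.array([[0, 1, 2]])
m_verts, m_tris = mirror_across_sagittal(verts, tris)
# ... now lies at x = -0.07, on the left side, with reversed winding.
```

Overlaying the mirrored surface on the reconstruction side gives the surgeon a target shaped exactly like the patient’s own anatomy, rather than a hand-cut template.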
In another surgical case, an 18-step facial reconstruction, the device overlaid the patient’s CT scan on top of his actual bones.
“We had to cut and move each of these bones in a precise direction,” El Amm said. “The device allowed us to see the bones individually, and then showed each of the cuts and each of the movements, which allowed the surgeon to verify that he had gone through all the steps. It’s basically walking through the steps of the surgery in virtual reality.”
ADVICE FOR OTHERS
“When you change the way you see the world, you change the world you see,” Mukit said. “That is what mixed reality does. MR is the next general-purpose computing platform. Powerful technology will no longer live in your pockets or on your desktops.
“Through MR, it will integrate with your human self,” he continued. “It will change the way we solve problems, which in turn will lead to new creative ways of solving problems with AI. I think in the coming years we will see another technological revolution, especially after the mixed reality headset presented in 2023, which is lighter than any other visor on the market.”
Currently, almost every industry is integrating mixed reality headsets into its business, and rightly so, as the gains are evident, he added.
“This technology is now mature enough for countless possible applications in almost every industry, especially healthcare,” he concluded. “Mixed reality has not yet fully entered this industry. We have only scratched the surface, and in just a few months we have seen an overwhelming tsunami of ideas from experts. Ideas that can now be easily implemented.
“These application scenarios range from education and training to surgeries that are safer, faster and cheaper for both surgeons and patients. Now it’s time to jump into mixed reality.”