Posted on: Thursday, 21 January 2021, 10:07 AM
Have you ever needed an IV and endured several needle sticks before the nurse could find a vein? Technology to spare patients that painful trial and error is in the works. Fujifilm’s ultrasound diagnostics arm SonoSite has revealed that it is partnering with a startup to develop artificial intelligence that can interpret ultrasound images on a mobile phone.
The companies say the first target for their AI-enabled ultrasound will be locating veins for IV (intravenous) needle insertion. The technology would let technicians hold a simple ultrasound wand over the skin while software on a connected mobile device identifies the vein for them.
For this project, Fujifilm SonoSite tapped the Allen Institute for Artificial Intelligence (AI2), which runs an incubator for AI startups. “Not only do we have to come up with a very accurate model to analyze the ultrasound videos, but on top of that, we have to make sure the model is working effectively on the limited resources of an Android tablet or phone,” says Vu Ha, technical director of the AI2 Incubator.
In an interview with IEEE Spectrum, Ha did not disclose the name of the startup that will be taking on the task, saying the fledgling company is still in “stealth mode.”
Ha says the AI2 startup will tackle the project in two stages: First, it will train a model on ultrasound images without any resource constraints, with the goal of making it as accurate as possible. Then, the startup will run a series of experiments to simplify the model, reducing the number of hidden layers in the network and pruning and compressing it until it can run efficiently on a mobile phone. The trick will be to shrink the model without sacrificing too much accuracy, Ha says.
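The compression stage described above can be illustrated with a minimal sketch. The functions below are hypothetical, not the startup's actual pipeline: they show two standard shrinking techniques, magnitude pruning (zeroing out the smallest weights) and 8-bit quantization (mapping floats to 256 integer levels). Real mobile deployments would typically use a framework such as TensorFlow Lite or PyTorch Mobile; the numbers here are illustrative only.

```python
def prune_by_magnitude(weights, sparsity):
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    ranked = sorted(weights, key=abs)
    cutoff = abs(ranked[int(len(ranked) * sparsity) - 1]) if sparsity > 0 else -1.0
    return [0.0 if abs(w) <= cutoff else w for w in weights]

def quantize_8bit(weights):
    """Map float weights to signed 8-bit integers (symmetric, per-tensor scale)."""
    scale = (max(abs(w) for w in weights) / 127) or 1.0  # guard against all-zero
    return [round(w / scale) for w in weights], scale

# Toy "layer" of weights: prune half, then quantize what remains.
weights = [0.9, -0.02, 0.5, 0.001, -0.7, 0.03]
pruned = prune_by_magnitude(weights, 0.5)      # three smallest weights become zero
q, scale = quantize_8bit(pruned)               # integers in [-127, 127] plus one scale
restored = [v * scale for v in q]              # dequantized approximation at inference
```

The accuracy trade-off Ha mentions shows up directly here: the more aggressive the sparsity and the coarser the quantization, the smaller and faster the model, but the further `restored` drifts from the original weights.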
If successful, the device could help clinicians minimize the number of unsuccessful attempts at finding a vein, and enable less experienced technicians to start IVs as well. Hospitals that place a large volume of IVs often have well-trained staff capable of eyeballing ultrasound videos and using those images to locate small blood vessels. But such highly trained clinicians are scarce, says Ha.
“My hope is that with this technology, a less trained person will be able to find veins more reliably” using ultrasound, he says. That could broaden the availability of portable ultrasound to rural and resource-poor areas.
SonoSite and AI2 are home to two of the many research groups putting AI to work on medical imaging and diagnostics. The U.S. Food and Drug Administration (FDA) has authorized for commercial use a deep learning algorithm that analyzes MRI images of the heart, an AI system that looks for signs of diabetic retinopathy in images of the retina, an algorithm that analyzes X-ray images for signs of wrist fracture, and software that looks for indicators of stroke in CT images of the brain, to name a few.
Notably, in 2017 the FDA also cleared for commercial use a smartphone-based ultrasound device made by Butterfly. The device, which costs less than $2,000, can be used to take sonograms for 13 clinical applications, including blood vessels. Butterfly has announced publicly that it is developing deep learning–based AI to assist clinicians with image interpretation, but the company has not yet commercially launched the technology.
About four other portable or mobile device–based ultrasound technologies have been cleared by the FDA, including Fujifilm SonoSite's own and the Lumify from Philips. But adoption of these devices has been relatively slow. As Eric Topol, director of the Scripps Research Translational Institute, told Spectrum recently, smartphone ultrasound is a “brilliant engineering advance” that’s “hardly used at all” in the health care system. Complex challenges such as reimbursement, training, and the ingrained habits of clinicians often hinder the uptake of new devices, despite engineers’ best efforts.