Updates on New Technologies and Imaging Agents in Healthcare
Announcer Intro
Welcome to Project Oncology on ReachMD. Today, we’re welcoming Dr. Tyler Bradshaw to speak about his presentation from the Society of Nuclear Medicine and Molecular Imaging, or SNMMI, 2024 Annual Meeting on new technologies and new imaging agents. Dr. Bradshaw is an Associate Professor in the Department of Radiology at the University of Wisconsin-Madison. Let’s hear from him now.
Dr. Bradshaw:
Our research lab, here at the University of Wisconsin in the Department of Radiology, focuses on medical imaging technologies: things that can improve the generation, interpretation, and quantification of medical imaging. We've done a lot of work on enhancing images through things like denoising, or automatically detecting disease and segmenting it. But in recent years, we've focused a lot more on how we can improve the downstream interpretation of the images as they come in and enhance physicians' workflows. And so one thing that we've worked a lot on is large language models.
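To make the image-enhancement side concrete, here is a toy sketch of the denoising task using a classical Gaussian filter from scipy; the lab's research presumably uses learned models rather than this simple filter, and the synthetic image below is purely illustrative.

import numpy as np
from scipy.ndimage import gaussian_filter

# Synthetic example: a bright square standing in for a lesion (illustrative only).
rng = np.random.default_rng(0)
clean = np.zeros((64, 64))
clean[24:40, 24:40] = 1.0
noisy = clean + rng.normal(0.0, 0.3, clean.shape)

# Classical denoising: smooth away high-frequency noise. Learned denoisers
# solve the same noisy-image-in, cleaner-image-out problem, but learn the
# mapping from data instead of applying a fixed filter.
denoised = gaussian_filter(noisy, sigma=1.5)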
So several years ago, we became interested in these technologies, mostly because we saw what was happening in computer science, and we thought about how those improvements and those technologies could improve radiology overall. And so we use large language models to interpret radiology reports and to summarize them, and we're developing models that can integrate language as well as images. These are called vision language models, or large multimodal models, and we're finding ways to integrate them so that this incredible technology made available through large language models can actually operate on the images as well and help out physicians, technologists, and maybe even physicists.
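As a rough illustration of what a vision language model looks like in code, here is a minimal sketch using the Hugging Face transformers pipeline with a general-purpose visual question answering model; the model name and image file are illustrative stand-ins, not the lab's actual system, and a general model like this is not validated for medical use.

from PIL import Image
from transformers import pipeline

# General-purpose VQA model as a stand-in; not medically validated.
vqa = pipeline("visual-question-answering", model="Salesforce/blip-vqa-base")

image = Image.open("example_scan.png")  # hypothetical image file
result = vqa(image=image, question="Is there an abnormality in this image?")
print(result)  # ranked candidate answers with confidence scores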
I guess the thing you should know is that we're still at the very early stages of this. Large language models have been around for a while, probably seven years or so, but it really wasn't until ChatGPT came out that they captured everyone's attention and demonstrated the power of models trained on next-word prediction. So it's only been a few years that we've had this powerful tool, and I think everyone's still exploring how we can use it in medicine. Right now it's clear that larger models trained on more data can do more and more amazing things, yet they're still quite limited in the medical space.
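For readers unfamiliar with the training objective he mentions, here is a toy PyTorch sketch of next-word (next-token) prediction; the tiny model and random tokens are hypothetical stand-ins, since real large language models use deep transformers trained on enormous text corpora.

import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64
# A trivially small stand-in model; real LLMs put a deep transformer here.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)

tokens = torch.randint(0, vocab_size, (1, 16))  # a random "sentence"
logits = model(tokens[:, :-1])                  # predict from each prefix
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size),             # predicted next-token scores
    tokens[:, 1:].reshape(-1),                  # the actual next tokens
)
loss.backward()  # training minimizes this loss over huge text corpora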
And even though there are publications coming out constantly showing the promise of large language models and their use in clinics, they still make a lot of errors. They're really not optimized for that type of usage, so I think we're still at the very beginning phases, and I'm certain that we will eventually see how these technologies impact patients. It will most likely be in the form of patients interacting with large language models for information or scheduling, or physicians using large language models to optimize their workflows so they can complete tasks faster and do more during their work hours. Those are the most likely use cases that we foresee.
There are limited technologies right now where you can use a large language model in clinical practice. The one that I'm aware of that you can purchase, and that has received the appropriate approvals, is report summarization: if you're a radiologist dictating a report, you can have a large language model automatically create a summary of that report. We're also seeing that some of the medical record vendors are starting to use large language models for navigating and asking questions about the patient's medical record. I know that's been piloted in several organizations and is now becoming more widely available, so we're definitely going to see a lot of that, because we know that physicians, including nuclear medicine physicians and radiologists, spend a lot of time going through the medical record trying to find information that a large language model could provide instantly if they just asked the right questions. So this is being explored and deployed, but I still think we're very much in the early stages of this.
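As a sketch of the report-summarization pattern, here is what asking a general-purpose LLM to draft a report summary might look like, assuming the OpenAI Python client; the approved commercial products he refers to are separate, integrated systems, and the model name and prompt here are illustrative.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

findings = "..."  # the dictated findings section of a report (placeholder)

# Ask a general-purpose model to draft a short impression; in practice the
# output would be reviewed and edited by the radiologist before signing.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "Summarize these radiology findings into a brief impression."},
        {"role": "user", "content": findings},
    ],
)
print(response.choices[0].message.content)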
The number of publications on large language models, vision language models, and large multimodal models is growing exponentially. Almost every AI lab working in medicine right now, and the number of those labs is also growing exponentially, is interested in how to incorporate large language models into whatever domain they're working in, whether it's MRI, nuclear medicine, or image reconstruction. Everyone sees the value of having these very large foundation models that are pretrained on a large, diverse data set and can be adapted for use in various applications, and so we're seeing this type of technology incorporated into a lot of different domains. I'd say the most exciting research we're seeing right now, which is going to have a downstream impact on all of medicine, is the incorporation of multiple modalities. And I use modalities loosely: it could mean multiple imaging modalities, like MRI, nuclear medicine, ultrasound, and so forth, but also sensory modalities, such as vision, audio, video, and language. All of these could potentially be used to train models that operate in different spaces and produce different outputs, not just text but potentially images and video. Once we see these models trained on enough data and validated in different use cases, I think we're going to see them deployed everywhere, and it's really going to take the world by storm. I don't know how long that's going to take.
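One common way a pretrained foundation-style model gets adapted to a new application is to freeze its weights and train a small task-specific head on top; here is a minimal PyTorch sketch of that pattern, using a torchvision ResNet as a stand-in for a much larger pretrained model, with a hypothetical two-class task and random data.

import torch
import torch.nn as nn
from torchvision import models

# Pretrained backbone as a stand-in for a large foundation model.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False  # keep the pretrained weights fixed

# Replace the final layer with a new head for a hypothetical 2-class task.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
x = torch.randn(4, 3, 224, 224)      # hypothetical batch of images
labels = torch.tensor([0, 1, 0, 1])  # hypothetical labels
loss = nn.functional.cross_entropy(backbone(x), labels)
loss.backward()
optimizer.step()                     # only the small new head is updated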
I think there's a lot of excitement for large language models, large multimodal models, and AI in general in nuclear medicine. Nuclear medicine is at somewhat of a disadvantage because we don't generate as much data at each center as other imaging modalities do, and that's just the nature of nuclear medicine. What that means is that it's harder to collect large data sets, and data is very important in training these models, so that is a challenge we have to face. Not only that, but nuclear medicine is generally considered a pretty complicated domain for these AI technologies. For example, in oncologic PET imaging, we scan the whole body, and the reports we generate are very lengthy because there can be lots of findings. We also do a lot of different types of imaging, so it's a complicated modality. The data sets and the reports are quite large, and yet the data are somewhat at a disadvantage due to the low volumes that we see. So I think that's a challenge, but it's also an opportunity for the nuclear medicine community to pull data together and find ways to share data with each other, and I think this is going to be critical in the emerging era of theranostics and targeted radionuclide therapy, where shared data could help us create better practices.
Announcer Close
That was Dr. Tyler Bradshaw speaking about his research on new technologies and imaging agents. To access this and other episodes in this series, visit Project Oncology on ReachMD.com, where you can Be Part of the Knowledge. Thanks for listening.