Automation in Medical Imaging


Summary

Automation in medical imaging refers to the use of artificial intelligence (AI) and advanced algorithms to assist or perform tasks like image analysis, reporting, and diagnosis, reducing the need for manual review and speeding up workflows. By mimicking expert strategies and integrating cutting-edge AI models, automation is reshaping how clinicians interpret scans and make medical decisions.

  • Streamline workflows: Adopt AI-driven tools for tasks such as measuring heart function or filtering pathology slides to save time and let clinicians concentrate on decision-making.
  • Build trust: Use systems that highlight which parts of an image were most important for AI decisions, making results easier for medical experts to understand and verify.
  • Stay current: Explore open-source models and keep up with regulatory guidance to ensure safe, reliable integration of automation into everyday practice.
  • Heather Couture, PhD

    Making vision AI work in the real world • Consultant, Applied Scientist, Writer & Host of Impact AI Podcast

    15,789 followers

    Processing whole slide images typically requires analyzing 18,000+ tiles and hours of computation. But what if AI could work like a pathologist?

    The computational bottleneck: Current AI approaches face a fundamental inefficiency. Whole slide images are massive gigapixel files divided into thousands of tiles for analysis. Most systems process every tile regardless of diagnostic relevance, averaging 18,000 tiles per slide. This brute-force approach demands enormous resources and creates clinical adoption barriers.

    Experienced pathologists don't examine every millimeter uniformly. They strategically focus on diagnostically informative regions while quickly scanning normal tissue or artifacts. Peter Neidlinger et al. developed EAGLE (Efficient Approach for Guided Local Examination) to mimic this selective strategy. The system combines two foundation models: CHIEF for identifying regions meriting detailed analysis, and Virchow2 for extracting features from the selected areas.

    Key metrics:
    - Speed: processed slides in 2.27 seconds, reducing computation time by 99%
    - Accuracy: outperformed state-of-the-art models across 31 tasks spanning four cancer types
    - Interpretability: lets pathologists validate which tiles informed decisions

    The authors note that "careful tile selection, slide-level encoding, and optimal magnification are pivotal for high accuracy, and combining a lightweight tile encoder for global scanning with a stronger encoder on selected regions confers marked advantage."

    Practical implications: This efficiency addresses multiple adoption barriers. Reduced computational requirements eliminate dependence on high-performance infrastructure, democratizing access for smaller institutions. The speed enables real-time workflows that integrate into existing diagnostic routines rather than separate batch processing. Most importantly, the selective approach provides interpretability: pathologists can examine the specific tissue regions that influenced the AI's analysis, supporting validation and trust-building.

    Broader context: EAGLE represents a shift from computational brute force toward intelligent efficiency in medical AI. Rather than scaling up hardware, it scales down computational demands while improving performance. This illustrates how understanding domain expertise can inform more effective AI architectures than purely data-driven approaches.

    How might similar efficiency-focused approaches change AI implementation in your field?

    paper: https://lnkd.in/eR_Hj7ip
    code: https://lnkd.in/eX8wEfy6

    #DigitalPathology #MedicalAI #ComputationalPathology #MachineLearning #ClinicalAI #FoundationModels
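
    The two-stage strategy the post describes is compact enough to sketch in code. Below is a minimal illustration in which a cheap encoder scans every tile and an expensive one runs only on the highest-scoring ones; the encoder and scorer callables are hypothetical stand-ins for EAGLE's CHIEF and Virchow2 components, not the released implementation.

    ```python
    import numpy as np

    def selective_slide_encoding(tiles, light_encoder, strong_encoder,
                                 relevance_scorer, top_k=25):
        """EAGLE-style two-stage inference (sketch).

        tiles            -- list of tile images from one whole-slide image
        light_encoder    -- cheap per-tile feature extractor (stand-in for the
                            lightweight global-scanning encoder)
        strong_encoder   -- expensive per-tile feature extractor (stand-in for
                            a Virchow2-class model)
        relevance_scorer -- maps cheap features to non-negative relevance scores
        """
        # Stage 1: scan ALL tiles with the lightweight encoder (cheap).
        cheap_feats = np.stack([light_encoder(t) for t in tiles])

        # Score every tile and keep only the top-k most informative ones.
        scores = relevance_scorer(cheap_feats)          # shape: (n_tiles,)
        selected = np.argsort(scores)[::-1][:top_k]

        # Stage 2: run the strong encoder ONLY on selected tiles (expensive).
        strong_feats = np.stack([strong_encoder(tiles[i]) for i in selected])

        # Aggregate into one slide-level embedding, weighted by relevance.
        weights = scores[selected] / scores[selected].sum()
        slide_embedding = (weights[:, None] * strong_feats).sum(axis=0)

        # Returning the indices keeps the result auditable: a pathologist can
        # inspect exactly which tiles informed the slide-level decision.
        return slide_embedding, selected
    ```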

  • J. David Giese

    Rapid, fixed-price FDA software and cyber docs for 510(k)s

    6,260 followers

    𝗧𝗶𝘁𝗹𝗲: Reducing the Workload of Medical Diagnosis through Artificial Intelligence: A Narrative Review
    𝗔𝘂𝘁𝗵𝗼𝗿𝘀: Jinseo Jeong, Sohyun Kim, Lian Pan, Daye Hwang, Dongseop Kim, Jeongwon Choi
    𝗗𝗢𝗜: https://hubs.li/Q0372G3H0

    𝗢𝘃𝗲𝗿𝘃𝗶𝗲𝘄: This narrative review examines how AI is reshaping diagnostics by reducing diagnostic time and data volume across specialties. It analyzes 51 studies (January 2019 to February 2024) that compared AI-enhanced workflows with traditional methods. The paper categorizes AI applications by their role in supporting, or even independently performing, diagnoses, and provides valuable regulatory insights, referencing FDA guidance for SaMD and AI/ML-based devices, to ensure safe clinical integration.

    𝗞𝗲𝘆 𝗧𝗮𝗸𝗲𝗮𝘄𝗮𝘆𝘀:
    1) The review evaluated 51 studies to assess AI's impact on reducing clinician workload and improving efficiency.
    2) AI applications were classified into four groups:
       • Category A: providing supporting materials (e.g., annotated images) to assist clinicians.
       • Category B: reducing the volume of data that clinicians must review.
       • Category C: allowing AI to perform independent diagnoses.
       • Category D: reducing data volume without a measured change in diagnostic time.
    3) In radiology, AI reduced diagnostic scan time by over 90% in instances like CT lesion detection and contrast-enhanced mammography.
    4) Pathology benefits included significant workload reduction from automating tasks such as slide filtering and aiding cancer detection.
    5) The review highlights how digitized, standardized imaging in radiology enables higher levels of AI performance than fields with more variable data formats.
    6) While AI holds promise for addressing workforce shortages and improving accuracy, challenges remain in integrating it into clinical workflows.
    7) Some studies noted delays (e.g., data upload times) and workflow inefficiencies that need further optimization.
    8) Ethical, data standardization, and regulatory issues are discussed, emphasizing the need for adherence to FDA guidance on SaMD and AI/ML products.
    9) The review suggests successful AI integration requires continuous collaboration between clinicians and technologists.
    10) Future research should consider expanding AI's application beyond diagnostics to treatment decisions, patient management, and real-time decision support.

    𝗗𝗶𝘀𝗰𝘂𝘀𝘀𝗶𝗼𝗻 𝗣𝗼𝗶𝗻𝘁𝘀:
    • How can we further streamline AI integration into existing clinical workflows without compromising data security or patient safety?
    • What strategies might address variability and standardization challenges, specifically in fields like pathology?
    • How will evolving FDA guidance impact the safe, effective introduction of AI/ML technologies into healthcare?

    #AIinHealthcare #DigitalHealth #MedTech #SaMD #HealthcareInnovation #RegulatoryAffairs #MedicalDevices #DiagnosticEfficiency

  • Mathias Goyen, Prof. Dr.med.

    Chief Medical Officer at GE HealthCare

    69,634 followers

    Wisdom & Workflow Wednesday: AI in Cardiac Imaging - Beyond Detection

    Cardiac imaging is one of the most technically complex and clinically impactful areas of radiology. From cardiac MRI to echocardiography and CT, these scans inform life-changing decisions: whether a patient receives surgery, medication, or device therapy.

    But complexity also brings challenges:
    • Variability between readers and institutions
    • Time-consuming manual measurements
    • The need for precise, reproducible quantification to guide therapy

    This is where #AI can transform the workflow. Not by simply “detecting” abnormalities, but by:
    • Automatically quantifying ventricular volumes, ejection fraction, and strain
    • Reducing inter-reader variability for more reliable clinical decisions
    • Accelerating reporting so cardiologists and radiologists can focus on interpretation, not manual tasks

    As Chief Medical Officer at GE HealthCare, I've seen how clinicians embrace AI in cardiac imaging not because it replaces expertise, but because it amplifies precision and efficiency. Reliable measurements mean more confidence in therapy decisions and, ultimately, better patient outcomes.

    The lesson? In cardiac care, every number counts. And when AI helps ensure those numbers are both fast and accurate, patients win.

    Colleagues: In your experience, what's the biggest workflow challenge in cardiac imaging: acquisition, measurement, or reporting?

    #WorkflowWednesday #CardiacImaging #Radiology #AIinHealthcare #Leadership #GEHealthcare
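
    Ejection fraction, one of the numbers this post refers to, shows how simple the arithmetic is once AI has done the hard part (segmenting the ventricle at end-diastole and end-systole). A minimal sketch of the standard formula; the normal range in the comment is an approximate guideline value, not from this post.

    ```python
    def ejection_fraction(edv_ml: float, esv_ml: float) -> float:
        """Left-ventricular ejection fraction (%) from AI-segmented
        end-diastolic (EDV) and end-systolic (ESV) volumes:
        EF = (EDV - ESV) / EDV * 100."""
        if edv_ml <= 0 or not (0 <= esv_ml <= edv_ml):
            raise ValueError("volumes must satisfy 0 <= ESV <= EDV, EDV > 0")
        return (edv_ml - esv_ml) / edv_ml * 100.0

    # Example: EDV 120 mL, ESV 50 mL -> EF of about 58.3%, within the roughly
    # 52-72% range usually cited as normal for adults.
    print(f"EF = {ejection_fraction(120, 50):.1f}%")
    ```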

  • Jan Beger

    Healthcare needs AI ... because it needs the human touch.

    85,607 followers

    This paper provides a primer for physicians on LLMs and LMMs in medical imaging, explaining their principles, applications, and challenges.

    1️⃣ LLMs are transforming healthcare by improving medical imaging workflows, including radiology reporting, error detection, and decision support.
    2️⃣ LMMs extend LLMs by integrating text with images, enabling applications such as automated radiology reporting, visual question-answering, and image-based differential diagnosis.
    3️⃣ The paper details key LLM components, like tokenization, transformers, self-attention, and fine-tuning, which allow models to process and generate medical text effectively.
    4️⃣ LMMs use contrastive learning, cross-attention, or early fusion techniques to integrate images with language, enhancing their ability to interpret medical scans.
    5️⃣ Applications in radiology include clinical summarization, structured reporting, medical record navigation, and chatbot-based education for both physicians and patients.
    6️⃣ Despite their promise, LLMs and LMMs face challenges such as confabulation, bias, and the need for extensive computational resources and clinical validation.
    7️⃣ Future developments may include multimodal agents capable of synthesizing vast patient data for personalized diagnostics and treatment planning.

    ✍🏻 Tyler J. Bradshaw, Xin Tie, Joshua Warner, Junjie Hu, Quanzheng Li, Xiang Li. Large Language Models and Large Multimodal Models in Medical Imaging: A Primer for Physicians. J Nucl Med. 2025. DOI: 10.2967/jnumed.124.268072
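
    Of the three image-text fusion strategies in point 4, contrastive learning is the most compact to illustrate. Below is a generic CLIP-style sketch, not the paper's code: matched image/report pairs are pulled together in embedding space while mismatched pairs are pushed apart.

    ```python
    import torch
    import torch.nn.functional as F

    def clip_style_contrastive_loss(img_emb, txt_emb, temperature=0.07):
        """Symmetric InfoNCE loss over a batch of paired image/report
        embeddings. The diagonal of the similarity matrix holds the true
        pairs; cross-entropy pushes each row/column toward its diagonal."""
        img = F.normalize(img_emb, dim=-1)
        txt = F.normalize(txt_emb, dim=-1)
        logits = img @ txt.t() / temperature        # (B, B) similarities
        targets = torch.arange(len(img))            # pair i matches pair i
        return (F.cross_entropy(logits, targets) +
                F.cross_entropy(logits.t(), targets)) / 2

    # Toy usage with random tensors standing in for encoder outputs:
    loss = clip_style_contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
    ```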

  • Vidith Phillips MD, MS

    Imaging AI Researcher, St Jude Children’s Research Hospital | Healthcare AI Strategist | Committee Member, American Neurological Association

    16,164 followers

    📌 Open-Source Medical Imaging AI Models (2024–2025)

    This curated list highlights the latest open-source AI models transforming medical imaging, from generalist vision-language foundations to specialized tools for segmentation, diagnosis, and report generation. Explore models across radiology, oncology, and multimodal analysis. Full links and details below. 👇

    📌 Foundation & Multimodal Models
    • Rad-DINO – Self-supervised ViT trained on 1M+ chest X-rays
    • RayDINO – Large-scale DINO-based transformer for multi-task chest X-ray learning
    • Med-Gemini – Gemini-based model fine-tuned for multi-task chest X-ray applications
    • Merlin – Large 3D vision–language model for CT interpretation and reporting
    • RadFound – Radiology-wide VLM for report generation and question answering
    • LLaVA-Rad – Vision–language model for chest X-ray finding generation

    📌 Segmentation Models
    • MedSAM2 – Promptable 3D segmentation model extending Segment Anything to medical imaging
    • FluoroSAM – SAM variant trained from scratch on synthetic X-ray/fluoro images
    • ONCOPILOT – Interactive model for CT-based 3D tumor segmentation in oncology

    📌 Task-Specific / Tuned Models
    • MAIRA-2 – Enhanced CXR report generator with finding localization
    • CheXagent – Instruction-tuned multimodal model for chest X-ray tasks
    • RadVLM – Dialogue assistant for chest X-ray interpretation and reporting
    • Mammo-CLIP – CLIP-based model for mammogram classification and BI-RADS prediction
    • CheXFound – ViT model using GLoRI architecture for disease localization in X-rays

    Know a model that got missed? Drop it in the comments, let's build this resource list together. 🤔

    #ai #imaging #radiology #oncology #machinelearning
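
    Several of these checkpoints are distributed through Hugging Face, so trying one usually takes only a few lines. A hedged example of extracting chest X-ray embeddings with Rad-DINO; the repo id, file path, and output layout here are assumptions to verify against the model card before use.

    ```python
    import torch
    from PIL import Image
    from transformers import AutoImageProcessor, AutoModel

    # "microsoft/rad-dino" is the Hugging Face id Rad-DINO is believed to be
    # published under; confirm on the model card.
    repo = "microsoft/rad-dino"
    processor = AutoImageProcessor.from_pretrained(repo)
    model = AutoModel.from_pretrained(repo).eval()

    # Placeholder path; any frontal chest X-ray image works.
    image = Image.open("chest_xray.png").convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs)

    # CLS-token embedding: a general-purpose representation that can feed a
    # downstream classifier, retrieval index, or linear probe.
    embedding = out.last_hidden_state[:, 0]   # shape: (1, hidden_dim)
    ```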

  • Anish Shah

    CEO | Fractional COO, CPO, CSO | Key Leader - Moderna COVID Response | Healthcare, Biotechnology, Pharmaceuticals, Payer, AI, SaaS, RCM | Growth, M&A, Operations Transformation | McKinsey | Entrepreneur | Keynote Speaker

    2,898 followers

    Think AI will replace radiologists? The truth is much weirder, and way more interesting...

    Radiology is about to go through an identity crisis. Because for over 100 years, medical imaging has been designed for one thing: the human eye. High-res, high-contrast, just the right number of slices, like a very expensive Instagram filter for tumors.

    But AI doesn't care what the picture looks like. It doesn't "see." It detects. It can pull signal from noise, find disease in raw data, or predict cancer risk in a breast that looks totally normal to a human.

    So here's the real shift: we've been building imaging machines for people to look at. Now we need totally different machines optimized for AI to think with. And that's already happening.

    • Jonathan Rothberg's Hyperfine | AI-Powered Portable MRI is so low-powered it's unreadable by humans, but AI interprets it just fine.
    • Nanox Vision is making cloud-connected X-ray machines meant for AI-only triage at scale.
    • Qure.ai, founded by Prashant Warier, is screening for TB across rural India using chest X-rays and AI, with no radiologist in sight.
    • Lunit Cancer Screening, led by Brandon B. Suh, is getting FDA clearance for autonomous cancer detection and running large-scale real-world deployments.
    • And at MIT, Regina Barzilay's Mirai model predicts breast cancer risk five years out, before anything shows up on a scan.

    So what happens to the radiologist? They don't vanish. They evolve. Into a role that's less about manually scanning slices and more about:
    • Synthesizing AI insights with clinical judgment
    • Validating outputs across multiple data sources and modalities
    • Guiding diagnostic strategy when the answer isn't obvious and the stakes are high
    • Overseeing the safety, bias, and reliability of AI tools in real-world care

    They may never even look at the image. There may not even be a meaningful image to look at (much like QR codes today). Because the image isn't the diagnosis anymore; it's just a receipt.

    This shift unlocks massive change:
    • Imaging gets faster, cheaper, safer (goodbye excess radiation)
    • Screening moves upstream: AI finds risk in heartbeat patterns, retinal scans, voice recordings
    • Diagnosis becomes multi-modal, AI-native, and decentralized

    So no, AI isn't replacing radiologists. It's replacing the idea of radiology.

  • Luke Yun

    building AI computer fixer | AI Researcher @ Harvard Medical School, Oxford

    32,836 followers

    MIT and Harvard Medical School researchers just unlocked interactive 3D medical image analysis with language!

    Medical imaging AI has long been limited to rigid, single-task models that require extensive fine-tuning for each clinical application. 𝗩𝗼𝘅𝗲𝗹𝗣𝗿𝗼𝗺𝗽𝘁 𝗶𝘀 𝘁𝗵𝗲 𝗳𝗶𝗿𝘀𝘁 𝘃𝗶𝘀𝗶𝗼𝗻-𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗮𝗴𝗲𝗻𝘁 𝘁𝗵𝗮𝘁 𝗲𝗻𝗮𝗯𝗹𝗲𝘀 𝗿𝗲𝗮𝗹-𝘁𝗶𝗺𝗲, 𝗶𝗻𝘁𝗲𝗿𝗮𝗰𝘁𝗶𝘃𝗲 𝗮𝗻𝗮𝗹𝘆𝘀𝗶𝘀 𝗼𝗳 𝟯𝗗 𝗺𝗲𝗱𝗶𝗰𝗮𝗹 𝘀𝗰𝗮𝗻𝘀 𝘁𝗵𝗿𝗼𝘂𝗴𝗵 𝗻𝗮𝘁𝘂𝗿𝗮𝗹 𝗹𝗮𝗻𝗴𝘂𝗮𝗴𝗲 𝗰𝗼𝗺𝗺𝗮𝗻𝗱𝘀.

    1. Unified multiple radiology tasks (segmentation, volume measurement, lesion characterization) within a single, multimodal AI model.
    2. Executed complex imaging commands like “compute tumor growth across visits” or “segment infarcts in MCA territory” without additional training.
    3. Matched or exceeded specialized models in anatomical segmentation and visual question answering for neuroimaging tasks.
    4. Enabled real-time, interactive workflows, allowing clinicians to refine analysis through language inputs instead of manual annotations.

    Notably, I like that the design includes native-space convolutions that preserve the original acquisition resolution. This addresses a common limitation in medical imaging, where resampling can degrade important details.

    Excited to see agents being introduced more directly into clinician workflows.

    Here's the awesome work: https://lnkd.in/ggQ4YGeX

    Congrats to Andrew Hoopes, Victor Ion Butoi, John Guttag, and Adrian V. Dalca!

    I post my takes on the latest developments in health AI – 𝗰𝗼𝗻𝗻𝗲𝗰𝘁 𝘄𝗶𝘁𝗵 𝗺𝗲 𝘁𝗼 𝘀𝘁𝗮𝘆 𝘂𝗽𝗱𝗮𝘁𝗲𝗱! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
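
    The “compute tumor growth across visits” command decomposes naturally into tool calls that a language model can orchestrate. Here is a toy sketch of that decomposition, with a hypothetical `segmenter` standing in for a promptable 3D network; this illustrates the general agent pattern, not VoxelPrompt's actual code.

    ```python
    import numpy as np

    def measure_growth(scans: dict, target: str, segmenter,
                       voxel_ml: float) -> str:
        """Toy version of 'compute tumor growth across visits': segment the
        target structure in each visit's volume, convert voxel counts to mL,
        and report the change. `segmenter(volume, target)` is a hypothetical
        promptable 3D segmentation network returning a boolean mask."""
        volumes = {}
        for visit, vol in sorted(scans.items()):
            mask = segmenter(vol, target)            # boolean 3D mask
            volumes[visit] = mask.sum() * voxel_ml   # voxels -> milliliters
        visits = list(volumes)
        growth = volumes[visits[-1]] - volumes[visits[0]]
        return (f"{target} volume: " +
                ", ".join(f"{v}={volumes[v]:.1f} mL" for v in visits) +
                f"; change {growth:+.1f} mL")

    # Usage with a dummy segmenter (thresholding stands in for a real model):
    dummy = lambda vol, target: vol > 0.5
    scans = {"2024-01": np.random.rand(64, 64, 64),
             "2024-07": np.random.rand(64, 64, 64)}
    print(measure_growth(scans, "tumor", dummy, voxel_ml=0.001))
    ```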

  • Vineet Agrawal

    Helping Early Healthtech Startups Raise $1-3M Funding | Award Winning Serial Entrepreneur | Best-Selling Author

    50,804 followers

    A new AI model by UCLA researchers can analyze medical scans 5,000x faster than human doctors with the same accuracy.

    By using transfer learning from 2D medical data, SLIViT (Slice Integration by Vision Transformer) overcomes the challenge of limited 3D datasets, making it capable of analyzing complex 3D scans with incredible speed and precision. What once took 8 hours now takes just 5.8 seconds.

    Here's how it works:
    1. Transfer learning: SLIViT is pre-trained on extensive 2D medical imaging datasets, enabling it to analyze 3D scans effectively despite the limited availability of 3D datasets.
    2. Fast & accurate analysis: using a ConvNeXt backbone for feature extraction and a Vision Transformer (ViT) module for combining these features, SLIViT matches the accuracy of clinical specialists.
    3. Flexibility across modalities: SLIViT can analyze scans from multiple modalities, including OCT, MRI, ultrasound, and CT, making it adaptable to emerging imaging techniques and diverse clinical datasets.

    This AI can work with smaller datasets, making it accessible even to hospitals with limited resources. It means:
    - Rural clinics can offer expert-level diagnostics
    - Life-threatening conditions are caught earlier
    - Millions of patients get faster care

    In healthcare, speed isn't just about efficiency; it's about survival. And if SLIViT lives up to its claims in real-world scenarios, it could be a superpower to help save more lives, faster.

    Could this AI breakthrough reshape the future of medical diagnostics?

    #ai #innovation #healthtech
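
    The core trick is that the 2D backbone carries transfer-learned knowledge from abundant 2D datasets, so only the cross-slice integration has to be learned from scarce 3D data. A rough PyTorch sketch of that slice-then-integrate architecture, with illustrative layer sizes rather than the paper's:

    ```python
    import torch
    import torch.nn as nn

    class SliceViT(nn.Module):
        """Sketch of the SLIViT idea: encode each 2D slice of a 3D scan with
        a pretrained 2D backbone, then let a small transformer integrate the
        slice features into one prediction."""
        def __init__(self, backbone_2d: nn.Module, feat_dim=768, depth=4):
            super().__init__()
            # Assumed to map (N, C, H, W) -> (N, feat_dim); in the paper this
            # role is played by a ConvNeXt pretrained on 2D medical images.
            self.backbone = backbone_2d
            layer = nn.TransformerEncoderLayer(feat_dim, nhead=8,
                                               batch_first=True)
            self.integrator = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(feat_dim, 1)   # e.g. one clinical score

        def forward(self, volume):               # volume: (B, S, C, H, W)
            b, s = volume.shape[:2]
            feats = self.backbone(volume.flatten(0, 1))  # (B*S, feat_dim)
            feats = feats.view(b, s, -1)                 # (B, S, feat_dim)
            fused = self.integrator(feats)   # attention across slices
            return self.head(fused.mean(dim=1))          # pool -> prediction
    ```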

  • Dr. Prasun Mishra

    Innovation Executive | Venture Capital | Technology | Healthcare | Precision Medicine | Drug Discovery & Development

    25,797 followers

    Advancements in Artificial Intelligence Revolutionizing Neuro-Oncology 🧠

    Gliomas, a class of brain tumors that pose significant global health challenges, have been the focus of AI-driven innovations. From imaging analysis to genomic interpretation, AI is enhancing the efficiency and accuracy of tumor detection, categorization, outcome prediction, and treatment planning.

    Here's how AI is revolutionizing every step of the journey for neuro-oncologists, radiation oncologists, neuroradiologists, neurosurgeons, neuropathologists, and molecular pathologists:

    a) Empowering neuro-oncologists and radiation oncologists: AI augments capabilities by integrating diagnosis, offering deeper insights into the disease, predicting precise prognoses, and tailoring treatment plans to individual patient needs.
    b) Supporting neuroradiologists: leveraging MRI images, AI automates detection and tumor segmentation, identifies molecular subtypes, provides quantitative measurements, and ensures diagnostic accuracy, distinguishing tumors from necrotic regions.
    c) Assisting neurosurgeons: AI provides real-time diagnostic information and guidance during surgery, enhancing precision and patient outcomes, particularly in surgical margin assessment.
    d) Aiding neuropathologists: from fresh to FFPE samples, AI automates feature measurement, aids in tumor classification and grading, improves detection, and offers comprehensive histo-molecular analysis of cellular and tissue structures.
    e) Empowering molecular pathologists: AI handles diverse data types, including mutation data, single-cell information, methylation patterns, and RNA sequencing. It supports biomarker identification, treatment response prediction, variant identification, and streamlined molecular analysis.

    With AI's assistance, we're entering an era of personalized, precise, and efficient cancer care. Together, we strive toward better outcomes and brighter futures for patients worldwide. Let's continue pushing boundaries and harnessing the potential of AI in healthcare! 🌟

    Reference: Khalighi et al., NPJ Precis. Onc. 8, 80 (2024).

    #NeuroOncology #ArtificialIntelligence #PrecisionMedicine #HealthcareInnovation #BrainTumorResearch American Association for Precision Medicine (AAPM) #aapmhealth #aapm_health #AI #News

  • Pranav Rajpurkar

    Co-founder of a2z Radiology AI. Harvard Associate Professor.

    13,518 followers

    Excited to share our latest research on generalist AI for medical image interpretation! 🩺🖥️

    In collaboration with an incredible team, we developed MedVersa, the first multimodal AI system that learns from both visual and linguistic supervision to excel at a wide variety of medical imaging tasks. By leveraging a large language model as a learnable orchestrator, MedVersa achieves state-of-the-art performance in 9 tasks, sometimes outperforming top specialist models by over 10%.

    To train and validate MedVersa, we curated MedInterp, one of the largest multimodal datasets for medical image interpretation to date, consisting of over 13 million annotated instances spanning 11 tasks across 3 modalities. This diverse dataset allowed us to create a truly versatile and robust AI assistant.

    MedVersa's unique architecture enables it to handle multimodal inputs and outputs, adapt to real-time task specifications, and dynamically utilize visual modules when needed. This flexibility and efficiency highlight its potential to streamline clinical workflows and support comprehensive medical image analysis.

    We believe this work represents a significant milestone in the development of generalist AI for healthcare. By demonstrating the viability of multimodal generative medical AI, we hope to pave the way for more adaptable and efficient AI-assisted clinical decision-making. We're excited to engage in discussions about how generalist models like MedVersa could shape the future of healthcare! 🏥🔮

    Hong-Yu Zhou, Subathra Adithan, Julián Nicolás Acosta, Eric Topol, MD

    Read our paper: https://lnkd.in/d2cEKh6Q
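
    The “learnable orchestrator” idea can be caricatured in a few lines: a trainable gate on the language model's hidden state routes each request to a visual module. This is a deliberately simplified sketch of the concept, with made-up module names and gating, not MedVersa's actual architecture.

    ```python
    import torch
    import torch.nn as nn

    class Orchestrator(nn.Module):
        """Routes a request, summarized by the LLM's hidden state, to one of
        several visual modules. The gate is a trainable layer, so routing is
        learned end-to-end rather than hand-coded."""
        def __init__(self, llm_dim=4096,
                     modules=("report", "segment", "classify")):
            super().__init__()
            self.module_names = modules
            self.gate = nn.Linear(llm_dim, len(modules))

        def forward(self, llm_hidden: torch.Tensor) -> str:
            # llm_hidden: (llm_dim,) summary of the instruction + image tokens.
            choice = self.gate(llm_hidden).argmax().item()
            return self.module_names[choice]

    # Toy usage: a random hidden state routes to one of the named modules.
    orc = Orchestrator(llm_dim=32)
    print(orc(torch.randn(32)))
    ```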
