Imagine AI models working together like a team of doctors, each contributing their expertise to solve complex medical cases. This is what "MDAgents: An Adaptive Collaboration of LLMs for Medical Decision-Making" explores, as recently presented at NeurIPS 2024.

How it works: MDAgents brings a novel approach to using Large Language Models (LLMs) in medicine by dynamically creating a collaborative environment tailored to the complexity of each medical query (a toy sketch follows this post):

1) Complexity check: Each medical question is evaluated for complexity, determining whether it necessitates a basic, moderate, or advanced collaborative response.
2) Expert recruitment: Based on complexity, MDAgents "recruits" AI agents to act as specialists, forming either a solo practitioner model, a Multi-disciplinary Team (MDT), or an Integrated Care Team (ICT).
3) Analysis and synthesis: The agents engage in collaborative reasoning, using techniques like Chain-of-Thought (CoT) prompting to draw insights and resolve disagreements in more nuanced cases.
4) Decision-making: Synthesizing these diverse inputs, the framework reaches a final decision, informed by external medical knowledge and structured discussions among the AI agents.

Achievements:
1) MDAgents outperformed both solo and group LLM setups in 7 out of 10 medical tasks, improving decision accuracy by up to 11.8%.
2) It demonstrated the critical balance between performance and computational efficiency by adapting the number of participating agents to task demands.

Link to the full paper -> https://lnkd.in/gR7Zwm7t

#AI #Healthcare #NeurIPS2024 #MedicalAI #Collaboration #InnovationInMedicine #ResearchInsights
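To make the adaptive routing concrete, here is a minimal Python sketch of the idea, not the authors' implementation: `ask_llm` is a hypothetical stand-in for any chat-completion call, and the prompts are purely illustrative.

```python
def ask_llm(prompt: str) -> str:
    # Hypothetical helper: send a prompt to any chat LLM, return its text.
    raise NotImplementedError("wire this to your LLM provider of choice")

def classify_complexity(question: str) -> str:
    """Step 1: triage the query as basic, moderate, or advanced."""
    reply = ask_llm(
        "Classify the complexity of this medical question as exactly one of "
        f"'basic', 'moderate', or 'advanced':\n{question}"
    )
    return reply.strip().lower()

def answer(question: str) -> str:
    complexity = classify_complexity(question)
    if complexity == "basic":
        # Solo practitioner: a single agent answers directly.
        return ask_llm(f"You are a general physician. Answer:\n{question}")
    # Step 2: recruit specialists suited to the question.
    specialists = ask_llm(
        f"List three medical specialties (one per line) relevant to:\n{question}"
    ).splitlines()
    # Step 3: each specialist reasons step by step (chain-of-thought).
    opinions = [
        ask_llm(f"As a {s} specialist, think step by step and answer:\n{question}")
        for s in specialists
    ]
    # Step 4: a moderator synthesizes the opinions into a final decision.
    return ask_llm(
        "Synthesize these specialist opinions into one final answer:\n"
        + "\n---\n".join(opinions)
        + f"\nQuestion: {question}"
    )
```

The key design point the paper emphasizes is that the team only grows when the triage step says it should, which is what keeps the harder collaboration machinery from being paid for on easy questions.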
How to Use Large Language Models in Healthcare
Explore top LinkedIn content from expert professionals.
Summary
Large Language Models (LLMs) are a type of artificial intelligence capable of understanding and generating human-like text. In healthcare, they are revolutionizing how professionals process data, make decisions, and communicate complex medical information, paving the way for more personalized and efficient care.
- Streamline medical tasks: Use LLMs to summarize patient records, generate concise discharge summaries, and even aid in creating clear surgical consent forms to save time and enhance patient understanding.
- Support decision-making: Leverage LLMs to collaborate like medical specialists, synthesize clinical data, and provide nuanced insights for complex diagnoses and treatment plans.
- Enhance patient communication: Integrate LLMs in creating patient-friendly explanations of medical data, translating technical information into clear language to improve patient engagement and understanding.
-
Large Language Models (LLMs) like ChatGPT have showcased their prowess and versatility across various industries, despite being introduced to the public just a year ago. This blog, authored by the Engineering team at Oscar Health, details their use of GPT-4 in developing an insurance claim assistant designed to answer customer queries about their claims effectively.

In tackling this project, the team employed several notable strategies. First, they translated complete claim information into a domain-specific language termed "Claim Trace," enabling the model to convert structured data into natural language. To enhance performance, they implemented a method akin to providing a table of contents, which helps the model understand the structure of Claim Trace. Another strategy involved a chain-of-thought approach with function calling, directing the model to break a complex problem into smaller, more manageable segments. Additionally, they incorporated an iterative retrieval function, prompting the model to seek further information in cases of high uncertainty, thereby ensuring more accurate responses (sketched below).

These strategies combined to yield strong results: the team reported a 100% accuracy rate in simpler cases and over 80% accuracy in more complex scenarios. This boosted the company's operational efficiency and demonstrated how to adapt LLMs like GPT-4 to meet specific business objectives.

Check out the "Snacks Weekly on Data Science" podcast and subscribe, where I explain in more detail the concepts discussed in this and future posts:
-- Apple Podcast: https://lnkd.in/gj6aPBBY
-- Spotify: https://lnkd.in/gKgaMvbh

#datascience #chatgpt #llm #finetuning #largelanguagemodels #engineering #healthcare https://lnkd.in/gRnf_KmV
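Here is a hedged Python sketch of the uncertainty-driven iterative retrieval pattern described in the post. `ask_llm` and `fetch_claim_section` are hypothetical placeholders, not Oscar Health's actual functions, and the JSON "ask for more data" convention is just one simple way to implement the idea.

```python
import json

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("call your chat-completion endpoint here")

def fetch_claim_section(claim_id: str, section: str) -> str:
    raise NotImplementedError("look up one part of the claim record")

def answer_claim_question(claim_id: str, question: str, max_rounds: int = 3) -> str:
    context = fetch_claim_section(claim_id, "summary")  # start with an overview
    reply = ""
    for _ in range(max_rounds):
        reply = ask_llm(
            "Think step by step. Using only the claim data below, answer the "
            "question. If information is missing, reply with JSON "
            '{"need": "<section name>"} instead of guessing.\n'
            f"Claim data:\n{context}\n\nQuestion: {question}"
        )
        try:
            request = json.loads(reply)          # model asked for more data
            context += "\n" + fetch_claim_section(claim_id, request["need"])
        except (json.JSONDecodeError, KeyError, TypeError):
            return reply                         # model answered directly
    return reply  # best effort after the final round
```

Starting from a small context and expanding only on request mirrors the "table of contents" intuition: the model sees the structure first and pulls in detail when it knows what it is missing.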
-
🚀 7 Types of Healthcare AI Agents You Need to Know About 🤖

These agents are modular, proactive, emotionally aware, and capable of transforming care from reactive to precision-driven.

1️⃣ ReAct + RAG AI Agents
💡 Hybrid reasoning meets real-time knowledge retrieval
These agents combine stepwise, logic-based reasoning (structured) with language-based inference (unstructured), enhanced by Retrieval-Augmented Generation (RAG). They can search the latest clinical guidelines, synthesize patient history, and explain nuanced decisions in real-time (see the sketch after this list).
📍 Use Case: disease diagnosis, symptom triage, clinical decision support.

2️⃣ Self-Learning AI Agents
🧠 Continuously adapt based on clinical outcomes and feedback
Self-learning agents evolve with each patient encounter. They track what works, learn from mistakes, and refine their reasoning through closed-loop reinforcement. These agents are ideal for conditions where outcomes vary widely and treatment is highly personalized.
📍 Use Case: mental health support, diabetes management, lifestyle-based interventions.

3️⃣ Memory-Enhanced AI Agents
📚 Deliver personalized, longitudinal care with contextual memory
These agents maintain persistent, structured memory across time—tracking medication responses, preferences, clinician interactions, and care goals. They ensure that every decision builds on historical context.
📍 Use Case: Alzheimer's care, multimorbidity management, personalized chronic care pathways.

4️⃣ LLM-Enhanced AI Agents
🗣️ Understand, generate, and communicate complex medical language
Built on large language models, these agents excel in interpreting clinical notes, summarizing diagnostic results, and generating patient-friendly explanations.
📍 Use Case: discharge instructions, note summarization, virtual assistant triage.

5️⃣ Tool-Enhanced AI Agents
🛠️ Activate external tools and APIs for actionable outcomes
Think of these as intelligent conductors that coordinate apps, EHRs, monitoring dashboards, and calculators. They know what to do, and they have the tools to do it. Highly effective in acute care, automation, and digital orchestration.
📍 Use Case: remote monitoring, care pathway automation, digital twin control.

6️⃣ Self-Reflecting AI Agents
🔍 Evaluate, audit, and improve their own reasoning over time
Equipped with metacognition, these agents can assess whether their previous decisions were effective—and why. They revise flawed reasoning, update their models, and learn like a seasoned clinician undergoing reflective practice.
📍 Use Case: ICU decision review, surgical support.

7️⃣ Environment-Controlling AI Agents
🌡️ Create responsive healing environments through smart infrastructure
These agents manage physical spaces—adjusting temperature, light, noise, or humidity based on patient biometrics or emotional cues.
📍 Use Case: smart ICU, NICU support, post-op recovery.

#Healthcare #AIAgents #AIinHealthcare #DigitalHealth
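As promised in item 1, here is a minimal Python sketch of a ReAct + RAG loop. Assumptions: `ask_llm` and `search_guidelines` are hypothetical stand-ins for a chat model and a guideline retriever, and the Thought/Action/Observation format follows the general ReAct pattern rather than any specific product.

```python
def ask_llm(prompt: str) -> str:
    raise NotImplementedError("any chat LLM call works here")

def search_guidelines(query: str) -> str:
    """Retrieve relevant passages from a clinical-guideline index (the RAG step)."""
    raise NotImplementedError("e.g., a vector search over guideline documents")

def triage(symptoms: str, max_steps: int = 4) -> str:
    scratchpad = ""
    for _ in range(max_steps):
        step = ask_llm(
            "You triage patient symptoms. Alternate Thought and Action lines. "
            "Available actions: SEARCH[query] to consult guidelines, "
            "FINISH[answer] to conclude.\n"
            f"Symptoms: {symptoms}\n{scratchpad}"
        )
        scratchpad += step + "\n"
        if "FINISH[" in step:
            return step.split("FINISH[", 1)[1].rstrip("]")
        if "SEARCH[" in step:
            query = step.split("SEARCH[", 1)[1].split("]", 1)[0]
            # Observation: retrieved evidence is appended so the next
            # reasoning step can build on it.
            scratchpad += "Observation: " + search_guidelines(query) + "\n"
    return scratchpad  # no conclusion reached; return the reasoning trace
```

The interleaving is the point: each retrieval is chosen by the model's own reasoning, so the evidence it cites is tied to the step that needed it.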
-
I asked a simple prompt—just something along the lines of “generate a picture of a person working in a lab”—and this image appeared, generated entirely by Grok’s Aurora. Seeing this level of detail and complexity emerge from a single sentence makes me pause and think about what it means for healthcare.

I spend each day making decisions that hinge on understanding intricate patient data. If an AI can turn a few words into a vivid, coherent scene, imagine what it could do with comprehensive patient histories, imaging studies, and lab results. We are seeing this today. Large language models (LLMs), like the one behind Grok’s Aurora, hold the potential to fundamentally shift how we approach diagnosis and care.

Traditionally, identifying a complex condition might involve sifting through pages of clinical notes, analyzing multiple imaging modalities, and combining the expertise of several specialists. But now LLMs can integrate these inputs—radiology reports, operative notes, genomic data—into a narrative that highlights subtle patterns and correlations a busy clinical team might overlook. This could mean quicker recognition of rare diseases, a sharper understanding of surgical complexity before an operation, or more precise postoperative recommendations for patient follow-up.

Beyond diagnosis, these models may help us provide clearer explanations to patients and their families. Just as Grok’s Aurora turned my simple text prompt into a detailed image, an LLM could translate complex medical data into language and visuals that patients can easily understand, helping them become more active participants in their care. In the same way, these tools may assist medical trainees by presenting difficult concepts in more accessible forms, accelerating learning and clinical readiness.

The future here isn’t about replacing the human touch—no algorithm can replicate the intuition, compassion, and moral judgment of a physician. Instead, it’s about amplifying our capabilities, making the diagnostic journey more precise, the learning curve less steep, and the connection between patient and provider more transparent.

How do you see LLMs changing the way we discover, explain, and treat the conditions that challenge us most?

#HealthcareAI #MedicalImaging #ClinicalDecisionSupport #PatientOutcomes #DataDrivenCare #Diagnosis #FutureOfMedicine #PrecisionMedicine #AIinHealthcare #MedicalTechnology #grok #LLMS #AI #Artificialintelligence
-
This JMIR study introduces the first large-language-model (LLM)–assisted surgical consent forms used in Korean liver resection procedures, offering a unique blend of clarity and innovation for clinicians and digital health educators.

Key Takeaways
- LLM edits significantly simplified sentence structures and vocabulary, reducing text complexity and enhancing accessibility.
- Expert ratings showed a meaningful drop for risk descriptions (from 2.29 to 1.92, β₁=–0.371; P=.01) and overall impression (2.21 to 1.71, β₁=–0.500; P=.03).
- Qualitative feedback described the text as “overly simplified” and “less professional,” suggesting nuance was lost amid gains in clarity.
- This is one of the first non-English studies, highlighting the challenge of applying LLMs across linguistic and cultural contexts.

My thoughts... As a clinical educator deeply invested in AI, healthcare quality, and patient-centered communication, I find this study underscores some vital lessons:
- Enhanced readability ≠ sufficient informed consent.
- Balance is key. We must safeguard medical and legal integrity while making content accessible, especially for multilingual, multicultural patient populations.
- Clinicians and digital-health leaders must be trained to use LLM-generated content critically.

https://lnkd.in/ej8UnXaJ
-
Our perspective on large language models and foundation models in cardiovascular care in Lancet Digital Health: https://lnkd.in/gQ8jPwVg

With supervised learning, we could analyze one data type at a time (unimodal) to solve a very specific task, such as the prediction of atrial fibrillation from an ECG in normal sinus rhythm (https://lnkd.in/gvNmnUfP) or the reconstruction of a 12-lead ECG from the observation of just a few leads (https://lnkd.in/gmWZdHUm). For this, we needed a large dataset labelled with all the outcomes we wanted to predict.

With foundation models, based on self-supervised learning, we can learn the structure of the data even in the absence of labelled data. This learned structure can then be reapplied to specific prediction tasks after fine-tuning on a small, labelled dataset (a minimal sketch of this step follows below). A great example is retinal fundus images, which can also be used for the prediction of cardiovascular outcomes.

Large language models open up new opportunities involving text data, potentially handling tasks such as summarizing text, or extracting and organizing patient information from electronic health records. They can also directly help the patient, translating complex medical information into easily understandable text, or into different languages. Human supervision remains important, as these tools are known to fill in knowledge gaps with plausible but unverified content.

From a regulatory perspective, these tools are multipurpose, since they can be reused for tasks different from the one they have been tested for. A different regulatory pathway may be needed.

By using #LLM and adding different data modalities, we can learn more complex models that adapt to the presence of different data types, contributing to better prediction of specific clinical outcomes.

With Eric Topol, MD Scripps Research Digital Trials Center Scripps Research
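To illustrate the fine-tuning step mentioned above, here is a minimal PyTorch sketch: a frozen encoder (standing in for a model pretrained with self-supervision on unlabelled signals) is reused, and only a small prediction head is trained on the labelled data. The encoder and dimensions are made up for illustration.

```python
import torch
import torch.nn as nn

class FineTuned(nn.Module):
    """Frozen pretrained encoder + small trainable prediction head."""
    def __init__(self, encoder: nn.Module, embed_dim: int, n_classes: int):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False                  # keep pretrained weights fixed
        self.head = nn.Linear(embed_dim, n_classes)  # only this part trains

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():
            z = self.encoder(x)                      # reuse learned representation
        return self.head(z)

# Dummy encoder standing in for a self-supervised ECG model (12 leads x 500 samples).
encoder = nn.Sequential(nn.Flatten(), nn.Linear(12 * 500, 128))
model = FineTuned(encoder, embed_dim=128, n_classes=2)
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)  # small labelled set
```

Because only the head's parameters are optimized, a few hundred labelled examples can be enough, which is exactly why the foundation-model recipe matters in data-scarce clinical settings.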
-
In the inpatient setting, documentation remains one of the most burdensome tasks for #clinicians. While essential for continuity of care, H&Ps and discharge summaries are often delayed or left unsigned due to competing demands. As a Primary Care Physician #PCP, many of my recently discharged patients had their discharge summaries still unavailable or unsigned. This can disrupt post-discharge planning, delay medication reconciliation, and contribute to readmissions.

A recent study in JAMA Internal Medicine evaluated whether large language models (#LLMs) can help. In a cross-sectional analysis of 100 inpatient encounters, overall quality was comparable between LLM- and physician-generated notes (3.67 vs. 3.77; P = .21). The LLM summaries were more concise and coherent (4.01 vs. 3.70, P < .001; 4.16 vs. 4.01, P = .02) but less comprehensive (3.72 vs. 4.13, P < .001). Now, the main issue we worry about is errors: LLMs made more unique errors per summary (2.91 vs. 1.82), and while the potential for harm was low in absolute terms, it was higher than for physician-generated notes (mean harm score: 0.84 vs. 0.36; P < .001).

This suggests that LLMs can augment clinician #workflows by drafting discharge documentation and supporting continuity of care, with the appropriate oversight. However, this raises an important concern: reviewing LLM-generated content could itself become a new source of clinician burden! With LLMs capable of producing vast volumes of documentation, the oversight process must be stratified by risk scores or harm scores, using scoring mechanisms to flag summaries that require close attention while letting low-risk ones pass with minimal intervention (a toy sketch of this routing follows below). Without such an approach, we risk replacing one form of burnout with another.

Link: https://lnkd.in/g-mpzzuw

#HealthcareonLinkedin #LLM #AI #ClinicalInformatics #Physiciansburnout #Workflow #HealthIT #HealthInnovation
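As a toy illustration of the risk-stratified oversight proposed above, the Python sketch below routes LLM-generated summaries by a harm score. The scoring source and the threshold are hypothetical placeholders, not values from the study.

```python
from dataclasses import dataclass

@dataclass
class Summary:
    patient_id: str
    text: str
    harm_score: float  # e.g., from a classifier or a rubric-based LLM grader

def route_for_review(summaries: list[Summary], threshold: float = 0.5):
    """Split summaries into those needing close clinician review vs. fast-track."""
    close_review = [s for s in summaries if s.harm_score >= threshold]
    fast_track = [s for s in summaries if s.harm_score < threshold]
    return close_review, fast_track

batch = [
    Summary("A1", "...", harm_score=0.84),  # flagged: potential for harm
    Summary("B2", "...", harm_score=0.12),  # low risk: minimal oversight
]
flagged, routine = route_for_review(batch)
```

The open question, of course, is how well the harm scorer itself is calibrated; a miscalibrated threshold just moves the burden rather than reducing it.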
-
Finally, we have a large language model (LLM) specifically trained for radiation oncology. In our recent paper, we present RadOnc-GPT, an LLM specialized for radiation oncology through advanced tuning methods.

RadOnc-GPT was fine-tuned from Llama 2 (an open-source LLM developed by Meta) using a large dataset of radiation oncology patient records and clinical notes from the Mayo Clinic. The model employs instruction tuning on three key tasks: generating radiotherapy treatment regimens, determining optimal radiation modalities, and providing diagnostic descriptions/ICD codes based on patient diagnostic details (a hypothetical example of such instruction-tuning records follows below).

Evaluations comparing RadOnc-GPT impressions to general LLM impressions showed that RadOnc-GPT generated outputs with significantly improved clarity, specificity, and clinical relevance. The study demonstrates the potential of LLMs fine-tuned with domain-specific knowledge, like RadOnc-GPT, to achieve transformational capabilities in highly specialized healthcare fields such as radiation oncology.

For more details, please refer to the following link: https://lnkd.in/dAXRvwgu

#LLM #radiationoncology #cancer #radiation #largelanguagemodel #cancerresearch
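For readers unfamiliar with instruction tuning, here is a hedged sketch of what training records for the three tasks might look like. The field names, placeholder content, and prompt template follow common open-source conventions (Alpaca-style instruction/input/output triples) and are not taken from the paper; the actual Mayo Clinic data schema may differ.

```python
# Illustrative instruction-tuning records for the three tasks named above.
examples = [
    {
        "instruction": "Recommend a radiotherapy treatment regimen.",
        "input": "Diagnosis: stage II prostate adenocarcinoma; prior treatment: none.",
        "output": "Suggested regimen: ...",
    },
    {
        "instruction": "Determine the optimal radiation modality.",
        "input": "Tumor site: left lung, 3 cm lesion adjacent to the spinal cord.",
        "output": "Recommended modality: ...",
    },
    {
        "instruction": "Provide a diagnostic description and ICD code.",
        "input": "Patient diagnostic details: ...",
        "output": "Description: ...; ICD-10 code: ...",
    },
]

def to_prompt(ex: dict) -> str:
    """Render one record into a single training prompt (Alpaca-style template)."""
    return (
        f"### Instruction:\n{ex['instruction']}\n"
        f"### Input:\n{ex['input']}\n"
        f"### Response:\n{ex['output']}"
    )
```

Fine-tuning frameworks for Llama 2 typically consume exactly such rendered prompts, which is what lets one base model learn several clinical tasks at once.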