Deep learning applied to liver MRI scans can predict the development of cardiovascular disease. That sounds unusual, if not unbelievable, at first glance, doesn't it? In a study published online by JHEP Reports last week (https://lnkd.in/eP26rwDs), Dr. Jakob Nikolas Kather of the Medical Oncology Department at the Nationales Centrum für Tumorerkrankungen (NCT) Heidelberg and colleagues investigated whether transformer neural networks applied to UK Biobank liver MRI data can predict cardiovascular risk. Cardiovascular disease is frequently linked to underlying metabolic conditions. Since the liver plays a central role in metabolism, it could serve as a marker for metabolic shifts that precede cardiovascular disease, particularly major adverse cardiac events (MACEs). Developing noninvasive, imaging-based biomarkers to assess cardiovascular risk, especially in individuals who have not yet shown symptoms, could support earlier detection; however, this approach remains difficult to implement. The team used transformer neural networks, a newer, more flexible type of neural network, to develop a liver-MRI foundation model trained through self-supervised learning on 44,672 UK Biobank single-slice liver MRIs. The training set combined all participants with a recorded MACE before the liver MRI exam (974) with the majority of participants who had no history of MACE before the MRI (43,698). An additional 750 scans (all 214 participants with a first-time MACE after the MRI, plus 536 randomly selected participants with no history of MACE either before or after the MRI) were held out for external validation, for a total of 45,422 participants. The researchers assessed the model's predictive ability by comparing predicted risk scores with the actual cardiovascular outcomes.
The team evaluated subgroups based on risk factors identified by SCORE2 (e.g., diabetes, cholesterol, systolic blood pressure, sex, and smoking status) within the model's prediction scores to “provide insight into which cardiovascular risk factors are being captured in a more pronounced way by our model.” The results showed that the model has “significant discriminatory capacity” for predicting MACE and cardiovascular-related mortality, even outperforming methods such as SCORE2. Nevertheless, the authors acknowledged that despite the model's potential as an imaging-based biomarker for cardiovascular risk, using MRI broadly for screening is unrealistic due to its high cost and limited accessibility. Instead, they recommended focusing its use on high-risk populations or on cases where relevant imaging data already exists, such as patients with known metabolic disorders or those who have undergone liver imaging for other clinical reasons.
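The evaluation step described above, comparing predicted risk scores against observed outcomes, boils down to a discrimination metric such as ROC-AUC. The sketch below illustrates that idea on synthetic numbers; it is not the study's actual pipeline, and the scores and labels are made up.

```python
# Toy sketch: evaluating a risk model's discrimination with ROC-AUC, the way
# predicted risk scores are compared against observed MACE outcomes.
# All data here is synthetic.

def roc_auc(scores, labels):
    """Probability that a randomly chosen positive case outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic predicted risk scores and outcomes (1 = MACE occurred)
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   0,   1,   0,   0,   1,   0]

auc = roc_auc(scores, labels)   # 0.5 = chance, 1.0 = perfect ranking
```

An AUC of 0.5 means the scores rank patients no better than chance; the closer to 1.0, the stronger the "discriminatory capacity" the authors report.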
Neural Networks in Medical Research
Summary
Neural networks in medical research are computer systems that learn to recognize patterns in complex medical data, helping researchers and doctors make better predictions about health outcomes and disease risks. By analyzing information from images, blood samples, and patient records, these AI models are opening new possibilities for diagnosis, treatment, and understanding of diseases.
- Explore diverse data: Use neural networks to process different types of medical data, such as scans, genetic information, and clinical notes, to uncover useful trends and predictions.
- Improve diagnosis: Consider AI tools that can predict disease risk or treatment response, especially for conditions that are hard to detect early or require personalized care.
- Support research: Apply neural networks to create noninvasive biomarkers and simulate medical scenarios, which can accelerate studies and help discover new treatment approaches.
This paper explores the applications of large-scale AI models in medicine, focusing on Medical Large Models (MedLMs), including LLMs, Vision Models, 3D Large Models, and Multimodal Models. 1️⃣ LLMs process clinical text, aiding in electronic health records (EHR) analysis, medical question-answering, and treatment planning. Examples include MedPaLM and MedGPT, which support medical education and diagnostics. 2️⃣ Vision models based on CNNs assist in medical imaging tasks like cancer detection and anomaly detection, achieving dermatologist-level accuracy in skin cancer diagnosis. Vision-Language Models (VLMs) enhance zero-shot learning for medical images. 3️⃣ 3D large models analyze volumetric medical data, aiding in tumor segmentation, virtual surgery simulations, and anatomical modeling for prosthetics. 4️⃣ Multimodal models integrate clinical text, imaging, and genomic data to improve diagnostic accuracy and personalized treatment planning, particularly in oncology. 5️⃣ Large graph models (LGMs) use graph neural networks (GNNs) in medical knowledge graphs, drug discovery, and genomics, aiding in disease risk prediction and biomarker identification. 6️⃣ Drug discovery is accelerated by MedLMs such as AlphaFold and GraphDTA, which predict protein structures and drug-target interactions, improving efficiency in molecular design. 7️⃣ AI-driven models assist in summarizing patient records, generating diagnostic reports, and enhancing clinical documentation, reducing physician workload. 8️⃣ Biomedical image generation using GANs and diffusion models produces high-quality synthetic medical images for data augmentation, improving AI training in pathology and radiology. 9️⃣ AI-driven models enhance precision medicine by integrating multi-source patient data, enabling individualized diagnosis and treatment strategies. 🔟 Challenges include high computational costs, ethical concerns, and potential inaccuracies (AI hallucinations), which limit real-world implementation. 
✍🏻 YunHe Su, Zhengyang Lu, Junhui Liu, Ke Pang, Haoran Dai, Sa Liu, Yuxin Jia, Lujia Ge, Jing-min Yang. Applications of Large Models in Medicine. arXiv 2025. DOI: 10.48550/arXiv.2502.17132
-
Foundation models like GPT have revolutionized how machines understand and generate human language. What if we could apply similar principles to understand the complex language of disease biology to predict which patients will respond best to new cancer treatments? Our latest research, out now in Nature Communications, titled “Pretrained transformers applied to clinical studies improve predictions of treatment efficacy and associated biomarkers” explores this question. In it, we propose the *Clinical Transformer*, a deep neural network survival prediction framework based on transformers that: 🧠 Leverages transfer learning from large data repositories (like TCGA and GENIE) to build foundation models that can be fine-tuned to tasks like predicting immunotherapy responses in early-stage clinical trials 🔗 Captures complex relationships between molecular, clinical, and demographic data 🔍 Explains its predictions by showing which features drive risk or response. In our studies, the Clinical Transformer: 📊 Outperformed state-of-the-art methods to predict survival for over 150,000 patients across 12 different cancer types 🔬 Predicted survival in small, early-stage clinical trials for immunotherapy 💡 Identified new biomarkers of immunotherapy response and resistance through in silico perturbation experiments. We're excited about the potential of foundation models like the Clinical Transformer to drive innovation in precision medicine and help improve patient outcomes. Read the full paper here: https://lnkd.in/dUFcg_4B Thanks to all the co-authors: Gustavo Arango, Elly Kipkogei, Ross Stewart, Gerald Sun, Arijit Patra, Ioannis Kagiampakis #PrecisionMedicine #ClinicalAI #AIinHealthcare #AIinLifeSciences
-
Beyond single cells: AI now predicts gene expression across entire brain tissues from a blood sample. Researchers from Emory University developed gemGAT, an AI tool that uses Graph Attention Networks (GATs) to predict gene expression in 47 tissues - including key brain regions - from whole blood data. This breakthrough scales up prediction from individual cells to comprehensive tissue-level insights. Why this matters: Accessing brain tissue is invasive, but blood offers a window into otherwise inaccessible areas, transforming how we study and potentially treat neurological diseases like Alzheimer's. Highlights from the paper: • Model outperforms existing models: Superior in 83% of tested tissues • Validated findings: Successfully identified known Alzheimer's-associated genes and pathways • Scalable and precise: Captures nonlinear gene interactions to predict expression across entire tissues • Real-world validation: Results supported by the Alzheimer's Disease Neuroimaging Initiative Paper: "Cross-tissue Graph Attention Networks for Semi-supervised Gene Expression Prediction" Authors: Shiyu Wang, Mengyu He, Muran Qin, Yijuan Hu, Liang Zhao, Zhaohui Qin Affiliations: Emory University, UC San Diego, Peking University Read the paper: https://lnkd.in/eiXu4h_f How do you think AI could transform research in other hard-to-access tissues? Is the next step full organism? #AI #Biotech #Neuroscience #rAIvolution #FutureOfMedicine Illustration EMxID
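The core mechanism behind the Graph Attention Networks mentioned above is that each node updates its representation as an attention-weighted average of its neighbors. The toy sketch below shows that aggregation step in isolation; the graph, scalar "expression" features, and scoring function are all illustrative assumptions, not gemGAT's actual architecture.

```python
import math

# Minimal sketch of graph-attention aggregation (the mechanism behind GATs):
# each node's new feature is an attention-weighted average over its neighborhood.
# Graph, features, and scoring function below are toy assumptions.

def softmax(xs):
    m = max(xs)                                # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def gat_layer(features, neighbors, score):
    out = {}
    for i, nbrs in neighbors.items():
        cand = [i] + nbrs                      # include a self-loop, as GATs do
        alphas = softmax([score(features[i], features[j]) for j in cand])
        out[i] = sum(a * features[j] for a, j in zip(alphas, cand))
    return out

# Toy 3-node graph with scalar "expression" features
features = {0: 1.0, 1: 2.0, 2: 4.0}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
score = lambda hi, hj: hi * hj                 # stand-in for GAT's learned attention

updated = gat_layer(features, neighbors, score)
```

Because the attention weights sum to one, each updated value is a convex combination of neighborhood features; in the real model those weights are learned, letting the network emphasize the blood-to-tissue relationships that carry signal.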
-
Kyle Williams, Stephen Rudin MD, Daniel Bednarek, Ammad Baig, Adnan Siddiqui, MD, PhD, Ciprian Ionita, and our team conducted a recent study that introduces a novel approach to neurovascular diagnostics that integrates artificial intelligence with physics modeling using Physics-informed Neural Networks (PINNs). This method leverages patient-specific vascular models, providing a significant improvement over traditional computational fluid dynamics (CFD). 🧠 Study Insights: ◾ Efficiency and Accuracy: Our PINNs method calculates high-resolution velocity and pressure fields in blood vessels without the manual data processing required by conventional CFD, thereby enhancing diagnostic efficiency and accuracy. ◾ Application: Successfully applied to cases such as aneurysms and carotid bifurcations, this technique supports more precise and personalized treatment planning for patients with neurovascular pathologies. By combining AI with detailed physical models, our approach streamlines the diagnostic process and enhances the accuracy of neurovascular assessments. This innovation paves the way for more advanced and patient-specific therapeutic strategies. We see the potential this brings to neurovascular healthcare and invite collaboration and discussion from peers in the field. View the abstract here: https://bit.ly/3W5aMsk #NeurovascularDiagnostics #AI #MachineLearning #HealthTech #MedicalInnovation #Neurology
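The defining ingredient of a PINN is a loss term that penalizes violations of the governing physics at collocation points. Real PINNs differentiate a neural network u(x; θ) with automatic differentiation against equations like Navier-Stokes; the self-contained sketch below substitutes a closed-form candidate function and finite differences on a toy ODE purely to illustrate the residual-loss idea.

```python
import math

# Sketch of the core PINN idea: score a candidate solution by how well it
# satisfies the governing equation at collocation points. Toy ODE: u'' + u = 0.

def physics_loss(u, xs, h=1e-4):
    """Mean squared residual of u'' + u = 0 over collocation points xs."""
    total = 0.0
    for x in xs:
        u_xx = (u(x + h) - 2 * u(x) + u(x - h)) / h**2   # finite-difference u''
        total += (u_xx + u(x)) ** 2                      # ODE residual at x
    return total / len(xs)

collocation = [0.1 * k for k in range(1, 30)]
good = math.sin            # exact solution of u'' + u = 0
bad = lambda x: x ** 2     # violates the physics everywhere

loss_good = physics_loss(good, collocation)   # near zero
loss_bad = physics_loss(bad, collocation)     # large
```

In an actual PINN this residual is minimized over network weights alongside boundary-condition terms, which is what lets the model produce velocity and pressure fields without the meshing and manual preprocessing conventional CFD requires.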
-
🟥 The Latest Tool for Tumor Diagnosis! crossNN Makes Tumor Identification Fast and Accurate DNA methylation is one way to "read" tumors: like a layer of chemical labels on the genetic code, it helps doctors determine which type a tumor belongs to, and can even identify rare types that traditional techniques miss. In the past, detecting these labels relied on specific microarray chips. But here comes the problem - as sequencing technology keeps evolving, the data generated by newer platforms varies so much that the original analysis tools cannot recognize it, creating new obstacles for clinical diagnosis. Enter crossNN, a new tool based on neural networks. It can accurately identify tumor type even from the sparse, incomplete DNA methylation data produced by different sequencing platforms. Whether the input comes from microarrays, nanopore sequencing, or targeted bisulfite sequencing, crossNN copes with ease. Rather than being locked to a fixed template, it works like a cross-language translator, automatically adapting to each platform's "accent" and delivering accurate diagnostic conclusions. Moreover, it is not only highly accurate (up to 99.1% for the brain tumor model and 97.8% for the pan-cancer model) but also explainable - unlike mysterious "black box" AI, crossNN can show the basis for its judgment. That matters in medicine, because doctors need to understand an AI's reasoning before they can use it with confidence. A research team from Germany used crossNN to train a classifier that identifies more than 170 tumor types, covering almost all organs of origin; validation on more than 5,000 tumor samples demonstrated its robustness and scalability. In short, crossNN improves both the speed and accuracy of diagnosis and ushers in a new era of cross-platform precision medicine. 
Keywords: crossNN, methylation sequencing, tumor classification, neural network, cross-platform diagnosis. Reference: [1] Dongsheng Yuan et al., Nature Cancer 2025 (https://lnkd.in/ebNXcvnb)
-
New in Nature Biomedical Engineering, we present an #AI method that achieves flexible and accurate inference for #brain-computer interfaces (BCIs). It can flexibly infer brain states from neural signals causally in real time, non-causally, and even with missing neural samples, which can happen in wireless #BCI. For #BCIs, we need deep learning models of brain signals that not only are accurate, but also enable flexible inference. Our method, a neural network called DFINE, addresses this challenge, achieving both accuracy and flexibility. It does so by jointly training a nonlinear manifold and linear dynamics on top of it to optimize the prediction of future data. Its inference is also recursive and thus efficient for real-time implementation in #neurotechnology. In addition to enabling flexible inference, on multiple neural population datasets, DFINE outperformed benchmark linear and nonlinear dynamical models of brain data. DFINE's improvements were larger with missing neural samples, further showing its robustness for wireless BCIs. Congratulations to my PhD students and co-first authors, Salar Abbaspour and Eray Erturk! Thanks also to our collaborator Bijan Pesaran. Paper: https://lnkd.in/gcGPYtxj Free view-only full-text: https://rdcu.be/dth8Y Code: https://lnkd.in/gjkXKsFH Nature Communities Behind the Paper story: https://lnkd.in/gTBVUXhX USC Viterbi School of Engineering news: https://lnkd.in/gRZJiAPi
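The property highlighted above, recursive inference that keeps running through missing samples, is the behavior of a Kalman-style filter on the linear latent dynamics: when a sample is dropped, the filter simply does a predict-only step. The 1-D sketch below shows only that mechanism (DFINE additionally learns a nonlinear manifold, which is omitted here); all parameters and observations are illustrative assumptions.

```python
# Sketch of recursive latent-state inference that tolerates missing samples.
# Plain scalar Kalman filter: x_t = a*x_{t-1} + noise, y_t = c*x_t + noise.

def kalman_filter(obs, a=0.9, q=0.1, c=1.0, r=0.5):
    """Filter a scalar latent state from observations `obs`.

    Entries of `obs` that are None (dropped wireless samples) trigger a
    predict-only step, so inference continues seamlessly through the gap.
    """
    x, p = 0.0, 1.0                          # state estimate and its variance
    estimates = []
    for y in obs:
        x, p = a * x, a * a * p + q          # predict
        if y is not None:                    # update only when a sample arrived
            k = p * c / (c * c * p + r)      # Kalman gain
            x = x + k * (y - c * x)
            p = (1 - k * c) * p
        estimates.append(x)
    return estimates

stream = [1.0, 1.1, None, None, 0.9, 1.0]    # None = missing neural sample
est = kalman_filter(stream)
```

During the gap the estimate evolves under the learned dynamics alone, which is why a recursive formulation is both robust to dropped samples and cheap enough for real-time BCI decoding.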
-
Voice patterns could soon be a game-changer in diagnosing lung diseases! 💬 In an innovative study, researchers have tapped into the power of voice analysis paired with artificial neural networks to spot lung diseases. By comparing the voices of patients with lung conditions to those of healthy individuals, this method promises a non-invasive, cost-effective way to support clinical decisions. Key Findings: 1️⃣ The neural networks hit an impressive accuracy, identifying 85% of male and 83% of female patients correctly based on their voice patterns. 2️⃣ Advanced techniques like the fast Fourier transform and the discrete wavelet transform were used to dive deep into the voice recordings. 3️⃣ The study found notable spectral differences in specific frequency ranges between healthy individuals and those with lung diseases, particularly in men. 4️⃣ Recognizing the importance of gender-specific patterns, separate neural networks were created for male and female participants. 5️⃣ Data augmentation techniques boosted the dataset to 751 samples, enhancing the performance of the neural networks and lowering the risk of overfitting. 6️⃣ The potential here is massive, especially for telehealth and emergency medicine, making it easier to diagnose lung conditions without invasive procedures. This study opens up new avenues for using voice analysis in healthcare, especially in settings where quick, non-invasive diagnostics are crucial. (Ref: Bringel KA, Leone DCMG, Firmino JVL de C, Rodrigues MC, de Melo MDT. Voice Analysis and Neural Networks as a Clinical Decision Support System for Patients With Lung Diseases. Mayo Clin Proc Digit Health. 2024;2(3):367-374. DOI: 10.1016/j.mcpdig.2024.06.006) #lungdiseases #healthcare #treatments #research #disease
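The feature-extraction step the study describes, transforming a voice recording into the frequency domain and comparing energy across spectral bands, can be sketched as below. A naive DFT stands in for the fast Fourier transform to keep it dependency-free, and the "voice" signal and band limits are synthetic assumptions, not the paper's actual settings.

```python
import math

# Sketch: spectral band energies of a synthetic "voice" signal, the kind of
# feature that feeds the neural networks in voice-based screening.

def dft_magnitudes(signal):
    """Naive discrete Fourier transform; returns magnitudes of the first n/2 bins."""
    n = len(signal)
    mags = []
    for k in range(n // 2):                     # second half is redundant for real signals
        re = sum(s * math.cos(-2 * math.pi * k * t / n) for t, s in enumerate(signal))
        im = sum(s * math.sin(-2 * math.pi * k * t / n) for t, s in enumerate(signal))
        mags.append(math.hypot(re, im))
    return mags

def band_energy(mags, lo, hi):
    """Total spectral energy in frequency bins [lo, hi)."""
    return sum(m * m for m in mags[lo:hi])

n = 128
# Synthetic signal: strong low-frequency component plus a weak higher one
signal = [math.sin(2 * math.pi * 4 * t / n) + 0.2 * math.sin(2 * math.pi * 20 * t / n)
          for t in range(n)]
mags = dft_magnitudes(signal)
low, high = band_energy(mags, 1, 10), band_energy(mags, 15, 30)
```

A classifier then learns from vectors of such band energies (in practice computed with an FFT, which gives the same result far faster); the study's reported spectral differences between groups are exactly differences in features like these.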
-
Article alert! 🔬 Towards an AI-enabled "virtual cell", and what tools we have to achieve that 👇 I just came across a super cool article in Cell, by Charlotte Bunne, Yusuf Roohani, Yanay Rosen, Emma Lundberg, and others outlining a bold roadmap for the so-called AI Virtual Cell (AIVC). As you know, cells are the fundamental units of life, but despite decades of modeling, we still struggle to predict or simulate their behavior in health and disease. Traditional mechanistic models fall short for many reasons, including multi-scale complexity (molecular → tissue), massive numbers of interacting components, and nonlinear dynamics. 💡 What’s new: In this article the authors propose an AIVC, a multi-modal, multi-scale neural network-based simulator that can model the structure, function, and behavior of cells from molecules to tissues, across conditions, perturbations, and time. It synthesizes insights from recent advances in such areas as: ✔️ Foundation models (e.g., transformers, diffusion models, GNNs) ✔️ Omics data (transcriptomics, proteomics, spatial imaging) ✔️ High-throughput perturbation screens ✔️ Active learning and uncertainty-aware predictions Some key concepts to note from the paper: ⚙️ Universal representations (URs): learned embeddings of biological states across scales and modalities. ⚙️ Virtual instruments (VIs): neural interfaces that simulate experiments, predict perturbation outcomes, or decode states into interpretable outputs. ⚙️ Manipulators and decoders: predict how cells evolve over time, respond to drugs, or differentiate — even under unseen conditions. 
Possible applications: 🧫 Simulating rare or inaccessible cell types (e.g., neurons or β-cells) 🧫 Virtual phenotypic drug screens 🧫 Personalized diagnostics & digital twins 🧫 Cell therapy and synthetic biology design 🧫 Precision oncology through modeling tumor microenvironments Of course, there are still many challenges to overcome, including the need for rigorous benchmarking frameworks for generalization and fidelity. Balancing black-box performance with biological interpretability is a tricky part, as is ensuring self-consistency across biological contexts, scales, and modalities. Finally, there is the need for inclusive, diverse datasets and ethical frameworks, and for open, collaborative infrastructure for model development and deployment. I highly recommend reading the full article for anyone working at the intersection of biology, AI, and systems modeling. It is also packed with nice diagrams, and is just pleasant to read over a coffee. ☕ Image source: Bunne, Charlotte et al. Cell, Volume 187, Issue 25, 7045-7063
-
Modern clinical trials can capture tens of thousands of clinicogenomic measurements per individual. Discovering predictive biomarkers, as opposed to prognostic markers, remains challenging. To address this, we present a neural network framework based on contrastive learning—the Predictive Biomarker Modeling Framework (PBMF)—that explores potential predictive biomarkers in an automated, systematic, and unbiased manner. Applied retrospectively to real clinicogenomic datasets, particularly for immuno-oncology (IO) trials, our algorithm identifies biomarkers of IO-treated individuals who survive longer than those treated with other therapies. We demonstrate how our framework retrospectively contributes to a phase 3 clinical trial by uncovering a predictive, interpretable biomarker based solely on early study data. Patients identified with this predictive biomarker show a 15% improvement in survival risk compared to those in the original trial. The PBMF offers a general-purpose, rapid, and robust approach to inform biomarker strategy, providing actionable outcomes for clinical decision-making. New publication detailing the Predictive Biomarker Modeling Framework (PBMF), a neural network framework that uses contrastive learning to identify potential predictive biomarkers from real-world clinicogenomic data. Congrats to Gustavo Arango, Damian Bikiel, Gerald Sun, Etai Jacob, and the larger team on this work! https://lnkd.in/ejeB2WP3
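The predictive-vs-prognostic distinction at the heart of the post can be made concrete: a predictive biomarker changes the *difference* between treatment arms, not just overall outcome. The toy sketch below hand-builds such a cohort and checks the arm gap inside and outside the marker-positive group; the PBMF learns splits like this via contrastive training, so the simple means here only illustrate what the framework optimizes for, not its actual method.

```python
# Toy illustration of a *predictive* biomarker: among marker-positive patients
# the IO arm clearly outperforms the other arm, while among marker-negative
# patients the arms look alike. All data below is synthetic.

def arm_means(patients, biomarker_positive):
    """Mean survival (months) per arm within the selected marker subgroup."""
    sel = [p for p in patients if p["marker"] == biomarker_positive]
    io = [p["survival"] for p in sel if p["arm"] == "IO"]
    other = [p["survival"] for p in sel if p["arm"] == "other"]
    return sum(io) / len(io), sum(other) / len(other)

patients = [
    {"marker": True,  "arm": "IO",    "survival": 24},
    {"marker": True,  "arm": "IO",    "survival": 30},
    {"marker": True,  "arm": "other", "survival": 12},
    {"marker": True,  "arm": "other", "survival": 14},
    {"marker": False, "arm": "IO",    "survival": 15},
    {"marker": False, "arm": "other", "survival": 16},
]

io_pos, other_pos = arm_means(patients, True)    # large gap: predictive signal
io_neg, other_neg = arm_means(patients, False)   # no gap: marker not predictive here
```

A merely prognostic marker would shift survival in both arms equally, leaving the arm gap unchanged; it is the treatment-by-marker interaction that makes a biomarker actionable for trial design.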