Deep Learning in Pathology


Summary

Deep learning in pathology uses artificial intelligence to analyze medical images and molecular data, helping doctors diagnose diseases and predict patient outcomes more accurately and efficiently. By training computers to recognize patterns in pathology slides and related information, this technology can reveal insights that may be missed by the human eye and support more personalized healthcare.

  • Integrate multiple data sources: Combine traditional pathology slides with molecular data to unlock richer diagnostic and prognostic insights for patients.
  • Streamline clinical workflows: Use automated image analysis to save time and minimize manual errors in tissue and cell assessments.
  • Support decision-making: Apply AI-driven models to guide treatment choices and predict disease progression, especially in challenging or resource-limited scenarios.
Summarized by AI based on LinkedIn member posts
  • View profile for Luke Yun

    building AI computer fixer | AI Researcher @ Harvard Medical School, Oxford

    32,836 followers

    Research from Harvard & MIT used AI to unlock molecular insights in cancer pathology. Foundation models are revolutionizing computational pathology, but most struggle to analyze entire whole-slide images (WSIs) and to incorporate molecular data. THREADS introduces a multimodal foundation model that learns from both histopathology slides and molecular profiles.
    • Pretrained on 47,171 H&E-stained WSIs paired with genomic and transcriptomic profiles, the largest dataset of its kind.
    • Enabled state-of-the-art survival prediction, identifying high-risk patients with up to 8.9% higher accuracy than previous models.
    • Excelled in low-data scenarios, achieving near-clinical accuracy with just 4 training samples per class.
    • Introduced "molecular prompting", allowing the model to classify cancer types and mutations without task-specific training.
    I like that the THREADS architecture is notably modular. It begins with an ROI encoder based on CONCH v1.5 (a ViT-L model fine-tuned with vision-language data) to extract patch features. The patch features are then aggregated into a slide-level embedding via an attention-based multiple instance learning (ABMIL) slide encoder. In parallel, distinct encoders for transcriptomic data (a modified scGPT) and genomic data (a multilayer perceptron) create molecular embeddings. This design not only enables integration of heterogeneous data types but also achieves remarkable parameter efficiency: THREADS is reported to be 4× smaller than PRISM and 7.5× smaller than GigaPath, yet outperforms them on 54 oncology tasks. Here's the awesome work: https://lnkd.in/g5y5HFuV Congrats to Faisal Mahmood, Anurag Vaidya, Andrew Zhang, Guillaume Jaume, and co! I post my takes on the latest developments in health AI – connect with me to stay updated! Also, check out my health AI blog here: https://lnkd.in/g3nrQFxW
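The ABMIL aggregation step described in the post can be sketched in plain Python. This is a toy illustration with made-up, untrained weights, not THREADS' actual implementation: each patch feature vector gets an attention score w·tanh(Vh), the scores are softmaxed, and the slide embedding is the attention-weighted sum of the patch features.

```python
import math

def abmil_pool(patch_feats, V, w):
    """Attention-based multiple instance learning (ABMIL) pooling.

    patch_feats: list of d-dim patch feature vectors for one slide.
    V: list of hidden-projection rows (each d-dim); w: attention vector
       with one weight per hidden row.
    Returns (slide_embedding, attention_weights)."""
    def score(h_vec):
        # attention logit for one patch: w . tanh(V h)
        hidden = [math.tanh(sum(v_k * x_k for v_k, x_k in zip(row, h_vec)))
                  for row in V]
        return sum(w_k * z_k for w_k, z_k in zip(w, hidden))

    scores = [score(h) for h in patch_feats]
    m = max(scores)                          # subtract max for a stable softmax
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    attn = [e / total for e in exps]         # attention weights sum to 1
    d = len(patch_feats[0])
    slide_emb = [sum(a * h[k] for a, h in zip(attn, patch_feats))
                 for k in range(d)]          # weighted sum over patches
    return slide_emb, attn

# Toy slide: 3 patches with 2-dim features, hidden size 2 (all values invented).
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[0.5, -0.5], [0.3, 0.8]]
w = [1.0, -1.0]
emb, attn = abmil_pool(feats, V, w)
```

In the real system the patch features would come from the ROI encoder and the projection weights would be learned; the pooling arithmetic, however, is exactly this weighted sum.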

  • View profile for Moritz Hartmann

    Global Head Roche Information Solutions | Leading digital health business

    13,982 followers

    Groundbreaking work doesn’t lose its impact with time – it sets the stage for the future. Roche’s own Antoaneta Vladimirova co-led a study with Olivier Gevaert of Stanford University, published in Nature Communications, that has received considerable attention. Together, they developed a deep learning model, SEQUOIA, to turn universally accessible H&E histopathology slides into molecular insights. The model predicts the expression of thousands of individual genes, molecular pathways, and cell signatures linked to clinical outcomes, not just for one cancer but for 16 of the most common cancers, all from slides that are cheap, routine, and accessible. This opens up immense potential for clinical decision support tools, personalized treatment plans, and predicting outcomes like breast cancer recurrence, all using data that are already available. If you’re interested in AI-driven diagnostics, this is definitely worth a read! #digitalhealth #healthcare #healthtech #healthcaretransformation

  • Thrilled to share our latest publication "ADAM: Automated Digital phenotyping And Morphological texture analysis of bone biopsy images using deep learning" in the Oxford University Press Journal of Bone and Mineral Research (JBMR) Plus. Stellar work by The Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University and Emory AI.Health PhD student Satvika Bharadwaj in collaboration with colleagues in nephrology at the University of Kentucky College of Medicine. Histomorphometric analysis of undecalcified bone biopsy images provides quantitative assessment of bone turnover, volume, and mineralization using static and dynamic parameters. Traditionally, quantification has relied on manual annotation and tracing of relevant tissue structures, a process that is time-intensive and subject to inter-operator variability. We developed ADAM, an automated pipeline for digital phenotyping, to quantify tissue and cellular components pertinent to static histomorphometric parameters such as bone and osteoid area, osteoclast and osteoblast count, and bone marrow adipose tissue (BMAT) area. The pipeline allowed rapid generation of delineated tissue and cell maps for up to 20 images in less than a minute. Comparing deep learning-generated annotation pixels with manual annotations, we observed Spearman correlation coefficients of ρ = 0.99 for both mineralized bone and osteoid, and ρ = 0.94 for BMAT. For osteoclast and osteoblast cell counts, which are subject to morphologic heterogeneity, using only brightfield microscopic images and without additional staining, we noted ρ = 0.60 and 0.69, respectively (inter-operator correlation was ρ = 0.62 for osteoclast and 0.84 for osteoblast count). The study also explored the application of Morphological Texture Analysis (MTA), measuring relative pixel patterns that potentially vary with diverse tissue conditions. 
Notably, MTA from mineralized bone, osteoid, and BMAT showed potential to differentiate images labeled as low or high bone turnover based on the final diagnostic report of the bone biopsy. The AUC-ROC obtained for BMAT MTA features as a classifier for bone turnover was 0.87, suggesting that computer-extracted features, not discernible to the human eye, hold potential for classifying tissue states. With additional evaluation, ADAM could potentially be integrated into existing clinical routines to improve pathology workflows and contribute to diagnostic insights in bone biopsy evaluation and reporting. Link to journal paper landing page: https://lnkd.in/etH62kXn Link to publication: https://lnkd.in/eEdh3ix2
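For reference, the AUC-ROC reported above (0.87 for the BMAT texture classifier) can be read as the probability that a randomly chosen high-turnover case receives a higher classifier score than a randomly chosen low-turnover case. A minimal, dependency-free sketch of that pairwise definition, using invented toy scores rather than the study's data:

```python
def auc_roc(scores, labels):
    """AUC as the pairwise win rate of positives over negatives (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores that perfectly separate high (1) from low (0) turnover -> AUC 1.0
print(auc_roc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # -> 1.0
```

Production code would use an optimized implementation such as scikit-learn's `roc_auc_score`, but the quantity computed is the same.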

  • View profile for Yuming Jiang

    Assistant Professor at Wake Forest University School of Medicine

    2,600 followers

    Thrilled to report our new publication on virtual multiplexed immunofluorescence (mIF) staining using deep learning in The Lancet's eBioMedicine journal. Title: "Virtual multiplexed immunofluorescence staining from non-antibody-stained fluorescence imaging for gastric cancer prognosis." In this study, we introduce the Multimodal-Attention-based virtual mIF Staining (MAS) system, which significantly advances the field by employing a deep learning model to generate high-quality virtual mIF images from dual-modal non-antibody-stained fluorescence imaging, specifically autofluorescence (AF) and DAPI imaging. The MAS system utilises self- and multi-attention mechanisms to accurately predict multiple survival-associated biomarkers in gastric cancer, providing a cost-effective and rapid alternative to traditional mIF techniques. Our findings add substantial value to the existing evidence by demonstrating that the MAS system can achieve prognostic accuracy comparable to standard mIF staining. We validated the system using 180 pathological slides from 94 gastric cancer patients, showing consistent performance across both cancerous and non-cancerous gastric tissues. The inclusion of seven key gastric cancer biomarkers (CD3, CD20, FOXP3, PD1, CD8, CD163, and PD-L1) highlights the system's versatility and potential clinical applicability. The ability to rapidly generate reliable mIF images from easily obtainable AF and DAPI slides can facilitate broader adoption of multiplexed staining in clinical and research settings. This method alleviates the high costs and labour-intensive nature of traditional mIF techniques, promoting more widespread use in routine diagnostics and large-scale studies. https://lnkd.in/g-dPXExd

  • View profile for Joseph Steward

    Medical, Technical & Marketing Writer | Biotech, Genomics, Oncology & Regulatory | Python Data Science, Medical AI & LLM Applications | Content Development & Management

    36,919 followers

    A team of international researchers has developed RlapsRisk BC, a deep learning model that analyzes digitized tumor slides to predict 5-year metastasis-free survival in estrogen receptor-positive, HER2-negative early breast cancer patients.

    Methods: The team trained the model on the GrandTMA cohort (1,429 patients) and validated it on the multicenter CANTO cohort (1,229 patients). The model uses standard H&E-stained slides already available for diagnosis, requiring no additional tissue preparation or molecular testing. The approach involves four key steps: tissue tiling, feature extraction using a pre-trained Vision Transformer, risk score creation through multiple instance learning, and binary classification using a 5% metastasis probability threshold.

    Results: RlapsRisk BC demonstrated significant prognostic value beyond traditional clinical factors, achieving a C-index of 0.81 versus 0.76 for clinical factors alone (p < 0.05). When combined with clinicopathological factors, the model showed:
    - Improved cumulative sensitivity: 0.69 vs 0.63
    - Enhanced dynamic specificity: 0.80 vs 0.76
    The model performed particularly well in intermediate clinical risk patients, where treatment decisions are most challenging, with an improvement of +0.08 in C-index.

    Clinical interpretability: Expert pathologist analysis confirmed that the AI model relies on well-established histological features, including nuclear pleomorphism, tumor architecture, mitotic activity, and microenvironment characteristics like vascular structures and fibrosis.

    Conclusions: This study demonstrates how AI can enhance existing clinical decision-making tools without requiring expensive molecular testing. The approach could help identify patients who may safely avoid chemotherapy or those requiring more intensive treatment, potentially improving quality of life while maintaining survival outcomes.

    Paper and research by @I. Garberis, @V. Gaury and the larger team at OWKIN
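The C-index figures in the post (0.81 vs 0.76) measure how often the model ranks patients' risks in the same order as their actual outcomes, while accounting for censoring. A minimal sketch of Harrell's concordance index in plain Python, with toy inputs rather than the study's data; a pair of patients is comparable only when the patient with the shorter follow-up time actually had an observed event:

```python
def c_index(times, events, risk):
    """Harrell's concordance index for right-censored survival data.

    times: follow-up times; events: 1 = event observed, 0 = censored;
    risk: model risk scores (higher = expected earlier event)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # pair usable only if patient i had an event strictly before time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1.0     # higher risk failed earlier: concordant
                elif risk[i] == risk[j]:
                    concordant += 0.5     # tied risks count half
    return concordant / comparable

# Risk scores perfectly ordered against survival times -> C = 1.0
print(c_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1]))
```

Survival-analysis libraries such as lifelines provide an equivalent `concordance_index`; this O(n²) loop is only meant to make the definition concrete.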

  • View profile for Thomas Fuchs

    Chief AI Officer @ Eli Lilly and Company

    14,666 followers

    I am tremendously excited that our Nature Medicine article on #Virchow is out now. This is by far the largest computational pathology effort to date. #Virchow is a #FoundationModel based on billions of images from 1.5 million digital whole slides, capable of characterizing the morphology of cancer and normal tissue at unprecedented acuity! #Virchow is the result of years of work in a collaboration between the exceptional #AI teams at Paige and Microsoft Research. It represents the first in a series of models that will unquestionably change the future of cancer research and patient care. 🔬 Key highlights:
    ➡️ Pan-cancer detection: Achieves a specimen-level AUC of 0.95 across nine common and seven rare cancers.
    ➡️ Efficiency with less data: Virchow's pan-cancer detector matches and often surpasses tissue-specific models using less training data.
    ➡️ Foundation model benefits: Enables clinical-grade predictions with limited labeled data, demonstrating the power of large-scale deep learning in pathology.
    What is very dear to my heart is the fight against #rare #cancers. 50% of cancers are rare cancers, for which we barely have #biomarkers available and no AI at all. #Virchow changes this! We focused explicitly on how this foundation model performs on rare diseases and how it can help patients who have been left behind until today. 📰 Read the full article in Nature Medicine: "A Foundation Model for Clinical-Grade Computational Pathology and Rare Cancers Detection." https://lnkd.in/dBjjBcx9 #AI #Healthcare #ComputationalPathology #CancerDetection #NatureMedicine #VirchowModel #PrecisionMedicine #Pathology

  • View profile for Sumeet Pandey, PhD

    Translational Immunology & Multi-omics

    3,489 followers

    Histopathological images to develop "#TissueClocks": deep learning-based predictors of biological age using histopathological images from 40 tissue types (GTEx dataset).
    #KeyFindings
    > Accuracy: mean age prediction error was 4.9 years, correlating with telomere attrition, subclinical pathologies, and comorbidities.
    > Ageing patterns: tissue-specific and non-uniform rates of ageing, with some organs ageing earlier and others showing bimodal changes.
    > Innovative strategy: combined histology, gene expression, and clinical data to predict age gaps, validated in external healthy and diseased cohorts.
    #Significance
    > Focus: shifts from molecular/cellular changes to tissue structure.
    > Insights: provides a multi-layered understanding of ageing and potential interventions for age-related diseases.
    > Applications: enables biological age prediction, risk assessment, and monitoring of anti-ageing therapies.
    #Limitations: invasive sampling, sex imbalance in GTEx, reliance on postmortem samples, and lack of longitudinal data.
    Got value from this? Share 🔄 #TranslationalResearch #MultiOmics https://lnkd.in/eW277U7s https://lnkd.in/eKqrUc6r
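The 4.9-year figure quoted above is a mean absolute error between predicted biological age and chronological age; the signed difference per sample is the "age gap" used for risk stratification. A trivial sketch with invented numbers, just to pin down the two quantities:

```python
def age_gaps_and_mae(predicted, chronological):
    """Signed age gaps (positive = tissue 'older' than expected) and their
    mean absolute error across samples."""
    gaps = [p - c for p, c in zip(predicted, chronological)]
    mae = sum(abs(g) for g in gaps) / len(gaps)
    return gaps, mae

# Three hypothetical samples: predicted vs chronological age in years.
gaps, mae = age_gaps_and_mae([52.0, 60.0, 41.0], [50.0, 65.0, 40.0])
print(gaps, mae)   # gaps [2.0, -5.0, 1.0], MAE ~2.67
```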

  • View profile for Bo Wang

    SVP and Head of Biomedical AI @ Xaira Therapeutics; Chief Artificial Intelligence Scientist @ UHN; Associate Professor @ University of Toronto; CIFAR AI Chair @ Vector Institute ; Twitter : @BoWang87

    15,671 followers

    Honored to be featured in Nature Portfolio to discuss how AI is quietly revolutionizing one of medicine’s most essential disciplines: pathology. Full article here: https://lnkd.in/gJNynXPe Pathology is the backbone of diagnosis, especially in cancer. But today, it faces immense strain from growing complexity, rising demand, and workforce shortages. AI offers a path forward, one where expert knowledge is extended, not diluted. At University Health Network and beyond, we’re building foundation models for pathology, akin to GPT but for tissue slides. These models can generalize across tasks like tumor grading, cell counting, mutation prediction, and even generating draft reports. And when integrated with genomic and clinical data, their power multiplies. Of course, trust, validation, and clinical grounding are critical. AI in medicine must be reliable, transparent, and accountable. It must work with pathologists, not as a black box but as a collaborative partner. Deep thanks to Diana Kwon for the thoughtful reporting, and to pioneering researchers like Faisal Mahmood (Harvard University) and Hao Chen, among others, who are building the infrastructure, insight, and trust needed to bring AI into clinical reality. The future of pathology is not automated; it’s augmented. Faster, more precise, and deeply human. #AI #Pathology #DigitalHealth #FoundationModels #MedTech #HealthcareInnovation Vector Institute Temerty Centre for AI Research and Education in Medicine (T-CAIREM) University of Toronto University Health Network UHN AI Hub

  • View profile for Thomas Clozel

    building the first biology artificial super intelligence

    32,857 followers

    Announcing Owkin's latest research, published in Nature Communications: a large-scale collaborative study with Universitätsklinikum Erlangen demonstrating the potential of a deep learning model to transform FGFR3 mutation testing in muscle-invasive and metastatic urothelial cancer (MIBC/mUC).
    🔬 Large-scale validation: our model was trained and tested on 1,222 cases from multiple cohorts of MIBC patients.
    ✅ Near-perfect performance: achieved a negative predictive value (NPV) of 0.99–1.00 across three independent external cohorts.
    📉 Reduced molecular testing: could lower the need for PCR/NGS testing by 36–47%, saving time and costs.
    🌍 Broad applicability: performs effectively across diverse histological subtypes and growth patterns.
    Why does this matter? Current FGFR3 testing is expensive and time-consuming. Our AI model offers a faster, cost-effective alternative by screening routine H&E slides to rule out wild-type cases, reducing unnecessary molecular testing. Read the full study in Nature Communications here: https://lnkd.in/e-GKWPeY
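The rule-out logic described above comes down to two numbers: among cases the model calls negative, how many are truly negative (the NPV), and what fraction of all cases would be spared confirmatory PCR/NGS. A sketch with made-up labels, not the study's data, assuming 1 = FGFR3 mutation present/predicted and 0 = wild-type:

```python
def rule_out_metrics(y_true, y_pred):
    """NPV of the negative calls, plus the share of cases that would
    skip confirmatory molecular testing under a rule-out screening policy."""
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))  # true negatives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # missed mutations
    negatives = tn + fn
    npv = tn / negatives if negatives else float("nan")
    workload_reduction = negatives / len(y_true)  # fraction not sent to PCR/NGS
    return npv, workload_reduction

# Hypothetical screening run over six cases.
npv, saved = rule_out_metrics([0, 0, 0, 0, 1, 1], [0, 0, 0, 1, 1, 0])
```

For a rule-out screen, NPV is the metric that matters most: a false negative means a mutation-positive patient who never gets confirmatory testing, which is why the reported 0.99–1.00 range is the headline result.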

  • View profile for Himanshu Jain

    Tech Strategy ,Venture and Innovation Leader|Generative AI, M/L & Cloud Strategy| Business/Digital Transformation |Keynote Speaker|Global Executive| Ex-Amazon

    22,289 followers

    Reading an interesting paper from Nature that explores how artificial intelligence (AI) is enabling precision oncology by enhancing diagnostics, treatment strategies, and clinical workflows.

    Overview of AI/ML in Precision Oncology:
    · Applications: AI/ML techniques analyze multi-dimensional data, including genomics, radiomics, and spatial pathology, to uncover molecular pathways and optimize cancer treatments.
    · Technologies: Techniques like deep learning (DL), convolutional neural networks (CNNs), and large language models (LLMs) are applied to tasks such as biomarker identification, image analysis, and clinical decision-making.
    · Digital Twins: AI-generated synthetic data (e.g., digital twins) accelerates clinical trials by simulating patient responses.

    Key Applications:
    · Digital Pathology: AI automates immunohistochemistry (IHC) scoring for biomarkers like PD-L1 and HER2, improving accuracy and reducing variability.
    · Radiomics: AI extracts quantitative features from medical images to predict treatment outcomes and identify tumor characteristics.
    · Molecular Medicine: AI facilitates genomic sequencing, epigenomic analysis, and proteomics for biomarker discovery and drug development.
    · Multimodal Integration: Combining data from radiology, pathology, genomics, and clinical records enhances predictive accuracy for treatment outcomes.

    Challenges:
    · Data Limitations: Issues include data heterogeneity, lack of standardization, and biases in datasets that may affect model generalizability.
    · Clinical Integration: Seamless incorporation of AI tools into existing workflows remains a hurdle due to costs, training needs, and resistance to change.
    · Ethical Concerns: Data privacy, transparency in AI algorithms, and accountability are critical issues requiring regulatory frameworks.

    Emerging Trends:
    · Explainable AI (XAI): Enhances trust by making AI predictions interpretable for clinicians.
    · Federated Learning: Enables collaborative model training across institutions while preserving patient privacy.
    · Biosensors: Real-time monitoring devices integrated with AI improve early detection and treatment personalization.
    · Regulatory Progress: As of December 2024, the FDA has approved over 1,000 AI/ML-enabled medical devices, demonstrating real-world applicability in areas such as cancer detection and radiation therapy planning.

    Future Directions:
    · Standardized data-sharing frameworks to ensure diverse training datasets.
    · Multicenter validation of AI models to establish clinical utility.
    · Education programs for clinicians to effectively utilize AI tools.

    Addressing challenges related to data quality, ethical considerations, and clinical integration is essential for widespread adoption. #ArtificialIntelligence #MachineLearning #PrecisionOncology #DeepLearning #DigitalPathology #Radiomics #Biomarkers #ExplainableAI #FederatedLearning #CancerDiagnosis
    Source: www.nature.com
    Disclaimer: The opinions are mine and not my employer's.
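The federated learning item above reduces to one core step: each institution trains on its own data, and only model parameters (never patient records) are averaged centrally, weighted by local dataset size. A minimal sketch of that FedAvg aggregation step, with hypothetical parameter vectors:

```python
def fedavg(client_params, client_sizes):
    """One FedAvg aggregation round: size-weighted average of per-client
    parameter vectors. Raw patient data never leaves each institution;
    only these parameter vectors are shared."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [sum(params[k] * n for params, n in zip(client_params, client_sizes)) / total
            for k in range(dim)]

# Two hospitals: one with 100 cases, one with 300 -> the larger site dominates.
merged = fedavg([[1.0, 2.0], [3.0, 6.0]], [100, 300])
print(merged)   # -> [2.5, 5.0]
```

In a full system this averaging runs once per communication round, between rounds of local gradient descent at each site; frameworks like Flower or TensorFlow Federated implement the surrounding orchestration.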
