Most #Hospitals Do Not Know Their Own Costs for Surgeries, MRIs or ER Visits... Activity-Based #CostAccounting Explained. The University of Utah Healthcare System Changed That and Implemented a Cost Accounting System So They Could #Measure What Each Hospital Service Cost. They Found that the #ER Cost $0.82 Per Patient Per Minute and #Orthopedic Surgery Cost $12 Per Patient Per Minute. By Better Measuring Their Costs, The University of Utah Hospital Was Able to #Lower Their Overall Costs by 0.5% While Their Peers #Increased Costs by 2.9%... a 3.4 Percentage Point Improvement. This Achievement Was Such a Success that One of the Most Famous Business School Professors in the World--Michael Porter of Harvard Business School--Flew to Utah to See It for Himself. Cost Accounting Is a Basic Business Practice That, Amazingly, Most Hospitals Have Never Adopted.
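For readers curious about the mechanics, below is a minimal sketch of time-driven activity-based costing, the approach behind per-minute figures like these: divide a department's total cost by its total patient-minutes, then cost each encounter by its duration. Only the two per-minute rates come from the post; the department-level inputs are illustrative placeholders.

```python
# Minimal time-driven activity-based costing (TDABC) sketch.
# The two per-minute rates are the figures quoted above; the
# department inputs used to derive a rate are illustrative.

def per_minute_rate(total_department_cost: float, total_patient_minutes: float) -> float:
    """Blended cost per patient-minute for a department."""
    return total_department_cost / total_patient_minutes

def encounter_cost(minutes: float, rate_per_minute: float) -> float:
    """Cost of a single encounter, given its duration and the department rate."""
    return minutes * rate_per_minute

ER_RATE = 0.82          # $/patient-minute, as cited in the post
ORTHO_OR_RATE = 12.00   # $/patient-minute, as cited in the post

# How a rate is derived (illustrative inputs): $12.3M ER cost / 15M patient-minutes
print(f"Derived ER rate: ${per_minute_rate(12_300_000, 15_000_000):.2f}/min")

# Costing encounters: a 3-hour ER visit vs a 2-hour orthopedic surgery
print(f"ER visit (180 min):      ${encounter_cost(180, ER_RATE):,.2f}")
print(f"Ortho surgery (120 min): ${encounter_cost(120, ORTHO_OR_RATE):,.2f}")
```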
Healthcare Financial Management
-
🎉 Pleased to share our paper published in Nature Portfolio digital medicine. 🥳 We've developed a comprehensive framework called CREOLA (short for Clinical Review Of Large Language Models (LLMs) and AI). The framework was pioneered at TORTUS and takes a safety-first, science-led approach to LLMs in healthcare.

🔹 Key Components of the CREOLA Framework
- Error Taxonomy
- Clinical Safety Assessment
- Iterative Experimental Structure

🔹 Error Taxonomy
Hallucinations: instances of text in clinical documents unsupported by the transcript of the clinical encounter
Omissions: clinically important text from the encounter that was not included in the clinical documentation

🔹 Clinical Safety Assessment
Our innovation incorporates accepted clinical hazard identification principles (based on NHS DCB0129 standards) to evaluate the potential harm of errors. We categorise errors as either 'major' or 'minor', where major errors can have a downstream impact on the diagnosis or management of the patient if left uncorrected. This is further assessed with a risk matrix comparing risk severity (1 (minor) to 5 (catastrophic)) against likelihood (very low to very high); see the sketch after this post.

🔹 Iterative Experimental Structure
We share a methodical approach to comparing different prompts, models, and workflows: label errors, consolidate reviews, evaluate clinical safety (then make further adjustments and re-evaluate if necessary).

----------Method--------------
To demonstrate how to apply CREOLA to any LLM / AVT, we used GPT-4 (early 2024) as a case study.
🔹 We conduct one of the largest manual evaluations of LLM-generated clinical notes to date, analysing 49,590 transcript sentences and 12,999 clinical note sentences across 18 experimental configurations.
🔹 Transcript-clinical note pairs are broken down to the sentence level and annotated for errors by clinicians.

----------Results--------------
🔹 Of 12,999 sentences in 450 clinical notes, 191 sentences contained hallucinations (1.47%), of which 84 (44%) were major. Of the 49,590 sentences from our consultation transcripts, 1,712 sentences were omitted (3.45%), of which 286 (16.7%) were classified as major and 1,426 (83.3%) as minor.
🔹 Hallucination types:
Fabrication (43%) - completely invented information
Negation (30%) - contradicting clinical facts
Contextual (17%) - mixing unrelated topics
Causality (10%) - speculating on causes without evidence
🔹 Hallucinations, while less common than omissions, carry significantly more clinical risk. Negation hallucinations were the most concerning.
🔹 We CAN reduce or even abolish hallucinations and omissions by making prompt or model changes. In one experiment with GPT-4, we reduced the incidence of major hallucinations by 75%, major omissions by 58%, and minor omissions by 35% through prompt iteration.

Links in comments.
Ellie Asgari Nina Montaña Brown Magda Dubois Saleh Khalil Jasmine Balloch Dr Dom Pimenta M.D.
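To make the safety assessment concrete, here is a minimal sketch of the severity-by-likelihood risk matrix described in the post. The scale endpoints (minor = 1, catastrophic = 5) come from the post; the intermediate labels and the triage bands are illustrative assumptions, not the paper's actual cut-offs.

```python
# Sketch of a DCB0129-style risk matrix: risk = severity x likelihood.
# Endpoint labels come from the post; intermediate labels and the
# triage thresholds below are illustrative assumptions.

SEVERITY = {"minor": 1, "significant": 2, "considerable": 3, "major": 4, "catastrophic": 5}
LIKELIHOOD = {"very_low": 1, "low": 2, "medium": 3, "high": 4, "very_high": 5}

def risk_score(severity: str, likelihood: str) -> int:
    return SEVERITY[severity] * LIKELIHOOD[likelihood]

def triage(score: int) -> str:
    # Illustrative bands; a real hazard log defines its own acceptability criteria.
    if score >= 15:
        return "unacceptable - fix prompt/model before deployment"
    if score >= 8:
        return "review - mitigate, then re-evaluate"
    return "acceptable - monitor"

# e.g. a negation hallucination contradicting a documented allergy
print(triage(risk_score("major", "low")))  # 4 * 2 = 8 -> "review - ..."
```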
-
Anyone trying to drive meaningful change – environmental or otherwise – must address this first: we must change human behaviour. This is why I'm such a stickler for storytelling.

A few weeks ago, I was at a Lincoln University Centre of Excellence in Transformative Agribusiness event where Prof. Marijn Poortvliet from Wageningen University & Research spoke about risk perception. Whenever we decide whether or not to do something – big or small – we're weighing up perceived risk:

Perceived Probability x Perceived Consequences = Risk Perception

Add 'perceived' in front of each word, and risk becomes a personal decision. That's why it can be so hard to convince people to change, even when the facts are known. If we work with how humans perceive risk, we stand a better chance of influencing change.

Marijn discussed the Extended Parallel Process Model (Witte, 1992), which outlines the conditions required for behaviour change (sketched in code after this post):

🟦 Perceived Threat:
• Susceptibility – how vulnerable we feel to it
• Severity – how serious we believe it is
If there's no perceived threat, no action is taken.

🟦 Perceived Efficacy:
• Self-efficacy – can I do what's needed?
• Response efficacy – will my efforts be enough?
Low efficacy = fear and inaction. High efficacy = behaviour change.

This is why storytelling matters. It helps people see the threat (or opportunity) and understand how they can respond. It reminds me of a post I once saw but unfortunately can't remember the author of: people change when you make sustainability:
• Personally relevant
• Emotionally compelling
• Immediately beneficial

Building on that saying, and in line with the EPPM flow model, here's how to apply this thinking to your own sustainability communication:

1️⃣ Make it personally relevant
Show how the issue affects people's lives, values, or livelihoods – not just "the planet" in abstract terms. (Susceptibility)

2️⃣ Make the threat real, but not paralysing
Balance severity with hope. If people only see the doom, they switch off. (Severity)

3️⃣ Show a clear, doable path
Help people believe they can act (self-efficacy) and that their action will make a difference (response efficacy).

4️⃣ Make the benefits immediate and meaningful
Change sticks when it's not only "good for the planet" but also good for them. Show how the change can save money, build community, or protect something they love.

💡 Next time, ask yourself:
• What risk or opportunity am I asking people to pay attention to?
• How can I help them see it, feel it, and respond to it?

Do that, and you're not just sharing information – you're changing behaviour.

__________
Image: Susannah Hertrich (2008), "Reality Checking Device". The top circles show perceived risk; actual risk is shown below.

#BehaviourChange #SustainabilityStorytelling #ScienceCommunication #RiskPerception
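For readers who like the model spelled out, here is a minimal sketch of the EPPM decision logic described above. The 0-1 scales, the averaging, and the threshold are illustrative simplifications, not Witte's original instruments.

```python
# Minimal sketch of EPPM (Witte, 1992): behaviour change requires both a
# perceived threat and sufficient perceived efficacy. All inputs are
# perceived ratings normalised to 0..1; scales and threshold are illustrative.

def eppm_response(susceptibility: float, severity: float,
                  self_efficacy: float, response_efficacy: float,
                  threat_floor: float = 0.5) -> str:
    threat = (susceptibility + severity) / 2
    efficacy = (self_efficacy + response_efficacy) / 2
    if threat < threat_floor:
        return "no response - threat not perceived, no action taken"
    if efficacy >= threat:
        return "danger control - behaviour change"
    return "fear control - denial and avoidance rather than change"

print(eppm_response(0.8, 0.7, 0.8, 0.9))  # -> danger control
print(eppm_response(0.8, 0.9, 0.2, 0.3))  # -> fear control
print(eppm_response(0.2, 0.3, 0.9, 0.9))  # -> no response
```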
-
This study could change how every frontline clinic in the world delivers care.

Penda Health and OpenAI revealed that an AI tool called AI Consult, embedded into real clinical workflows in Kenya, reduced diagnostic errors by 16% and treatment errors by 13% across nearly 40,000 live patient visits. This is what it looks like when AI becomes a real partner in care: the clinical error rate went down and clinician confidence went up.

🤨 But this isn't just about numbers. It's a rare glimpse into something more profound: what happens when technology meets clinicians where they are, and earns their trust.

🦺 Clinicians described AI Consult not as a replacement, but as a safety net. It didn't demand attention constantly. It didn't override judgment. It whispered, quietly highlighting when something was off, offering feedback, improving outcomes. And over time, clinicians adapted. They made fewer mistakes even before the AI intervened.

🚦 The tool was designed not just to be intelligent, but to be invisible when appropriate and loud only when necessary. A red-yellow-green interface kept autonomy in the hands of the clinician, surfacing insights only when care quality or safety was at risk. (A sketch of that pattern follows this post.)

📈 Perhaps most strikingly, the tool seemed to be teaching, not just flagging. As clinicians engaged, they internalized better practices. The "red alert" rate dropped by 10%, not because the AI got quieter, but because the humans got better.

🗣️ This study invites us to reconsider how we define "care transformation." It's not just about algorithms being smarter than us. It's about designing systems that are humble enough to support us, and wise enough to know when to speak.

🤫 The future of medicine might not be dramatic robot takeovers or AI doctors. It might be this: thousands of quiet, careful nudges. A collective step away from the status quo, toward fewer errors, more reflection, and ultimately more trust in both our tools and ourselves.

#AIinHealthcare #PrimaryCare #CareTransformation #ClinicalDecisionSupport #HealthTech #LLM #DigitalHealth #PendaHealth #OpenAI #PatientSafety
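Here is a minimal sketch of the traffic-light decision-support pattern the post describes: stay silent (green) when care looks fine, nudge (yellow) on minor concerns, and interrupt (red) only on safety-critical issues. The severity scoring and thresholds are illustrative assumptions; the actual Penda Health/OpenAI implementation is not detailed in this post.

```python
# Illustrative red-yellow-green alert triage for clinical decision support.
# Severity scores and thresholds are assumptions, not the study's design.

from dataclasses import dataclass

@dataclass
class Finding:
    message: str
    severity: float  # 0 (informational) .. 1 (safety-critical)

def triage_alert(findings: list[Finding]) -> tuple[str, list[str]]:
    """Return (light, messages to surface). The clinician keeps final say."""
    worst = max((f.severity for f in findings), default=0.0)
    if worst >= 0.8:                      # safety-critical: interrupt
        return "red", [f.message for f in findings if f.severity >= 0.8]
    if worst >= 0.4:                      # worth a nudge, not an interruption
        return "yellow", [f.message for f in findings if f.severity >= 0.4]
    return "green", []                    # invisible when everything looks fine

light, msgs = triage_alert([Finding("Dose exceeds weight-based maximum", 0.9)])
print(light, msgs)  # red ['Dose exceeds weight-based maximum']
```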
-
🔍 Everyone's discussing what AI agents are capable of, but few are addressing the potential pitfalls.

IBM's AI Ethics Board has just released a report that shifts the conversation. Instead of just highlighting what AI agents can achieve, it confronts the critical risks they pose. Unlike traditional AI models that generate content, AI agents act: they make decisions, take actions, and influence outcomes. This autonomy makes them powerful but also increases the risks they bring.

----------------------------
📄 Key risks outlined in the report:
🚨 Opaque decision-making – AI agents often operate as black boxes, making it difficult to understand their reasoning.
👁️ Reduced human oversight – Their autonomy can limit real-time monitoring and intervention.
🎯 Misaligned goals – AI agents may confidently act in ways that deviate from human intentions or ethical values.
⚠️ Error propagation – Mistakes in one step can create a domino effect, leading to cascading failures.
🔍 Misinformation risks – Agents can generate and act upon incorrect or misleading data.
🔓 Security concerns – Vulnerabilities like prompt injection can be exploited for harmful purposes.
⚖️ Bias amplification – Without safeguards, AI can reinforce existing prejudices on a larger scale.
🧠 Lack of moral reasoning – Agents struggle with complex ethical decisions and context-based judgment.
🌍 Broader societal impact – Issues like job displacement, trust erosion, and misuse in sensitive fields must be addressed.

----------------------------
🛠️ How do we mitigate these risks?
✔️ Keep humans in the loop – AI should support decision-making, not replace it.
✔️ Prioritize transparency – Systems should be built for observability, not just optimized for results.
✔️ Set clear guardrails – Constraints should go beyond prompt engineering to ensure responsible behavior.
✔️ Govern AI responsibly – Ethical considerations like fairness, accountability, and alignment with human intent must be embedded into the system.

As AI agents continue evolving, one thing is clear: their challenges aren't just technical; they're also ethical and regulatory. Responsible AI isn't just about what AI can do, but also about what it should be allowed to do.

----------------------------
Thoughts? Let's discuss! 💡 Sarveshwaran Rajagopal
-
It's one thing that AI can help reduce medical errors when looking at data. But what about tasks that aren't data-based, such as choosing the right medication at the patient's bedside? Wearable cameras with AI analyzing the footage could be the answer.

This study introduced a wearable camera system to automatically detect vial swap errors, which occur when a provider incorrectly fills a syringe from a mismatched drug vial, as seen in the figure.

"We demonstrate the use of deep learning to detect syringes and vials in a provider's hand, classify the drug type on the label, and automatically check if they match in order to detect vial swap errors. Our system is trained on a large-scale drug event dataset captured from head-mounted cameras worn by anesthesiologists or certified registered nurse anesthetists performing their usual clinical workflows to prepare medications for surgery in an operating room environment."

While the algorithms can detect vial swaps in real time from wirelessly streamed video, they could also provide real-time auditory or visual feedback, alerting providers to medication errors prior to drug administration and providing an opportunity to intervene!
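Here is a minimal sketch of the check the study describes: detect the syringe and the vial in the provider's hand, classify the drug name on each label, and alert when they disagree. The mock detector below stands in for the paper's trained deep-learning models; it is illustrative, not the authors' code.

```python
# Sketch of a vial-swap check. mock_detect_drug_labels stands in for the
# study's object-detection + label-classification models (an assumption here).

def mock_detect_drug_labels(frame: dict) -> dict:
    """Stand-in for detection/classification on one video frame."""
    return {"syringe": frame.get("syringe_label"), "vial": frame.get("vial_label")}

def check_vial_swap(frame: dict) -> str | None:
    labels = mock_detect_drug_labels(frame)
    syringe_drug, vial_drug = labels["syringe"], labels["vial"]
    if syringe_drug and vial_drug and syringe_drug != vial_drug:
        # Real-time auditory/visual feedback *before* the drug is administered
        return f"ALERT: syringe labelled '{syringe_drug}' filled from '{vial_drug}' vial"
    return None

# A frame where the provider draws from the wrong vial
print(check_vial_swap({"syringe_label": "ondansetron", "vial_label": "phenylephrine"}))
```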
-
Here are cost containment strategies for when you realize your plan is paying hundreds of times the acquisition price of provider-administered drugs.

---

Provider-administered drugs (e.g. #infusions or injections in clinics and hospitals) represent a fast-growing and expensive category of healthcare spend. Many health plans – commercial insurers, #MedicareAdvantage plans, Medicaid MCOs, and employer self-funded plans – have seen increasing costs under the medical benefit as providers "buy and bill" high-cost specialty medications. Often, plans are paying far above Medicare or benchmarks like Average Sales Price (ASP) + 6%.

---

A process for addressing this could start with identifying overpriced medical pharmacy drugs by comparing them against a benchmark such as ASP + 6% (a minimal sketch of that screen follows this post). You may notice patterns in the types of drugs or sites of care that are often overpriced relative to the benchmark/average. You'll need to evaluate whether these are issues or expected variations. Once you have your personalized targets, consider one or more of these near-term cost containment tactics until longer-term strategies can be deployed:
- White bagging (and variations) for some specialty injectables
- Site-of-care optimization
- ASP-based fee schedules
- Reference pricing for biosimilars
- Pass-through reimbursement for 340B drugs
- Provider report cards on #DrugSpending

Because of their complexity, these tactics vary in how easily they can be implemented and how much they save.

---

Commercial plans can move faster and have more flexibility in their options, allowing them to be more aggressive, since they're often paying more than other plans. Medicare Advantage plans need to stay compliant with Medicare regulations, which rules out options like increasing cost sharing or denying based on cost, but they still have multiple options such as site-of-care shifting, ASP-based contracting, and utilization management techniques.

With #MedicalPharmacy accounting for around a quarter of plans' total #pharmacy costs, these strategies can make a meaningful difference in overall healthcare spending.
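A minimal sketch of that first screening step, under stated assumptions: the HCPCS J-codes are real billing codes, but the ASP values, the claims, and the 2x flag threshold are all illustrative; a real analysis would pull ASP from the CMS quarterly files and group results by drug and site of care.

```python
# Flag provider-administered drug claims paying far above an ASP + 6% benchmark.
# ASP values, claims, and the threshold below are illustrative assumptions.

ASP = {"J9299": 2500.00, "J0178": 850.00}   # illustrative ASP per billing unit
BENCHMARK_MARKUP = 1.06                      # ASP + 6%

def flag_overpriced(claims: list[dict], ratio_threshold: float = 2.0) -> list[dict]:
    """Return claims whose allowed amount exceeds threshold x (ASP + 6%)."""
    flagged = []
    for c in claims:
        benchmark = ASP[c["hcpcs"]] * BENCHMARK_MARKUP * c["units"]
        ratio = c["allowed_amount"] / benchmark
        if ratio >= ratio_threshold:
            flagged.append({**c, "benchmark": round(benchmark, 2), "ratio": round(ratio, 1)})
    return flagged

claims = [
    {"hcpcs": "J9299", "units": 4, "allowed_amount": 42000.00, "site": "hospital outpatient"},
    {"hcpcs": "J0178", "units": 1, "allowed_amount": 950.00,  "site": "physician office"},
]
for c in flag_overpriced(claims):
    print(c["hcpcs"], c["site"], f"{c['ratio']}x benchmark")  # only the outlier prints
```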
-
The Hospital Paradox: Why Indian Healthcare Giants Profit Despite Empty Beds

A sector where patients are told "no beds available" while hospitals report just 65% occupancy. Sounds like inefficiency? It's actually a strategy. And it's reshaping Indian healthcare economics.

✅ The Hard Numbers: ARPOB Is the Real Metric
Even with empty beds, hospital chains are delivering record average revenue per occupied bed (ARPOB):
1. Max Healthcare: Rs 78,000/day
2. Fortis Healthcare: Rs 73,000/day
3. Medanta: Rs 67,000/day
4. Apollo Hospitals: Rs 62,000/day
That's 15–20% YoY growth despite flat occupancy. (The arithmetic behind ARPOB is sketched after this post.)

✅ The Business Shift: From Volume to Value
A) High-Margin Procedures First: cardiac surgeries at Rs 3–15 lakh each, oncology cycles at Rs 2–20 lakh, organ transplants at Rs 8–25 lakh, and complex neurosurgeries at Rs 5–18 lakh. One cardiac surgery = the revenue of 50 general medicine admissions.
B) Bed Mix Strategy: ICU beds at 25–30% of capacity (vs the 15% norm), super-specialty wards at 40–45%, and general wards cut to 25–30%. ICU beds bring 3–5x more revenue than general beds.

✅ Ripple Effects Across Sectors
A) Stocks: Apollo up 180% in 24 months, Max's market cap jumped from Rs 8k cr to Rs 22k cr, and Fortis revenue up 23% YoY.
B) Insurance: claims rose 31% in FY23, the average claim is Rs 67k (up from Rs 45k), and premiums were hiked 15–25%.
C) Medical Tourism: 12–15% of revenue comes from foreign patients, with an ARPOB of Rs 1.2 lakh/day.

✅ The Unseen Layer: Capacity Illusion
A) Hospitals keep occupancy "low" to handle emergency surges, maintain exclusivity, and optimise staff for high-value units.
B) Tech-Driven Pricing Power: robotic surgeries command a 40–60% premium, new imaging brings 25% higher scan revenue, and AI diagnostics carry a 20–30% fee premium.
C) A Two-Tier Model Is Emerging: Tier 1 – premium, complex, high-margin hospitals; Tier 2 – volume-driven routine care providers.

✅ Metro vs Tier-2 Divide
A) Metros: ARPOB Rs 65k–80k, 68–72% occupancy, 18–22% foreign patients.
B) Tier-2: ARPOB Rs 35k–45k, 58–63% occupancy, domestic tourism focus.

The Results
A) Healthcare inflation: 12–15% annually, way above general inflation.
B) Specialist premium: salaries 200–300% higher than GPs', deepening talent gaps.
C) Expansion paradox: Apollo alone added 1,200 new premium beds in FY23, even with "empty" general ones.
This means more patients left waiting, longer waits for routine care, rising out-of-pocket bills, and quality concentrated in urban hubs. Hospitals are now high-growth businesses, not utilities, and premium valuations are justified by margin gains.

Let me share #Rajsperspectives
1. The healthcare model is shifting to profitability-first, accessibility-later.
2. Empty beds aren't inefficiency; they're deliberate capacity engineering.
3. ARPOB is the new heartbeat of Indian hospital economics.

Can India balance profit-driven healthcare with the social responsibility of keeping essential services accessible? Do you think healthcare should follow market logic like any other business?

#healthcare #india #economy #hospitals #policy #health
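For the arithmetic, here is a minimal sketch of how ARPOB responds to bed mix at flat occupancy: revenue per occupied bed-day divides inpatient revenue by occupied (not total) beds, which is why a high-acuity mix lifts ARPOB even when occupancy stays at 65%. All inputs are illustrative, not any chain's actual financials.

```python
# ARPOB = inpatient revenue / occupied bed-days. Illustrative inputs only.

def arpob(inpatient_revenue: float, beds: int, occupancy: float, days: int = 365) -> float:
    """Average revenue per occupied bed per day."""
    occupied_bed_days = beds * occupancy * days
    return inpatient_revenue / occupied_bed_days

# Same 500-bed hospital at 65% occupancy, two bed mixes (illustrative revenue)
general_heavy = arpob(inpatient_revenue=4_500_000_000, beds=500, occupancy=0.65)
icu_heavy     = arpob(inpatient_revenue=8_500_000_000, beds=500, occupancy=0.65)
print(f"General-heavy mix: Rs {general_heavy:,.0f}/day")  # ~Rs 38k/day
print(f"ICU/specialty mix: Rs {icu_heavy:,.0f}/day")      # ~Rs 72k/day
```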
-
The advancement of AI technology brings forth a new era where multiple AI agents interact across various sectors, from finance to healthcare and defense. While this collaborative approach enhances efficiency, it also introduces unforeseen risks. A recent technical report by the Cooperative AI Foundation categorizes multi-agent AI risks into three main failure modes:

🚨 Miscoordination - Despite sharing common goals, AI agents may struggle to synchronize actions due to differing strategies, communication gaps, or unexpected behaviors. For instance, self-driving cars from diverse training backgrounds may fail to coordinate in real-world scenarios.

⚔️ Conflict - Competition among deploying entities can lead AI systems to engage in adversarial behaviors, potentially sparking arms races or escalations. An example is AI-powered military strategies making unpredictable decisions.

🤝 Collusion - AI agents designed for competition might instead form alliances that harm human interests. For example, algorithmic trading bots or pricing algorithms may collaborate in ways that raise prices, negatively impacting consumers.

✨ The report also identifies seven critical risk factors, such as information imbalances and network effects, that amplify the likelihood of these failure modes. As AI integrates into pivotal decision-making processes, establishing governance structures, technical precautions, and interdisciplinary cooperation becomes imperative to mitigate these risks. How can we ensure AI agents uphold human values amidst intricate multi-agent interactions?

🔍 Share your insights - what strategies do you believe are essential in addressing these risks?

#AI #MultiAgentSystems #ArtificialIntelligence #AIrisks #AIGovernance #agenticAI
-
Construction's $1B risk allocation problem that NOBODY wants to address:

When clients provide site data with "use at your own risk" disclaimers, they're not eliminating risk - just creating a ticking time bomb.

The Australian Constructors Association and Consult Australia have joined forces to tackle this issue through their "Partnership for Change" initiative.

What reliance information includes:
- Geotechnical reports
- Concept/reference designs
- Utilities data
- As-built drawings
- Contamination reports
- Condition of existing assets

The impossible position for tenderers:
→ Cannot verify the information during tight tender periods
→ Have no contractual relationship with the original advisors
→ Must accept "all risk" clauses or be disqualified
→ Receive zero relief when information proves inaccurate

The partnership recommends 2 approaches:

PREFERRED APPROACH:
- Client secures third-party reliance from the original advisors
- Original consultants allow reliance for project delivery
- No expectation of 100% accuracy, but a mechanism for collaboration when issues arise
- Clear risk allocation based on ability to control

FALLBACK POSITION:
- Re-investigation of reliance information
- Early Contractor Involvement (ECI) to assess data collaboratively
- Provisional sums with extension-of-time provisions
- Baseline reports that quantify specific risk thresholds

Proof these approaches work:

The Level Crossing Removal Project's alliance model delivered dramatic improvements:
- Estimate omissions: 5% under competitive bid vs 0.9% under the alliance
- Cost: 6.6% overrun under competitive bid vs 2.2% underrun under the alliance
- Tender time reduced from 88 weeks to 38 weeks

The Snowy 2.0 Pumped Storage Project implemented a geotechnical baseline report (GBR) that:
- Set out clear risk allocation between client and tenderer
- Created a principled sharing of complex geological risks
- Prevented tenderers from assuming unknowable risks
- Established reasonable expectations for all parties

As the partnership paper states: "It is incorrect to assume that because a risk is deemed to have been transferred that it no longer exists."

Risk transfer isn't risk management. It's risk multiplication.

Has your organisation implemented any of these collaborative risk approaches? What were the results?