Opinions about AI agents in health care are all over the map. Without tipping my hand too much, here's a simple framework for evaluating any vendor's "agentic" claims: the AAA Test — Autonomy, Awareness, Adaptability.

Autonomy — how does the agent take action?
✅ Correcting and refiling a claim via RCM integration is an action.
✅ Booking an appointment via PM/EHR integration is an action.
✅ Drafting a lab order via EHR automation is an action.
❌ Displaying tasks in a list for humans to complete is not taking action.
If that's all this "agent" does, well, it might be a gimmick.

Awareness — how does the agent acquire the context it needs to make a decision confidently and take action safely?
✅ FHIR API integration across relevant resources is good context.
✅ Read access to the EHR's database is good context.
🎲 Screen scraping / "bots" / RPA is a dice roll.
❌ Manual user input via copy-paste or form completion is not agentic awareness.
If that's what this "agent" needs in all its use cases, it might be a gimmick.

Adaptability — how does the agent handle and learn from scenarios it may not have been explicitly programmed to manage?
✅ Updating claim-scrubbing configurations based on a spiked rejection rate is good adaptiveness.
✅ Directing a conversation back to the topic at hand is good adaptiveness.
🔄 Simple good/bad human feedback interactions can be signs of underlying learning capabilities.
❌ Producing no output, or escalating for human intervention in similar-but-not-identical scenarios without an obvious and explicit safety rationale, shows poor adaptability.
If escalation happens more often than not, it might be a gimmick.

Spend time with the product experts on staff at whatever vendor you're considering, and press for details. If the details are missing or incoherent — you guessed it — it might be a gimmick.
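One way to operationalize the AAA Test during vendor diligence is to record each dimension as a structured checklist. The sketch below is a minimal illustration of that idea; the class name, field names, and the 50% escalation cutoff are assumptions for the example, not part of the framework itself.

```python
from dataclasses import dataclass

# Hypothetical checklist capturing the AAA Test: Autonomy, Awareness, Adaptability.
# Field names and the pass/fail heuristic are illustrative, not a vendor standard.

@dataclass
class AAAAssessment:
    vendor: str
    # Autonomy: does the agent take real action (refile claims, book appointments,
    # draft orders) rather than just listing tasks for humans?
    takes_real_action: bool
    # Awareness: does it acquire context via FHIR APIs or EHR read access,
    # rather than screen scraping or manual copy-paste?
    integrates_structured_context: bool
    # Adaptability: does it handle similar-but-not-identical scenarios,
    # or does it escalate to humans more often than not?
    escalation_rate: float  # fraction of cases escalated to a human

    def might_be_a_gimmick(self) -> bool:
        """Flag vendors that fail any of the three AAA criteria."""
        return (
            not self.takes_real_action
            or not self.integrates_structured_context
            or self.escalation_rate > 0.5
        )

# Example usage with made-up numbers.
candidate = AAAAssessment(
    vendor="ExampleVendor",
    takes_real_action=True,
    integrates_structured_context=False,  # relies on copy-paste input
    escalation_rate=0.2,
)
print(candidate.might_be_a_gimmick())  # True: the awareness criterion fails
```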
How to Assess AI Technologies for Healthcare
Summary
Assessing AI technologies for healthcare means analyzing tools carefully to ensure they are safe, reliable, and adaptive, and that they deliver real value in clinical settings. This process involves understanding how the technology functions, how it handles data, and whether it aligns with healthcare needs and safety standards.
- Evaluate functionality and autonomy: Assess how the AI tool operates, ensuring it takes independent, meaningful actions like automating tasks or making data-driven decisions, rather than relying solely on manual input.
- Examine data integrity: Understand the origin, accuracy, and context of the data the AI system uses. This includes verifying the quality of data collection methods and considering variations in clinical practices and patient demographics.
- Prioritize safety and oversight: Implement governance structures like AI advisory committees and ongoing monitoring processes to ensure the tool consistently performs well, identifies risks, and adapts effectively to new challenges.
In order to advance AI in healthcare, it is crucial that developers understand (1) how the data came about, (2) the accuracy of the instruments and devices used to measure physiologic signals, (3) the impact of variation in the measurement frequency of features and in the capture of outcomes across patients (care phenotypes), and (4) local clinical practice patterns and provider perceptions of the patient, which are typically almost never fully captured but which we know have a huge effect on outcomes, including complications, among other very complex social patterning of the data-generation process. A diversity of expertise, perspectives, and lived experiences is requisite to understanding the data and developing safe AI models. We need to invest in the "who" and the "how" rather than just the "what" if we are to leverage this beast of a technology, which has the potential to truly disrupt legacy systems with data-informed redesign. #mitcriticaldata https://lnkd.in/dbpjEgbc
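To make point (3) concrete, here is a small sketch of checking how measurement frequency varies across patients before modeling; the toy table, the column names (patient_id, lab_name, charttime), and the per-day normalization are assumptions for illustration only, not taken from the post.

```python
import pandas as pd

# Sketch: quantify how often a feature (e.g., lactate) is measured per patient.
# Large variation in sampling frequency is itself a signal of clinical practice
# patterns ("care phenotypes") and should be examined before model training.

labs = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2, 3],
    "lab_name":   ["lactate"] * 6,
    "charttime":  pd.to_datetime([
        "2024-01-01 00:00", "2024-01-01 06:00", "2024-01-01 12:00",
        "2024-01-01 00:00", "2024-01-02 00:00",
        "2024-01-01 00:00",
    ]),
})

def measurements_per_day(times: pd.Series) -> float:
    """Measurements per 24 hours of observed record for one patient."""
    span_hours = (times.max() - times.min()).total_seconds() / 3600
    return len(times) / max(span_hours / 24, 1)  # guard against zero-length spans

freq = labs.groupby("patient_id")["charttime"].apply(measurements_per_day)
print(freq.describe())  # a wide spread hints at differing care intensity, not just physiology
```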
How can health system leaders evaluate and roll out new AI tools? How quickly — or slowly — should you begin? And how do you ensure a safe and effective rollout? It all comes down to governance. To share strategies and best practices, I organized a roundtable with Michael Bouton, Yaa Kumah-Crystal, MD MPH FAMIA, Joel Vengco, and Naresh Sundar Rajan. My takeaways: Invest in clean, healthy data. Move fast but be ready to hit the brakes. And always ensure your AI tools provide value. Here’s how, according to the experts.

1) Tap existing governance
Good news: You may already have the systems to stand up AI governance. Adapt the frameworks, committees, and accountability models that already safeguard your other technology initiatives.

2) Establish an AI committee
Forget the role of chief AI officer. Instead, create an AI advisory committee with experts from legal, compliance, ethics, IP, security, and other departments. With organization-wide visibility, the committee can offer feedback on all AI projects.

3) Don’t give AI a free pass
Vet AI like you would any other technology. Use pilot projects to help determine whether a new tool generates enough value to justify its risks and costs. Make sure to define risk factors for clinical and patient safety.

4) Ensure algorithmic vigilance
Create metrics and establish a process to monitor each tool’s performance and impact. Keep an eye out for model drift. Don’t forget to evaluate every new feature from vendors.

P.S. I created an executive brief based on our conversation. Let me know if you’d like to read it. #AI #artificialintelligence #healthtech
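As one way to make "algorithmic vigilance" concrete, below is a minimal sketch of a drift check using the Population Stability Index (PSI), which compares a model's recent score distribution against a reference window. The synthetic scores, window sizes, and the commonly cited 0.1/0.25 rules of thumb are illustrative assumptions, not recommendations from the roundtable.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference (expected) and a recent (actual) sample of model scores."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid division by zero / log of zero in sparse bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

# Synthetic example: the recent score distribution has shifted since go-live.
rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, size=5_000)   # scores captured at go-live
current_scores = rng.beta(3, 4, size=5_000)    # scores from the most recent month

psi = population_stability_index(baseline_scores, current_scores)
if psi > 0.25:
    print(f"PSI={psi:.2f}: significant drift, escalate to the AI advisory committee")
elif psi > 0.10:
    print(f"PSI={psi:.2f}: moderate drift, investigate")
else:
    print(f"PSI={psi:.2f}: stable")
```

A check like this would run on a schedule per tool, alongside outcome-level metrics, so that drift surfaces as a routine governance signal rather than an incident.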