#ArtificialIntelligence -- clearly a hot topic on everybody's minds (at least, according to my latest poll), but what exactly does it -- and could and should it -- look like in #healthcare?
Tools like #ChatGPT and automated chatbots are gaining traction and have us all questioning what is possible, especially as we deal with the real-world complexities of our healthcare system and some of the highest rates of HCP burnout we've ever seen.
But before we dive headfirst into mass-adoption of #AI, there are implications we must consider --- particularly as they relate to regulation, implementation across all populations, risk of bias, and privacy.
Alongside the public, medical groups and policymakers are already discussing where and how AI comes into play in health care. At their recent annual meeting, the American Medical Association acknowledged the uncertainty and need for regulation around this technology, developing recommendations for the use of "augmented intelligence" and calling for increased oversight of insurers' use of AI for prior authorization. The FDA has also called for "nimble" regulation of generative models before they get entrenched into the DNA of the tech industry and extended into our healthcare system.
Now that we've touched on how we got here and the most recent healthcare AI news, let's look at how we might actually put this technology into practice.
Use Cases
The easiest point of entry for AI into health care is using it to ease administrative burden. Amid continued burnout and staffing shortages, delegating more administrative tasks to this technology frees clinicians to focus on providing quality care to patients.
Take this Nursing Time study from McKinsey & Company: it shows that tasks like documentation, reporting, and medication administration could be tech-enabled, ultimately optimizing nursing time during an average shift. If automation assisted with these tasks, nurses could focus more on patient relationships and on providing quality care.
Source: McKinsey 2023 Nursing Time Survey
However, whatever tech-enabled capabilities we build shouldn't take decision-making out of the hands of clinicians and caregivers, as there is much nuance to a human being and their health.
Effect on Payment & Care Models
Implementing AI across the healthcare system at scale also raises several questions about how we quantify and pay for care.
There is certainly potential for AI to save money in healthcare, but like other advancements (telehealth, for example), it will not gain widespread traction without an incentive to use it. Many health organizations still operate under Fee-for-Service models and are paid based on volume, so implementing AI to streamline how patients are cared for and treated could actually work against those organizations financially. With broader adoption of #ValueBasedCare models, however, we might see quicker uptake of AI-enabled technology that helps remove administrative and other waste from the health care system.
Additionally, to implement and support AI-automated tools internally, organizations will need new and different resources, from managerial support and trained talent (data scientists and engineers) to the necessary technology and data management capabilities. For many organizations, these skill sets and capabilities will be entirely new additions.
We have a long way to go before AI and its ability to automate work become the industry standard, but asking questions, understanding the potential impact, and ensuring we have a robust framework for implementation will be extremely important.
The Pros... and the Cons
Advantages
There is understandably a lot of uncertainty and caution around AI, but as previously mentioned, I also see a lot of possibility in how we can empower and better equip our clinicians and HCPs in the work they do every day.
The more we can lighten their loads and help them delegate tasks (many of which have been added with the advent of the electronic health record), the more we can create flexibility in their profession and help them focus on patient engagement and communication. Additionally, the ability to aggregate and display relevant information in real time will allow for a better experience for the patients we serve.
Areas of Caution & Potential Barriers
While I remain optimistic in AI's potential across industries, there is great opportunity for harm when it is applied to an industry as complex as health care. We need to ensure we have addressed the following areas before we begin widespread adoption:
- Appropriate regulation
- Understanding risk of bias
- Sensitivity in behavioral health care
- Acceptance and speed of mass adoption
- Data privacy & security
With how pervasive misinformation is in our world today, accuracy and digital literacy need to be at the forefront of AI implementation, as these tools largely learn from existing sources. ChatGPT might provide directionally correct information, but for the time being, it will still need a human touch to fact-check that information and personalize it for each patient.
I worry about the potential for bias in these tools as well. If we build AI models that make decisions based on information and systems that already contain bias, we risk baking those biases into the care we provide going forward. Another consideration is for those seeking #MentalHealth treatment: AI is currently not equipped with the sensitivity and empathy that this care requires. Understanding these limitations will be important as we build out capabilities and embed them in the work we do every day.
We must center patients and populations in our policymaking and discussions around the mass adoption of AI -- maintaining quality of care, safeguarding data, and ensuring these capabilities address the needs of all populations (including our rural areas) will be most important.
There is an art and a science to healthcare, and the reality is that AI does not yet capture all of these nuances. We will need to work together to ensure both are included as we move forward.
Looking Ahead + Final Thoughts
So, where do I think we are now with successfully implementing AI in our healthcare system? For me, that's still to be determined. A lot of the work is still ahead of us.
AI has great potential, but like many things we have encountered in the past, some of this might fall victim to being part of the hype cycle. Implementation of new technology is not always effective and meaningful. As with any new capability, we must ask ourselves --- are these things truly long-lasting, fair, and beneficial to streamlining and optimizing the work that we do for our patients and communities? Will they bring value to the people we serve?
As I mentioned in my last newsletter regarding #telehealth, I think the adoption of technology and proposal of best-fit solutions in healthcare is always a noble (and often necessary) pursuit as we work towards true #HealthEquity; but, we must proceed cautiously, understand what the unintended consequences might be, and have safeguards in place to identify and correct them.
In case you're wanting more on AI as we've come to the end of this issue --- fret not! Be on the lookout for an upcoming Part 2 newsletter, where I'll share my conversation with aiHealth President and Ayin Health Solutions Board Member Kyle Swarts on his thoughts regarding AI use-cases. 😊
Until next month.
Ruth 🌸