ADVICE

The physician executive’s crash course on AI in health care

Alexandra T. Greenhill, MD 

There are excellent longer reads on this topic,1,2 but if you don’t have the time to delve into them — or, as the younger generation calls it, TLDR (too long, didn’t read) — here is a quick framing that will help you stay well informed as you navigate the crucial conversations you should be having about artificial intelligence (AI) in health care.

Greenhill A.T. The physician executive’s crash course on AI in health care. Can J Physician Leadersh 2023;9(3):72-77

https://doi.org/10.37964/cr24772

AI is suddenly everywhere — why now?

In the history of innovation, there are boom-and-bust cycles, and AI is no exception. We are emerging from what became known as the “AI winter,” a period of reduced funding and interest in the field that followed several “hype cycles” and their attendant disappointment, skepticism, and funding cuts. Years later, as an innovation matures and becomes usable, interest renews. The current sudden hype around all things AI can be attributed to four key factors.

Increased data availability: As so many aspects of the world we live in have become digitized, we are seeing exponential growth in data, the raw material for AI. These data have enabled the training of AI models, enhancing their accuracy and capabilities.

Technological advancements: Better and faster computing power has put the ability to build and run AI within reach, and AI algorithms themselves have significantly improved, allowing for more accurate predictions and better performance in various tasks.

Increased funding combined with technology cost reductions: The massive increase in funding, combined with the significantly decreasing costs of computing power and storage, has made AI accessible to many organizations and domains, and to many of their staff and users, not just specialists.

Clear and practical applications: The Gartner Hype Cycle (Figure 1), created by the United States research, advisory, and information technology firm Gartner, is a graphic representation of the maturity and adoption of innovative technology over time, across the five key phases that all disruptive technologies go through.3 Overall, AI is in phase 5 in terms of enabling more automation and predictions, and in a new phase 2 for some aspects, such as more autonomous capabilities. For health care, one can argue, we are further behind, as most efforts have been in other domains, and most AI health articles still refer to theoretical models instead of real applications.

Figure 1. The Gartner Hype Cycle for innovative technologies.3

Phase 1: Innovation Trigger is the period of initial breakthrough that creates excitement in scientific circles and often leads the mainstream media to celebrate it as the one thing that will change everything. This creates Phase 2: Peak of Inflated Expectations, where hype about the potential of the technology amplifies, often overestimating its current capabilities. This leads to Phase 3: Trough of Disillusionment: as more people have direct experience with the technology, there is a realization that it does not yet do all that people expected, and disappointment follows as the limitations, challenges, and failures become apparent. Although most abandon the domain for a new invention entering its Phase 1, many work to make the technology deliver on its potential through a usually lengthy and quiet Phase 4: Slope of Enlightenment, in which lessons are learned and improvements are made. Finally, the technology reaches a stable level of maturity and sufficient solid examples of success and enters Phase 5: Plateau of Productivity, where it hits the mainstream news cycle again. It then becomes widely adopted, as it is now reliable and its practical applications and benefits are clear and well understood.

A state of confusion — when you hear AI, what is meant by that?

It’s important to always clarify what is meant by the term AI in a particular instance, as it can refer to anything related to software-driven automation and complex algorithms (Figure 2). The abbreviation itself is ambiguous: AI can refer to augmented intelligence (AuI), also known as artificial narrow intelligence or weak AI, meaning algorithms that support the work of humans with additional insights (an approach also known as human-in-the-loop), or to true artificial general intelligence (AGI), the capability of a machine to think independently.

Figure 2. Visual representation of various types of AI from most machine-like to most human-like.1

For example, the American Medical Association decided to use AI as an abbreviation for augmented intelligence to emphasize its assistive role, in which it enhances human intelligence rather than replaces it.4 Health Canada and Canada’s Drug and Health Technology Agency have defined AI as a broad term for a category of algorithms and models that perform tasks and exhibit behaviours, such as learning and making decisions and predictions.5,6 Similarly, the US Food and Drug Administration (FDA) uses a very broad definition of AI as a device or a product that can imitate intelligent behaviour or mimic human learning and reasoning.7

The most common current use of AI is to designate machine learning (ML): the application of sets of algorithms that analyze a given situation, learn from the outcome of that analysis, and self-adjust to improve accuracy. Often called “smart algorithms,” they are more than simple statistical learning (SL) outputs or “if this, then that” automation. Natural language processing (NLP) is a subset of ML that enables computers to understand, analyze, and generate human language. Deep learning (DL) is the next generation of ML. While ML uses algorithms to parse data, learn from those data, and make informed decisions based on what it has learned, DL structures algorithms in layers to create an artificial neural network (ANN), or simply neural network, that can learn and make intelligent decisions on its own.
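For readers who want a concrete sense of the distinction, the following minimal sketch in Python contrasts the two approaches (it assumes the widely used scikit-learn library, and the vital-sign thresholds, example values, and labels are entirely made up for illustration). The first function is fixed “if this, then that” automation; the second fits a simple model whose behaviour is learned from labelled examples and can be re-fitted, that is, self-adjusted, as new data arrive.

  # Illustrative contrast between simple automation and machine learning.
  # All numbers below are made-up toy values, not clinical guidance.
  from sklearn.linear_model import LogisticRegression

  # 1. "If this, then that" automation: a human fixes the rule in advance.
  def rule_based_flag(heart_rate: float, temperature: float) -> bool:
      return heart_rate > 100 and temperature > 38.0

  # 2. Machine learning: the decision boundary is learned from labelled examples.
  X = [[72, 36.8], [110, 38.5], [95, 37.2], [120, 39.1], [80, 36.5], [105, 38.2]]
  y = [0, 1, 0, 1, 0, 1]  # hypothetical labels: 1 = condition present

  model = LogisticRegression().fit(X, y)

  print(rule_based_flag(102, 38.4))       # output of the fixed rule
  print(model.predict([[102, 38.4]])[0])  # output of the learned model

The point is not the specific model but where the logic lives: in the first case, a human writes the decision; in the second, the decision is derived from the data and changes when the data change.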

Generative AI, like ChatGPT for text or DALL-E for images, is software that, trained on large data sets, generates new content that mimics or resembles human-created content. Unlike traditional AI systems, which analyze and process existing data to make predictions or classifications, generative AI can produce original content, such as images, text, music, or even entire pieces of code, that did not previously exist. However, it is not AGI or even AuI.

Even though all of these solutions fall under the category of AI, they differ vastly in how capable and how independent they are: some simply execute what they are told, some learn continuously, some create their own approaches that are usually incomprehensible to humans, and some are designed as explainable AI (XAI). Hence the importance of asking what is meant by AI in the specific situation.

One additional important definition: AI can be run on real data or on something called a “digital twin,” an idea that started during the US National Aeronautics and Space Administration’s (NASA’s) Apollo project in the late 1960s. NASA assembled two identical spacecraft, using the “twin” on Earth to imitate its counterpart’s actions in real time and guide decisions. A digital twin is now used, usually, but not always, as an anonymized copy of a real data set to enable testing of ideas; it is not an AI solution per se.8
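To ground the data-set sense of the term, here is a minimal sketch in Python (it assumes the pandas library, and every column name and value is hypothetical). It builds a crude anonymized “twin” of a toy patient table on which analyses could be trialled; real de-identification involves far more than dropping two columns, so this only illustrates the concept.

  # Illustrative sketch: build an anonymized "twin" of a toy patient data set.
  # Column names and values are hypothetical; real de-identification must also
  # handle dates, rare values, linkage risk, and governance approval.
  import pandas as pd

  real = pd.DataFrame({
      "name": ["A. Patient", "B. Patient"],
      "health_number": ["123-456", "789-012"],
      "age": [67, 54],
      "on_anticoagulant": [True, False],
  })

  twin = (
      real.drop(columns=["name", "health_number"])  # remove direct identifiers
          .assign(pseudo_id=range(len(real)))       # replace with a study ID
  )

  print(twin)  # ideas can now be tested on the twin, not the real records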

Confirm whether there is a need for AI

One of the biggest challenges of the current hype period is “shiny object syndrome”: people focus on how AI can be used to solve every problem, instead of treating AI as one option among other tools and then choosing the solution that makes sense. In simple terms, even if you have access to a helicopter, it makes no sense to use it to get to the park or the local store. Three questions can ground the choice:

  1. What problem are we trying to solve? 
  2. Are there other ways of solving this problem? 
  3. How do the different options compare in terms of: 
    • Will it work? Accuracy, risks, biases, and ethical considerations 
    • Will the output be useful and actionable? Real direct and/or indirect benefit to care
    • What will be its impact beyond? Compatibility with existing workflows and technologies
    • What does it take? Cost, effort, and timeline to deploy and then to run

A great example of using AI where it makes sense is reviewing vast numbers of patient charts to find patients with treatable conditions that are not currently being treated. It is a poor use of AI to power a chatbot that answers simple questions on a website, such as when the clinic is open and how to get to it.

Examples of promising AI uses and key caveats 

Drug–drug interactions (DDIs) cause adverse effects in patients, with serious consequences and increased costs. Almost two-thirds of patients receiving critical medical care in hospital develop at least one possible DDI. Manual detection is time consuming and expensive. However, AI can review the records of hundreds of thousands of patients to identify known DDIs and help generate hypotheses about new, unknown DDIs that humans can then review and make decisions about. Models for this are promising, but they need considerable additional development before they can be implemented.9
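To make the scale argument concrete, here is a minimal sketch in Python (the interaction pairs, chart identifiers, and medication lists are invented; a real system would draw on a curated interaction database). Even the simpler half of the task, flagging known DDIs, amounts to checking every medication pair in every chart against a reference table, which is tedious for humans and trivial to automate; the hypothesis-generating models described above build on this kind of exhaustive screening.

  # Illustrative sketch: screen medication lists against known drug-drug
  # interactions. The pairs and charts are invented examples only.
  from itertools import combinations

  KNOWN_DDIS = {
      frozenset({"warfarin", "aspirin"}),
      frozenset({"simvastatin", "clarithromycin"}),
  }

  patients = {
      "chart-001": ["warfarin", "aspirin", "metformin"],
      "chart-002": ["simvastatin", "lisinopril"],
  }

  # Check every medication pair in every chart against the reference table.
  for chart_id, meds in patients.items():
      for pair in combinations(meds, 2):
          if frozenset(pair) in KNOWN_DDIS:
              print(f"{chart_id}: possible interaction {pair[0]} + {pair[1]}")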

However, all the excitement about AI’s potential needs to be balanced. For example, sepsis is the third leading cause of death in hospitals in the United States.10 Thus, when one of the largest US developers of electronic health record (EHR) software, Epic Systems, created the proprietary Epic Early Detection of Sepsis Model, which uses AI to help physicians diagnose and treat sepsis sooner, the model was rapidly adopted by 170 hospitals. Like many other AI tools, it did not have to undergo FDA review, and there is no formal system in place to monitor its safety or performance. A study published in JAMA Internal Medicine in 2021 found that it failed to predict sepsis in 67% of patients who developed it and that it generated false sepsis alerts for thousands of patients who did not.11 Without this external study, these outcomes would not have been known, and an opportunity would have been missed to improve the tool and ensure that it works.

Deciding to use AI: powerful additional questions to ask

Physician leaders need to know the right questions to ask when guiding the selection and integration of AI and ensuring that the various dimensions are covered: clinical usefulness, quality and safety of patient care, strategy, and ethics. In addition to the questions listed in the previous sections, which were intended to help clarify what kind of AI is being discussed and whether AI should be considered at all, the following set of 10 questions focuses on practicalities.

Regulatory approval: Is such approval required or may it soon be required? Are there standards to be met? 

Accuracy and performance: What are the false positives and negatives? What are the stakes of the predictions? Is there a human in the loop?

Usability and effectiveness: What has been the end user experience and feedback? 

Data training and use: What data are used in the AI training? How are data collected, cleaned, and labeled? How much data do you need to make initial predictions? How does the system handle uncertain or incomplete data? How much data do you need to make personalized predictions? How will the data you accumulate at scale differ from the data you start with?

Data and intellectual property ownership: Who owns the data, algorithms, and output?

Ethical and bias considerations: What assumptions are embedded in the data and AI?

Interpretability and explainability: How transparent is the AI’s decision-making process? 

Data security and privacy: What are the policies and practices? 

Scalability and long-term viability: Can the AI system handle increasing amounts of data or expanding health care needs? What is the roadmap for future improvements and support?

Impact: Do we have real-world examples of the AI’s performance? Are there any third-party evaluations or studies?

Without becoming experts in AI, physician leaders can thus ensure that AI serves as a powerful tool in advancing health care while prioritizing patient well-being and maintaining the integrity of health care. The onus should be on the AI technology providers to offer clear and easy-to-follow answers to the questions above. If they are unable to do so, that may be a strong red flag for the proposed project.

Where next?

Sir William Osler, the father of modern medicine, observed, “The philosophies of one age have become the absurdities of the next, and the foolishness of yesterday has become the wisdom of tomorrow.” His words are a great reminder that we live in a special moment in time, where the actions we take today will define the decade, if not the century, ahead. The creative destruction of what we believe and do today is underway, and physicians in general, but especially physician leaders, need to become better at understanding the opportunities and challenges of AI for health care if we are to make quick and effective gains.

This is the first in a series of articles focused on demystifying AI. Future articles will cover clinician and patient attitudes toward using AI in health care; a review of some successful and unsuccessful uses of AI in health care; how to successfully deploy AI solutions in health care; and how to address the tension between innovation and learning, on one hand, and the need for control and regulation, on the other.

References

1. Greenhill AT, Edmunds BR. A primer of artificial intelligence in medicine. Tech Innov Gastrointest Endosc 2020;22(2):85-9. https://doi.org/10.1016/j.tgie.2019.150642

2. Byrne MF, Parsa N, Greenhill AT, Chahal D, Ahmad O, Bagci U, editors. AI in clinical medicine: a practical guide for healthcare professionals. Hoboken, NJ: Wiley-Blackwell; 2023.

3. Blosch M, Fenn J. Understanding Gartner’s hype cycles. Stamford, CT: Gartner Research; 2018. https://www.gartner.com/en/documents/3887767

4. Augmented intelligence in medicine. Chicago: American Medical Association; 2018. https://tinyurl.com/bdjkys33

5. Guidance document: software as a medical device (SaMD): definition and classification. Ottawa: Health Canada; 2019. https://tinyurl.com/mw7k5ntn

6. An overview of clinical applications of artificial intelligence. Ottawa: Canada’s Drug and Health Technology Agency; 2022. https://tinyurl.com/3mvxraxb

7. Artificial intelligence and machine learning in software as a medical device. Silver Spring, MD: US Food and Drug Administration; 2021. https://tinyurl.com/2e6d2fzf

8. Sun T, He X, Li Z. Digital twin in healthcare: recent updates and challenges. Digit Health 2023;9:20552076221149651. https://doi.org/10.1177/20552076221149651

9. Han K, Cao P, Wang Y, Xie F, Ma J, Yu M, et al. A review of approaches for predicting drug–drug interactions based on machine learning. Front Pharmacol 2022;12:814858. https://doi.org/10.3389/fphar.2021.814858

10. What is sepsis? Atlanta: Centers for Disease Control and Prevention; 2023. https://www.cdc.gov/sepsis/what-is-sepsis.html

11. Wong A, Otles E, Donnelly JP, et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med 2021;181(8):1065-70. https://doi.org/10.1001/jamainternmed.2021.2626 Erratum in: JAMA Intern Med 2021;181(8):1144.

Author

Alexandra T. Greenhill, MD, is CEO and chief medical officer at Careteam Technologies and associate clinical faculty in the Department of Family Medicine, University of British Columbia. She is also co-editor of AI in Clinical Medicine: A Practical Guide for Healthcare Professionals (2023). 

Correspondence to: [email protected]

This article has been peer reviewed.