HEALTH INFORMATICS: The physician executive’s crash course on AI in health care
Part 2: What patients and physicians think

Alexandra T. Greenhill, MD

This second in a series of articles on artificial intelligence (AI) in health care presents six core concepts that will help physician leaders frame their understanding of the rapidly evolving views of patients and physicians on AI. It covers biases in data collection, the need for rules, the implications for health care workers, how to avoid assumptions, patients’ attitudes, and hidden inequities.

KEY WORDS: artificial intelligence, health data, data collection, interpretation

Greenhill AT. The physician executive’s crash course on AI in health care. Part 2: What patients and physicians think. Can J Physician Leadersh 2024;10(1):9-10 https://doi.org/10.37964/cr24777

Because artificial intelligence (AI) is such a massive force of change in this decade, understanding the knowledge level and perspectives of patients and physicians is crucial: the successful integration of AI into health care requires their support. It is also important to understand how patients feel about the collection of personal health data and data about provider actions, as these data sets are key to the creation and optimization of AI solutions. When patients and physicians do not understand, accept, or trust AI applications, adoption slows and the time to benefit from these promising technologies lengthens.1,2

The numerous studies published in medical journals are heterogeneous in study population, study design, and the field and type of AI under study.3 Similar issues affect surveys conducted by various health care organizations, governments, policy groups, and consulting firms.

Although it is, of course, useful to stay aware of the latest published results, the beliefs and concerns of patients and physicians are rapidly evolving, driven by the fast entry of AI tools into work and life outside of health care. This creates an additional challenge to understanding the trends. Here are six core concepts that will help physician leaders stay better informed about what is actually happening.

Biases related to surveys can lead to gaps in the data collected and inaccurate insights

Always consider that a survey may have failed to capture the perspective of important subgroups of people or may be overreporting or underreporting key dimensions. After screening over 2500 articles on patient and public perceptions of AI from 2000 to 2020, reviewers concluded that the methodological quality of these studies was mixed, with selection bias a frequent issue.3

Nonresponse bias, a form of systematic bias, can also be introduced when some survey participants simply don’t respond. Studies have shown that there are often important differences between respondents and nonrespondents. People may choose not to respond for a number of reasons, including who is doing the survey, how the survey is described, how long it is, its format, how it’s distributed, and how easy the questions are to understand. Random selection is often used to ensure that invited participants are representative; however, it does not ensure that those who actually respond are representative.
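
To make the mechanism concrete, consider a minimal simulation sketch. The population, the response rates, and every name in the code are hypothetical assumptions chosen for illustration, not figures from any cited survey:

```python
import random

random.seed(0)

# Hypothetical population: half are comfortable with AI, half are not,
# so the true comfort rate is exactly 50%.
population = [True] * 5000 + [False] * 5000

# Assumed (illustrative) response rates: comfortable people answer the
# survey 60% of the time; uncomfortable people only 20% of the time.
def responds(comfortable):
    return random.random() < (0.60 if comfortable else 0.20)

respondents = [person for person in population if responds(person)]

true_rate = sum(population) / len(population)
observed_rate = sum(respondents) / len(respondents)

print(f"True comfort rate: {true_rate:.0%}")      # 50%
print(f"Survey estimate:   {observed_rate:.0%}")  # ~75%, inflated by nonresponse
```

Even with perfectly random sampling of the invitation list, the estimate lands near 75% rather than 50%, purely because of who chose to answer.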

The design and reporting of survey questions may be biased, especially in summaries. Many surveys use leading questions instead of open-ended questions and include generalizations in the summary that reflect the bias of the organization or author. For example, “How concerned are you about the use of AI?” is very different from “How do you feel about the use of AI?” The abstract of Canada Health Infoway’s 2021 Canadian Digital Health Survey5 states that “half of Canadians surveyed feel knowledgeable about AI.” However, the full report shows that, although it is technically true that “50% of people surveyed said they are very or somewhat knowledgeable about AI,” only 8% said they feel “very knowledgeable” while 42% said they are “somewhat knowledgeable.” In addition, 32% said they are “not very knowledgeable” and 16% said they are “not at all knowledgeable.” The same results could therefore have been reported in the abstract as: “Almost all people (92%) don’t feel very knowledgeable about AI.” It is key to access the original questions and look at the actual response rates.
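
The arithmetic behind these two competing headlines can be made explicit. The short sketch below simply regroups the four response categories quoted above; the percentages and category labels are from the survey as cited, while the code itself is only an illustration:

```python
# Response breakdown quoted from the 2021 Canadian Digital Health Survey.
responses = {
    "very knowledgeable": 8,
    "somewhat knowledgeable": 42,
    "not very knowledgeable": 32,
    "not at all knowledgeable": 16,
}

# Framing 1: collapse the top two categories into "knowledgeable."
knowledgeable = responses["very knowledgeable"] + responses["somewhat knowledgeable"]
print(f"Feel knowledgeable about AI: {knowledgeable}%")  # 50%

# Framing 2: collapse everything except "very knowledgeable."
not_very = 100 - responses["very knowledgeable"]
print(f"Don't feel very knowledgeable about AI: {not_very}%")  # 92%
```

The same four numbers support both headlines; only the grouping changes.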

Beyond surveys, it is also important to do qualitative studies.1,2,5,8 When evaluating AI in health care, researchers have found that patients draw on a variety of factors to contextualize these new technologies, including previous experiences of illness, interactions with health systems and established health technologies, comfort with other information technology, and other personal experiences. Key informant interviews, deliberative dialogue, and a multistakeholder design lab process on how AI should be implemented in health care have revealed important insights that a survey alone could not capture, including key differences between deploying AI and deploying other health care innovations.1,2

There are also significant differences between opinions and behaviour. Studies that assess people’s reactions to available AI tools are, therefore, different from those that assess hypothetical, broadly defined AI.3 For example, in surveys, people say almost universally that they are very concerned about their privacy. However, most people never check how the apps and devices they use collect and manage their data, and 25% of health care apps, many with hundreds of thousands of downloads, lack even a privacy policy or terms of use.6 It is important to ask what people think, feel, and say, but just as important to monitor what they actually do.

Need and convenience are powerful drivers of behaviour that differs from what one imagines one would do. For example, the COVID-19 virtual agent is an AI chatbot operated by the BC Centre for Disease Control. Launched in April 2020, by early December of that year it had held conversations with over 2.89 million people and answered approximately 25 000 questions a day about COVID-19. Users raised no issues or concerns, especially as the chatbot did not collect any personal health information.9

Patients and the general public are becoming more informed and excited about AI in health care, all while signalling the need for rules and caution

Numbers vary, but a growing number of studies and surveys show that people report feeling more knowledgeable about AI and more comfortable with its use as a tool in health care. Comfort is higher when there is transparency about whether AI is being used, and when legal and policy assurances guarantee that privacy and personal data are protected, both when building and when running an AI system, and that the data are not used to harm or discriminate against them.4,5

Most people report wanting control over their personal health data, and their willingness to share depends on which organization is collecting the data and the intended use. The framing of, and information provided about, the proposed use also influence how people feel about AI.4,5

Finally, most people feel that it is important to continue to invest in innovative technologies, such as AI in health care, especially to improve access and outcomes.5

Health care providers are interested in being more informed about AI in health care, but they are tired and innovation-weary

Physicians and other providers are interested in AI but cautious, as they have experienced the challenges of moving from paper to digital records in hospitals and clinics. Their concerns fall into two domains: matters related to technology performance (for example, evidence, accuracy, safety, bias) and people-and-process factors (for example, impact on workflows, equity, reimbursement, the doctor–patient relationship, liability).10 In addition, their reactions to AI tools that improve access, care outcomes, and experiences differ from their views of tools that support back-office administrative practices focused on efficiency gains and cost containment. The staggering level of burnout in the profession in the wake of the COVID-19 pandemic also influences providers’ attitudes toward new innovations that require learning and adaptation of existing workflows.

Be cautious about assumptions and make efforts to gain more granular insights

For example, studies have found no support for the hypothesis that younger people, who are assumed to be more exposed to and knowledgeable about emerging technology, have more favourable opinions of and responses to the use of AI in health care compared with older people.11 Similarly, studies have not supported the hypothesis that previous experience with digital technologies that use AI, along with satisfaction with those interactions, would predict more positive perceptions among Canadians.11

Despite being less knowledgeable about AI, older Canadians are significantly more comfortable with AI in specific branches of health care than younger Canadians.11 Common assumptions that older groups have difficulty navigating technology, lack experience or knowledge of technology, and prefer traditional methods of care over web-based care are also inaccurate: older individuals are increasingly comfortable with technology, and technologies are becoming simpler to use.11

People’s attitudes toward the human-versus-machine dynamic are more complicated and less predictable than initially assumed

In surveys, most people indicate that they value continued human contact and discretion in service provision more than any speed, accuracy, or convenience that AI systems might provide, and that they are very concerned about losing human interaction with health care providers.5 However, in real-life scenarios, many patients show a preference for the speed and reliability of a chatbot and even report finding it easier to discuss sensitive issues with a machine than with a human; in one study, chatbot responses were rated significantly higher than physician responses for both quality and empathy.12

People in general tend to perceive machines as less emotional and, therefore, more objective, secure, and impartial than humans; many don’t realize that AI algorithms are a product of human design and often inherit our mistakes and biases. An AI system carries the bias of the data used to build it and of the creators of its algorithms; therefore, the question is not “is there bias” but rather “what bias exists.”13
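
As a minimal sketch of “bias in, bias out,” the toy example below uses entirely hypothetical triage data and a deliberately naive model; it illustrates the principle, not any deployed system:

```python
from collections import defaultdict

# Hypothetical historical decisions in which group B was systematically
# under-referred despite identical symptom severity.
training_data = [
    ("A", "high", "refer"), ("A", "high", "refer"), ("A", "low", "monitor"),
    ("B", "high", "monitor"), ("B", "high", "monitor"), ("B", "low", "monitor"),
]

# Naive "model": memorize the most common historical decision for each
# (group, severity) combination.
counts = defaultdict(lambda: defaultdict(int))
for group, severity, decision in training_data:
    counts[(group, severity)][decision] += 1

def predict(group, severity):
    options = counts[(group, severity)]
    return max(options, key=options.get)

# Identical severity, different group: the model faithfully reproduces
# the human bias embedded in its training data.
print(predict("A", "high"))  # refer
print(predict("B", "high"))  # monitor
```

Correcting such a system means fixing the data or the objective, not just the code.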

Patients can now access AI tools that are often better than those used by providers, which can democratize access, but also deepen inequities

Consumer-grade health care AI, which by definition is not clinically validated, is different from medical-grade AI, which requires clinical validation and regulatory approval.7 However, consumer-facing health technologies are increasingly on par with, or even better than, those made available to physicians, as companies forgo the time and expense of medical approval and choose the direct-to-consumer route, positioning their innovation as a wellness product. This can cause issues, as physicians are not prepared to respond to patients who use such new AI-based tools.6

There are also concerns about deepening the divide between “haves” and “have nots,” as ability to pay often determines access to these consumer-grade tools, and direct experience with AI then informs acceptance of AI. The concern about creating more inequities applies not only to access to AI tools, but also to access to health care services: consumer-grade tools may detect issues at an earlier stage, leading to “queue jumping,” or may generate false positives that must be ruled out using health system resources. “Pro-AI” patients tend to be more comfortable with clinical AI use, have a higher degree of education, are more knowledgeable about AI use in their daily lives, and see AI use as a significant advancement in medicine, while “AI-cautious” patients report the lack of human qualities and low trust in the technology as detriments to AI use.14 A number of organizations now provide free access to digital health tools, including AI, in an effort to close gaps in population health and address inequities.6

Summary

These six core concepts can help physician leaders frame their understanding of the rapidly evolving thinking of patients and physicians about AI. Digital technologies, in general and in health care, have led to unexpected positives and negatives. The most important thing to remember is that assumptions must be verified: think about what may be missing, not just how to interpret the trends being shown.

Future articles in this AI-focused series will cover some successful and unsuccessful uses of AI in health care, how to successfully deploy AI solutions in health care, and how to address the challenge of balancing innovation and learning with the need for control and regulations.

References

1. Implementing artificial intelligence in Canadian healthcare: a kit for getting started. Ottawa: Healthcare Excellence Canada; 2021. Available: https://tinyurl.com/bddx85t4

2. Darcel K, Upshaw T, Craig-Neil A, Macklin J, Steele Gray C, Chan TCY, et al. Implementing artificial intelligence in Canadian primary care: barriers and strategies identified through a national deliberative dialogue. PLoS One 2023;18(2):e0281733. https://doi.org/10.1371/journal.pone.0281733

3. Young AT, Amara D, Bhattacharya A, Wei ML. Patient and general public attitudes towards clinical artificial intelligence: a mixed methods systematic review. Lancet Digit Health 2021;3(9):e599-e611. https://doi.org/10.1016/S2589-7500(21)00132-1

4. Khullar D, Casalino LP, Qian Y, Lu Y, Krumholz HM, Aneja S. Perspectives of patients about artificial intelligence in health care. JAMA Netw Open 2022;5(5):e2210309. https://doi.org/10.1001/jamanetworkopen.2022.10309

5. 2021 Canadian Digital Health Survey. Ottawa: Borealis; 2021. https://doi.org/10.5683/SP3/CEYG42

6. Greenhill AT. Chapter 38: AI-enabled consumer-facing health technology. In: Byrne MF, Parsa N, Greenhill AT, Chahal D, Ahmad O, Bagci U, editors. AI in clinical medicine: a practical guide for healthcare professionals. Hoboken, NJ: Wiley Blackwell; 2023.

7. Barkal JL, Stockert JW, Ehrenfeld JE, Cohen LK. Chapter 44: AI and the evolution of the patient–physician relationship. In: Byrne MF, Parsa N, Greenhill AT, Chahal D, Ahmad O, Bagci U, editors. AI in clinical medicine: a practical guide for healthcare professionals. Hoboken, NJ: Wiley Blackwell; 2023.

8. Richardson JP, Curtis S, Smith C, Pacyna J, Zhu X, Barry B, et al. A framework for examining patient attitudes regarding applications of artificial intelligence in healthcare. Digit Health 2022;8. https://doi.org/10.1177/20552076221089084

9. Part 2: Virtual health resources to support BC citizens. Victoria: Provincial Health Services Authority; 2020.

10. Allen MR, Webb S, Mandvi A, Frieden M, Tai-Seale M, Kallenberg G. Navigating the doctor-patient-AI relationship — a mixed-methods study of physician attitudes toward artificial intelligence in primary care. BMC Prim Care 2024;25(1):42. https://doi.org/10.1186/s12875-024-02282-y

11. Cinalioglu K, Elbaz S, Sekhon K, Su CL, Rej S, Sekhon H. Exploring differential perceptions of artificial intelligence in health care among younger versus older Canadians: results from the 2021 Canadian Digital Health Survey. J Med Internet Res 2023;25:e38169. https://doi.org/10.2196/38169

12. Ayers JW, Poliak A, Dredze M, Leas EC, Zhu Z, Kelley JB, et al. Comparing physician and artificial intelligence chatbot responses to patient questions posted to a public social media forum. JAMA Intern Med 2023;183(6):589-96. https://doi.org/10.1001/jamainternmed.2023.1838

13. Vicente L, Matute H. Humans inherit artificial intelligence biases. Sci Rep 2023;13:15737. https://doi.org/10.1038/s41598-023-42384-8

14. Armero W, Gray KJ, Fields KG, Cole NM, Bates DW, Kovacheva VP. A survey of pregnant patients’ perspectives on the implementation of artificial intelligence in clinical care. J Am Med Inform Assoc 2022;30(1):46-53. https://doi.org/10.1093/jamia/ocac200

Author

Alexandra T. Greenhill, MD, is CEO and chief medical officer, Careteam Technologies, and associate clinical faculty in the Department of Family Medicine, University of British Columbia. She is also co-editor of AI in Clinical Medicine: A Practical Guide for Healthcare Professionals.

Correspondence to: agreenhill@getcareteam.com

This article has been peer reviewed.