HEALTH INFORMATICS

Leadership in the era of AI: skills for 2025 and beyond

Mamta Gautam, MD, and Kathleen Ross, MD

This article was inspired by the workshop Dr. Mamta Gautam and Dr. Kathleen Ross hosted at the 2025 Canadian Conference on Physician Leadership in May.

As artificial intelligence (AI) transforms the fabric of health care, physician leaders face an urgent call to lead with vision, nimbleness, and a deep sense of ethical responsibility. AI is not a silver bullet; it is, however, here to stay. How we lead through disruption in this era of AI will shape the future of medicine.

KEY WORDS: Leadership, Artificial Intelligence, Ethics, AI

Gautam M, Ross K. Leadership in the era of AI: skills for 2025 and beyond. Can J Physician Leadersh 2025;11(2): 91-101. https://doi.org/10.37964/cr24792

Physician leaders will be asked to balance the competing priorities of innovation, ethics, and implementation. Their leadership will be crucial, steering us toward outcomes that serve both efficiency and the humanity of our profession.

Setting the stage

AI is not new technology; however, its potential in health care became more tangible following the release of ChatGPT in 2022. From enhancing decision-making to optimizing workflows, AI is poised to disrupt traditional medical models.

This is not the first time health care has been redefined by technological advancements. The shift from paper- and pager-based systems to digital solutions, such as electronic medical records and secure messaging, was already significant. Today, AI promises to further accelerate innovation, making it essential for health care leaders to understand its implications. Remember, though, that for something to be truly innovative, it must be new and it must generate value.

Physician leaders will be tasked with navigating this evolving, complex landscape with vision, curiosity, and a commitment to improving patient care, while keeping an eye on both financial costs and environmental impact. As clinicians, it will be our professional responsibility to ensure AI use meets established safety and practice standards.

Understanding the technology

John McCarthy defined AI in 1955 as the intelligence demonstrated by machines. We now consider AI much more broadly as a technology that can learn, make decisions, and act. AI, however, is not just one entity and cannot be applied to every task.

Machine learning (ML) is a type of AI that learns from experience without being explicitly programmed. ML uses algorithms (step-by-step procedures or sets of rules that a computer follows to accomplish a specific task) to discover patterns in data and use them to make predictions or decisions.
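
To ground the idea, here is a minimal sketch of supervised machine learning in Python, assuming the open-source scikit-learn library and its bundled breast-cancer data set (illustrative choices on our part; the article endorses no specific tool):

```python
# A minimal machine-learning sketch: the model learns patterns from
# labelled examples and then predicts labels for cases it has never seen.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)  # tumour features, benign/malignant labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)         # "learning from experience": fit to labelled data
print(model.score(X_test, y_test))  # accuracy of predictions on unseen cases
```

The point for leaders is that the rules are learned from the data, so the quality and representativeness of the data determine the quality of the predictions.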

Deep learning (DL), a type of ML, automatically learns complex patterns across multiple layers of large data sets. Given enough good-quality data and parameters, DL creates its own strategies for action; it finds its own way to the conclusion. Data drive, modify, and improve the model without direct human commands. A good example is the deep neural networks applied to imaging to detect breast cancer: rather than relying on human-defined rules, these models learn their own features for differentiating normal from abnormal, and their outputs are compared with human interpretations to drive rapid improvement.
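
To illustrate what "multiple layers" means in practice, the following sketch assumes the open-source PyTorch framework, with a hypothetical 64 x 64 scan and a normal/abnormal output (our stand-ins, not a clinical tool):

```python
# A minimal deep (multi-layer) model: each stacked layer can learn
# progressively more abstract patterns, with no human-written rules.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Flatten(),             # unroll the image into a vector of pixel values
    nn.Linear(64 * 64, 128),  # early layer: low-level patterns (edges, textures)
    nn.ReLU(),
    nn.Linear(128, 32),       # deeper layer: higher-level features
    nn.ReLU(),
    nn.Linear(32, 2),         # output: scores for normal vs. abnormal
)

scan = torch.randn(1, 64, 64)  # stand-in for one grey-scale image
print(model(scan))             # untrained scores; training on labelled scans refines them
```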

Large language models (LLMs) enable computers to understand, interpret, and generate meaningful human language, driving such innovations as AI scribes and chatbots. LLMs interpret our questions — even emotional ones — as data, to generate more helpful human-sounding responses.
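
As a small illustration of language generation, the sketch below assumes the open-source Hugging Face transformers library and the small public "gpt2" model (illustrative stand-ins; clinical scribes and chatbots run on far larger, purpose-built models):

```python
# A minimal language-generation sketch. The model continues the prompt with
# statistically likely text; it predicts words rather than "knowing" medicine,
# which is why its outputs need human review.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The three most important skills for physician leaders are"
result = generator(prompt, max_new_tokens=40)
print(result[0]["generated_text"])
```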

Generative AI (Gen AI) can create novel content based on patterns and structures. It learns by extracting useful information from a variety of data sources and then generating insights to inform decisions, predictions, or better optimization of its own performance. It can produce text, images, music, or even entire video simulations that mimic or resemble novel human content. Applied to medical imaging, Gen AI can process millions of images, compare them with a database of known outcomes, and provide a highly accurate diagnosis that rivals or even surpasses human experts. AI applied to calculating bone age is one such example.

Despite all the hype and hope, AI today largely focuses on specific, single, defined tasks. AI output, based on data input, responds to a single query — even if that query is complex and crosses silos of data. Essentially, today’s AI transforms large amounts of data inputs into actionable information.

From large language models and predictive analytics to ambient scribes and robotic-assisted surgeries, AI tools are no longer hypothetical in health care.

AI offers us an opportunity to improve our own productivity and the quality of care we deliver to patients through implementation of a series of specific tools. Tools supporting clinicians will generally address repetitive tasks: AI scribes, clinical decision pathways, image interpretation, and chart review and summarization, even the building of differential diagnoses. AI-driven system-level tools, such as triage, scheduling, and predictive staffing applications, will scan data from multiple sources, summarize them, and implement improvement plans much more quickly than human data analysts. Patient-empowering AI resources, such as education tools, consult visit summaries, wearables, home monitors, and chatbot-driven disease management supports, will drive substantial change in how and where care is delivered, truly enabling personalized medicine.

As these technologies evolve, leaders must understand both the AI resources’ technical capabilities and their limitations. This includes how AI models are trained and validated, where biases may exist, and how data privacy is protected.

Key leadership skills for the AI era

While AI introduces unparalleled opportunities, it also brings challenges that demand careful leadership. Several key areas must be addressed as we integrate AI into health care systems. Exploring how LEADS, a time-tested Canadian leadership model, remains essential in today’s AI era will help us navigate these increasingly active agents in our clinical and administrative settings.

Lead self

With so much evolving so quickly, leaders must develop digital literacy to understand AI and continue to engage in lifelong learning, increasing their knowledge of this topic as part of ongoing professional development.

AI is best seen as a support tool rather than a replacement for critical decision-making. Leaders must be self-aware, understanding their own comfort levels with AI while supporting staff in flagging mistakes or biases. Encouraging open communication and feedback loops will foster a collaborative environment where AI serves as an extension of the physician’s capabilities.

There is no doubt that we will be approached by many purveyors of this technology, as AI applications in health care strive to address our collective pain points: managing increasingly complex patients in an increasingly fragmented and complex care system. Fortunately, we don’t all need to be data scientists; however, we must understand the capabilities and limitations of AI, the ethical considerations, and the potential impact on our organizations and the broader industry.

Ethical decision-making, bias recognition, and curiosity have never been more important. Being aware of knowledge gaps in ourselves and in the technology, which potentially undermine equity and social justice, will lessen unintended harms.

Engage others

One of the cornerstones of effective AI integration is building trust among frontline health care providers. Leaders need to foster education on AI technologies, ensuring that staff understand the capabilities and limitations of these tools. Clear communication and explainability of AI processes are critical to engender confidence and transparency.

Leaders must develop and share a clear vision and articulate how AI fits into the broader strategy of their organization. This vision should be patient-centric, focused on AI-driven outcomes improvements, enhanced patient experiences, and more efficient and effective care delivery models. But vision alone is not enough; being adept at implementation is required. Turning our vision into reality demands careful planning, investment, and forging partnerships with technology innovators in ways we are not accustomed to undertaking.

AI has the potential to upend traditional models of care, reallocate roles and responsibilities among health care providers, and shift the balance of productivity toward those who embrace these new technologies. Therefore, communication and advocacy skills are vital. Focusing on both our people and the culture of innovation in our workspaces, while simultaneously addressing concerns over equity, privacy, and trust, will lead to successful adoption and spread. Jobs will change and we will need to understand how.

This is all positive. However, before jumping to AI as a solution, it is important to step back and ask, “what problem are we trying to solve?” Clearly defining for our teams how AI will augment workflows, encouraging employees to inform themselves, and providing opportunities to support their learning will require a mindset shift.

We must communicate that, while AI can be used to enhance human judgement, capabilities, and expertise, it is not a replacement for human compassion in care settings. Although AI may predict clinical deterioration, only humans can comfort a family in distress. AI may generate a care plan, yet humans are best poised to explain it with compassion. As humans, we continue to have our best outcomes when we connect — human to human.

Accuracy remains an issue, as generative AI does, at times, fill in gaps in data with information that it determines fits the situation — the so-called “hallucinations.” Current tools sometimes cite references that do not exist. Key to the transition to AI is trust, and trust is currently lagging. Adoption remains slow because of concerns over accuracy and responsibility.

In 2023, the American Medical Association first highlighted that earning clinician trust is critical to accelerating the adoption of AI into patient care.1 Standards must be established and regulatory systems revisited. Who is responsible if AI makes an error? Application of ethical frameworks, such as the World Health Organization’s Ethics and Governance of Artificial Intelligence for Health,2 will further trust for both patients and clinicians.

Achieve results

Change often brings fear, and the transition to AI is no exception. Leaders must acknowledge and manage the resisters to change, which include not only systemic concerns but also personal fears, such as career security or retirement uncertainty. Leaders will need to manage not just the change (the external aspects that will be modified), but also the transition (people’s internal reactions to the change). All change creates a loss, as we let go of what we used to do and embrace the new way. Recognizing the stages of grief that often accompany this loss as people transition from old systems to new ones is an essential aspect of empathetic leadership.

In some cases, AI is a solution in search of a problem. To meet with success, AI should not be implemented for its own sake, but to address tangible issues, such as clinician burnout, inefficiency, inequity, or clinical gaps. The financial costs and environmental impacts are not negligible. We need to be clear that the effort to incorporate AI into a specific process is something needed or wanted by those in the system. “Readiness to change,” along with a careful examination of risks, liability, privacy, and implementation plans, should be considered in our communications, including what clinician upskilling or increased physician engagement will be required to adopt new technologies.

Implementation science skills become indispensable. Setting goals and direction, aligning with our values and the evidence, and tracking measurable outcomes are all vital. If our teams can see the gaps and tangible solutions, they will be incentivized to build on their own skills and measure improvement, making change management a great deal less onerous.

For example, AI scribes have demonstrated the ability to reduce documentation time by up to 90%, according to a recent OntarioMD study.3 The same study noted a significant improvement in clinician cognitive load, job satisfaction, and work–life balance. Yet this “heavenly” advancement raises equally critical questions: How are recordings stored? Who ensures data deletion? Can patients truly offer informed consent when the technology is opaque even to providers? This duality of promise and peril is at the heart of AI leadership.

In addition, leaders must drive change while maintaining critical thinking skills. Humans tend to quickly become reliant on evolving technology. How many of us can navigate a new city without the assistance of Google Maps? Normalizing active involvement with AI tools, including regularly reviewing outcomes with our teams, should help to limit automation bias, maintain clinician autonomy, and reduce the risk of deskilling our health care workforce.

Ethical considerations are paramount in deploying AI in health care. Leaders must ensure that AI models are designed with accountability and responsibility, accepting the possibility of errors and building awareness about biases. Tools, such as reflexive AI, which identifies its own biases, and adversarial AI, where one AI interrogates another, can help highlight gaps and biases in decision-making processes. In addition, it is vital to ensure patient consent when AI tools are used, equipping clinicians with the knowledge to address questions about data handling.

Develop coalitions

Implementing AI solutions requires robust change-management strategies. Leaders must identify internal and external stakeholders, recognize barriers to adoption, and align behind a common vision and shared goals. Engaging patient advisors and health care providers in the AI resource development process ensures that tools are inclusive, accessible, and designed to meet diverse patient needs. Tailored interaction methods for patients, such as verbal phone bots or video AI, can help address varying levels of digital literacy while ensuring that patients are informed when interacting with AI. Building collaborative networks across sectors, including health care professionals, information technology specialists, and policymakers, has always been essential in health care.

Leadership in this new space means more than just enthusiasm for innovation. It demands building trust. Leaders must emphasize patient feedback and data to continuously improve AI tools. Ensuring accountability in design, implementation, and outcomes reinforces trust while aligning AI advancements with patient care standards. Including patients in the development process enhances the relevance and effectiveness of AI interventions. Patients and providers must believe that AI systems are accurate, equitable, and secure. Yet public AI tools remain largely unregulated, and professional organizations are only beginning to issue guidance.

AI vendors are not clinicians. Leaders must evaluate them as they would any industry stakeholder — applying rigorous scrutiny to ensure that AI tools serve patient and provider needs, not just commercial interests. Yet, there is an inherent need to form partnerships between publicly funded care delivery teams and technology partners in ways we may not be accustomed to undertaking.

The ethical landscape is complex. From algorithmic transparency and data bias to environmental impact and cybersecurity, the path forward requires careful navigation.

Busy clinicians will never be able to master all the various AI platforms. However, if we could provide a single technology entry point, such as an LLM, that linked to various other tools in a seamless fashion, we would be able to accelerate adoption. Technology partners will need to engage with frontline workers to truly understand and streamline clinician workflow.

Policymakers and regulatory bodies have a critical role in accelerating adoption of AI. If we are to entrust aspects of care to machines, robust oversight is non-negotiable.

System transformation

Building a culture that embraces AI requires ongoing advocacy and a focus on achieving results while transitioning people effectively through change. Developing coalitions with other organizations using AI can help share insights and establish best practices. Staying up to date with regulations and considering the broad impacts of AI on the workforce and the environment are also crucial to sustainable leadership.

At its core, health care is a human endeavour. Even as AI changes the game, the rules must still be rooted in compassion, ethics, and service.

Leaders must think strategically, act innovatively, and anticipate regulatory shifts and climate implications related to AI infrastructure.

As we push the boundaries of AI, we face ethical challenges. AI systems learn from data, and the quality of those data determines the quality of the outcomes. So clearly naming where training data came from is important. If the data are biased, the AI’s decisions will be biased. If the data are incomplete, the AI’s recommendations may be flawed. Garbage in, garbage out.

AI is increasingly in everything, including our supply chains and drug development, our classrooms, training programs, and clinical care. As such, knowledge of AI is no longer optional; it is a professional obligation. So too is our commitment to patient privacy, to ongoing education, and to careful evaluation of the tools we bring into care environments.

At the moment, we do not have one specific law or set of laws establishing the guardrails for AI. We rely on multiple layers of established regulation and frameworks and essentially cross-pollinate them into AI’s use. Canada is lagging behind.

Legislation such as Bill C-27, which introduced the proposed Artificial Intelligence and Data Act, recognizes that AI systems require stringent oversight, risk assessments, and compliance with specific regulatory requirements to mitigate potential harm.4

A big part of the concern arises out of understanding the data set used for training and validation of AI processes, which we know has serious implications for the outputs. Often this is not made clear to end users, or at least the information is not publicly available.

Many patients and providers may not have the level of digital literacy needed to fully use new tools as they arise. This will drive inequity, and we should be prepared to address this head on.

Patients, health care providers, and the public at large must trust that AI is being used responsibly, and several of the regulatory colleges have released positions on this, including British Columbia,5 Alberta,6 Saskatchewan,7 and Manitoba.8

Key to these recommendations is assurance that AI systems are accountable, transparent, and free from bias to prevent inequitable treatment. There are increasing attempts to provide explainable decision trees for AI tools, helping us evaluate how an algorithm reaches its conclusions and clarifying what data were used and in what setting. This is critical, as many of the tools to date are trained on small data sets and in particular settings that may not transfer to other settings.
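
As a small illustration of what an explainable decision path can look like, the sketch below assumes scikit-learn’s decision tree and its bundled breast-cancer data set (illustrative choices; real clinical tools vary widely):

```python
# A minimal explainability sketch: export_text prints the human-readable
# rules (the feature and threshold at each branch) that the model applies
# to reach a conclusion, the kind of transparency the guidance asks for.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

print(export_text(tree, feature_names=list(data.feature_names)))
```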

We should also acknowledge that we cannot expect AI to be flawless. There is plenty of bias in human decision-making.

Key areas of vigilance for leaders to consider in system transformation include:

  • data privacy, particularly for sensitive patient information
  • cybersecurity for models and training processes selected
  • fairness and equity
  • transparency
  • safety and performance, especially for accreditation, legal, and regulatory issues

Explainability is difficult, however. If we can’t explain how AI reaches its conclusions, we can’t maintain transparency beyond noting that we employed AI.

Conclusion

The integration of AI into health care is a defining challenge for physician leaders in 2025 and beyond. AI in health care is the future. On offer are tangible solutions to the challenges plaguing us: our current workflows, lack of interoperability, complexity of patients, lack of standardization of protocols, inefficiencies in communication, lack of time and training support for clinicians to learn new technologies, and, of course, limited funding to implement new strategies.

AI has the potential to ensure that the right care is delivered by the right person, in the right place, at the right time, leading to better patient outcomes and reducing health care provider turnover, alongside cost reduction through better resource management and reduced waste.

We echo the World Economic Forum’s 2025 Future of Jobs Report, which states that human skills will remain irreplaceable.9 In the face of rising automation, it is our empathy, emotional intelligence, and interpersonal agility that differentiate us as leaders. These skills build the psychological safety needed for innovation and system change to flourish and for trust in AI to take root.

By addressing ethical considerations, building trust, managing change, and fostering cultural shifts, health care leaders can ensure that AI serves as a transformative tool that enhances both the efficiency and humanity of medicine. Leadership in this age demands not only technical understanding but also a commitment to advocacy, inclusivity, and ethical stewardship. Together, we can navigate this new frontier and shape a future that honours the core values of health care.

References

1. Augmented intelligence development, deployment, and use in health care. Policy. Chicago: American Medical Association; 2024. Available: https://tinyurl.com/527jmm9s

2. Ethics and governance of artificial intelligence for health: guidance on large multi-modal models. Geneva: World Health Organization; 2024. Available: https://tinyurl.com/2fsa6fjc

3. Clinical evaluation of artificial intelligence and automation technology to reduce administrative burden in primary care. Report to OntarioMD. Toronto: Centre for Digital Health Evaluation, Women’s College Hospital Institute for Health System Solutions and Virtual Care; 2024. Available: https://tinyurl.com/52zcyr9t

4. Artificial intelligence and data act. Ottawa: Government of Canada; 2023. Available: https://tinyurl.com/4muz7ke9

5. Ethical principles for artificial intelligence in medicine (version 1.1). Vancouver: College of Physicians and Surgeons of British Columbia; 2024. Available: https://tinyurl.com/3s5k499j

6. Advice to the profession: artificial intelligence in generated patient record content. Edmonton: College of Physicians and Surgeons of Alberta; 2023. Available: https://tinyurl.com/bdanfdre

7. Guidance to the profession: artificial intelligence (AI) in medicine. Saskatoon: College of Physicians and Surgeons of Saskatchewan; 2024. Available: https://tinyurl.com/2n89db9a 

8. Advice to the profession on the responsible use of artificial intelligence in the practice of medicine. Winnipeg: College of Physicians and Surgeons of Manitoba; 2024. Available: https://tinyurl.com/yavrc8u6 

9. Future of jobs report 2025. Geneva: World Economic Forum; 2025. Available: https://tinyurl.com/3cdupy6v

Authors

Mamta Gautam, MD, FRCPC, CPDC, CCPE, CPE, is an internationally renowned psychiatrist, consultant, certified coach, author, and speaker. A trailblazer in the field of physician well-being, she is known as “The Doctor’s Doctor.” Her work focuses on physician well-being and physician leadership.

Kathleen Ross, MD, MCFP, is a clinically active family physician and a recognized expert in physician leadership and quality improvement. She serves as chair of the Pathways Patient Referral Association, an online clinical and referral resource for physicians in BC and the Yukon.

Correspondence to: drkathleenross@gmail.com
