AI Scribes in Healthcare: Regulatory Expectations and Privacy Considerations

Artificial intelligence scribes are rapidly entering healthcare environments across Ontario. Clinics, hospitals, and physicians are exploring these tools to reduce administrative burden and improve clinical documentation.

AI scribes can listen to patient interactions and automatically generate medical notes, summaries, and draft patient chart entries. While these tools may improve workflow efficiency, they also introduce privacy, governance, and accuracy risks that healthcare organizations must carefully manage.

The Information and Privacy Commissioner of Ontario (IPC) has issued guidance outlining expectations for healthcare organizations using AI scribes. 

The message is clear: the use of artificial intelligence does not change the legal responsibilities of health information custodians (HICs) under the Personal Health Information Protection Act (PHIPA).

HICs considering AI scribes must understand how these tools affect patient privacy, documentation integrity, and governance obligations.

1. Why the IPC Issued This Guidance

AI scribes are being adopted quickly across the healthcare sector. In many cases, these tools are introduced through vendor demonstrations, pilot programs, or free trials before a full privacy assessment takes place.

The IPC issued guidance to address the risks associated with this rapid adoption. Regulators have observed that many AI tools enter healthcare workflows informally, sometimes without structured governance or clear oversight.

AI scribes are viewed as high-risk technologies because they interact directly with personal health information (PHI) and clinical documentation.

A key concern raised by regulators is that many risks associated with AI tools do not emerge during procurement or vendor demonstrations. Instead, risks often appear during day-to-day clinical use when systems begin processing real patient data.

For this reason, the IPC emphasizes that HICs must approach AI scribes with structured governance, privacy safeguards, and ongoing monitoring.

2. Custodian Accountability

Under PHIPA, HICs remain responsible for protecting personal health information even when third-party technologies are used.

This responsibility applies in several situations:

  • When engaging third-party vendors

  • When vendors process patient data

  • When organizations participate in vendor free trials

  • When AI models update automatically

  • When clinicians experiment with new tools

Responsibility for patient information cannot be delegated or transferred to vendors through contracts.

HICs should also recognize that vendor marketing claims do not determine compliance. A system described as “vendor approved” or “industry standard” does not automatically meet regulatory expectations.

HICs must conduct their own due diligence and maintain oversight of how AI systems interact with patient information.

3. Where the IPC Sees the Highest Risks

The IPC has identified several areas where AI scribes introduce the greatest risks for healthcare organizations.

These risks typically relate to documentation accuracy, consent practices, and the handling of recorded conversations.

A. Accuracy = Privacy and Safety Risk

AI scribes may generate, modify, or summarize personal health information before it is approved by a clinician and entered into a patient record (i.e., into an organization's electronic medical record or EMR).

Errors in documentation can occur when systems misunderstand context, misinterpret speech, or incorrectly summarize patient interactions.

Inaccurate medical records can create several consequences, including:

  • Clinical harm resulting from incorrect documentation

  • Insurance or employment consequences tied to medical records

  • Long-term reputational harm and loss of trust

Regulators increasingly recognize that inaccurate information can create both clinical and privacy harms.

Once incorrect information enters an electronic medical record, it can be extremely difficult to remove.

For this reason, the IPC emphasizes that AI-generated documentation should always be reviewed by a human clinician before it becomes part of the patient record and that AI outputs must never be used or disclosed unchecked.

HICs should also consider implementing accuracy monitoring processes and escalation procedures if errors are identified.

B. Consent Is Not Optional

Patients should be informed when AI technologies are involved in clinical documentation. PHIPA contains no exception that permits the use of AI scribes without consent.

Express consent is a strong best practice. Transparency helps patients understand how their information is being processed and allows them to ask questions about the technology.

Clinicians should be able to explain:

  • The role of the AI system

  • How patient information is processed

  • What safeguards are in place

  • Who the Privacy Officer is and how to contact them

HICs should also ensure clinicians receive training on the use of AI tools so that conversations about consent and transparency are meaningful and informed. Supplementing and reinforcing training with an AI Acceptable Use Policy is also strongly recommended as a good practice for PHIPA compliance (i.e., having administrative safeguards in place to protect personal health information).

Additionally, Bill 194 requires public sector organizations to have an accountability framework governing how AI is used, which is expected to be implemented through internal policies, standards, and controls. While an AI Acceptable Use Policy is not explicitly required, it is a clear and defensible way to demonstrate that AI scribes are used in a limited, intentional, and monitored manner, with risks appropriately managed.

Patients often rely on clinicians to explain new technologies, and knowledge gaps can undermine trust.

C. Recordings and Transcripts Create Additional Risks

Many AI scribes rely on audio recordings of clinical encounters.

Audio recordings present unique privacy challenges because they capture contextual information that is difficult to de-identify.

Recordings may include:

  • Voice characteristics that identify individuals

  • Discussions involving family members who are present in the exam room

  • Background conversations involving other patients

  • Personal information about healthcare providers

HICs should evaluate how recordings are handled, including:

  • Whether recordings are retained by the AI scribe system or vendor

  • How long they are stored

  • Where they are stored and processed (especially if storage takes place outside of Canada)

  • Whether vendors use recordings to train models

Strong data minimization and retention practices are essential when audio data is involved.

4. AI Governance Is Not Optional

The IPC has emphasized that organizations using AI scribes must implement governance and accountability frameworks.

These governance measures can be scaled depending on the size of the organization, but they cannot be skipped.

HICs should clearly define:

  • Who is responsible for approving AI tools

  • Who monitors system performance

  • Who has authority to pause or discontinue use

  • How incidents or documentation errors are reported

Governance should also include ongoing monitoring rather than a single approval decision during procurement or internal launch.

AI systems may change over time as models update or new features are introduced.

5. Procurement Does Not Equal Compliance

Selecting an AI vendor does not, on its own, ensure compliance with privacy obligations. HICs remain responsible for how personal health information is collected, used, and disclosed when AI tools are deployed in clinical environments.

The Information and Privacy Commissioner has emphasized that procurement processes should not be treated as a substitute for ongoing governance and due diligence.

Several points are particularly important for healthcare organizations considering AI scribes.

Vendor of Record programs support procurement but do not replace due diligence.

Participation in a vendor program or approved vendor list may simplify procurement processes, but it does not eliminate the responsibility of the HIC to evaluate how a technology interacts with patient information.

Vendor documentation should be treated as inputs, not final answers.

Materials such as model cards, security documentation, and vendor compliance statements can help inform decision-making. However, HICs must still independently assess how the system operates within their clinical and regulatory environment.

Healthcare environments create heightened risk contexts.

Even when vendors disclaim liability or present their products as industry standard, the responsibility for protecting personal health information remains with the HIC using the technology. The sensitivity of health data requires a higher level of scrutiny than in many other industries.

Contractual agreements with vendors should support ongoing oversight and accountability. In particular, organizations should ensure that agreements address:

  • Accuracy reporting, including mechanisms for identifying and correcting documentation errors

  • Bias monitoring, especially where AI systems generate or summarize clinical information

  • Breach response procedures, including notification obligations and incident management

  • Limitations on vendor data use, including restrictions on secondary use of patient information

HICs should also recognize that free trials remain subject to PHIPA requirements. Even when a HIC is evaluating a product or testing a pilot system, personal health information must be protected in accordance with privacy legislation.

For this reason, HICs should approach AI procurement as part of a broader governance process rather than a one-time purchasing decision.

6. Transparency and Notice

HICs should be able to clearly explain their use of AI scribes to patients and regulators.

Organizations should be prepared to answer questions such as:

  • Why is the AI system being used?

  • What patient information is shared with vendors?

  • Does any data leave Canada?

  • What limitations or biases may exist in the system?

  • How can patients exercise their privacy rights?

Transparency helps build trust and allows patients to make informed decisions about their care.

Clinicians should also understand the basics of how AI scribes work so they can respond appropriately to patient questions. Gaps in clinician knowledge should not become a barrier to quality care and informed decision-making.

7. Access and Correction in an AI Environment

Patients have the right to access and request correction of their personal health information (PHI).

When AI scribes generate transcripts or documentation, those records may fall within the scope of these rights.

If transcripts or recordings are retained, HICs should ensure patients understand that both the transcript and the summarized documentation may exist.

A best practice is to flag records of PHI that were generated or modified by an AI scribe.

IPC Note: 

Custodians must consider the agreements and terms of service that are entered into with third-party vendors and how they will meet their obligations under PHIPA, including during a free trial. For example, custodians must ensure that they are able to provide individuals with access and correction of their records of PHI after the free trial has ended and when the custodian has not agreed to purchase a license with the vendor.

8. Additional Recommendations From the IPC

Regulators recommend that HICs should develop and maintain an AI risk management framework.

Two assessments are particularly important when evaluating AI scribes.

Privacy Impact Assessments (PIAs) help HICs understand how personal health information flows through the system and what safeguards are required.

AI Impact Assessments (AIAs) help HICs evaluate broader risks related to system accuracy, bias, governance, and accountability.

Together, these assessments help HICs understand whether the technology is appropriate for their environment.

Contact our team today if you have any questions or explore our Privacy Impact Assessment Services.

Additional Insights From Privacy and AI Governance Conferences

Recent discussions at privacy and data governance conferences highlight several emerging themes that are shaping how regulators, clinicians, and policy experts think about AI in healthcare.

These conversations provide additional context for organizations considering technologies such as AI scribes.

The Promise of AI in Healthcare

Across the healthcare sector, the conversation around artificial intelligence is shifting. Rather than viewing AI purely as a replacement for human work, many experts are beginning to frame it as a tool that can teach, support, and evaluate clinicians, elevating human decision-making.

In this view, AI tools may eventually function as part of the broader care environment, assisting clinicians with documentation, analysis, and information retrieval.

Some clinicians have even begun describing AI systems as potential participants within the “Circle of Care.”

However, concerns arise when AI tools are introduced without formal approval or oversight. In these situations, an AI system may effectively become an unauthorized participant in the care process, creating governance and accountability challenges.

Public trust remains one of the largest barriers to broader adoption of health AI. Without trust in how health data is used and protected, technological progress in healthcare can slow significantly.

Privacy Is Not Opposed to Data Sharing

A consistent message emerging from privacy regulators and health data leaders is that privacy should not be viewed as an obstacle to responsible data use.

Instead, privacy frameworks exist to enable data sharing in ways that maintain public trust.

As noted by the Ontario Privacy Commissioner, “Privacy is not antithetical to sharing. We need a new social contract.”

The real tension in health data governance is not simply between privacy and innovation. It is between:

  • Harms caused by misuse of data

  • Harms caused when data is not shared at all

When health data remains siloed across systems, clinicians may lack the information needed to make fully informed decisions on patient care. Limited data sharing can slow research, weaken health system planning, and reduce the ability to improve patient outcomes.

At the same time, regulators emphasized that privacy protections are what make responsible data sharing possible. Privacy frameworks provide the safeguards that allow data to be used while maintaining accountability and transparency.

Leaders from organizations such as the Information and Privacy Commissioner of Ontario, CIHI, and Statistics Canada repeatedly emphasized that privacy is what makes data sharing socially legitimate.

Without trust in how institutions handle data, public support for health data use quickly erodes. In that environment, both data sharing initiatives and AI adoption can stall.

Patient Consent Is About Trust, Not Just Legal Authority

Patient advocates and researchers emphasize that consent in healthcare should not be treated only as a legal requirement. Patients want to understand why their data is being used, who benefits from it, and whether it could be used against them.

When consent cannot realistically be obtained, transparency becomes essential. Clear explanations and visible notices in clinics help patients understand how their information may be used, and help maintain trust.

The Role Trust Plays

Trust strongly shapes how patients view new technologies in healthcare. Many patients accept AI tools because they trust their clinicians, but power imbalances can influence how consent is expressed. Some patients may agree to a clinician’s use of an AI Scribe simply to avoid delays in care or to avoid questioning a clinician’s recommendation.

Patients generally want transparency. They want clear explanations of how the technology works, its risks, and its limitations. As AI tools such as documentation systems change how clinicians interact with patients, healthcare organizations must also consider fairness, bias, and the overall patient experience.

Data as a Public Good — With Guardrails

Many health policy experts now describe health data as a potential public good, capable of supporting research, improving health system planning, and advancing medical knowledge.

However, this concept depends heavily on strong governance and stewardship.

Health data can be used:

  • As a public good, when it is responsibly stewarded and protected

  • For a public good, when it supports research, policy development, and system improvement

Researchers are increasingly exploring ways to share data models and insights across institutions to improve health outcomes.

Another important consideration is ensuring that AI systems are trained on datasets that reflect the populations they serve. Models trained primarily on U.S. healthcare data may not fully represent Canadian populations or health system dynamics.

Why These Insights Matter for AI Scribe Adoption

Together, these themes highlight that the discussion around AI in healthcare is broader than technology alone.

Organizations adopting AI scribes must consider:

  • Patient trust

  • Clinical workflow changes

  • The long-term integrity of medical records

  • How AI tools alter documentation practices

Responsible AI adoption requires balancing innovation with strong privacy protections and transparent governance structures.

How Bamboo Data Consulting Can Help

HICs are under pressure to adopt new technologies while maintaining strong privacy protections.

Bamboo Data Consulting helps HICs evaluate AI technologies and implement governance frameworks that align with regulatory expectations.

Our services include:

AI Risk Assessments: Evaluate operational, privacy, and documentation risks associated with AI tools.

Privacy Impact Assessments (PIAs): Assess how AI systems interact with personal health information under PHIPA.

AI Governance Framework Development: Design governance structures that support responsible AI adoption.

Vendor Due Diligence: Review vendor security practices, contractual terms, and data handling processes.

Policy Development and Training: Help organizations develop internal policies and staff awareness programs for responsible AI use.

Contact Bamboo Data Consulting to:

  • Conduct an AI scribe risk assessment

  • Complete a PHIPA-aligned Privacy Impact Assessment

  • Evaluate AI vendors before implementation

  • Develop an AI governance framework for your organization
