Getting to the bottom of it: summarizing the CMPA’s online documentation on the use of AI
Key takeaways
- Obtaining informed patient consent is necessary when using AI tools
- Physicians must be open and have a discussion explaining the AI tool and how their data may be used
- AI tools must meet all privacy and security regulations
- Physicians remain responsible for the accuracy of their work and must always review AI-generated notes
- Physicians should choose accurate tools but must not rely on them exclusively
Key Considerations from the CMPA
Artificial intelligence (AI) is becoming increasingly integrated into healthcare. The CMPA has begun to offer guidance on important considerations for clinicians thinking of integrating AI into their practice, while regulatory guidance on the use of AI technologies is still emerging.
Transparency and Consent
Transparency with patients is fundamental, especially when integrating AI tools into medical practice. The CMPA emphasizes the importance of obtaining informed consent from patients before using AI tools such as AI scribes.
Obtaining informed consent requires a clear explanation, in understandable language, of the AI tool’s purpose, how it works, and the potential uses of any recorded data. If a conversation will be recorded, this must be made clear, and the purpose and uses of the recording must be explained. A consent form is not a substitute for a discussion with the patient where one is needed; the uses and risks of the tool should be discussed directly. Patients should be told how their data will be used, including whether it will be de-identified and used to improve AI algorithms. For example, if a physician uses an AI scribe that will use patient data to train future iterations of the model, this should be explained to the patient. Informed consent about the use of AI tools is essential to maintaining trust in the doctor-patient relationship and ensuring that physicians abide by ethical standards.
Privacy
AI tools must be compliant with the Personal Health Information Protection Act (PHIPA). When choosing an AI tool, physicians should evaluate whether the tool has implemented appropriate safeguards to ensure confidentiality and protection of patient data. In Ontario, it is recommended as best practice that a company conduct a formal privacy impact assessment before collecting personal information and data. A privacy impact assessment (PIA) is an analysis of how personally identifiable information is handled to ensure compliance with appropriate regulations, determine the privacy risks associated with information systems or activities, and evaluate ways to reduce the privacy risks.
Physicians and other staff with access to AI tools must be properly informed and trained on the privacy policies of both their clinic and the AI tool.
Physicians must also consider whether recordings of patient encounters will be stored and, if so, ensure they are kept secure in the patient’s chart for the required retention period. If recordings are not retained, physicians must have a policy setting out the timing and process for their destruction, and must ensure the note in the chart is accurate before the recording is destroyed.
Accountability for Quality and Accuracy
Physicians and practitioners remain responsible for the accuracy, quality, and reliability of their AI tools’ output. Physicians should always review AI-transcribed notes and suggestions to catch errors: AI can hallucinate, introduce biases, or misinterpret information. AI tools must therefore be viewed as supportive aids and should never be relied upon exclusively. It is the physician’s responsibility to monitor and correct any inaccuracies and ensure completeness. Verifying the accuracy of AI output reduces potential harm to patients and also reduces medicolegal risk.
Additional checks
The CMPA states that it is also important to consider the following questions, drawn from its article on AI:
- Are there appropriate safeguards for the collection, transfer, and use of patient data?
- Are appropriate procedures followed and is consent obtained if the AI developer plans to store or use patient data to adapt the AI tool?
- Are measures in place to maintain high standards for patient safety and reliability, including if the AI tool is adaptive and will change in response to new patient data? These standards should be maintained throughout the AI tool’s lifecycle.
- What is the stated purpose and objective of the AI technology and is its use appropriate in the circumstances of your practice? This includes a clear understanding of the intended patient populations and the clinical use conditions.
For more detailed information, visit the CMPA’s official resources on navigating AI in healthcare.
To see how Pippen aligns with the CMPA guidelines, click here.
Contributed by: Charlotte Chen
Charlotte Chen is a 2nd year law student at the Lincoln Alexander School of Law at Toronto Metropolitan University and a summer student with Pippen.