What is Artificial Intelligence (AI) in Healthcare?
Artificial intelligence is becoming more prevalent in every industry. Find out how it works in the medical field.

To put it simply, AI refers to machine-based systems that mimic human intelligence and discernment, providing predictive analytics and supporting automated decision-making through recommendations.[1]
How does AI work?
At the base level, AI uses mathematical and statistical analysis to make predictions and recommendations. When an AI system learns from its training data, the result is “machine learning” (ML), which can optimize workflows and predict results. ML often involves neural networks: interconnected arrays of nodes organized into layers, including an input layer and an output layer. Each node is a computational unit assigned a weight, which mathematically transforms the values it receives and passes the result toward the output.[2] When the network becomes multi-layered, the result can be an extensive algorithm with hidden complexities that is difficult to oversee or supervise; such complex algorithms are often referred to as “black boxes.” ML is therefore about computers being able to learn and act with less supervision or human intervention. By contrast, models whose components are transparent and traceable are referred to as “glass boxes.” AI/ML systems can learn not only from their training data but also from new real-world inputs, continuing to learn and self-optimize after deployment.[3]
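To make the idea of weighted nodes and layers concrete, here is a minimal sketch in Python (using NumPy) of a forward pass through a tiny network with one hidden layer. The layer sizes, weights, and the “patient features” input are illustrative placeholders, not drawn from any clinical system.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny feedforward network: 3 inputs -> 4 hidden nodes -> 1 output.
# Each connection carries a weight; training (e.g., gradient descent)
# would adjust these weights to fit the training data.
W1 = rng.normal(size=(3, 4))   # input layer -> hidden layer weights
b1 = np.zeros(4)               # hidden layer biases
W2 = rng.normal(size=(4, 1))   # hidden layer -> output layer weights
b2 = np.zeros(1)

def relu(z):
    return np.maximum(0.0, z)

def forward(x):
    """Propagate an input vector through the network."""
    hidden = relu(x @ W1 + b1)                       # each node weighs its inputs
    output = 1 / (1 + np.exp(-(hidden @ W2 + b2)))   # sigmoid -> score in (0, 1)
    return output

# Hypothetical example: three patient features, scaled to [0, 1].
x = np.array([0.2, 0.7, 0.1])
print(forward(x))  # an illustrative predicted risk score
```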
Why AI in Healthcare?
Canadian hospitals and healthcare organizations are beginning to implement AI to improve administrative efficiency and the quality of patient outcomes. Hospitals can use AI applications to better administer their resources. For example, Unity Health Toronto is using AI to assist in triaging acute patients and in assessing the resources needed to justify staff expenditures.[4] Such applications allow hospitals to quantify patient traffic and staffing needs, resulting in better allocation of resources. At Humber River Hospital, medical robots deliver medicine, linens, and equipment and greet patients and families at the hospital entrance, freeing nursing staff to complete other tasks.[5]
The use of AI in clinical applications improves patient care, procedures, and healthcare outcomes; in particular, these applications can improve the accuracy and efficiency of disease detection and diagnosis. The range of possible healthcare applications is extensive in Canada because we have an ecosystem in which start-ups in this field can grow. The Saskatchewan Health Authority adopted technology that uses computer vision and ML to diagnose lung abnormalities on chest x-rays, which proved especially useful for rapidly diagnosing COVID-19 infections.[6]
An example of detection technology is the Augmented Intelligence and Precision Health Laboratory at the McGill University Health Centre in Montreal, which is using ML and GE’s Edison platform to predict the spread of head and neck squamous cell carcinoma to the cervical lymph nodes.[7] AI is also used in research and development, particularly in the discovery of new drugs and medical devices and in increasing the efficiency of systems. For instance, in Newfoundland and Labrador, Eastern Health partnered with IBM to implement an AI-based solution that assists healthcare workers in finding appropriate mental health resources and recommends relevant support services. The chatbot, Employee Virtual Assistant (EVA), improved employee knowledge of mental health resources and reduced barriers to access.[8] The continued development and growth of AI applications in healthcare delivery and research require careful consideration of the risks involved in using such advancements.
What are the risks with AI in Healthcare?
Multiple Sources of Bias
There are two main sources of bias risk with AI systems. The first is algorithmic bias, which includes the developer’s unconscious bias during the development and coding of the AI application, as well as bias the system learns over time through ML. In 2019, a study published in Science revealed that an algorithm used by well-known healthcare providers in the United States was racially biased.[9] Healthcare practitioners relying on recommendations from such algorithms must be aware that marginalized groups may not be factored into the data used by the AI application.[10] The second is biased data, which has two sources: (a) training on non-representative data and (b) data that encodes existing systemic bias.[11] Training on non-representative data refers to inadequate representation of the range of patients who will be treated using the AI application; the exclusion of a subpopulation, or of relevant traits of likely patients, can result in the oversight of contextual factors or healthcare needs. Data bias arises when data is derived from systems that are themselves biased in the access, diagnosis, and treatment of marginalized patients.[12] In 2021, a study that examined over two hundred COVID-19 AI models found that 97% were at “high risk of bias,” and 42% of those results arose because the training data did not reflect the intended patient population.[13] The risk of bias in AI applications is a particular concern for Canadian patients, since our patient population is diverse and includes Indigenous peoples and lower-income communities.[14]
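One way to make the non-representative-data problem concrete is a simple pre-training audit that compares subgroup shares in a training set against the intended patient population. The sketch below is a minimal illustration with hypothetical group names and figures, not a validated fairness tool.

```python
# Compare subgroup shares in a training set against the intended
# patient population; flag groups that are badly under-represented.
# All group names and figures here are hypothetical.

training_counts = {"group_a": 8500, "group_b": 1200, "group_c": 300}
population_share = {"group_a": 0.70, "group_b": 0.18, "group_c": 0.12}

total = sum(training_counts.values())
for group, count in training_counts.items():
    train_share = count / total
    expected = population_share[group]
    ratio = train_share / expected          # 1.0 means proportional representation
    flag = "UNDER-REPRESENTED" if ratio < 0.8 else "ok"
    print(f"{group}: train={train_share:.2%} expected={expected:.2%} -> {flag}")
```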
Privacy and Data Governance
In 2020, the Privacy Commissioner of Canada raised privacy concerns about AI and the need for regulation, writing:
“…uses of AI that are based on individuals’ personal information can have serious consequences for their privacy. AI models have the capability to analyze, infer and predict aspects of individuals’ behaviour, interests and even their emotions in striking ways. AI systems can use such insights to make automated decisions about individuals…Such decisions have a real impact on individuals’ lives, and raise concerns about how they are reached, as well as issues of fairness, accuracy, bias, and discrimination.”[15]
The need to regulate technological development is emerging amid the existing landscape of Canadian privacy laws. Protecting privacy and personal data in the use of AI in health will require both technical scrutiny and engagement with the public to build trust. Access to healthcare data for researchers and developers is complex and involves different data-sharing and ownership models, ranging from patient-owned data to public, private, or not-for-profit data trusts, or federated learning models.[16] Obtaining informed consent for the collection, use, and disclosure of personal data is required under both federal and provincial privacy laws. AI systems need large amounts of data for training and development, but identifiable data is usually not required. However, even de-identified data may retain enough elements to re-identify an individual; for example, it is possible to reconstruct facial features from an MRI scan.[17] Most provinces prohibit the improper use of personal health data, which includes re-identifying individuals. Ontario has a new statutory provision prohibiting the re-identification of de-identified personal health data, with fines of up to $200,000 or imprisonment of up to one year for individuals and fines of up to $1,000,000 for corporations.[18]
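The re-identification risk can be illustrated with a k-anonymity-style check (a standard privacy measure, not one named in the sources above): count how many records share each combination of quasi-identifiers such as age, postal code prefix, and sex. A record that is unique on those fields is a re-identification candidate even with the name removed. A minimal sketch with made-up records:

```python
from collections import Counter

# "De-identified" records: names removed, but quasi-identifiers remain.
# All records here are made up for illustration.
records = [
    {"age": 34, "postal_prefix": "M5V", "sex": "F"},
    {"age": 34, "postal_prefix": "M5V", "sex": "F"},
    {"age": 61, "postal_prefix": "K1A", "sex": "M"},  # unique combination
]

# Group records by their quasi-identifier combination.
keys = [(r["age"], r["postal_prefix"], r["sex"]) for r in records]
counts = Counter(keys)

# Any combination shared by only one record (k=1) singles someone out.
for key, k in counts.items():
    if k == 1:
        print(f"re-identification risk: {key} is unique (k=1)")
```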
The importance of data governance is increasing because privacy laws, and their interpretation, vary across the provinces and territories. Data stewardship arranged separately for each province and territory would be complex, lengthy, and costly because of the different jurisdictional requirements and the diversity of populations in each region.[19] While multi-jurisdictional portals are being developed, interoperability challenges (the ability of systems to exchange and make use of the information collected) and incomplete-data issues will likely remain.[20]
AI training systems also carry breach and data-security risks, and AI analytics is at increased risk of a harmful data breach. Federated learning (FL) promotes access to data onsite, without transferring or moving it: the FL model allows AI training at each hospital location where a dataset resides, and each locally trained algorithm is then transferred to a central location and combined into one master algorithm. However, the FL model requires each location to have its own AI training infrastructure.[21]
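As a rough sketch of the FL model just described, the snippet below trains a simple linear model at each “hospital” on its own local data and then averages the resulting parameters into one master model, in the spirit of federated averaging. The hospitals, data, and model are toy placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def local_train(X, y, epochs=100, lr=0.1):
    """Train a linear model on one hospital's local data; the data never leaves."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

# Three hospitals, each holding its own private (toy) dataset.
true_w = np.array([2.0, -1.0])
hospitals = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    hospitals.append((X, y))

# Each site trains locally; only the model weights travel to the centre.
local_models = [local_train(X, y) for X, y in hospitals]

# The central server combines them into one master model by averaging.
master_w = np.mean(local_models, axis=0)
print("master model weights:", master_w)  # approximately [2.0, -1.0]
```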
What do innovators and start-ups in the AI healthcare space need to know?
The Supreme Court of Canada recognizes that jurisdiction over healthcare delivery remains with the provinces, but has held that the topic of “health” is subject to overlapping jurisdiction depending on the scope of the legislation.[22] The complexity of AI applications means they are subject to a cross-section of federal, provincial, and institutional policies and practices.[23] For example, the College of Physicians and Surgeons of Ontario, as a form of indirect regulation of the safe use of AI applications in healthcare, could require a degree of competency in digital health literacy as part of its licensing structure. Provinces have jurisdiction to regulate public and private healthcare facilities. However, direct ex ante regulation of AI applications in healthcare falls to Health Canada. For instance, the federal Food and Drugs Act and the subordinate Medical Devices Regulations, enacted under the criminal law power, prohibit the selling, labelling, packaging, treating, processing, and advertising of unsafe medical devices.[24]
Under the Medical Devices Regulations, Health Canada will only authorize the sale and importation of a device that meets the safety and effectiveness requirements under sections 10 to 20 of the Regulations. Medical devices are classified into four classes according to the degree of risk posed by a given device:
- Class I: low risk
- Class II: low-to-moderate risk
- Class III: moderate-to-high risk
- Class IV: high risk
There are two pathways for medical device registration in Canada. First, Class I devices are covered by a Medical Device Establishment Licence (MDEL), obtained by preparing the mandatory procedures and paying Health Canada’s fees. Second, Class II, III, and IV devices require a Medical Device Licence (MDL) application; the documentation requirements vary by device class.[25] The Food and Drugs Act defines a “device” to include its components and accessories, covering devices used to diagnose, treat, mitigate, or prevent a disease, disorder, or abnormal physical state, or to restore, modify, or correct the body structure or the functioning of any body part.[26] The Act and the Regulations contain no AI-specific provisions, but the definition of “device” captures both medical devices (hardware) running software and “software as a medical device.” Health Canada has not yet approved any adaptive AIML applications, but regulatory requirements for adaptive ML medical devices could be launched in 2022 or 2023.[27]
On October 27, 2021, the United States Food and Drug Administration (FDA), the United Kingdom’s Medicines and Healthcare products Regulatory Agency (MHRA), and Health Canada jointly identified 10 guiding principles to inform the development of Good Machine Learning Practice (GMLP) for medical devices.[28] Through this effort, Health Canada is adjusting its approach to better regulate and guide market access for adaptive AIML medical devices.
The ten guiding principles are as follows:[29]
- Leveraging multi-disciplinary expertise throughout the product’s complete life cycle to ensure safe and effective AIML medical devices.
- Applying good software engineering and security practices, including good data quality assurance, data management, cybersecurity, and methodical risk management and design processes.
- Ensuring clinical studies use participants and data sets that represent the intended patient population (including factors such as age, gender, sex, race, and ethnicity).
- Using training data sets (the data used to train a medical device) that are separate from test data sets (the data used to test the trained device’s performance, accuracy, and efficiency); a brief sketch of this separation appears after this list.
- Developing reference data sets that are based upon the best clinically relevant available methods.
- Tailoring the model design to the available data and ensuring the design reflects the intended use of the device.
- Where humans are part of the loop, focusing on the performance of the human-AI team rather than the performance of the AI in isolation.
- Ensuring that testing plans demonstrate device performance under clinically relevant conditions.
- Providing clear and contextually relevant information to users, including healthcare providers and patients.
- Monitoring deployed models for performance and managing retraining risks.
These principles will guide developers, suppliers, manufacturers, and importers of AIML-powered medical devices throughout their products’ life cycles.
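As a brief illustration of principle 4 (the separation of training and test data, referenced in the list above), the sketch below holds out part of a data set before training and reports performance only on the held-out portion. It uses scikit-learn with synthetic data standing in for any real device data.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for device data: 500 cases, 5 features, binary label.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hold out 20% as a test set BEFORE any training or tuning touches it.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

model = LogisticRegression().fit(X_train, y_train)

# Performance is reported only on the untouched test set.
print("test accuracy:", model.score(X_test, y_test))
```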
What does this all mean for innovators and start-ups in the AI healthcare space?
The potential of AI and ML applications in healthcare innovation is clear: administrative efficiency, improved clinical outcomes, and increased quality of care. However, the high risks of unconscious systemic racial bias and of the exclusion of marginalized groups in the development of AI applications are a serious roadblock to the advancement of effective innovation for universal healthcare. There is also a clear need for privacy law reform and sound data governance to support innovation in AI applications. In the end, minimizing or resolving these risks requires building public trust, with governments across jurisdictions, innovators, developers, and researchers working together and committing to viable solutions that further advance AI healthcare innovation.
Footnotes:
[1] See Canadian Institute for Advanced Research, Building a Learning Health System for Canadians: Report of the Artificial Intelligence for Health Task Force (Toronto: Canadian Institute for Advanced Research, 2020) at 5 [AI for Health Taskforce].
[2] Bradley Henderson et al, “Artificial Intelligence in Canadian Healthcare: Will the Law Protect Us from Algorithmic Bias Resulting in Discrimination?” (2022) 19 Canadian Journal of Law and Technology 477
[3] Ibid.
[4] Nora Underwood, “Artificial Intelligence Is Changing How We Diagnose Disease, Triage Cases in the ER, Even Help Prep Doctors” (2021), online: Toronto Star <https://www.thestar.com/business/mars/2021/06/22/artificial-intelligence-is-changing-how-we-diagnose-disease-triage-cases-in-the-er-even-help-prep-doctors.html>.
[5] Global News, “Medical Robotics Changing Health Care in Canada” (2018), online (video): <https://globalnews.ca/video/4093029/medical-robotics-changing-health-care-in-canada/>.
[6] See AI for Health Taskforce, supra note 1 at 7.
[7] Norm Tollinsky, “AI Improves Diagnostic Imaging at Canadian Hospitals” (2020), online: <https://www.canhealth.com/2020/03/30/ai-improves-diagnostic-imaging-at-canadian-hospitals/>.
[8] Eastern Health, “Improving Psychological Health and Safety in Health Care Workers Through Technology-Based Support”, online: <https://ri.easternhealth.ca/projects/improving-psychological-health-and-safety-in-health-care-workers-through-technology-based-support/>.
[9] Ziad Obermeyer et al, “Dissecting racial bias in an algorithm used to manage the health of populations” (2019) 366:6464 Science 447, doi:10.1126/science.aax2342.
[10] Katherine J Igoe, “Algorithmic Bias in Health Care Exacerbates Social Inequities – How to Prevent It” (2021), online: Harvard T.H. Chan School of Public Health <https://www.hsph.harvard.edu/ecpe/how-to-prevent-algorithmic-bias-in-health-care/>.
[11] Bradley Henderson et al, supra note 2 at 480.
[12] Ibid, at 481.
[13] Ibid.
[14] Ibid.
[15] Office of the Privacy Commissioner of Canada, “A Regulatory Framework for AI: Recommendations for PIPEDA Reform” (2020), online: <https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/completed-consultations/consultation-ai/reg-fw_202011/>.
[16] See AI for Health Taskforce, supra note 1 at 14
[17] Ibid.
[18] Bradley Henderson et al, supra note 2 at 488.
[19] Ibid, at 486.
[20] Ibid.
[21] See AI for Health Taskforce, supra note 1 at 83.
[22] See RJR-Macdonald Inc. c. Canada (Procureur général), 1995 CarswellQue 119, 1995 CarswellQue 119F, (sub nom. RJR-MacDonald Inc. v. Canada (Attorney General)) [1995] 3 S.C.R. 199 (S.C.C.) at para. 32, citing Schneider v. British Columbia, 1982 CarswellBC 241, 1982 CarswellBC 741, [1982] 2 S.C.R. 112 (S.C.C.) at 141-142 [S.C.R.].
[23] Bradley Henderson et al, supra note 2 at 493.
[24] Ibid, at 494.
[25] Food and Drugs Act, RSC 1985, c F-27 [FDA]; Medical Devices Regulations, SOR /98-282 [MDR].
[26] Bradley Henderson et al, supra note 2 at 495.
[27] Ibid; Jesse Cnockaert, “Medical professionals’ discomfort with digital tech is stumbling block to spread of AI in health care, say experts” (30 March 2022), online: The Hill Times <https://www.hilltimes.com/2022/03/30/medical-professionals-discomfort-with-digital-tech-is-stumbling-block-to-spread-of-ai-in-health-care-say-experts/353132>.
[28] See Health Canada, “Good Machine Learning Practice for Medical Device Development: Guiding Principles” (27 October 2021), online. Parallel bulletins were published by the U.K. MHRA and the U.S. FDA.
[29] Ibid.
Join BLS as we bring groundbreaking entrepreneurs and professionals together to discuss the creativity that inspires them and makes businesses flourish.