Artificial Intelligence ‘Very Promising’ for Health, Says WHO

Artificial intelligence (AI) has the potential to strengthen the delivery of healthcare and move the world closer to universal health coverage, but ethical considerations and human rights must be central to the design, development, and deployment of AI technologies, according to a new report released on Monday.

The World Health Organization’s (WHO) Ethics and Governance of Artificial Intelligence for Health report, the world’s first global report on the use of AI in health, is the result of two years of consultations conducted by a panel of 20 international interdisciplinary experts in ethics, digital technology, law, human rights, and health.

“Like many new technologies, artificial intelligence holds enormous potential for improving health,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General, at the launch of the report on Monday. “This important new report provides a valuable guide for countries on how to maximize the benefits of AI, while minimizing its risks and avoiding its pitfalls.” 

“I hope this report will help countries to harness the power of artificial intelligence while minimizing the risks, for a healthier, safer, and fairer future,” Tedros added. 

AI refers to the ability of algorithms encoded in technology to learn from data in order to perform automated tasks. It is a rapidly expanding area of science that is being applied across numerous disciplines. 

AI is “poised to strengthen healthcare, health research, drug development, improved diagnosis of infectious diseases, including COVID, as we are now seeing, and public health surveillance,” said Professor Partha Majumder, co-chair of the WHO Expert Group on Ethics and Governance of AI for Health and founder of the National Institute of Biomedical Genomics in India.

The COVID-19 pandemic accelerated the willingness to use and invest in innovations, including AI, to address disease outbreaks and curb their spread. 

“The key lesson from the pandemic is the important role technology plays in surveillance, disease detection, and treatment,” said Dr E. Osagie Ehanire, Nigeria’s Minister of Health. “[The pandemic] also highlights the potentially enormous value of digital health in improving care and outcomes.”

As the innovation and development of AI continue, the technology could allow medical providers to make faster and more accurate diagnoses, enhancing the capabilities of health systems. 

The future of public health will increasingly become digital, with the development of technologies that “bring both promise and opportunities, but also challenges and ethical questions,” said Dr Soumya Swaminathan, WHO Chief Scientist. 

Applications of AI in Health 

In high-income countries, the use of AI has already begun to transform health systems through the prevention, diagnosis, and treatment of diseases.

Currently, AI is being used for image-based diagnosis in oncology, including in colonoscopy, mammography, and brain imaging. In addition, AI algorithms based on RNA and DNA sequence data are used to guide immunotherapy in cancer treatment.

AI technologies are also being piloted for the detection, management, treatment, and care of patients with tuberculosis (TB) and of those living in areas with a high burden of TB. 

According to the report, predictive AI systems using imaging technology during labor have been able to identify newborns at risk of birth asphyxia, a condition in which a baby does not get enough oxygen before or during birth. 

In Singapore, a national programme was established in 2017 to develop and support the country’s AI ecosystem, focusing on healthcare innovation. AI-driven solutions are being used to address high cholesterol, high blood pressure, and diabetes, which are prevalent in Singapore. 

Predictive modelling is used to identify those at the highest risk of developing chronic diseases for early intervention programs. The goal in using AI is to slow the progression of diseases, reduce complications in patients, and lower healthcare costs. 

Low- and middle-income countries (LMICs) have the most to gain from the transformation of health systems that AI could bring, as the technology could fill gaps in healthcare delivery and services. 

Numerous LMICs face chronic shortages of health workers, a high burden of disease, and large underserved populations. Where health specialists are scarce, AI could assist healthcare workers with diagnostics and speed up the analysis of X-rays and pathology slides. 

A pilot programme of AI-based tools to screen for cervical cancer is underway in India, Kenya, Malawi, Rwanda, South Africa, and Zambia. LMICs could also use AI to manage HIV antiretroviral therapy by predicting drug resistance and helping health workers to optimize the therapy, according to WHO’s report.

Ethical Challenges of Using AI in Health Systems

While AI tools and technologies will likely play an important role in improving patient outcomes, strengthening health systems, and driving progress towards universal health coverage, several ethical challenges could emerge. 

“In as much as AI offers enormous advantages to healthcare delivery systems, there remain significant challenges and gaps in the adoption, scale up and integration into health systems,” said Dr Ehanire. 

“Like all new technology, artificial intelligence holds enormous potential for improving the health of millions of people around the world, but like all technology, it can also be misused and cause harm,” said Dr Tedros. 

“Artificial intelligence raises potential ethical concerns, including equitable access to technologies, data protection, and liability,” he added. 

The use of limited, low-quality, or non-representative data in AI could deepen disparities in health as predictive algorithms based on inadequate data could result in racial or ethnic bias. 

Biases based on race, ethnicity, age, or gender that are encoded into AI algorithms can be detrimental to the equitable provision of and access to healthcare services. Many data sets used to train AI models exclude women, ethnic minorities, older people, rural communities, and disadvantaged groups. 

Discrimination that already exists in health systems can be captured and reproduced by machine-learning models, and their recommendations may be inaccurate for populations excluded from the training data. 

“Machine learning technologies have been shown to harm our right to equality and non-discrimination,” said Agnès Callamard, Secretary General of Amnesty International. “There is a substantive and growing body of evidence showing that these machine learning systems have discriminatory impacts and contribute to discriminatory practices.”

Potential to Exacerbate Disparities

The quality and availability of data may not be adequate in LMICs, resulting in algorithms that perform inaccurately. 

In addition, it is unclear whether AI trained for use in one context can be used accurately and safely in another geographical region. 

Investments will be needed to improve the collection of data in resource-poor settings and to ensure sufficient data on vulnerable and marginalized populations. 

If AI technologies are not deployed carefully, they could exacerbate disparities in healthcare, lead to the over-medicalization of individuals, and cause stress and stigmatization for individuals or communities, according to the report. 

AI could also exacerbate the existing digital divide, the uneven distribution of access to or use of information and communication technologies such as broadband or smartphones, raising further issues of equity and access. 

Some 1.2 billion women in LMICs do not use or lack access to mobile internet services, and in many countries the infrastructure needed to operate digital technologies may be limited. 

Deploying AI could further marginalize people who already lack access to health services, leaving them behind by healthcare systems. 

Another major ethical issue is cybersecurity and data protection. AI technologies, which hold patient health data, could be the target of malicious attacks, putting individuals’ privacy at risk.

The involvement of the private sector in designing AI systems raises concerns about where data comes from, how it is stored, how it is used, and who has access to it. 

To address the ethical issues that arise from the use of AI, transparency must be prioritized, with independent oversight and public participation in the design and use of AI in healthcare, said experts at a WHO briefing on Monday.  

AI systems must be designed to reflect the socio-economic and racial diversity of the relevant healthcare setting, and their deployment must be accompanied by digital literacy training for healthcare workers. 

Principles and Recommendations for the Use of AI

In an effort to limit the risks and maximize the benefits of AI systems, the expert group developed six principles as a basis for AI governance in the domain of health:

  • Protecting human autonomy;
  • Promoting human wellbeing and safety and the public interest;
  • Ensuring transparency, explainability, and intelligibility;
  • Fostering responsibility and accountability;
  • Ensuring inclusiveness and equity; and
  • Promoting AI that is responsive and sustainable.

The report detailed 47 recommendations to a range of stakeholders to encourage the ethical and transparent design of AI technologies to enhance clinical decision making, mitigate workforce shortages, and increase efficiencies in health service delivery. 

“The need for international comprehensive guidance on the use of artificial intelligence for health, in accordance with ethical norms, cannot be overstated,” said Callamard. 

“There needs to be a framework that addresses some of the ethical issues, the legal issues, as well as other societal challenges, including not creating another digital divide,” said Swaminathan.

The recommendations called on the private sector to design AI systems with the accuracy to improve the capacity of health systems; governments to require the use of impact assessments of AI technologies; companies to adhere to national and international regulations on the development and use of AI for health systems; and governments to support the global governance of AI for health. 

“To harness the promise of artificial intelligence for health, human rights cannot be an afterthought,” said Callamard.

“Success is only possible if we collectively and deliberately place ethics and human rights at the center of the design, deployment, and use of AI technologies for health,” said Dr John Reeder, Director of WHO’s TDR, the Special Programme for Research and Training in Tropical Diseases. 

The report was created as a living document, with the opportunity to update it as research emerges on AI and as the field evolves. 

In the coming weeks and months, WHO will focus on developing an implementation plan for the report, holding mission briefings for member states to advise them on the enactment of the recommendations. 

“We should all work together so that artificial intelligence for health becomes a panacea for most of the world and…[it] can be used to meaningfully make universal health coverage a reality,” said Majumder. 
