WHO Issues New Diagnostics List and Guide for Regulating Artificial Intelligence
A man with diabetes checking his blood sugar level with a glucometer

World Health Organization (WHO) member states should include personal-use glucose monitoring devices in their essential in vitro diagnostics (IVD) lists to help people with diabetes, according to the global body’s 2023 Essential Diagnostics List (EDL) released this week.

Diabetes caused 1.5 million deaths in 2019, and including personal glucose testing devices “could lead to better disease management and reduced negative outcomes”, said the WHO.

Another first for the list is the inclusion of three tests for hepatitis E virus (HEV), including a rapid test to aid in the diagnosis and surveillance of HEV infection, an under-reported disease which causes acute liver failure in a small number of people.

The list offers guidance rather than being prescriptive, with the aim of increasing patients’ access to diagnostics and improving outcomes.

“The WHO Essential Diagnostics List is a critical tool that gives countries evidence-based recommendations to guide local decisions to ensure the most important and reliable diagnostics are available to health workers and patients,” said Dr Tedros Adhanom Ghebreyesus, WHO Director-General. 

Other new tests added to the list include those for endocrine disorders; reproductive, maternal and newborn health; and cardiovascular health.

The recent World Health Assembly resolution on strengthening diagnostics capacity urges member states to consider developing national essential diagnostics lists, adapting the WHO model list of essential in vitro diagnostics.

Regulatory considerations for AI

The WHO also raised issues for consideration when regulating artificial intelligence (AI) for health this week.

With the increasing availability of health care data and the rapid progress in analytic techniques – whether machine learning, logic-based or statistical – AI tools could transform the health sector. 

WHO emphasizes the importance of establishing AI systems’ safety and effectiveness, rapidly making appropriate systems available to those who need them, and fostering dialogue among stakeholders, including developers, regulators, manufacturers, health workers, and patients.

“Artificial intelligence holds great promise for health, but also comes with serious challenges, including unethical data collection, cybersecurity threats and amplifying biases or misinformation,” said Dr Tedros.

“This new guidance will support countries to regulate AI effectively, to harness its potential, whether in treating cancer or detecting tuberculosis, while minimising the risks.” 

In response to countries’ growing need to responsibly manage the rapid rise of AI health technologies, the publication outlines six areas for the regulation of AI for health:

  • transparency and documentation, such as documenting the entire product lifecycle and tracking development processes
  • risk management, including addressing human interventions, training models and cybersecurity threats
  • externally validating data and being clear about the intended use of AI
  • commitment to data quality
  • understanding the scope of jurisdiction and consent requirements, in service of privacy and data protection
  • fostering collaboration among regulatory bodies, patients, healthcare professionals, industry representatives and government partners


Image Credits: Dischem.
