Tackling Bias, Inequality, Lack of Privacy – New WHO Guidelines on AI Ethics and Governance are Released

Digital Health 19/01/2024 • Zuzanna Stawiska

WHO has released a new set of guidelines on the ethics and governance of artificial intelligence (AI) in large multi-modal models (LMMs), a type of generative AI frequently used in healthcare. The guidelines include 40 recommendations for governments as well as other actors, such as technology companies and healthcare providers. Building on WHO's 2021 guidelines for responsible AI use, the new document takes into account the latest technological advances and the challenges they bring.

"We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities," said Dr Jeremy Farrar, WHO's Chief Scientist.

LMMs – such as ChatGPT – can produce various types of outputs, independent of the type of training data fed into the system. This type of machine learning is unique in that it can mimic human communication and perform more innovative tasks beyond those it was explicitly programmed for.

Advanced technologies offer new opportunities but also risk amplifying existing problems: discrimination and bias, inequalities in access, lack of privacy, and automation bias – placing too much confidence in machines – said Farrar at a WHO press conference launching the guidelines on Thursday.

AI is increasingly used in the health sector for many diverse purposes – from drug development to patient diagnosis, as well as data management and administration. In its guidelines, WHO also outlined expanding applications, such as self-guided diagnosis and treatment as well as medical and nursing education.
WHO Bangladesh Office data analysts in the control room, where dengue-related data is monitored and stored.

Diagnosis is a field where LMM use holds the promise of substantial improvement. Models are used to detect various conditions – from tuberculosis, through reproductive and mental health, to several types of cancer.

Like any new technology, LMMs carry risks if used inappropriately. Yet, stresses Farrar, "we should not be scared of but rather responsible towards new technology."

'I wanted to ask an LMM to write the opening remarks – but is that ethical?'

At a WHO-organised webinar on Friday, leading WHO and external experts delved deeper into the uses, threats and benefits of generative AI in healthcare. With this rapidly developing technology, new possibilities can be both promising and unpredictable, panelists stressed.

"I wanted to ask an LMM to write those [opening] remarks for me, but then I wondered if it's ethical," joked Alain Labrique of WHO's Digital Health & Innovation division.

Because of LMMs' complexity, the threats associated with other AI types are even more salient – including the risk of data biases. "From the Global South perspective, diversity is crucial, especially to ensure data is adequately representative," remarked Keymanthri Moodley of Stellenbosch University in South Africa.

"📢 WHO launches guidance for Large Multi-Modal Models (LMMs) – technologies like ChatGPT, Bard and BERT – to shape the future of #ArtificialIntelligence in healthcare. Check out WHO's latest guidance, which introduces five impactful applications 👉 https://t.co/mK6WVMecsB" — World Health Organization (WHO) (@WHO), January 19, 2024

Another concern is data privacy and cybersecurity threats to health systems relying more and more on LMMs. "We need to ensure adequate data collection, storage and sharing regulations.
It is crucial to ensure patients' safety," said Moodley.

Limits of accuracy and reliability

The models' outputs also still tend to have limited accuracy and reliability. As most resources in the AI field are in the hands of for-profit enterprises, the models' predictions can be skewed towards solutions that benefit their designers. Despite those pitfalls, LMM usage also carries the risk of over-trusting the machine's recommendations. Good, reliable AI can also turn out to be inaccessible to many healthcare systems, reinforcing existing inequalities.

To mitigate these risks, the guidelines propose policies and good practices to ensure responsible LMM use. The authors stress the importance of including all relevant actors from the design phase onwards, focusing on the product's transparency and inclusiveness, and enabling stakeholders to voice concerns.

Key recommendations for governments and developers in the second phase of AI deployment

The new WHO guidelines encourage governments to audit and monitor LMM usage and to ensure that reliability and accuracy standards are met. The models must also be checked for compliance with national and international law in cases that affect, for instance, a person's dignity, autonomy or privacy.

"Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs," said Labrique.

Image Credits: WHO, WHO/Fabeha Monir, WHO.