WHO issues guidelines for generative AI in healthcare

WHO has suggested guidelines for governments and developers on the ethical use of LMMs in healthcare to mitigate the associated risks

Image: WHO makes recommendations for the ethical use of AI in healthcare (@WHO on X)

The World Health Organization (WHO) has released a set of recommendations for consideration by governments, technology companies, and healthcare providers to ensure the ethical use of generative AI in healthcare.

“Generative AI technologies have the potential to improve health care but only if those who develop, regulate, and use these technologies identify and fully account for the associated risks,” said Dr Jeremy Farrar, WHO Chief Scientist, in an official statement.

“We need transparent information and policies to manage the design, development, and use of LMMs to achieve better health outcomes and overcome persisting health inequities,” he added.

The new guidance focuses on large multi-modal models (LMMs) – a fast-growing type of generative AI technology with applications across healthcare. LMMs accept multiple kinds of data input, such as text, videos, and images, and generate outputs that are not limited to the type of data entered. LMMs can mimic human communication and can carry out tasks that they were not explicitly programmed to perform.

The WHO guidance outlines broad applications of LMMs for health, including diagnosis and clinical care, such as responding to patients’ written queries, as well as clerical and administrative tasks.

There are several documented risks of LMMs producing false, inaccurate, biased, or incomplete statements. The models are also vulnerable to cybersecurity risks that could expose patient information.

To create safe and effective LMMs, WHO has underlined the need for engagement of various stakeholders, including healthcare providers, patients, and civil society, in all stages of development and deployment of such technologies, including oversight and regulation.

“Governments from all countries must cooperatively lead efforts to effectively regulate the development and use of AI technologies, such as LMMs,” said Dr Alain Labrique, WHO director for Digital Health and Innovation in the Science Division.

The WHO suggested keeping all direct and indirect stakeholders engaged from the early stages of AI development, thereby creating space to raise ethical issues, voice concerns, and provide input on the AI application.

For governments, the WHO suggested laws and policies to ensure that LMMs used in healthcare and medicine meet ethical obligations and human rights standards protecting a person’s dignity, autonomy, and privacy. The body also recommended mandatory post-release auditing and impact assessments by independent third parties when an LMM is deployed at large scale.
