According to the World Health Organization (WHO), artificial intelligence (AI) holds great promise for improving the delivery of healthcare and medicine worldwide, but only if ethics and human rights are prioritized in its design, deployment, and use. The WHO report, Ethics and governance of artificial intelligence for health, is the outcome of two years of deliberations by a WHO-appointed group of international experts.
“As with any new technology, artificial intelligence has immense potential to improve the health of millions of people around the globe, but like any technology, it may also be exploited and cause harm,” said Dr. Tedros Adhanom Ghebreyesus, Director-General of the World Health Organization. “This significant new report offers countries a valuable roadmap on how to harness the benefits of AI while limiting its risks and avoiding its pitfalls.”
Artificial intelligence can be, and already is being, used in some wealthy countries to improve the speed and accuracy of disease diagnosis and screening; to assist with clinical care; to strengthen health research and drug development; and to support a variety of public health interventions, such as disease surveillance, outbreak response, and health system management.
AI also has the potential to enable individuals to take greater control of their own health care and better understand their changing needs. In resource-poor countries and rural regions, where patients often have limited access to healthcare workers or medical professionals, it could help bridge gaps in access to health services.
However, the World Health Organization’s latest report, issued on June 28, warns against overestimating the benefits of AI for health, especially when it comes at the expense of essential investments and policies required to attain universal health care. It also mentions obstacles and hazards, such as unethical gathering and use of health data, biases inherent in algorithms, and the risks of AI to patient safety, cybersecurity, and the environment.
While private and public sector investment in AI development and deployment is vital, unchecked use of AI risks subordinating the rights and interests of patients and communities to the financial objectives of technology corporations or to government interests in surveillance and social control. The report also notes that systems trained primarily on data collected from individuals in high-income countries may underperform when applied to individuals in low- and middle-income countries.
As a result, AI systems must be carefully developed to reflect the diversity of socioeconomic and healthcare situations. They should be accompanied by digital skills training, community engagement, and awareness-raising, particularly for the millions of healthcare workers who will need digital literacy or retraining if their roles and functions are automated, and who will have to contend with machines that may challenge the decision-making and autonomy of providers and patients.
WHO recommends the following principles as the foundation for AI regulation and governance to reduce the risks and maximize the opportunities inherent in the use of AI for health:
- Protect human autonomy. In the context of health care, this means that humans should retain control of healthcare systems and medical decisions; privacy and confidentiality should be protected; and patients must give valid informed consent through suitable legal frameworks for data protection.
- Promote human well-being, safety, and the public interest. Designers of AI technologies must meet regulatory requirements for safety, accuracy, and efficacy for well-defined use cases or indications, and measures for quality control in practice and quality improvement in the use of AI must be available.
- Ensure transparency, explainability, and intelligibility. Transparency requires that sufficient information be published or documented before an AI technology is designed or deployed. Such information must be easily accessible to permit meaningful public consultation and debate about how the technology is designed and how it should or should not be used.
- Foster responsibility and accountability. Although AI technologies can perform specific tasks, it is the responsibility of stakeholders to ensure that they are used under appropriate conditions and by appropriately trained people. Individuals and groups adversely affected by algorithm-based decisions should have access to effective mechanisms for questioning and redress.
- Ensure inclusiveness and equity. Inclusiveness requires that AI for health be designed to encourage the widest possible equitable use and access, regardless of age, gender, income, race, ethnicity, sexual orientation, ability, or other characteristics protected under human rights law.
- Promote AI that is responsive and sustainable. Designers, developers, and users should continuously and transparently assess AI applications to determine whether they respond adequately and appropriately to expectations and requirements.
AI systems should also be designed to minimize environmental impact and increase energy efficiency. Governments and companies should prepare for anticipated workplace disruptions, including training for healthcare workers to adapt to the use of AI systems and potential job losses due to the use of automated systems.