Ethics and Governance of Artificial Intelligence
This page serves as a resource for articles related to the ethical aspects of AI. The main aim is to establish a "go-to" place for pertinent literature.
If you would like to contribute, please email us at digipathalliance@gmail.com
The WHO published its guidance on Ethics and Governance of Artificial Intelligence for Health on June 28th, 2021.
"While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research and drug development and to support governments carrying out public health functions, including surveillance and outbreak response, such technologies … must put ethics and human rights at the heart of its design, deployment and use."
The 150-page report covers numerous aspects of the topic and sets out six key principles that should govern AI for health.
The principles are:
Protecting human autonomy
Promoting human well-being and safety and the public interest
Ensuring transparency, explainability and intelligibility
Fostering responsibility and accountability
Ensuring inclusiveness and equity
Promoting AI that is responsive and sustainable
The WHO guidance on Ethics & Governance of Artificial Intelligence for Health is the product of eighteen months of deliberation amongst leading experts in ethics, digital technology, law, and human rights, as well as experts from Ministries of Health. While new technologies that use artificial intelligence hold great promise to improve diagnosis, treatment, health research and drug development and to support governments carrying out public health functions, including surveillance and outbreak response, such technologies, according to the report, must put ethics and human rights at the heart of their design, deployment, and use.
The report identifies the ethical challenges and risks associated with the use of artificial intelligence in health and sets out six consensus principles to ensure AI works to the public benefit of all countries. It also contains a set of recommendations to ensure that the governance of artificial intelligence for health maximizes the promise of the technology and holds all stakeholders – in the public and private sector – accountable and responsive to the healthcare workers who will rely on these technologies and to the communities and individuals whose health will be affected by their use.
Link to WHO guidance.
Pertinent Literature
The Ethics of Artificial Intelligence in Pathology and Laboratory Medicine: Principles and Practice
Jackson et al. provide an overview of various ethical principles relevant to AI in pathology.
The group also emphasizes the following aspects as the ethical foundation:
Stewardship of patient data
Development of software applications
Validation of applications for clinical use
Scientific study and publication of AI applications
Development of institutional policies and processes
Management of external business relationships
The work is notable because the group also provides an overview of mechanisms to assure adherence at the organizational level.
Ethics of Artificial Intelligence in Radiology: Summary of the Joint European and North American Multi-society Statement
Geis et al. present a multi-society statement outlining ethical issues of AI in radiology.
Take-Home Points
Ethical use of AI in radiology should promote wellbeing, minimize harm, and ensure that the benefits and harms are distributed among the possible stakeholders in a just manner.
AI in radiology should be appropriately transparent and highly dependable, curtail bias in decision making, and ensure that responsibility and accountability remain with human designers or operators.
The radiology community should start now to develop codes of ethics and practice for AI.
Radiologists will remain ultimately responsible for patient care and will need to acquire new skills to do their best for patients in the new AI ecosystem.
Ethics of AI in Pathology: Current Paradigms and Emerging Issues
Chauhan & Gullapalli, 2021
Deep learning has rapidly advanced artificial intelligence (AI) and algorithmic decision-making (ADM) paradigms, affecting many traditional fields of medicine, including pathology, which is a heavily data-centric specialty of medicine. The structured nature of pathology data repositories makes it highly attractive to AI researchers to train deep learning models to improve health care delivery. Additionally, there are enormous financial incentives driving adoption of AI and ADM due to promise of increased efficiency of the health care delivery process. AI, if used unethically, may exacerbate existing inequities of health care, especially if not implemented correctly. There is an urgent need to harness the vast power of AI in an ethically and morally justifiable manner. This review explores the key issues involving AI ethics in pathology. Issues related to ethical design of pathology AI studies and the potential risks associated with implementation of AI and ADM within the pathology workflow are discussed. Three key foundational principles of ethical AI: transparency, accountability, and governance, are described in the context of pathology. The future practice of pathology must be guided by these principles. Pathologists should be aware of the potential of AI to deliver superlative health care and the ethical pitfalls associated with it. Finally, pathologists must have a seat at the table to drive future implementation of ethical AI in the practice of pathology. (Am J Pathol 2021)