AI, human rights, democracy and the rule of law: A primer prepared for the Council of Europe

The purpose of this primer, co-produced by The Alan Turing Institute and the Council of Europe, is to introduce the main concepts and principles presented in the Council of Europe’s Ad Hoc Committee on Artificial Intelligence Feasibility Study for a general, non-technical audience.

I am a co-author, with David Leslie, Chris Burr, Mhairi Aitken, Mike Katell, and Morgan Briggs, of a primer produced for the Council of Europe. The document sets out, for a general audience, the ethical and political considerations that should inform a potential legal framework for the design, development and deployment of AI systems, with a focus on safeguarding human rights, democracy and the rule of law.

The ethics of algorithms: key problems and solutions

Research on the ethics of algorithms has grown substantially over the past decade. This article builds on a review of the ethics of algorithms published in 2016 … to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.

A new paper written by Andreas Tsamados, Nikita Aggarwal, myself, Jessica Morley, Huw Roberts, Mariarosaria Taddeo and Luciano Floridi has recently been published in AI and Society.

The Ethics of AI in Health Care: a Mapping Review

This article presents a mapping review of the literature on the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. We also aim to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms.

I am a co-author on a new paper written with Jessica Morley, Caio Machado, Chris Burr, Indra Joshi, Rosaria Taddeo and Luciano Floridi, now published in Social Science and Medicine.

The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation
In this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.

A new paper by Huw Roberts, myself, Jess Morley, Vincent Wang, Rosaria Taddeo and Luciano Floridi has been published in AI & Society.

A Unified Framework of Five Principles for AI in Society

A new short paper by Luciano Floridi and me has been published, open access, in the inaugural issue of the Harvard Data Science Review.

Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
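To make the idea of convergence concrete: the paper's analysis was qualitative, but the degree of overlap between two documents' principle sets could be sketched computationally. The mappings below are hypothetical examples, not the paper's actual data; only the five principle names come from the paper.

```python
# Illustrative sketch only: hypothetical mappings of two AI ethics documents
# onto the five core principles identified in the paper, with overlap
# measured as Jaccard similarity (|intersection| / |union|).

FIVE_PRINCIPLES = {"beneficence", "non-maleficence", "autonomy",
                   "justice", "explicability"}

# Hypothetical principle sets for two documents (not real data)
doc_a = {"beneficence", "non-maleficence", "justice", "explicability"}
doc_b = {"beneficence", "autonomy", "justice", "explicability"}

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two sets: size of intersection over union."""
    return len(a & b) / len(a | b)

overlap = jaccard(doc_a, doc_b)
print(f"Overlap between the two documents: {overlap:.2f}")  # 3 shared / 5 total = 0.60
```

A high average pairwise score across many documents would correspond to the kind of convergence the paper reports; this toy metric is just one plausible way to operationalise it.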