A Unified Framework of Five Principles for AI in Society

A new short paper by Luciano Floridi and me has been published, open access, in the inaugural issue of the Harvard Data Science Review.

Abstract:

Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.

From #MAGA to @AOC: Reflections on Radical Media in the Trump Era

I presented this paper, co-authored with Katie Arthur, at MIT’s Media in Transition conference in May 2019.

Abstract:

That social media both “giveth and taketh away” is not a new idea, but it is one that came to the fore in the tumultuous year of 2016. As the events of that year showed, while technological advances have afforded new space for radical media strategies—helping advance goals such as climate justice—so too have they created opportunities for political candidates from outside the mainstream to leverage populist resentment in the successful pursuit of political power. In this paper, we will explore how the use of civic media has evolved in the two years since our CMS master’s theses were submitted. While Donald Trump has, as President, consolidated his hold on mainstream media attention via his Twitter account, other voices have also emerged from the very different tradition of civic organising to share space on the “platform” of Twitter. Among the most prominent of these new voices is Alexandria Ocasio-Cortez, whose political experience as an organizer for the Bernie Sanders campaign and as a supporter of marginalised communities such as the residents of Standing Rock helped propel her to the U.S. House of Representatives as the youngest woman ever elected to Congress. In the paper we will explore Ocasio-Cortez’s rise, with a focus on her visibility on social media. As we will show, the rapid rise of “AOC” holds lessons both for the prospects of the “Green New Deal” policy she has trumpeted and for whichever Democratic candidate is nominated to challenge Donald Trump in 2020.

AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

I am a co-author on a new paper, published open access in Minds and Machines.

This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

Prolegomena to a White Paper on an Ethical Framework for a Good AI Society

Luciano Floridi and I have released a new paper on SSRN:

Prolegomena to a White Paper on an Ethical Framework for a Good AI Society.

The paper discusses the opportunities and challenges of AI for society and reports the results of a meta-analysis, which found that five principles – beneficence, non-maleficence, autonomy, justice, and explicability – undergird the emerging ethics of AI as expressed by leading multistakeholder organisations.