Digital phenotyping and sensitive health data: Implications for data governance

Mobile and wearable devices, such as smartwatches and fitness trackers, increasingly enable the continuous collection of physiological and behavioral data that permit inferences about users’ physical and mental health. Growing consumer adoption of these technologies has reduced the cost of generating clinically meaningful data, which can lower the cost of medical research and support large-scale studies. However, the collection, processing, and storage of these data come with significant ethical, security, and data governance considerations. Here, we use the emerging concept of “digital phenotyping” to draw key lessons for data governance from parallels with the history of genomics research, while highlighting areas in which digital phenotyping will require novel governance frameworks.

A new paper in the Journal of the American Medical Informatics Association by Ignacio Perez-Pozuelo, Dimitris Spathis, Jordan Gifford-Moore, Jessica Morley and myself has just been published.

The ethics of algorithms: key problems and solutions

Research on the ethics of algorithms has grown substantially over the past decade. This article builds on a review of the ethics of algorithms published in 2016 … to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.

A new paper written by Andreas Tsamados, Nikita Aggarwal, myself, Jessica Morley, Huw Roberts, Mariarosaria Taddeo and Luciano Floridi has recently been published in AI & Society.

A definition, benchmark and database of AI for social good initiatives

Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG.

A new “Perspective” paper that I wrote with Andreas Tsamados, Mariarosaria Taddeo and Luciano Floridi has recently been published in Nature Machine Intelligence.

Public Health in the Information Age: Recognizing the Infosphere as a Social Determinant of Health

Since 2016, social media companies and news providers have come under pressure to tackle the spread of political mis- and disinformation (MDI) online. However, despite evidence that online health MDI (on the web, on social media, and within mobile apps) also has negative real-world effects, there has been a lack of comparable action by either online service providers or state-sponsored public health bodies. We argue that this is problematic and seek to answer three questions: why has so little been done to control the flow of, and exposure to, health MDI online; how might more robust action be justified; and what specific, newly justified actions are needed to curb the flow of, and exposure to, online health MDI?

A new paper written by Jessica Morley, myself, Mariarosaria Taddeo and Luciano Floridi has now been published in the Journal of Medical Internet Research.

The Ethics of AI in Health Care: A Mapping Review

This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The review aims to summarise current debates and identify open questions for future research, and to inform policymakers, regulators and developers of what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and increase the efficiency of health and care systems, whilst proactively avoiding the potential harms.

I am a co-author on a new paper written with Jessica Morley, Caio Machado, Chris Burr, Indra Joshi, Mariarosaria Taddeo and Luciano Floridi, now published in Social Science & Medicine.

The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation

In this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.

A new paper by Huw Roberts, myself, Jessica Morley, Vincent Wang, Mariarosaria Taddeo and Luciano Floridi has been published in AI & Society.

How to Design AI for Social Good: Seven Essential Factors

A new paper I co-authored with Luciano Floridi, Thomas C. King and Mariarosaria Taddeo has been published (open access) in Science and Engineering Ethics.

Abstract:

The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.

Deciding How to Decide: Six Key Questions for Reducing AI’s Democratic Deficit

I contributed a chapter to the 2019 Yearbook of the Digital Ethics Lab, which has just been published.

Abstract:

Through its power to “rationalise”, artificial intelligence (AI) is rapidly changing the relationship between people and the state. But to echo Max Weber’s warnings from one hundred years ago about the increasingly rational bureaucratic state, the “reducing” power of AI systems seems to pose a threat to democracy—unless such systems are developed with public preferences, perspectives and priorities in mind. In other words, we must move beyond minimal legal compliance and faith in free markets to consider public opinion as constitutive of legitimising the use of AI in society. In this chapter I pose six questions regarding how public opinion about AI ought to be sought: what we should ask the public about AI; how we should ask; where and when we should ask; why we should ask; and who is the “we” doing the asking. I conclude by contending that while the messiness of politics may preclude clear answers about the use of AI, this is preferable to the “coolly rational” yet democratically deficient AI systems of today.

The Silicon Cage: “Legitimate” governance 100 years after Weber

I presented this paper at the Data Power conference at the University of Bremen in September 2019.

Abstract:

In “Politics as a Vocation”, the lecture that he gave one hundred years ago, Max Weber offered what would become one of his most influential ideas: that a state is that which “claims the monopoly of the legitimate use of physical force within a given territory”. Such use of violence, Weber argued, is legitimated in one of three distinct ways: by “tradition”, by “charisma”, or by the “virtue of ‘legality’ … the belief in the validity of legal statute … based on rationally created rules”.

In this centennial year of Weber’s lecture, much has been made of Weber’s prescience regarding modern-day charismatic demagogues. Yet it is in the conceptualisation of “legal-rational” legitimacy that greater purchase may be found when we grapple with the use of data and algorithms in contemporary society. As I will argue, the “iron cage” that Weber identified, which serves to constrain human freedom through the coercive combination of efficiency and calculation, has been supplanted. Today, we instead occupy what might be called a “silicon cage”, resulting from a step change in the nature and extent of calculation and prediction relating to people’s activities and intentions.

Moreover, while the bureaucratisation that Weber described was already entwined with a capitalist logic, the silicon cage of today has emerged from an even firmer embedding of the tools, practices and ideologies of capitalist enterprise in the rules-based (we might say algorithmic) governance of everyday life. Alternative arrangements present themselves, however, in the form of both “agonistic” and “cooperative” democracy.

A Unified Framework of Five Principles for AI in Society

A new short paper by Luciano Floridi and me has been published, open access, in the inaugural issue of the Harvard Data Science Review.

Abstract:

Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.