
Welcome to my website.

I’m a doctoral researcher based at the Oxford Internet Institute, exploring the ethical and political impact of data and AI on society, with specific research interests in AI ethics, the use of AI for social good, social media as a public sphere, and the use of algorithms to deal with online hate speech. My past research has included work on digital politics, big and open data, digital state surveillance, and the use of web archives in research.

I’m also a Research Associate at the Alan Turing Institute, where, as part of the public policy programme, I focus on translating insights from academia into effective policy frameworks for the ethical use of data science and AI by governments.

My other engagements include serving as Convenor of the Turing’s Ethics Advisory Group, as a member of the Institute’s Data Ethics Group, and as a member of the Ethics Committee of Digital Catapult’s Machine Intelligence Garage, where I work with start-ups to develop action plans for ensuring the ethical use of machine learning technology. Finally, I appear regularly on Monocle 24’s morning show The Globalist to discuss politics and technology, and host AlgoRhythms, a weekly tech-focused podcast.

Less often than I’d like, I also write about food, sport, and films.

This website is a semi-regularly updated repository of my research, writing, project experience and other information. You can find me in other forms and formats on Twitter, Medium, LinkedIn and Google Scholar.

The Chinese approach to artificial intelligence: an analysis of policy, ethics, and regulation


In this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.

A new paper by Huw Roberts, myself, Jess Morley, Vincent Wang, Mariarosaria Taddeo and Luciano Floridi has been published in AI & Society.

Ethical guidelines for COVID-19 tracing apps

Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable. These questions could assist governments, public-health agencies and providers [and] will also help watchdogs and others to scrutinize such technologies.

A comment piece by my colleagues Jessica Morley, Mariarosaria Taddeo and Luciano Floridi and myself was recently published in Nature.

How to Design AI for Social Good: Seven Essential Factors

A new paper I co-authored with Luciano Floridi, Thomas C. King and Mariarosaria Taddeo has been published (open access) in Science and Engineering Ethics.

Abstract:

The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.

Deciding How to Decide: Six Key Questions for Reducing AI’s Democratic Deficit

I contributed a chapter to the 2019 Yearbook of the Digital Ethics Lab, which has just been published.

Abstract:

Through its power to “rationalise”, artificial intelligence (AI) is rapidly changing the relationship between people and the state. But to echo Max Weber’s warnings from one hundred years ago about the increasingly rational bureaucratic state, the “reducing” power of AI systems seems to pose a threat to democracy—unless such systems are developed with public preferences, perspectives and priorities in mind. In other words, we must move beyond minimal legal compliance and faith in free markets to consider public opinion as constitutive of legitimising the use of AI in society. In this chapter I pose six questions regarding how public opinion about AI ought to be sought: what we should ask the public about AI; how we should ask; where and when we should ask; why we should ask; and who is the “we” doing the asking. I conclude by contending that while the messiness of politics may preclude clear answers about the use of AI, this is preferable to the “coolly rational” yet democratically deficient AI systems of today.

Launching AlgoRhythms, a new podcast

I’ve launched a podcast! Co-presented with fellow Oxford Internet Institute PhD student Nayana Prakash, AlgoRhythms is a weekly show covering tech news and research, broadcast on Oxford University’s student radio station Oxide and released as a podcast. Each week, we cover the latest stories in technology and interview fellow researchers about their work.

You can find us on Apple Podcasts, Spotify and Overcast, and follow us on Twitter @OxAlgoRhythms.