
Welcome to my website.

I’m a doctoral researcher based at the Oxford Internet Institute, exploring the ethics, politics, and social implications of digital technology. My doctoral research centres on the growing governance role played by operators of major online platforms, and what this means for democracy and online participation. My broader ongoing research interests include the ethics and governance of AI, the deployment of AI in the contexts of health and sustainable development, online political communication, and the applicability of political-theoretical concepts like sovereignty and legitimacy in the digital age. My past research has included work on big data, digital state surveillance, and the use of web archives in research.

My recent engagements have included serving as a Research Associate in the Alan Turing Institute’s public policy programme, as Convenor of the Turing’s Ethics Advisory Group, and as a member of the Ethics Committee of Digital Catapult’s Machine Intelligence Garage.

I am also a frequent commentator on technology and politics for mainstream audiences, contributing to channels such as the BBC World Service and Times Radio. I present a regular round-up of tech news on Monocle 24’s morning show The Globalist, and co-host the (now award-winning) podcast Skeptechs, which focuses on politics and technology stories from around the world. Less often than I’d like, I also write about food, sport, and films.

This website is a semi-regularly updated repository of my academic research, writing, presentations, and media appearances. You can find me in other forms and formats on Twitter, Medium, LinkedIn and Google Scholar.

The 2020 Yearbook of the Digital Ethics Lab

In the midst of lockdowns and social distancing, the role of digital technology in society has become ever more integral. For many of us, the lived experience of the pandemic would have been strikingly different even a decade ago, before the affordances of the latest information and communication technology, even as digital divides persist within and between societies. Yet as digital technology “fades into the foreground” of everyday life, it is necessary for both scholars and civil society at large to engage ever more robustly with the implications of such shifts.

I am pleased to say that the latest edition of the Digital Ethics Lab Yearbook, which Jessica Morley and I co-authored, is now in print. In our Introduction, we provide an overview of this year’s contributions.

Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US

Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States’ (US) AI strategies and considers (i) the visions of a ‘Good AI Society’ that are put forward in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims, and (iii) the consequences that these differing visions of a ‘Good AI Society’ have for transatlantic cooperation.

A new article by Huw Roberts, myself and colleagues, on conceptions of a “good AI society” for the EU and US, has been published in Science and Engineering Ethics.

The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations

In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by data- and computation-intensive AI systems.

A new article by myself, Andreas Tsamados, Mariarosaria Taddeo and Luciano Floridi, about the potential impacts of AI on climate change, has been published in AI & Society.

Safeguarding European values with digital sovereignty: an analysis of statements and policies

The European Union (EU) has, with increasing frequency, outlined an intention to strengthen its “digital sovereignty” as a basis for safeguarding European values in the digital age. Yet, uncertainty remains as to how the term should be defined, undermining efforts to assess the success of the EU’s digital sovereignty agenda. The task of this paper is to reduce this uncertainty by (i) analysing how digital sovereignty has been discussed by EU institutional actors and placing this in a wider conceptual framework, (ii) mapping specific policy areas and measures that EU institutional actors cite as important for strengthening digital sovereignty, (iii) assessing the effectiveness of current policy measures at strengthening digital sovereignty, and (iv) proposing policy solutions that go above and beyond current measures and address existing gaps.

A new article by Huw Roberts and myself with colleagues has been published in Internet Policy Review, as part of the special issue on Governing “European Values” Inside Data Flows.

New Special Issue of Philosophy & Technology on “AI for Social Good”

Over the past decade, research into artificial intelligence (AI) has emerged from the shadow of a long winter of disregard into a balmy summer of hope and hype. Whilst scholars and advocates have studiously documented the risks and potential harms of deploying AI-based tools and techniques in an array of societal domains, the idea nonetheless persists that the promised power of AI functionally could and ethically should be harnessed for, or at least (re-)oriented towards, ‘socially good’ purposes. The twin aims of this Special Issue, simply stated, are to interrogate the plausibility of this notion and to consider its implications. 

I am pleased to say that the special issue of Philosophy & Technology that I guest edited is now available online. In my introduction I discuss the aims of the issue and provide an overview of the rich array of contributions that it includes.

Artificial intelligence and the climate emergency: Opportunities, challenges, and recommendations

Artificial intelligence (AI) has the potential to play an important role in addressing the climate emergency, but this potential must be set against the environmental costs of developing AI systems. In this commentary, we assess the carbon footprint of AI training processes and offer 14 policy recommendations to reduce it.

A new commentary by Mariarosaria Taddeo, Andreas Tsamados, myself and Luciano Floridi has been published in One Earth, a Cell Press journal.

Skeptechs wins Spotify’s Next Wave award

Skeptechs, the podcast I host with Nayana Prakash, has been selected as one of the winners of Spotify’s Next Wave award from thousands of submissions. The competition set out to identify up-and-coming student podcasters, and the winners are currently featured atop Spotify’s student genre page. It’s great to have been recognised in this way and we’re looking forward to recording and releasing more episodes in the weeks to come.

The ethical debate about the gig economy: A review and critical analysis

The gig economy is a phenomenon that is rapidly expanding, redefining the nature of work and contributing to a significant change in how contemporary economies are organised. Its expansion is not unproblematic. This article provides a clear and systematic analysis of the main ethical challenges caused by the gig economy. 

A new article by Zhi Tang, Nikita Aggarwal, Jess Morley, Mariarosaria Taddeo, Luciano Floridi and myself has been published in Technology in Society.

AI, human rights, democracy and the rule of law: A primer prepared for the Council of Europe

The purpose of this primer, co-produced by The Alan Turing Institute and the Council of Europe, is to introduce the main concepts and principles presented in the Council of Europe’s Ad Hoc Committee on Artificial Intelligence Feasibility Study for a general, non-technical audience.

I am a co-author, with David Leslie, Chris Burr, Mhairi Aitken, Mike Katell, and Morgan Briggs, of a primer produced for the Council of Europe. The document sets out, for a general audience, the ethical and political considerations that should inform a potential legal framework for the design, development and deployment of AI systems, with a focus on safeguarding human rights, democracy and the rule of law.

Constitutional Metaphors: Facebook’s ‘Supreme Court’ and Platform Legitimation

In our paper, we trace the increasing popularity of constitutional metaphors among private platforms to show how these metaphors obscure rather than elucidate the position of private decision-making bodies in society.

With co-authors Philipp Darius, Dominiquo Santistevan and Moritz Schramm, I presented this paper earlier today at the First Annual Conference of The Platform Governance Research Network.