A range of rhetorical devices has been used to simplify the complexities associated with the governance of online platforms. This includes “constitutional metaphors”: metaphorical allusions to traditional political concepts such as statehood, democracy, and constitutionalism. Here, we empirically trace the ascent of a powerful constitutional metaphor currently employed in the news media discourse on platform governance: characterizations of Facebook’s Oversight Board (OB) as a “supreme court.”
I have a new paper, published open access in New Media & Society with Philipp Darius, Dominiquo Santistevan and Moritz Schramm, about Facebook’s “Oversight Board” and its depiction as a “Supreme Court”.
While social media’s incredible growth has fostered extraordinary new possibilities for human connection and creativity, it has also enabled – and at times even incentivised – a 21st-century resurgence of extremism, disinformation, surveillance, and many other ills.
For the 15th anniversary issue of Monocle, I was commissioned to write a piece about the many changes to social media, and society, since 2007. The piece is now available in print and online.
Trump’s transition from mainstream platform user to putative platform operator marks a watershed moment in the politics of platform governance. The launch of “Truth Social” brings to light a frequently underrated aspect of platform governance: how platform operators govern not only their own users, but also other platforms.
I have a new blog post at the OII discussing what the launch of Donald Trump’s “Truth Social” app may mean for platform governance going forward.
In the midst of lockdowns and social distancing, the role of digital technology in society has become ever more integral. For many of us, the lived experience of the pandemic would have been strikingly different even a decade ago, absent the affordances of the latest information and communication technology—even as digital divides persist within, and between, societies. Yet as digital technology “fades into the foreground” of everyday life, it is necessary for both scholars and civil society at large to engage even more robustly with the implications of such shifts.
I am pleased to say that the latest edition of the Digital Ethics Lab Yearbook, which Jessica Morley and I co-edited, is now in print. In our Introduction, we provide an overview of this year’s contributions.
Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States’ (US) AI strategies and considers (i) the visions of a ‘Good AI Society’ that are forwarded in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims and (iii) the consequences that these differing visions of a ‘Good AI Society’ have for transatlantic cooperation.
A new article by Huw Roberts, myself and colleagues, on conceptions of a “good AI society” for the EU and US, has been published in Science and Engineering Ethics.
In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems.
A new article by myself, Andreas Tsamados, Mariarosaria Taddeo and Luciano Floridi, about the potential impacts of AI on climate change, has been published by AI & Society.
The European Union (EU) has, with increasing frequency, outlined an intention to strengthen its “digital sovereignty” as a basis for safeguarding European values in the digital age. Yet, uncertainty remains as to how the term should be defined, undermining efforts to assess the success of the EU’s digital sovereignty agenda. The task of this paper is to reduce this uncertainty by i) analysing how digital sovereignty has been discussed by EU institutional actors and placing this in a wider conceptual framework, ii) mapping specific policy areas and measures that EU institutional actors cite as important for strengthening digital sovereignty, iii) assessing the effectiveness of current policy measures at strengthening digital sovereignty, and iv) proposing policy solutions that go above and beyond current measures and address existing gaps.
A new article by Huw Roberts and myself with colleagues has been published in Internet Policy Review, as part of the special issue on Governing “European Values” Inside Data Flows.
Over the past decade, research into artificial intelligence (AI) has emerged from the shadow of a long winter of disregard into a balmy summer of hope and hype. Whilst scholars and advocates have studiously documented the risks and potential harms of deploying AI-based tools and techniques in an array of societal domains, the idea nonetheless persists that the promised power of AI functionally could and ethically should be harnessed for, or at least (re-)oriented towards, ‘socially good’ purposes. The twin aims of this Special Issue, simply stated, are to interrogate the plausibility of this notion and to consider its implications.
I am pleased to say that the special issue of Philosophy & Technology that I guest edited is now available online. In my introduction I discuss the aims of the issue and provide an overview of the rich array of contributions that it includes.
Artificial intelligence (AI) has the potential to play an important role in addressing the climate emergency, but this potential must be set against the environmental costs of developing AI systems. In this commentary, we assess the carbon footprint of AI training processes and offer 14 policy recommendations to reduce it.
A new commentary by Mariarosaria Taddeo, Andreas Tsamados, myself and Luciano Floridi has been published in One Earth (Cell Press).
The purpose of this primer, co-produced by The Alan Turing Institute and the Council of Europe, is to introduce the main concepts and principles presented in the Council of Europe’s Ad Hoc Committee on Artificial Intelligence Feasibility Study for a general, non-technical audience.
I am a co-author, with David Leslie, Chris Burr, Mhairi Aitken, Mike Katell, and Morgan Briggs, of a primer produced for the Council of Europe. The document sets out, for a general audience, the ethical and political considerations that should inform a potential legal framework for the design, development and deployment of AI systems, with a focus on safeguarding human rights, democracy and the rule of law.