App store governance: implications, limitations, and regulatory responses

In this article, we analyse two case studies: the removals from app stores in 2021 of the fringe American social media app Parler and of the Russian opposition app Smart Voting. On the basis of this analysis, we identify three critical limitations for app store governance at present: Apple’s and Google’s dominance, the substantive opacity of their respective app store guidelines, and the procedural arbitrariness with which these guidelines are applied to specific cases. We then assess the potential efficacy of legislative proposals in the EU and US to intervene in this domain and conclude by offering some recommendations supporting more efficacious and socially responsible app store governance.

I have a new article with Jessica Morley and Luciano Floridi, now published in Telecommunications Policy, which examines the issues that the removals of Parler and Smart Voting raise for app store governance.

New article in Monocle Magazine’s anniversary issue

While social media’s incredible growth has fostered extraordinary new possibilities for human connection and creativity, it has also enabled – and at times even incentivised – a 21st-century resurgence of extremism, disinformation, surveillance, and many other ills.

For the 15th anniversary issue of Monocle, I was commissioned to write a piece about the many changes to social media, and society, since 2007. The piece is now available in print and online.

Trump Central? App stores as a new front in the platform governance of Donald Trump

Trump’s transition from mainstream platform user to putative platform operator marks a watershed moment in the politics of platform governance. The launch of “Truth Social” brings to light a frequently underrated aspect of platform governance: how platform operators govern not only their own users, but also other platforms.

I have a new blog post at the OII discussing what the launch of Donald Trump’s “Truth Social” app may mean for platform governance going forward.

The 2020 Yearbook of the Digital Ethics Lab

In the midst of lockdowns and social distancing, the role of digital technology in society has been made ever more integral. For many of us, the lived experience of the pandemic would have been strikingly different even a decade ago, without the affordances of the latest information and communication technology, even as digital divides persist within, and between, societies. Yet as digital technology “fades into the foreground” of everyday life, it is necessary for both scholars and civil society at large to engage even more robustly with the implications of such shifts.

I am pleased to say that the latest edition of the Digital Ethics Lab Yearbook, which Jessica Morley and I co-edited, is now in print. In our Introduction, we provide an overview of this year’s contributions.

Achieving a ‘Good AI Society’: Comparing the Aims and Progress of the EU and the US

Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States’ (US) AI strategies and considers (i) the visions of a ‘Good AI Society’ that are forwarded in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims and (iii) the consequences that these differing visions of a ‘Good AI Society’ have for transatlantic cooperation. 

A new article by Huw Roberts, myself and colleagues, on conceptions of a “good AI society” for the EU and US, has been published in Science and Engineering Ethics.

The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations

In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems.

A new article by myself, Andreas Tsamados, Mariarosaria Taddeo and Luciano Floridi, about the potential impacts of AI on climate change, has been published in AI & Society.

Safeguarding European values with digital sovereignty: an analysis of statements and policies

The European Union (EU) has, with increasing frequency, outlined an intention to strengthen its “digital sovereignty” as a basis for safeguarding European values in the digital age. Yet, uncertainty remains as to how the term should be defined, undermining efforts to assess the success of the EU’s digital sovereignty agenda. The task of this paper is to reduce this uncertainty by i) analysing how digital sovereignty has been discussed by EU institutional actors and placing this in a wider conceptual framework, ii) mapping specific policy areas and measures that EU institutional actors cite as important for strengthening digital sovereignty, iii) assessing the effectiveness of current policy measures at strengthening digital sovereignty, and iv) proposing policy solutions that go above and beyond current measures and address existing gaps. 

A new article by Huw Roberts, myself and colleagues has been published in Internet Policy Review, as part of the special issue on Governing “European Values” Inside Data Flows.

New Special Issue of Philosophy & Technology on “AI for Social Good”

Over the past decade, research into artificial intelligence (AI) has emerged from the shadow of a long winter of disregard into a balmy summer of hope and hype. Whilst scholars and advocates have studiously documented the risks and potential harms of deploying AI-based tools and techniques in an array of societal domains, the idea nonetheless persists that the promised power of AI functionally could and ethically should be harnessed for, or at least (re-)oriented towards, ‘socially good’ purposes. The twin aims of this Special Issue, simply stated, are to interrogate the plausibility of this notion and to consider its implications. 

I am pleased to say that the special issue of Philosophy & Technology that I guest edited is now available online. In my introduction I discuss the aims of the issue and provide an overview of the rich array of contributions that it includes.

Artificial intelligence and the climate emergency: Opportunities, challenges, and recommendations

Artificial intelligence (AI) has the potential to play an important role in addressing the climate emergency, but this potential must be set against the environmental costs of developing AI systems. In this commentary, we assess the carbon footprint of AI training processes and offer 14 policy recommendations to reduce it.

A new commentary by Mariarosaria Taddeo, Andreas Tsamados, myself and Luciano Floridi has been published in One Earth (Cell Press).