Constitutional metaphors: Facebook’s “Supreme Court” and the Legitimation of Platform Governance

A range of rhetorical devices have been used to simplify the complexities associated with the governance of online platforms. This includes “constitutional metaphors”: metaphorical allusions to traditional political concepts such as statehood, democracy, and constitutionalism. Here, we empirically trace the ascent of a powerful constitutional metaphor currently employed in the news media discourse on platform governance: characterizations of Facebook’s Oversight Board (OB) as a “supreme court.”

I have a new paper, published open access in New Media & Society with Philipp Darius, Dominiquo Santistevan and Moritz Schramm, about Facebook’s Oversight Board and its depiction as a “Supreme Court”.

Trump Central? App stores as a new front in the platform governance of Donald Trump

Trump’s transition from mainstream platform user to putative platform operator marks a watershed moment in the politics of platform governance. The launch of “Truth Social” brings to light a frequently underrated aspect of platform governance: how platform operators govern not only their own users, but also other platforms.

I have a new blog post at the OII discussing what the launch of Donald Trump’s “Truth Social” app may mean for platform governance going forward.

The 2020 Yearbook of the Digital Ethics Lab

In the midst of lockdowns and social distancing, the role of digital technology in society has been made ever more integral. For many of us, the lived experience of the pandemic would have been strikingly different even a decade ago, without the affordances of the latest information and communication technology—even as digital divides persist within, and between, societies. Yet as digital technology “fades into the foreground” of everyday life, it is necessary for both scholars and civil society at large to engage even more robustly with the implications of such shifts.

I am pleased to say that the latest edition of the Digital Ethics Lab Yearbook, which Jessica Morley and I co-edited, is now in print. In our Introduction, we provide an overview of this year’s contributions.

Safeguarding European values with digital sovereignty: an analysis of statements and policies

The European Union (EU) has, with increasing frequency, outlined an intention to strengthen its “digital sovereignty” as a basis for safeguarding European values in the digital age. Yet, uncertainty remains as to how the term should be defined, undermining efforts to assess the success of the EU’s digital sovereignty agenda. The task of this paper is to reduce this uncertainty by i) analysing how digital sovereignty has been discussed by EU institutional actors and placing this in a wider conceptual framework, ii) mapping specific policy areas and measures that EU institutional actors cite as important for strengthening digital sovereignty, iii) assessing the effectiveness of current policy measures at strengthening digital sovereignty, and iv) proposing policy solutions that go above and beyond current measures and address existing gaps. 

A new article by Huw Roberts, myself, and colleagues has been published in Internet Policy Review, as part of the special issue on Governing “European Values” Inside Data Flows.

Constitutional Metaphors: Facebook’s ‘Supreme Court’ and Platform Legitimation

In our paper, we trace the increasing popularity of constitutional metaphors among private platforms to show how these metaphors obscure rather than elucidate the position of private decision-making bodies in society.

With co-authors Philipp Darius, Dominiquo Santistevan and Moritz Schramm, I presented this paper earlier today at the First Annual Conference of The Platform Governance Research Network.

Deciding How to Decide: Six Key Questions for Reducing AI’s Democratic Deficit

I contributed a chapter to the 2019 Yearbook of the Digital Ethics Lab, which has just been published.

Abstract:

Through its power to “rationalise”, artificial intelligence (AI) is rapidly changing the relationship between people and the state. But to echo Max Weber’s warnings from one hundred years ago about the increasingly rational bureaucratic state, the “reducing” power of AI systems seems to pose a threat to democracy—unless such systems are developed with public preferences, perspectives and priorities in mind. In other words, we must move beyond minimal legal compliance and faith in free markets to consider public opinion as constitutive of the legitimate use of AI in society. In this chapter I pose six questions regarding how public opinion about AI ought to be sought: what we should ask the public about AI; how we should ask; where and when we should ask; why we should ask; and who is the “we” doing the asking. I conclude by contending that while the messiness of politics may preclude clear answers about the use of AI, this is preferable to the “coolly rational” yet democratically deficient AI systems of today.

The Silicon Cage: “Legitimate” governance 100 years after Weber

I presented this paper at the Data Power conference at the University of Bremen in September 2019.

Abstract:

In “Politics as a Vocation”, the lecture that he gave one hundred years ago, Max Weber offered what would become one of his most influential ideas: that a state is that which “claims the monopoly of the legitimate use of physical force within a given territory”. Such use of violence, Weber argued, is legitimated in one of three distinct ways: by “tradition”, by “charisma”, or by the “virtue of ‘legality’ … the belief in the validity of legal statute … based on rationally created rules”.

In this centennial year of Weber’s lecture, much has been made of Weber’s prescience regarding modern-day charismatic demagogues. Yet it is in the conceptualisation of “legal-rational” legitimacy that greater purchase may be found when we grapple with the use of data and algorithms in contemporary society. As I will argue, the “iron cage” that Weber identified, which serves to constrain human freedom through the coercive combination of efficiency and calculation, has been supplanted. Today, we instead occupy what might be called a “silicon cage”, resulting from a step change in the nature and extent of calculation and prediction relating to people’s activities and intentions.

Moreover, while the bureaucratisation that Weber described was already entwined with a capitalist logic, the silicon cage of today has emerged from an even firmer embedding of the tools, practices and ideologies of capitalist enterprise in the rules-based (we might say algorithmic) governance of everyday life. Alternative arrangements present themselves, however, in the form of both “agonistic” and “cooperative” democracy.