The Silicon Cage: “Legitimate” governance 100 years after Weber

I presented this paper at the Data Power conference at the University of Bremen in September 2019.

Abstract:

In “Politics as a Vocation”, the lecture that he gave one hundred years ago, Max Weber offered what would become one of his most influential ideas: that a state is that which “claims the monopoly of the legitimate use of physical force within a given territory”. Such use of violence, Weber argued, is legitimated in one of three distinct ways: by “tradition”, by “charisma”, or by the “virtue of ‘legality’ … the belief in the validity of legal statute … based on rationally created rules”.

In this centennial year of Weber’s lecture, much has been made of Weber’s prescience regarding modern-day charismatic demagogues. Yet it is in the conceptualisation of “legal-rational” legitimacy that greater purchase may be found when we grapple with the use of data and algorithms in contemporary society. As I will argue, the “iron cage” that Weber identified, which serves to constrain human freedom through the coercive combination of efficiency and calculation, has been supplanted. Today, we instead occupy what might be called a “silicon cage”, resulting from a step change in the nature and extent of calculation and prediction relating to people’s activities and intentions.

Moreover, while the bureaucratisation that Weber described was already entwined with a capitalist logic, the silicon cage of today has emerged from an even firmer embedding of the tools, practices and ideologies of capitalist enterprise in the rules-based (we might say algorithmic) governance of everyday life. Alternative arrangements present themselves, however, in the form of both “agonistic” and “cooperative” democracy.

A Unified Framework of Five Principles for AI in Society

A new short paper by Luciano Floridi and me has been published, open access, in the inaugural issue of the Harvard Data Science Review.

Abstract:

Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.

From #MAGA to @AOC: Reflections on Radical Media in the Trump Era

I presented this paper, co-authored with Katie Arthur, at MIT’s Media in Transition conference in May 2019.

Abstract:

That social media both “giveth and taketh away” is not a new idea, but it is one that came to the fore in the tumult of 2016. As the events of that year showed, while technological advances have afforded new space for radical media strategies, helping advance goals such as climate justice, so too have they created opportunities for political candidates from outside the mainstream to leverage populist resentment in the successful pursuit of political power. In this paper, we will explore how the use of civic media has evolved in the two years since our CMS Masters theses were submitted. While Donald Trump has, as President, consolidated his hold on mainstream media attention via his Twitter account, other voices have also emerged from the very different tradition of civic organising to share space on the “platform” of Twitter. Among the most prominent of these new voices is Alexandria Ocasio-Cortez, whose political experience as an organiser for the Bernie Sanders campaign and as a supporter of marginalised communities such as the residents of Standing Rock helped propel her to the U.S. House of Representatives as the youngest woman ever elected to Congress. In the paper we will explore Ocasio-Cortez’s rise, with a focus on her visibility on social media. As we will show, the rapid rise of “AOC” holds lessons both for the prospects of the “Green New Deal” policy she has trumpeted and for whichever Democratic candidate is nominated to challenge Donald Trump in 2020.

AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

I am a co-author on a new paper which appears in Minds and Machines (open access).

This article reports the findings of AI4People, an Atomium-EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations (to assess, to develop, to incentivise, and to support good AI) which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.

Prolegomena to a White Paper on an Ethical Framework for a Good AI Society

Luciano Floridi and I have released a new paper on SSRN:

Prolegomena to a White Paper on an Ethical Framework for a Good AI Society.

The paper discusses the opportunities and challenges of AI for society and reports the results of a meta-analysis, which found that five principles – beneficence, non-maleficence, autonomy, justice, and explicability – undergird the emerging ethics of AI as expressed by leading multistakeholder organisations.

Causation, Correlation, and Big Data in Social Science Research

Cowls, Josh and Schroeder, Ralph (2015) Causation, Correlation, and Big Data in Social Science Research. Policy & Internet 7 (4), 447-472.

The emergence of big data offers not only a potential boon for social scientific inquiry, but also raises distinct epistemological issues for this new area of research. Drawing on interviews conducted with researchers at the forefront of big data research, we offer insight into questions of causal versus correlational research, the use of inductive methods, and the utility of theory in the big data age. While our interviewees acknowledge challenges posed by the emergence of big data approaches, they reassert the importance of fundamental tenets of social science research such as establishing causality and drawing on existing theory. They also discuss more pragmatic issues, such as collaboration between researchers from different fields and the utility of mixed methods. We conclude by putting the themes emerging from our interviews into the broader context of the role of data in social scientific inquiry, and draw lessons about the future role of big data in research.

The Ethics of Given-off versus Captured Data in Large-scale Social Research

Cowls, Josh and Schroeder, Ralph (2015) The Ethics of Given-off versus Captured Data in Digital Social Research. Workshop on Ethics for Studying Sociotechnical Systems in a Big Data World, CSCW 2015, March 2015, Vancouver, B.C., Canada.

This paper proposes new terminology to enhance understanding of how big data can be used for research, in both commercial and academic contexts. We distinguish between data as given-off and data as captured, and draw on insights from interviews conducted with researchers using such data to elaborate on this distinction. We conclude with a series of recommendations for research design and conduct, based on this re-conceptualization of ‘data’ and ‘capta’.

Ad-hoc encounters with big data: Engaging citizens in conversations around tabletops

Fjeld, Morten, Woźniak, Paweł, Cowls, Josh and Nardi, Bonnie (2015) Ad-hoc encounters with big data: Engaging citizens in conversations around tabletops. First Monday 20 (2).

The increasing abundance of data creates new opportunities for communities of interest and communities of practice. We believe that interactive tabletops will allow users to explore data in familiar places such as living rooms, cafés, and public spaces. We propose informal, mobile possibilities for future generations of flexible and portable tabletops. In this paper, we build upon current advances in sensing and in organic user interfaces to propose how tabletops in the future could encourage collaboration and engage users in socially relevant data-oriented activities. Our work focuses on the socio-technical challenges of future democratic deliberation. As part of our vision, we suggest switching from fixed to mobile tabletops and provide two examples of hypothetical interface types: TableTiles and Moldable Displays. We consider how tabletops could foster future civic communities, expanding modes of participation originating in the Greek Agora and in European notions of cafés as locales of political deliberation.