I presented this paper at the Data Power conference at the University of Bremen in September 2019.
In “Politics as a Vocation”, the lecture that he gave one hundred years ago, Max Weber offered what would become one of his most influential ideas: that a state is that which “claims the monopoly of the legitimate use of physical force within a given territory”. Such use of violence, Weber argued, is legitimated in one of three distinct ways: by “tradition”, by “charisma”, or by the “virtue of ‘legality’ … the belief in the validity of legal statute … based on rationally created rules”.
In this centennial year of Weber’s lecture, much has been made of Weber’s prescience regarding modern-day charismatic demagogues. Yet it is in the conceptualisation of “legal-rational” legitimacy that greater purchase may be found when we grapple with the use of data and algorithms in contemporary society. As I will argue, the “iron cage” that Weber identified, which serves to constrain human freedom through the coercive combination of efficiency and calculation, has been supplanted. Today, we instead occupy what might be called a “silicon cage”, resulting from a step change in the nature and extent of calculation and prediction relating to people’s activities and intentions.
Moreover, while the bureaucratisation that Weber described was already entwined with a capitalist logic, the silicon cage of today has emerged from an even firmer embedding of the tools, practices and ideologies of capitalist enterprise in the rules-based (we might say algorithmic) governance of everyday life. Alternative arrangements present themselves, however, in the form of both “agonistic” and “cooperative” democracy.
A new short paper by Luciano Floridi and me has been published, open access, in the inaugural issue of the Harvard Data Science Review.
Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI’. Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
I am a co-author on a new paper which appears in Minds and Machines (open access). This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
With modern technology, living life ‘in the moment’ has never been easier. But this new nowness is far from what earlier advocates had in mind, and might only be distracting us from the planet’s ever more pressing challenges. Continue reading “The new ‘power of now’ and the perils of the hyper-present”.