I was live on the BBC World Service’s Newsday programme this morning to discuss the escalation of tensions between President Donald Trump and the social networking site Twitter. Listen here (48 minutes in).
The slow, deliberative nature of representative democracy seems ill-suited to the urgency of the present moment. Can it survive the new politics of speed?
Read my new blog post on Medium.
In a new blog post on the Alan Turing Institute website, Bertie Vidgen, Helen Margetts and I explain how the COVID-19 crisis has exposed the importance of social media, and argue that the workers involved in content moderation should be protected as key workers.
A new paper I co-authored with Luciano Floridi, Thomas C. King and Mariarosaria Taddeo has been published (open access) in Science and Engineering Ethics.
The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
I contributed a chapter to the 2019 Yearbook of the Digital Ethics Lab, which has just been published.
Through its power to “rationalise”, artificial intelligence (AI) is rapidly changing the relationship between people and the state. But to echo Max Weber’s warnings from one hundred years ago about the increasingly rational bureaucratic state, the “reducing” power of AI systems seems to pose a threat to democracy—unless such systems are developed with public preferences, perspectives and priorities in mind. In other words, we must move beyond minimal legal compliance and faith in free markets to consider public opinion as constitutive of legitimising the use of AI in society. In this chapter I pose six questions regarding how public opinion about AI ought to be sought: what we should ask the public about AI; how we should ask; where and when we should ask; why we should ask; and who is the “we” doing the asking. I conclude by contending that while the messiness of politics may preclude clear answers about the use of AI, this is preferable to the “coolly rational” yet democratically deficient AI systems of today.
This is the politics of subtraction: the false promise that progress is only possible if we as a nation remove ourselves from existing partnerships.
I’ve launched a podcast! Co-presented with fellow Oxford Internet Institute PhD student Nayana Prakash, AlgoRhythms is a weekly show covering tech news and research, broadcast on Oxford University’s student radio station Oxide and released as a podcast. Each week, we cover the latest stories in technology and interview fellow researchers about their work.
In a new guest blog post for TechUK, co-authored with David Leslie, we explain why the development of trustworthy AI will rest on the ability to explain how it works and why it delivers particular decisions, to a range of different audiences.
I presented this paper at the Data Power conference at the University of Bremen in September 2019.
In “Politics as a Vocation”, the lecture that he gave one hundred years ago, Max Weber offered what would become one of his most influential ideas: that a state is that which “claims the monopoly of the legitimate use of physical force within a given territory”. Such use of violence, Weber argued, is legitimated in one of three distinct ways: by “tradition”, by “charisma”, or by the “virtue of ‘legality’ … the belief in the validity of legal statute … based on rationally created rules”.
In this centennial year of Weber’s lecture, much has been made of Weber’s prescience regarding modern-day charismatic demagogues. Yet it is in the conceptualisation of “legal-rational” legitimacy that greater purchase may be found when we grapple with the use of data and algorithms in contemporary society. As I will argue, the “iron cage” that Weber identified, which serves to constrain human freedom through the coercive combination of efficiency and calculation, has been supplanted. Today, we instead occupy what might be called a “silicon cage”, resulting from a step change in the nature and extent of calculation and prediction relating to people’s activities and intentions.
Moreover, while the bureaucratisation that Weber described was already entwined with a capitalist logic, the silicon cage of today has emerged from an even firmer embedding of the tools, practices and ideologies of capitalist enterprise in the rules-based (we might say algorithmic) governance of everyday life. Alternative arrangements present themselves, however, in the form of both “agonistic” and “cooperative” democracy.
I’m coming to the end of my first year* as a PhD student at the Oxford Internet Institute. It has been a challenging year, in ways both foreseen and not, but it has also been an endlessly fascinating, thought-provoking and perspective-shifting experience. … Here are four things I wish I’d known before starting a PhD.