In a new blog post on the Alan Turing Institute website, Bertie Vidgen, Helen Margetts and I explain how the COVID-19 crisis has exposed the importance of social media, and argue that workers involved in content moderation should be protected as key workers.
A new paper I co-authored with Luciano Floridi, Thomas C. King and Mariarosaria Taddeo has been published (open access) in Science and Engineering Ethics.
The idea of artificial intelligence for social good (henceforth AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
I contributed a chapter to the 2019 Yearbook of the Digital Ethics Lab, which has just been published.
Through its power to “rationalise”, artificial intelligence (AI) is rapidly changing the relationship between people and the state. But to echo Max Weber’s warnings from one hundred years ago about the increasingly rational bureaucratic state, the “reducing” power of AI systems seems to pose a threat to democracy—unless such systems are developed with public preferences, perspectives and priorities in mind. In other words, we must move beyond minimal legal compliance and faith in free markets to consider public opinion as constitutive of legitimising the use of AI in society. In this chapter I pose six questions regarding how public opinion about AI ought to be sought: what we should ask the public about AI; how we should ask; where and when we should ask; why we should ask; and who is the “we” doing the asking. I conclude by contending that while the messiness of politics may preclude clear answers about the use of AI, this is preferable to the “coolly rational” yet democratically deficient AI systems of today.
I’ve launched a podcast! Co-presented with fellow Oxford Internet Institute PhD student Nayana Prakash, AlgoRhythms is a weekly show covering tech news and research, broadcast on Oxford University’s student radio station Oxide and released as a podcast. Each week, we cover the latest stories in technology and interview fellow researchers about their work.
Rare is the film review that needs to start with a spoiler alert for something which *doesn’t* happen. But so it is for Free Solo, the extraordinary new documentary profiling lifelong climber Alex Honnold as he embarks on an unprecedented feat: scaling Yosemite’s daunting, thousand-metre-high El Capitan Wall without ropes, harnesses, or any other lifeline.
I appeared on Monocle 24 earlier to discuss the hacking and release of EU diplomatic cables.
I am delighted to be part of Digital Catapult’s new Machine Intelligence Garage Ethics Committee. The Ethics Framework created by the Committee is now also available.
Welcome to my website.
I’m a doctoral researcher based at the Oxford Internet Institute, exploring the ethical and political impact of data and AI on society, with specific research interests in AI ethics, the use of AI for social good, social media as a public sphere, and the use of algorithms to deal with online hate speech. My past research has included work on digital politics, big and open data, digital state surveillance, and the use of web archives in research.
I’m also a Research Associate at the Alan Turing Institute, where as part of the public policy programme I focus on translating insights from academia into effective policy frameworks for the ethical use of data science and AI by governments.
My other engagements include serving as Convenor of the Turing’s Ethics Advisory Group, as a member of the Institute’s Data Ethics Group, and as a member of the Ethics Committee of Digital Catapult’s Machine Intelligence Garage, where I work with start-ups to develop action plans for ensuring the ethical use of machine learning technology. Finally, I appear regularly on Monocle 24’s morning show The Globalist to discuss politics and technology, and host AlgoRhythms, a weekly tech-focused podcast.
This website is a semi-regularly updated repository of my research, writing, project experience and other information. You can find me in other forms and formats on Twitter, Medium, LinkedIn and Google Scholar.
I was interviewed on Monocle 24 Radio earlier to discuss the delayed launch of an emergency alert system in the US.
New blog post at The Turing: “Ethics and innovation belong hand in hand”, by Helen Margetts, Cosmina Dorobantu and Josh Cowls.