From #MAGA to @AOC: Reflections on Radical Media in the Trump Era

I presented this paper, co-authored with Katie Arthur, at MIT’s Media in Transition conference in May 2019.

Abstract:

That social media both “giveth and taketh away” is not a new idea, but it is one that came to the fore in the tumult of 2016. As the events of that year showed, while technological advances have afforded new space for radical media strategies, helping advance goals such as climate justice, they have also created opportunities for political candidates from outside the mainstream to leverage populist resentment in the successful pursuit of political power. In this paper, we explore how the use of civic media has evolved in the two years since our CMS Masters theses were submitted. While Donald Trump has, as President, consolidated his hold on mainstream media attention via his Twitter account, other voices have emerged from the very different tradition of civic organising to share space on the “platform” of Twitter. Among the most prominent of these new voices is Alexandria Ocasio-Cortez, whose political experience as an organiser for the Bernie Sanders campaign and as a supporter of marginalised communities such as the residents of Standing Rock helped propel her to the U.S. House of Representatives as the youngest woman ever elected to Congress. We trace Ocasio-Cortez’s rise, with a focus on her visibility on social media. As we show, the rapid rise of “AOC” holds lessons both for the prospects of the “Green New Deal” policy she has trumpeted and for whichever Democratic candidate is nominated to challenge Donald Trump in 2020.

Deciding how to decide: Six key questions for reducing AI’s democratic deficit

Artificial intelligence (AI) has a “democratic deficit” — and maybe that shouldn’t be a surprise. As Jonnie Penn and others have argued, AI, in conception and application, has long been bound up with the logic and operations of big business. Today, we find AI put to use in an increasing array of socially significant settings, from sifting through CVs to swerving through traffic, many of which continue to serve these corporate interests. (We also find “AI” the brand put to use in the absence of AI the technology: a recent study suggests that 40% of start-ups that claim to use AI do not in fact do so.) Nor are governments of all stripes lacking interest in the potential power of AI to patrol and cajole the movements and mindsets of citizens.

Read more at Medium.

“Free Solo” Review: A human-nature documentary as grounded as it is gripping

Rare is the film review that needs to start with a spoiler alert for something that *doesn’t* happen. But so it is for Free Solo, the extraordinary new documentary profiling lifelong climber Alex Honnold as he embarks on an unprecedented feat: scaling the daunting, roughly thousand-metre wall of Yosemite’s El Capitan without ropes, harnesses, or any other lifeline.

Read more at Medium.

AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations

I am a co-author on a new paper which appears in Minds and Machines (open access).

This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.


Welcome to my website.

I’m a doctoral researcher based at the Oxford Internet Institute, exploring the ethical and political impact of data and AI on society, with specific research interests in AI ethics, the use of AI for social good, social media as a public sphere, and the use of algorithms to deal with online hate speech. My past research has included work on digital politics, big and open data, digital state surveillance, and the use of web archives in research.

I’m also a Research Associate at the Alan Turing Institute, where, as part of the public policy programme, I focus on translating insights from academia into effective policy frameworks for the ethical use of data science and AI by governments.

My other engagements include serving as Convenor of the Turing’s Ethics Advisory Group, as a member of the Institute’s Data Ethics Group, and as a member of the Ethics Committee of Digital Catapult’s Machine Intelligence Garage, where I work with start-ups to develop action plans for ensuring the ethical use of machine learning technology. Finally, I appear regularly on Monocle 24’s morning show The Globalist to discuss politics and technology, and I host AlgoRhythms, a weekly tech-focused podcast.

Less often than I’d like, I also write about food, sport, and films.

This website is a semi-regularly updated repository of my research, writing, project experience and other information. You can find me in other forms and formats on Twitter, Medium, LinkedIn and Google Scholar.