Artificial intelligence (AI) has the potential to play an important role in addressing the climate emergency, but this potential must be set against the environmental costs of developing AI systems. In this commentary, we assess the carbon footprint of AI training processes and offer 14 policy recommendations to reduce it.
A new commentary by Mariarosaria Taddeo, Andreas Tsamados, myself and Luciano Floridi has been published in One Earth (Cell Press).
SkepTechs, the podcast I host with Nayana Prakash, has been selected as one of the winners of Spotify’s Next Wave award from thousands of submissions. The competition set out to identify up-and-coming student podcasters, and the winners are currently featured atop Spotify’s student genre page. It’s great to have been recognised in this way, and we’re looking forward to recording and releasing more episodes in the weeks to come.
The purpose of this primer, co-produced by The Alan Turing Institute and the Council of Europe, is to introduce the main concepts and principles presented in the Council of Europe’s Ad Hoc Committee on Artificial Intelligence Feasibility Study for a general, non-technical audience.
I am a co-author, with David Leslie, Chris Burr, Mhairi Aitken, Mike Katell, and Morgan Briggs, of a primer produced for the Council of Europe. The document sets out, for a general audience, the ethical and political considerations that should inform a potential legal framework for the design, development and deployment of AI systems, with a focus on safeguarding human rights, democracy and the rule of law.
In our paper, we trace the increasing popularity of constitutional metaphors among private platforms to show how these metaphors obscure rather than elucidate the position of private decision-making bodies in society.
With co-authors Philipp Darius, Dominiquo Santistevan and Moritz Schramm, I presented this paper earlier today at the First Annual Conference of The Platform Governance Research Network.
Mobile and wearable devices, such as smartwatches and fitness trackers, increasingly enable the continuous collection of physiological and behavioral data that permit inferences about users’ physical and mental health. Growing consumer adoption of these technologies has reduced the cost of generating clinically meaningful data. This can help reduce medical research costs and aid large-scale studies. However, the collection, processing, and storage of data comes with significant ethical, security, and data governance considerations. Here, we use the emerging concept of “digital phenotyping” to highlight key lessons for data governance that draw on parallels with the history of genomics research, while highlighting areas in which digital phenotyping will require novel governance frameworks.
A new paper in the Journal of the American Medical Informatics Association by Ignacio Perez-Pozuelo, Dimitris Spathis, Jordan Gifford-Moore, Jessica Morley and myself has just been published.
Research on the ethics of algorithms has grown substantially over the past decade. This article builds on a review of the ethics of algorithms published in 2016 … to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG.
I’m excited to announce OxPod, a new “conversational collective” of podcasts based here in Oxford. OxPod includes Nayana and my tech news podcast SkepTechs (formerly AlgoRhythms) as well as a number of other podcasts offering sharp insights about the big issues facing society.
OxPod can be found at www.oxpod.net, where you can also find a brand new page for SkepTechs which will host our episodes going forward.
This election is not just about where — it’s also about when. When states report votes — and which votes are reported first — is likely to have a considerable impact on the perception of who is ahead at any given time.
I have a new blog post about the 2020 US presidential election now up on Medium.
Since 2016, social media companies and news providers have come under pressure to tackle the spread of political mis- and disinformation (MDI) online. However, despite evidence that online health MDI (on the web, on social media, and within mobile apps) also has negative real-world effects, there has been a lack of comparable action by either online service providers or state-sponsored public health bodies. We argue that this is problematic and seek to answer three questions: why has so little been done to control the flow of, and exposure to, health MDI online; how might more robust action be justified; and what specific, newly justified actions are needed to curb the flow of, and exposure to, online health MDI?
A new paper written by Jessica Morley, myself, Mariarosaria Taddeo and Luciano Floridi has now been published in the Journal of Medical Internet Research.