Mobile and wearable devices, such as smartwatches and fitness trackers, increasingly enable the continuous collection of physiological and behavioral data that permit inferences about users’ physical and mental health. Growing consumer adoption of these technologies has reduced the cost of generating clinically meaningful data, which can help lower medical research costs and aid large-scale studies. However, the collection, processing, and storage of these data come with significant ethical, security, and data governance considerations. Here, we use the emerging concept of “digital phenotyping” to highlight key lessons for data governance that draw on parallels with the history of genomics research, while highlighting areas in which digital phenotyping will require novel governance frameworks.
A new paper in the Journal of the American Medical Informatics Association by Ignacio Perez-Pozuelo, Dimitris Spathis, Jordan Gifford-Moore, Jessica Morley and myself has just been published.
Research on the ethics of algorithms has grown substantially over the past decade. This article builds on a review of the ethics of algorithms published in 2016 … to contribute to the debate on the identification and analysis of the ethical implications of algorithms, to provide an updated analysis of epistemic and normative concerns, and to offer actionable guidance for the governance of the design, development and deployment of algorithms.
Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG.
I’m excited to announce OxPod, a new “conversational collective” of podcasts based here in Oxford. OxPod includes SkepTechs (formerly AlgoRhythms), the tech news podcast that Nayana and I host, as well as a number of other podcasts offering sharp insights about the big issues facing society.
OxPod can be found at www.oxpod.net, where you can also find a brand new page for SkepTechs which will host our episodes going forward.
This election is not just about where — it’s also about when. When states report votes — and which votes are reported first — is likely to have a considerable impact on the perception of who is ahead at any given time.
I have a new blog post about the 2020 US presidential election now up on Medium.
Since 2016, social media companies and news providers have come under pressure to tackle the spread of political mis- and disinformation (MDI) online. However, despite evidence that online health MDI (on the web, on social media, and within mobile apps) also has negative real-world effects, there has been a lack of comparable action by either online service providers or state-sponsored public health bodies. We argue that this is problematic and seek to answer three questions: why has so little been done to control the flow of, and exposure to, health MDI online; how might more robust action be justified; and what specific, newly justified actions are needed to curb the flow of, and exposure to, online health MDI?
A new paper written by Jessica Morley, myself, Rosaria Taddeo and Luciano Floridi has now been published in the Journal of Medical Internet Research.
This article presents a mapping review of the literature concerning the ethics of artificial intelligence (AI) in health care. The goal of this review is to summarise current debates and identify open questions for future research. Our goal is to inform policymakers, regulators and developers about what they must consider if they are to enable health and care systems to capitalise on the dual advantage of ethical AI: maximising the opportunities to cut costs, improve care, and improve the efficiency of health and care systems, whilst proactively avoiding the potential harms.
In this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.
A new paper by Huw Roberts, myself, Jess Morley, Vincent Wang, Rosaria Taddeo and Luciano Floridi has been published in AI & Society.
Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable. These questions could assist governments, public-health agencies and providers [and] will also help watchdogs and others to scrutinize such technologies.
A comment piece by colleagues Jessica Morley, Rosaria Taddeo, Luciano Floridi and myself was recently published in Nature.