---
title: "Inside 'RSS: Data Science and Artificial Intelligence' with Neil Lawrence"
description: "The Editor-in-Chief of the Royal Statistical Society’s new journal discusses its vision and breaks down the first position paper."
date: 2026-04-02
date-format: long
author: Annie Flynn
categories:
- Interviews
- AI
- Ethics
- Uncertainty
toc: true
image: images/thumb.png
---

Real World Data Science recently had the opportunity to sit down with [Professor Neil Lawrence](https://www.cst.cam.ac.uk/people/ndl21), Editor-in-Chief of the Royal Statistical Society’s new journal, [RSS: Data Science and Artificial Intelligence](https://academic.oup.com/rssdat). Neil, who is the DeepMind Professor of Machine Learning at the University of Cambridge, a Senior AI Fellow at the [Alan Turing Institute](https://www.turing.ac.uk/), and a Visiting Professor at the University of Sheffield, is a leading voice in machine learning and AI. He previously served as Director of Machine Learning at Amazon, and his research interests span probabilistic models and real-world applications in health and developing economies. He is also passionate about public engagement—he co-hosts the [Talking Machines](https://www.thetalkingmachines.com/) podcast and is the author of [The Atomic Human](https://www.penguin.co.uk/books/455130/the-atomic-human-by-lawrence-neil-d/9781802062106).

We recently published a [Data Science Bite](https://realworlddatascience.net/foundation-frontiers/datasciencebites/posts/2025/11/21/uncertainty.html) breaking down [the first position paper](https://academic.oup.com/rssdat/article/1/1/udaf002/8317136) of the newly launched journal, and had the opportunity to speak to its lead author, [Professor Sylvie Delacroix](https://realworlddatascience.net/foundation-frontiers/posts/2026/01/29/beyond-quantification-delacroix-interview.html), about its themes: how AI can better support human judgment, why it is crucial to recognise forms of uncertainty that can’t be reduced to numbers, and how participatory design can make AI a true partner, rather than a replacement, for professionals.

In this conversation, Neil discusses the paper and how it aligns with the journal’s vision, plus the importance of bridging machine learning and related fields to keep the human element at the heart of AI systems.

Watch the full interview below and scroll down for key takeaways and some analysis.

---

## Interview

{{< video https://youtu.be/VV_FnGQXWlM >}}

---

## Key Takeaways at a Glance

### 1. The journal aims to convene, not conclude
The first paper is intentionally a position paper: an invitation to discussion rather than a definitive answer. Lawrence emphasises that solutions to these challenges are distributed across the community. Progress depends on creating spaces—like the RSS journal—for thoughtful, cross-disciplinary exchange grounded in real-world practice.

### 2. Data scientists must reassess habits, not just adopt new tools
While AI can dramatically increase technical efficiency, Lawrence warns against using that efficiency to simply “do more of the same.” Instead, practitioners should reinvest time in understanding the broader human, societal, and institutional implications of their work.

### 3. Overconfidence and lack of accountability in AI systems pose real risks
As the journal’s position paper highlights, AI systems, unlike human stakeholders, do not carry social or reputational stakes. This can lead to overconfident outputs without accountability—particularly dangerous in high-stakes domains like healthcare, law, and education. Without better interfaces for uncertainty, professionals risk being distanced from the information they need to make sound judgments.

### 4. “Conversational uncertainty” is now central to real-world AI use
In many professional settings, decisions are not made through formal statistical outputs alone, but through dialogue—between clinicians, experts, or increasingly, humans and machines. Understanding how uncertainty is communicated and interpreted in these conversational settings is critical, especially as large language models become more influential.

### 5. Bridging qualitative and quantitative thinking is essential
A recurring theme is the need to close the long-standing divide between quantitative methods and qualitative insight. Many real-world decisions are inherently qualitative, yet current AI systems—and much of data science—are optimised for quantification. Failing to integrate these perspectives risks repeating past mistakes where “the numbers” were treated as unquestionable truth.

### 6. Participatory approaches lead to better long-term decisions
Although slower upfront, participatory and deliberative processes—bringing together diverse expertise and perspectives—can prevent costly mistakes and misaligned systems. In the long run, they are more effective than purely efficiency-driven approaches.

## Join the conversation

This conversation touches on a theme we often explore here at Real World Data Science: the idea that the future of data science and AI will not be defined by technical capability alone, but by how well we integrate human judgment, context, and responsibility into our systems. The position paper—and *RSS: Data Science and Artificial Intelligence* more broadly—is an open invitation to engage with these questions. Whether through research, case studies, or reflections from practice, there is a clear call for contributions that connect technical work with real-world impact.

As Neil suggests, the answers are unlikely to come from any single discipline or organisation. They will emerge from a broader conversation across the data science community.

Now is the time to be part of that conversation: answer the journal’s [call for submissions](https://academic.oup.com/rssdat/pages/call-for-papers-uncertainty-in-the-era-of-ai).

::: {.article-btn}
[Explore more data science ideas](/foundation-frontiers/index.qmd)
:::

::: {.further-info}
::: grid

::: {.g-col-12 .g-col-md-12}

:::
:::
---
title: "Beyond Quantification: Interview with Professor Sylvie Delacroix on Navigating Uncertainty with AI"
description: "Professor Sylvie Delacroix discusses why AI systems must move beyond quantification to support professional judgment in high-stakes contexts."
date: 2026-01-29
date-format: long
author: Annie Flynn
categories:
- Interviews
- AI
- Ethics
- Uncertainty
toc: true
image: images/thumb.png
---

We recently published a [*Data Science Bite*](https://realworlddatascience.net/foundation-frontiers/datasciencebites/posts/2025/11/21/uncertainty.html) breaking down the first position paper of the newly launched journal, [*RSS: Data Science and Artificial Intelligence*](https://academic.oup.com/rssdat). The paper, [*Beyond Quantification: Navigating Uncertainty in Professional AI Systems*](https://academic.oup.com/rssdat/article/1/1/udaf002/8317136), argues that if AI is truly to support professional decision-making in high-stakes fields, we must move beyond probabilistic measures and use participatory approaches that allow experts to collectively express and navigate non-quantifiable forms of uncertainty.

*Real World Data Science* recently had the opportunity to speak to the paper’s lead author, Professor Sylvie Delacroix, about how AI can better support human judgment, why it is crucial to recognise forms of uncertainty that can’t be reduced to numbers, and how participatory design can make AI a true partner, rather than a replacement, for professionals.

Watch the full interview below and scroll down for key takeaways and some analysis.

---

## Interview: Beyond Quantification and Uncertainty in AI

{{< video https://www.youtube.com/watch?v=tJDy293oqPk >}}

---

## Key Takeaways at a Glance

### Not all uncertainty is measurable

AI often focuses on quantifiable uncertainty, like probabilities or confidence scores, but ethical and contextual uncertainties are equally important in professions like healthcare, education, and justice.

> “The problem is that if we design these systems in a way that means they're only capable of communicating these quantifiable types of uncertainty, we risk systematically undermining the significance and importance of non-quantifiable types of uncertainty… which are fundamentally ethical and contextual.”

### Participatory AI matters

Systems should let professionals shape how uncertainty is expressed, supporting collaboration and collective judgment rather than replacing human decision-making.

> “The intervention that we want is ideally one that means the systems are mouldable by the users over time… that’s what we mean by participatory interfaces.”

### The goal is to support and foster human intelligence, not replace it

The most valuable AI tools help professionals reflect, reason, and intuitively navigate complex situations, rather than just process more data faster.

### Real-world AI is already in use

GPs, teachers, and other professionals are using AI in sensitive ways, sometimes for informal “sense-making” conversations that influence moral judgments.

### Small refinements have big impact

Features like expressing incompleteness, ethical uncertainty, or alternative perspectives can significantly strengthen professional agency when developed with participatory input.

> “You could imagine a GP flagging an output and saying… it turns out the output could have been very dangerous because it didn’t include key diagnostic tools… and you could then imagine an interesting conversation with other GPs to figure out together how incompleteness should be expressed.”

### Efficiency should not undermine judgment

AI can save time, but systems must preserve the dynamic, normative nature of the professional practices within which they are deployed to ensure long-term effectiveness.

### The time to act is now

Professionals, designers, and regulators need to collectively shape AI tools before design choices are frozen, ensuring they support human-centred, ethical practice.

> “If professionals just wait for regulation to intervene, there’s a risk that regulation will arrive only when design choices are frozen… we all have agency in this; we can’t afford to be passive.”

---

## Join the conversation

[*RSS: Data Science and Artificial Intelligence*](https://academic.oup.com/rssdat) has an open [call for submissions](https://academic.oup.com/rssdat/pages/call-for-papers-uncertainty-in-the-era-of-ai) responding to the paper.

Sylvie Delacroix’s work is a call to action for data scientists, designers, and professionals alike. We have a window of opportunity to shape AI systems that encourage humans to keep re-articulating the values they care about.

We want to hear from you. As AI tools become more integrated into high-stakes professions, how can we ensure that systems support human judgment in all its facets rather than simply optimising for efficiency?

Read the full paper [here](https://academic.oup.com/rssdat/article/1/1/udaf002/8317136), or our accessible digest [here](https://realworlddatascience.net/foundation-frontiers/datasciencebites/posts/2025/11/21/uncertainty.html), and join the conversation about building AI tools that truly serve people, not just processes.

::: {.article-btn}
[Explore more data science ideas](/foundation-frontiers/index.qmd)
:::

::: {.further-info}
::: grid

::: {.g-col-12 .g-col-md-12}
About the speaker:
: [Professor Sylvie Delacroix](https://delacroix.uk/) is the Inaugural Jeff Price Chair in Digital Law at King’s College London. She is also the director of the [Centre for Data Futures](https://www.kcl.ac.uk/research/centre-for-data-futures) and a visiting professor at Tohoku University. Her research focuses on the role played by habit within ethical agency, the social sustainability of the data ecosystem that makes generative AI possible, and bottom-up data empowerment.

:::
:::