Futures Practice in an Age of AI – Beyond Prediction to Engagement

  • Jonathan Blanchard Smith
  • 9 hours ago
  • 4 min read

This is an edited version of the presentation which Jonathan Blanchard Smith delivered to the Futures4Europe Conference 2025, held in Vienna, 14-15 May, which explored the theme of Futures-Oriented Collective Intelligence. The abstract for the paper is printed in the Book of Abstracts, and the full paper will be published in an upcoming Proceedings book.


The argument


As this conference shows, futures and foresight are at a moment of intensification: intensification of pressure to deliver futures at scale; intensification of tool use; intensification of claims about what artificial intelligence can do. Under these pressures, the core purpose of foresight practice is at risk of being displaced or diluted.

This paper argues that AI has a legitimate role in futures work - but that collective intelligence, as defined by the European Commission and as practised in institutional foresight, depends on motivation, participation, dialogue, and shared commitment. Those are things AI cannot provide.


Over the past two years, we've seen large language models (LLMs) rapidly enter the foresight field, with the emergence of scanning systems, automated scenario generators, predictive dashboards, and policy drafting tools. We have seen arguments that LLM ensembles can match or surpass human forecasters in probabilistic tasks. And we’ve seen scenarios produced by generative AI in seconds, without engagement, without memory, and without challenge.


The question we asked was: What happens when foresight is optimised for scale and speed, but stripped of social process? What does it mean to produce plausible futures that no one owns?


We begin with a distinction that has become urgent to defend: forecasting is about probability, while foresight is about possibility. Forecasting narrows; foresight opens. Forecasting is strategic calculation; foresight is strategic imagination.


If we treat foresight as a content function, which generates plausible outputs for others to interpret, we lose the aspect that makes it powerful: its ability to create the conditions for reflection, contestation, orientation, and action. These are not artefacts. They are human processes.


There is a very real seduction here. AI can parse millions of documents, synthesise trends, generate ‘day in the life’ scenarios, and visualise complex systems in seconds. Practitioners are using it to create experiential artefacts and simulation narratives. Governments are experimenting with GPT-based wrappers to support real-time scanning.


But none of this activity is strategic unless it results in insight, and insight only emerges through human judgement, facilitated sense-making, and social interpretation. Foresight must remain a practice of inquiry, not just an engine of production.


The dangers of AI


Let us be clear about where the risks lie. AI is designed for coherence, not truth. It reinforces prompts, smooths disagreements, and completes narratives (even flawed ones) with fluency. In foresight, that can be dangerous.


LLMs suffer from acquiescence bias, overconfidence, and hallucination. They generate false certainty, creating futures that are elegant but unanchored. They lack institutional memory, motivational force, and contextual depth. They cannot ask why a scenario matters. And they cannot build the trust required to act on one.


Foresight is a social act


Foresight is not only about what is imagined - it is about who imagines it. Scenario creation is a social act. It allows people to surface concerns, test boundaries, and build ownership.

In the UK Futures Toolkit, in SAFIRE, and in the Risk Scenarios Toolkit, we observed the same thing: engagement produces alignment, dialogue produces agency, and the quality of strategic insight is directly proportional to the degree of participation in its development.

It is important to say that not every scenario exercise must begin with co-creation. We have used the SAFIRE regional scenarios in various contexts, and they are robust, well-tested, and rich.


But even the best scenario sets are inert unless people engage with them. The meaning comes from use. Scenarios are not messages to be transmitted. They are catalysts for reflection, and their power lies in the conversations they provoke.


This is not an abstract worry. We have historical examples of systems where optimisation outpaced judgment: algorithmic trading triggering flash crashes; automated targeting systems failing in conflict zones; GPT-generated reports filled with confident hallucinations.

In each case, the system produced action without reflection. And when foresight falls into the same trap, we risk producing futures no one trusts, no one questions - and no one uses.


The Hybrid Model We Need


We are not arguing against AI. We are arguing for its containment. AI is extraordinarily powerful in pattern recognition, drafting, and visualisation. It is useful in scanning, in synthesis, and in rapid iteration.


But it cannot decide what matters. It cannot deliberate. And it cannot confer legitimacy. So, the model we propose is a hybrid one: AI supports foresight, but humans interpret it, contextualise it, and commit to it. Meaning-making remains human.


Three Principles for Future-Oriented Collective Intelligence


So, how do we safeguard the integrity of foresight? Three principles:

First, automate inputs, not ownership. Let AI support the work, not define it.

Second, design for deliberation. Participation is not merely cosmetic; rather, it is a strategic mechanism.


Third, preserve institutional memory and motivational force. Without them, foresight becomes just another report on the shelf.


The Commission has defined collective intelligence as a participatory, knowledge-building, future-shaping process. If we want Futures-Oriented Collective Intelligence to remain more than a slogan, we must defend its human core.


AI may help us see further. But only collective intelligence, an intelligence which is relational, deliberative, and situated, helps us choose what to act on.


Written by Jonathan Blanchard Smith, SAMI Director


The views expressed are those of the author(s) and not necessarily of SAMI Consulting.


Achieve more by understanding what the future may bring. We bring skills developed over thirty years of international and national projects to create actionable, transformative strategy. Futures, foresight and scenario planning to make robust decisions in uncertain times. Find out more at www.samiconsulting.co.uk


Image by Gerd Altmann from Pixabay

