
Better Data, Better Future: Why the quality of foresight now depends on the quality of data

  • Dr. Christian Spindler

This blog is the fourth in the SIP blog series, a set of blogs shared among the partners of the Sustainable Innovation Pathways project. This cross-border, collaborative effort brings together foresight, financial forecasting and technology readiness levels to understand where companies, industries, and countries can best decarbonise.


In the first blogs in this series, we made the case that decarbonisation has firmly entered the realpolitik sphere, surviving not because it is morally appealing but rather because it increasingly supports concrete interests like resilience, competitiveness, and security. We also showed that the economic forces driving this shift endure in a populist era characterised by disputed climate rhetoric and deteriorating policy coherence.


This article focuses on a subtle but significant change: the data revolution that underpins climate strategy. The main thesis is straightforward: the quality of foresight depends on the data supporting it. As data sources improve, modelling evolves from static plans to dynamic, adaptive systems. The Sustainable Innovation Pathways (SIP) concept is based on this evolution.


From static assumptions to dynamic evidence


Traditional net-zero modelling relied on a narrow set of inputs: average carbon abatement cost curves, stylised technology timelines, and high-level emissions projections. These approaches were adequate for illustrating direction but weak at informing real investment decisions under uncertainty. They assumed that key parameters such as technology readiness, capital availability, and policy signals could remain fixed for years at a time.


That assumption no longer holds. In the last decade, three shifts have fundamentally changed the modelling landscape:


  • An explosion of accessible, structured data across technology, finance, and industry.

  • Rapid evolution of digital tools, including AI-based forecasting and optimisation.

  • Continuous feedback from real-world deployment, shortening the gap between theory and practice.


SIP is designed to exploit this new environment. Its modelling chain treats data not as a static input but as a continuously refreshed signal that reshapes pathways over time.


New data layers reshaping decarbonisation modelling


Technology readiness: from labels to trajectories


Technology Readiness Levels (TRLs) were once static snapshots of how mature a technology was at a point in time. Today, deployment data, learning rates, pilot outcomes, and supply-chain constraints allow them to be treated as dynamic trajectories. Companies no longer just assign a technology a TRL label; they track its progression from one level to the next as part of programme reporting and evaluation (https://www.swisscore.org/climbing-the-trl-ladder-from-idea-to-impact). This lets SIP model not only whether a technology will work, but when it will be worth investing in at scale.


By linking TRL progression to cost declines, infrastructure availability, and policy support, SIP can distinguish technologies that are promising in theory from those likely to matter before critical climate deadlines.
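To make the trajectory idea concrete, here is a deliberately minimal sketch of treating TRL as a time series rather than a label. The milestone figures and the linear-pace assumption are invented for illustration; SIP's actual readiness modelling is considerably richer.

```python
# Hedged sketch: TRL as a trajectory, not a snapshot.
# Given observed (year, TRL) milestones for a technology, estimate its
# average pace of progression and extrapolate when TRL 9 (proven in
# operation) becomes plausible. All milestone data below are invented.
def project_trl9_year(milestones: list) -> float:
    """milestones: [(year, trl), ...] sorted by year; linear extrapolation."""
    (y0, t0), (y1, t1) = milestones[0], milestones[-1]
    levels_per_year = (t1 - t0) / (y1 - y0)
    return y1 + (9 - t1) / levels_per_year

# A technology at TRL 4 in 2015 and TRL 7 in 2024 advances ~0.33 levels
# per year, suggesting TRL 9 around 2030 if the pace holds.
print(round(project_trl9_year([(2015, 4), (2024, 7)])))  # → 2030
```

The point of even this caricature is that the output is a date, not a label, and the date moves as new milestones arrive.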


Climate finance: capital flows as signals


The data on global climate finance has grown quickly. We now have far better visibility on:


  • Public funding allocations and industrial policy priorities (for example, the EU’s Net-Zero Industry Act and the US Inflation Reduction Act, which provide transparent, regularly updated data on subsidy volumes, eligible technologies and deployment timelines).

  • Private capital flows between technologies and regions (for example, BloombergNEF and IEA investment trackers show how capital allocation shifts from year to year between renewables, grids, storage, and fossil assets, making capital availability a measurable, time-varying input rather than a fixed assumption).

  • Risk premiums, the cost of capital, and financing limits (for example, the Network for Greening the Financial System (NGFS) climate scenarios, which regularly update their assumptions about the cost of capital and risk premiums for different sectors and technologies used by central banks and supervisors around the world).


In SIP, these data are not external "context" sitting outside the model; they directly constrain the feasible pathways. If capital is abundant for renewables but scarce for first-of-a-kind industrial CCS, the optimisation engine reflects that immediately and adjusts as conditions change.


Industry-specific emissions: granularity matters


High-level emissions averages conceal the real abatement levers. Because emissions data are now available at the sector and process level, we can model at the level where decisions are actually made: blast furnaces versus direct reduced iron (DRI) in steel, clinker ratios in cement, and process heat in chemicals.
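A toy calculation shows why this granularity matters. The emissions intensities below are rough literature ballparks (roughly 1.9 tCO2 per tonne of steel for the blast-furnace route, about 0.6 for gas-based DRI, near 0.05 for hydrogen-based DRI on clean power), not SIP data, and the production mix is invented.

```python
# Illustrative sketch of process-level emissions modelling in steel.
# Intensities are rough ballpark figures in tCO2 per tonne of steel,
# used here only to show the arithmetic, not as SIP inputs.
INTENSITY_T_CO2_PER_T_STEEL = {
    "blast_furnace_bof": 1.9,
    "natural_gas_dri_eaf": 0.6,
    "hydrogen_dri_eaf": 0.05,
}

def sector_emissions(production_mt: dict) -> float:
    """Total emissions (MtCO2) for a production mix given in Mt of steel."""
    return sum(INTENSITY_T_CO2_PER_T_STEEL[route] * mt
               for route, mt in production_mt.items())

# A single sector-wide average would hide that shifting 20 Mt of output
# from blast furnaces to hydrogen DRI removes a disproportionate share.
baseline = sector_emissions({"blast_furnace_bof": 80, "natural_gas_dri_eaf": 20})
shifted = sector_emissions({"blast_furnace_bof": 60, "natural_gas_dri_eaf": 20,
                            "hydrogen_dri_eaf": 20})
print(round(baseline - shifted, 1))  # MtCO2 saved by the process shift
```

The same total production gives very different emissions depending on the process mix, which is exactly the signal a sector average erases.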


SIP incorporates this granularity so that pathways reflect how sectors actually operate, not just sector-wide averages. It also makes second- and third-order effects explicit, such as cross-sector dependencies, infrastructure bottlenecks, and workforce constraints.


AI-based projections: pattern recognition at scale


Machine learning techniques are increasingly used to discern patterns in technology diffusion, cost trajectories, and demand evolution. They do not replace scenario thinking; they sharpen it. These tools mine large historical datasets for learning effects, saturation points, and adoption bottlenecks that are difficult to detect with expert judgement alone, allowing scenarios to be adjusted continuously as new information arrives rather than relying on a single algorithmic forecast.


A clear example of this change is lithium-ion batteries. Datasets like BloombergNEF's Battery Price Survey track costs, deployment, and learning effects over time, showing technology trajectories that are dynamic and sometimes even reverse, rather than smooth, linear progress.
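The learning-effect logic behind such datasets can be sketched with Wright's law: each doubling of cumulative production cuts unit cost by a roughly constant learning rate. The starting cost and 18% learning rate below are illustrative assumptions, not BloombergNEF figures.

```python
import math

def wrights_law_cost(c0: float, q0: float, q: float, learning_rate: float) -> float:
    """Unit cost after cumulative production grows from q0 to q.

    Wright's law: cost falls by `learning_rate` for each doubling of
    cumulative production, i.e. cost = c0 * (q / q0) ** b with
    b = log2(1 - learning_rate).
    """
    b = math.log2(1.0 - learning_rate)
    return c0 * (q / q0) ** b

# Illustrative: packs at $1000/kWh at 1 GWh cumulative, 18% learning rate.
# Ten doublings later (1024 GWh), cost is 1000 * 0.82**10, about $137/kWh.
print(round(wrights_law_cost(1000.0, 1.0, 1024.0, 0.18)))  # → 137
```

Real trajectories deviate from this smooth curve, as the 2022 battery price uptick showed, which is precisely why the parameters need continuous re-estimation from fresh data rather than a one-off fit.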


SIP also uses AI-based projections to stress-test assumptions, revealing where historical precedent supports optimism and where apparently robust trends may not hold. The outcome is not a single prediction but a sharpened spectrum of feasible futures.


SIP as an integrated, adaptive modelling chain


What differentiates SIP is not any single dataset, but how these layers are integrated. Qualitative foresight scenarios define the space of possible futures. Quantitative models transform those futures into constraints, costs and opportunities. New data continuously updates both sides of that bridge.


The outcome is dynamic scenario modelling:


As technology costs change, pathways are recalculated. This gives decision-makers direct value: capital keeps flowing towards the most cost-effective and timely interventions instead of being locked into outdated assumptions.


When capital markets tighten or loosen, the optimal investment sequences change. Users can reorder their project pipelines to match real financing conditions and avoid committing scarce capital to pathways that are no longer attractive or bankable.


Policy and geopolitical shocks are absorbed by changing the parameters, not the strategies. This preserves strategic continuity while still allowing quick, evidence-based responses to new regulatory or geopolitical situations.
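The recalculation loop described above can be caricatured in a few lines: rank interventions by cost per tonne abated, fund them until the budget runs out, and rerun whenever the inputs update. The project names, costs, and abatement figures are invented for illustration; SIP's actual optimisation engine is far richer.

```python
# Toy illustration of dynamic re-sequencing. All figures are invented.
def plan(projects: dict, budget: float) -> list:
    """projects maps name -> (capex, tonnes CO2 abated); returns funding order."""
    # Rank by cost-effectiveness: capex per tonne abated, cheapest first.
    ranked = sorted(projects, key=lambda p: projects[p][0] / projects[p][1])
    chosen, spent = [], 0.0
    for name in ranked:
        capex, _ = projects[name]
        if spent + capex <= budget:
            chosen.append(name)
            spent += capex
    return chosen

projects = {"solar_pv": (50, 100), "grid_storage": (80, 80), "pilot_ccs": (120, 60)}
print(plan(projects, 150))        # cheapest abatement fits the budget first
projects["pilot_ccs"] = (90, 120)  # new data: CCS costs fall, abatement improves
print(plan(projects, 150))        # same model, rerun: the pipeline reorders
```

The strategy (fund the most cost-effective abatement the budget allows) never changes; only the parameters do, and the sequence of projects follows.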


SIP does not produce a single "optimal plan". Instead, it offers a living navigation system that can be rerun, stress-tested, and refined as events unfold.


Continuous refinement: SIP as a living framework


A central design principle of SIP is that it is never finished. The framework explicitly assumes that:


  • New technologies will emerge.

  • Existing technologies will surprise — positively or negatively.

  • Political, financial and social conditions will shift faster than planning cycles.


As a result, both the model structure and its underlying data are designed for continuous refinement. Horizon scanning feeds new signals into qualitative scenarios. Updated datasets recalibrate quantitative pathways. Assumptions are challenged, not frozen.


This is not a technical detail; it is a philosophical stance. In a world of profound uncertainty, robustness comes not from precision but from adaptability.


Better data, better futures


The move to net zero is not failing because people aren't trying hard enough. It falters where choices rest on outdated, incomplete, or static data.


SIP's main idea is simple: better data leads to better foresight, which leads to better decisions. As the quality of data improves, so does our ability to allocate capital, plan investments, and make tough choices under pressure.


In a time of realpolitik, populism, and geopolitical fragmentation, this may be the most useful thing of all: a framework that changes with the world instead of pretending it will stay the same.


In the next blog in this series, we'll explore how this data-driven flexibility reshapes the way businesses should think about risk, resilience, and strategic optionality on their paths to net zero.


Written by Dr. Christian Spindler, Co-Founder and CEO of Sustainaccount


The views expressed are those of the author(s) and not necessarily of SAMI Consulting.


Achieve more by understanding what the future may bring. We bring skills developed over thirty years of international and national projects to create actionable, transformative strategy. Futures, foresight and scenario planning to make robust decisions in uncertain times. Find out more at www.samiconsulting.co.uk


Image by Wilfried Pohnke from Pixabay
