
Opportunities, risks and feedback loops

A tool which futurists often want to build into any comprehensive futures study is some form of feedback loop – running the outputs of the futures work back round the participants one more time, to see how the conclusions hold up, to see what actions can be taken, and to understand what we might have missed. Here’s a good example of a feedback loop in practice: a comprehensive set of studies leading to a developed set of scenarios, leading to published documentation and recommendations – and then an evaluation by a set of participants, leading to prioritisation and structure, and to a whole set of new concerns and opportunities.


As you know, we’ve been working with KuppingerCole since 2023 on analysing the future of cybersecurity. In our blog in December, we gave an outline of our paper Securing Tomorrow: Strategic Cybersecurity Recommendations for 2024–2033. Later that month, we reported back to the Cyberevolution 2024 conference in Frankfurt – and now we report back to you.


The conference agenda gives a good idea of what the cybersecurity industry thinks is going to be important in the next few years – from the risks and opportunities of AI to the development of threat actors and threat intelligence; from zero trust in reality to the impact of global cyber conflicts; and from the human factor in cybersecurity to deeply technical developments in security architecture.


We’d picked up on some of those issues in our report, with our eight recommendations. We brought those recommendations to the seminar and asked our colleagues – using the dynamic polling tools within Slido, projected onto the screens in the conference hall – to rank them. We had intentionally not prioritised our recommendations, so it was interesting to see how participants ordered them. The ranking they chose (with each recommendation’s score) was:

1. Do not neglect basic cyber hygiene. (5.06)

2. Make identity security a central part of the organization's security architecture. (4.19)

3. The cybersecurity industry must collaborate to bring transparency and security to its supply chains. (3.94)

4. CISOs must become advocates for resilience and recovery. (3.63)

5. CISOs need to play a more active role in shaping international and national regulations. (3.19)

6. Know the opposition. (2.69)

7. Take a holistic approach to user-centric security. (2.5)

8. Accept AI as both a risk and a tool for risk mitigation. (2.31)
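
For the curious: the post doesn’t say how Slido derives these scores. One plausible scheme is a Borda-style average – each voter ranks all eight recommendations, a top placement earns 8 points, a bottom placement earns 1, and points are averaged across voters. Here is a minimal illustrative sketch in Python; the scoring scheme is an assumption on our part, and the ballot data and option names are entirely hypothetical:

```python
from statistics import mean

def borda_scores(ballots: list[list[str]]) -> dict[str, float]:
    """Average Borda points per option: with n options, a ballot's
    top-ranked option earns n points and its last-ranked earns 1."""
    n = len(ballots[0])
    points: dict[str, list[int]] = {}
    for ballot in ballots:
        for position, option in enumerate(ballot):  # position 0 = top choice
            points.setdefault(option, []).append(n - position)
    return {option: mean(pts) for option, pts in points.items()}

# Two hypothetical ballots over three shortened option names:
ballots = [
    ["hygiene", "identity", "AI"],
    ["identity", "hygiene", "AI"],
]
for option, score in sorted(borda_scores(ballots).items(), key=lambda kv: -kv[1]):
    print(f"{score:.2f}  {option}")  # hygiene and identity tie on 2.50; AI gets 1.00
```

Under that assumed scheme, a score of 5.06 out of a possible 8 would correspond to an average ballot position of roughly fourth out of eight.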

 

My colleague Annie Bailey at KuppingerCole will be saying more about this ordering in an upcoming blog post.


But then we took the opportunity to ask the questions that interest futurists: what did the participants see as the biggest risks, and the biggest opportunities, impacting cybersecurity over the next ten years? This is where we get the chance to check our workings – and to see what we missed. The results were fascinating.


Risks


We started this project some 18 months ago, and since then AI has developed considerably. We had a specific recommendation (“accept AI as both a risk and a tool for risk mitigation”) in our paper, but it was clear that AI was far and away the biggest concern: AI qua AI, but also “AI breaks down all our logical defence logic”, “AI knowns and unknowns” and, interestingly, “Laziness and Trust in AI”. We risk not only failing to understand what AI is going to be capable of – we also risk trusting it too much! The disconnect between the low ranking of the AI recommendation and the perception of future AI risks is striking. Our initial view is that the recommendation may have been too bland – always a danger when consolidating large amounts of user input – and we will want to understand this dichotomy more clearly next time round.


Individual concerns often mapped to our study – threats from state actors, cybercrime, ransomware, deepfakes and advanced phishing attacks. But participants also worried about digital illiteracy and the oversimplification of cybersecurity, especially when faced with a “lack of insight into emergent system details with more generated code and content”.

Ultimately, in the words of one participant, the big risk is in the failure to integrate “all the sources of risk (human, natural, technological) into a unique coherent model”.

 

Opportunities


So, in a landscape dominated by a lack of a comprehensive model, with numerous threats, all destabilised by the rapid and unpredictable development of AI, what did attendees think of the opportunities?


The big hope is that “Business without security is not sustainable. More and more this is becoming common sense”. Helped by a “transition from a culture of fear to one of trust [which] means avoiding a fear-driven approach to cybersecurity”, greater collaboration could lead to “international and bolder cooperation”, regulation, and standardisation. By “weaponizing compliance” and designing dynamic “systems that expect attacks and heal”, resilient, internationally coordinated cybersecurity could provide a real defence against state and individual actors.


As for AI? Those dynamic systems would be used to “fight AI with AI” – friendly AI systems adapting and responding not just as their opponents do, but in advance of them.


Inevitably in a time-constrained seminar, the opportunities are less developed than the risks. We’ll be examining them in more detail as we move forward.


We’ll continue to develop our scenarios. Our basic scenario framework is strong, and is accommodating geopolitical and technological change – and there is a lot of change. The threat landscape is shifting fast, AI is unpredictable, and some opportunities – especially internationally coordinated regulation – may have to follow where industry leads.

By using this form of feedback loop, we’ve both reinforced the findings of the paper and introduced new concerns – which will enable us to improve and deepen subsequent studies. The variables and implications will change as the environment changes, and we will continue to watch how the cybersecurity industry has to flex to mitigate the risks and take advantage of the opportunities. We’re keen to gain wider participation – if you have thoughts or want to contribute to our next iteration, get in touch!


Written by Jonathan Blanchard Smith, SAMI Director


The views expressed are those of the author(s) and not necessarily of SAMI Consulting.


Achieve more by understanding what the future may bring. We bring skills developed over thirty years of international and national projects to create actionable, transformative strategy. Futures, foresight and scenario planning to make robust decisions in uncertain times. Find out more at www.samiconsulting.co.uk


Image by Pete Linforth from Pixabay


