The Quiet Revolution: Building Situational Awareness for AI’s Transformative Future


TL;DR

While society has normalised the presence of AI following the initial shock of ChatGPT, we remain largely unprepared for the accelerating pace of AI advancement. The resulting disconnect between AI experts and the general public creates a dangerous knowledge gap. This piece argues for an urgent, inclusive national conversation involving government, civil society, academia, and industry to develop situational awareness and agency as we approach potentially transformative AI capabilities. The time for thoughtful preparation is now – not after disruption has begun. In other words, we need to get it right first time!

The Normalisation of the Extraordinary

Remember the collective gasp when ChatGPT was released back in November 2022? For a brief moment, everyone seemed to grasp that something profound had changed. News cycles buzzed with possibilities and warnings. Policymakers scrambled to understand implications. Businesses raced to formulate strategies.

And then… we adapted. The extraordinary became ordinary.

Yes, this is an over-simplification: many organisations have grasped the situation and are developing AI strategies, but I suspect many have not. And across wider society there’s an even greater sense of ‘business as usual’.

This normality bias – our remarkable ability to acclimatise to change – serves us well in many contexts. But in the case of artificial intelligence, it may be obscuring our view of the horizon. While public attention has moved on (or back) to other pressing concerns – economic challenges, geopolitical tensions, climate change – AI development continues its relentless march forward, largely unnoticed by those outside the field.

The Two Realities

Today, we inhabit two parallel realities:

In the “AI bubble,” researchers, developers, and close observers witness daily advances that continuously reshape their understanding of what’s possible. They see the progression towards increasingly capable systems as an accelerating curve rather than a straight line. They recognise that today’s limitations aren’t permanent barriers but temporary obstacles that will very likely be overcome.

In the “real world,” most people experience AI through incremental improvements to familiar tools – better autocomplete, more accurate recommendations from chatbots, smarter home devices. These changes, while notable, don’t trigger the sense that something fundamentally different is emerging. The revolutionary potential remains abstract, distant.

This disconnect creates a dangerous gap in our collective situational awareness. As significant capabilities emerge – look at recent reasoning model releases and breakthroughs in robotics – they risk arriving as shocks rather than anticipated developments.

The Need for Collective Situational Awareness

Situational awareness – the ability to perceive, understand, and anticipate future states in dynamic environments – is critical during periods of transformation. In military contexts, aviation, and emergency response, it’s recognised as essential for effective decision-making under pressure. Leopold Aschenbrenner’s seminal paper, Situational Awareness: The Decade Ahead (June 2024), puts ‘situational awareness’ firmly in the AI space.

As AI advances, we need to develop this capacity not just individually but collectively. This means:

  1. Perceiving accurately what’s currently happening in AI development, beyond hype and fear
  2. Understanding the implications of these developments across different sectors and communities
  3. Projecting possible futures to anticipate challenges and opportunities before they arrive.

The UK has made some efforts in this direction. For example, the AI Security Institute (formerly the AI Safety Institute) took the lead internationally after the Bletchley Park summit in 2023, and the Government’s AI Opportunities Action Plan (Jan 2025) talks about fostering public trust in the technology. In 2023 the Department for Science, Innovation and Technology researched ‘year 2030’ scenarios for frontier AI, and the Alan Turing Institute continues to advance important research. Yet these initiatives, while commendable, remain too isolated and specialised to build the broad situational awareness we need.

From Spectators to Participants

The perception that AI is being developed by distant experts and corporations, with the rest of society relegated to mere spectators, undermines both our collective ability to navigate this transition and our trust in the process. History teaches us that when people feel powerless in the face of technological change, the result can be either passive resignation or reactive resistance – neither conducive to thoughtful adaptation.

We’ve already seen glimpses of this in the attacks on Waymo autonomous vehicles last year and the anxious narratives about job displacement. These reactions aren’t simply irrational fears; they’re expressions of a deeper concern about agency and voice in shaping our technological future. (There’s probably an underlying, general mistrust of institutions too, which I wrote about recently.)

The alternative is to transform the relationship between AI development and society from one of creators and consumers to one of collaborative stakeholders. This requires creating meaningful mechanisms for public engagement, education, and influence – not after systems are deployed, but during the process of determining what we want these systems to do and how we want them to operate.

The Path Forward

What might a more inclusive approach to AI situational awareness look like?

  1. A National Conversation: The UK needs a structured, accessible, ongoing dialogue about AI that extends far beyond the usual tech hubs and policy circles. This conversation should reach schools, community centres, workplaces, and households, creating spaces for both learning and feedback.
  2. Multi-stakeholder Governance: Government, industry, academia, and civil society organisations must work together to establish frameworks that distribute both the benefits and oversight of AI development. The Ada Lovelace Institute has pioneered important work in this area, but we need to scale these efforts.
  3. Meaningful Transparency: Technical transparency alone isn’t enough; we need “translational transparency” that makes the capabilities, limitations, and trajectories of AI systems comprehensible to non-specialists.
  4. Futures Literacy: Our educational systems need to evolve beyond teaching specific technical skills to fostering the capacity to anticipate and navigate change – what UNESCO calls “futures literacy.”
  5. Distributed Agency: Decision-making about AI deployment should be distributed across society rather than concentrated in a few institutions, with clear mechanisms for democratic input and oversight.

A Race We Can Win By Not Racing

There’s a tendency to frame AI development as a competitive race between companies and between nations. This framing creates pressure for speed over deliberation, deployment over reflection. But unlike traditional technology races, the goal here isn’t simply to be first.

The real opportunity is to be thoughtful: to develop AI in ways that strengthen rather than undermine our social fabric, that expand rather than reduce human agency, that solve rather than exacerbate our most pressing challenges.

This doesn’t mean slowing innovation, but rather ensuring it moves in directions aligned with our broader values and goals. It means recognising that technical advances and social preparation must proceed together, neither outpacing the other.

The Time Is Now

The window for establishing collective situational awareness and agency in AI development won’t remain open indefinitely. While apocalyptic scenarios grab headlines, the more likely risk is a series of smaller but cumulatively significant shifts that occur too quickly for social adaptation.

And we don’t need to predict the exact trajectory of AI advancement to prepare for its implications. We simply need to build the capacity to perceive changes as they happen, understand their potential effects, and respond with agility rather than shock.

The foundations for this preparedness must be laid now, not when disruption has already begun. This isn’t about fear or a Luddite resistance to progress. It’s about ensuring that technological change serves human flourishing – an outcome that requires not just technical expertise but collective wisdom.

The quiet revolution in artificial intelligence is already underway. The question isn’t whether it will transform our world, but whether we’ll be active participants in shaping that transformation. The answer depends on our willingness to build situational awareness today for the world that’s emerging tomorrow.

A Call to Action

If you’re reading this, you have a role to play:

  • If you work in technology, consider how to make your work more accessible and understandable to non-specialists
  • If you’re an educator, explore how to foster future literacy alongside technical skills
  • If you’re in government or civil society, push for inclusive governance models that distribute agency
  • If you’re a citizen, seek out opportunities to engage with AI developments and add your voice to the conversation.

The future of AI isn’t predetermined. It will be shaped by countless decisions made by individuals, organisations, and societies. By building our collective situational awareness, we increase the likelihood that these decisions will reflect our shared aspirations rather than our unexamined assumptions.

So, the time for this work is now! Not because disaster looms, but because opportunity beckons. If we have the foresight to grasp it, that is.
