AI can be a powerful tool in futures and foresight work. It can scan vast amounts of information, surface patterns across domains, and generate plausible starting points for scenarios or speculative artifacts. Used well, it expands our field of vision and accelerates exploration.

But there’s a risk here, too. Because AI is trained on what already exists.

That means it is exceptionally good at reproducing “used futures”: inherited ideas about the future that made sense in the past but no longer fit emerging realities. It amplifies what is well-documented, widely discussed, and already legible, while marginal, local, or genuinely novel perspectives are more likely to be missed.

If we’re not careful, we start mistaking fluency for insight. And this is where facilitation matters more, not less.

Futures work isn’t just about assembling information. It’s about surfacing assumptions, challenging dominant narratives, and making space for perspectives that don’t yet have language or legitimacy. That work can’t be automated. It depends on human judgment, relational trust, and the ability to sit with uncertainty without defaulting to the most available answer.

AI can support futures thinking. But it shouldn’t replace imagination, nor should it be allowed to close down possibility prematurely. The future isn’t something to be generated. It’s something to be negotiated, together.
