Safety First. Always.
This is a safety-first project.
Not Pollyanna. Not reckless optimism. Not wishful thinking. This is about getting it right, because getting it wrong is not an option.
Eliezer Yudkowsky and Nate Soares were absolutely right when they titled their 2025 book If Anyone Builds It, Everyone Dies. The warning is stark, uncompromising, and necessary. The existential risk posed by artificial superintelligence is real, and anyone who tells you otherwise is either uninformed or selling something.
But they didn’t go far enough.
THEIR TITLE
If Anyone Builds It, Everyone Dies
WHAT IT SHOULD HAVE SAID
If Anyone Builds It Irresponsibly, Everyone Dies.
Because someone is going to build it. That is no longer a question. The idea that we can prevent the development of AGI and ASI is not a strategy; it is a fantasy. The genie is out of the bottle, the knowledge is distributed, and the incentives are overwhelming. Pretending otherwise wastes the very time we need most.
So we need to approach this in a completely new way. We need to define advanced AI — AGI, ASI — as what it plainly is: a global problem demanding a global solution. Much as we did with nuclear weapons, we need international frameworks for control. But unlike the nuclear non-proliferation regime, this framework cannot be captured — not by governments, not by corporations, not by any single ideology or national interest.
The only way to prevent capture is through global popular will. A consensus so broad and so deeply held that it shapes the rules from the ground up — not imposed from above, but demanded from below.
“There has to be some element of international cooperation, or maybe at least minimum standards around how these technologies should be deployed.”
— Demis Hassabis, CEO of Google DeepMind, February 2026
This call for global cooperation isn’t coming only from the margins or the worried. It is coming from the very top of Silicon Valley. Demis Hassabis, Nobel laureate, builder of AlphaFold, and co-founder of DeepMind, has spoken plainly about the need for international dialogue, minimum standards, and collective action. He has warned that current institutions may not be strong enough, and that AI’s digital nature means it will cross every border on Earth.
So the call is the same whether it comes from the summit of the AI industry or from a GP’s consulting room in Bracknell. We need a global movement. One that rapidly draws the greatest minds together, rapidly assembles the best evidence, and puts it before the world, so that people can decide.
That is what this project is: the skeleton of a movement. In the coming weeks and months, there will be more about how that movement should come together, what it should demand, and how it should work.
But the purpose is clear from the start: build a consensus, fast. Fast enough that we can shape what is coming before it shapes us. Fast enough that the worst fears of FOME aren't realised, and that as few people as possible miss out on the extraordinary potential of these technologies because we moved too slowly, or too recklessly. Let these innovations save as many lives as possible. But never, never in a harmful way.
The Control Problem
Can we maintain meaningful human oversight of systems that may surpass our own intelligence? How do we build kill switches for something smarter than us?
The Alignment Problem
Can we ensure that superintelligent systems share our values and goals? And whose values? These are not just engineering problems; they are civilisational ones.
We understand that both problems are real and urgent, and that solving one without the other is not enough. What we need is a Manhattan Project for a new world: a concentrated, coordinated, global effort to nail down these questions as best we can, as fast as we can, with the widest possible participation.
Not to stop the future. To make it survivable. To make it good.
The clock is running.
This is the beginning. Stay close. There is more to come — and it matters more than almost anything else.
FOME — Fear of Missing Everything · Safety First, Always