Open source AI might kill us. Closed AI could enslave us. Choose your apocalypse.


There’s a familiar discomfort creeping in again, something I felt in the early 2010s as I watched social media’s promises of connection and community unravel into mass manipulation.

Facebook and propaganda bots were the first dominoes. Cambridge Analytica, Brexit, elections around the globe: it all felt like a betrayal of the internet’s original dream.

Now, in the 2020s, I’m watching the same forces circle something even more volatile: artificial superintelligence.

This time, the stakes are terminal.

Before we dive in, I need to be clear: when I say ‘open’ vs. ‘closed’ AI, I mean open-source AI, whose models are freely available to every citizen of Earth, versus closed-source AI, whose models are controlled and trained by corporate entities.

The company OpenAI complicates the comparison: it ships closed-source models (with a stated plan to release an open-source model in the future), yet its unusual nonprofit-governed structure makes it arguably not a conventional corporate entity.

That said, OpenAI’s chief executive, Sam Altman, declared in January that his team is “now confident we know how to build AGI” and is already shifting its focus toward full-blown superintelligence.

[AGI, artificial general intelligence, is AI that can do anything a human can; superintelligent AI is an artificial intelligence that surpasses the combined intellectual capabilities of humanity, excelling across every domain of thought and problem-solving.]

Elon Musk, another figure working at the AI frontier, predicted during an April 2024 livestream that AI “will probably be smarter than any one human around the end of [2025].”

The engineers charting the course are now talking in months, not decades, a signal that the fuse is burning fast.

At the heart of the debate is a tension I feel deep in my gut, between two values I hold with conviction: decentralization and survival.

On one side is the open-source ethos. The idea that no company, no government, no unelected committee of technocrats should control the cognitive architecture of our future.

The idea that knowledge wants to be free. That intelligence, like Bitcoin, like the Web before it, should be a commons, not a black box in the hands of empire.

On the other side is the uncomfortable truth: open access to superintelligent systems could kill us all.

Who Gets to Build God?

Decentralize and Die. Centralize and Die. Choose Your Apocalypse.

It sounds dramatic, but walk the logic forward. If we do manage to create superintelligent AI, models orders of magnitude more capable than GPT-4o, Grok 3, or Claude 3.7, then whoever interacts with that system does more than simply use it; they shape it. The model becomes a mirror, trained not just on the corpus of human text but on live human interaction.

And not all humans want the same thing.

Give an aligned AGI to a climate scientist or a cooperative of educators, and you might get planetary repair, universal education, or synthetic empathy.

Give that same model to a fascist movement, a nihilist biohacker, or a rogue nation-state, and you get engineered pandemics, drone swarms, or recursive propaganda loops that fracture reality beyond repair.

Superintelligent AI makes us smarter, but it also makes us exponentially more powerful. And power without collective wisdom is historically catastrophic.

It sharpens our minds and amplifies our reach, but it doesn’t guarantee we know what to do with either.

Yet the alternative, locking this technology behind corporate firewalls and regulatory silos, leads to a different dystopia. A world where cognition itself becomes proprietary. Where the logic models that govern society are shaped by profit incentives, not human need. Where governments use closed AGI as surveillance engines, and citizens are fed state-approved hallucinations.

In other words: choose your nightmare.

Open systems lead to chaos. Closed systems lead to control. And both, if left unchecked, lead to war.

That war won’t start with bullets. It will begin with competing intelligences, some open-source, some corporate, some state-sponsored, each evolving toward different goals, shaped by the full spectrum of human intent.

We’ll get a decentralized AGI trained by peace activists and open-source biohackers. A nationalistic AGI fed on isolationist doctrine. A corporate AGI tuned to maximize quarterly returns at any cost.

These systems won’t simply disagree. They’ll conflict, at first in code, then in trade, then in kinetic space.

I believe in decentralization. I believe it’s one of the only paths out of late-stage surveillance capitalism. But the decentralization of power only works when there is a shared substrate of trust, of alignment, of rules that can’t be rewritten on a whim.

Bitcoin worked because it decentralized scarcity and truth at the same time. But superintelligence doesn’t map to scarcity; it maps to cognition, to intent, to ethics. We don’t yet have a consensus protocol for that.

The Work We Need to Do

We need to build open systems, but they must be open within constraints. Not dumb firehoses of infinite potential, but guarded systems with cryptographic guardrails. Altruism baked into the weights. Non-negotiable moral architecture. A sandbox that allows for evolution without annihilation.

[Weights are the learned numerical parameters of an AI model; they absorb the biases, values, and incentives baked into its training data and its creators’ choices. If we want AI to evolve safely, those weights must encode not only intelligence, but intent. A sandbox is meaningless if the sand is laced with dynamite.]
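
To make “cryptographic guardrails” slightly more concrete, here is a minimal sketch of one such guardrail in Python: refusing to load model weights unless their hash matches a published, trusted manifest. Everything here is hypothetical on my part, the file name, the manifest, the placeholder digest; real systems would use signed attestations, but the principle is the same: verify before you trust.

```python
import hashlib
from pathlib import Path

# Hypothetical manifest: weight files mapped to digests published by a body we trust.
TRUSTED_DIGESTS = {
    "model.safetensors": "9f2b8c51...",  # placeholder, not a real digest
}

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file through SHA-256 so multi-gigabyte weights never sit in RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_weights(path: Path) -> None:
    """Refuse to load weights whose digest is unknown or doesn't match the manifest."""
    expected = TRUSTED_DIGESTS.get(path.name)
    if expected is None or sha256_of(path) != expected:
        raise RuntimeError(f"Untrusted or tampered weights: {path.name}")

# verify_weights(Path("model.safetensors"))  # call this before any model load
```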

We need multi-agent ecosystems where intelligences argue and negotiate, like a parliament of minds, not a singular god-entity that bends the world to one agenda. Decentralization shouldn’t mean chaos. It should mean plurality, transparency, and consent.
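
As a toy illustration of that parliament, here is a hedged Python sketch: several stub agents vote on a proposal, and nothing is adopted without a supermajority. The Agent class, the canned stances, and the 66% quorum are my own hypothetical choices, not anyone’s real architecture; the point is only that consensus, not any single mind, gates the outcome.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Agent:
    name: str
    stance: str  # stands in for a full model; this stub just returns a canned position

    def propose(self, question: str) -> str:
        # A real agent would reason about the question; the stub ignores it.
        return self.stance

def parliament(agents: list[Agent], question: str, quorum: float = 0.66) -> str | None:
    """Adopt a proposal only when a supermajority of independent agents converge on it."""
    votes = Counter(agent.propose(question) for agent in agents)
    proposal, count = votes.most_common(1)[0]
    return proposal if count / len(agents) >= quorum else None

chamber = [Agent("open", "defer"), Agent("corporate", "defer"), Agent("state", "act")]
print(parliament(chamber, "Deploy the new capability?"))  # -> "defer" (2 of 3 agree)
```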

And we need governance, not top-down control, but protocol-level accountability. Think of it as an AI Geneva Convention. A cryptographically auditable framework for how intelligence interacts with the world. Not a law. A layer.
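
What might “a layer, not a law” look like in practice? One familiar primitive is a hash-chained, append-only log: each action an AI system takes commits to the hash of the previous entry, so any retroactive edit is detectable by any auditor. A minimal sketch, with hypothetical log entries of my own invention:

```python
import hashlib
import json
import time

def entry_hash(entry: dict) -> str:
    """Hash everything in the entry except its own hash field."""
    body = {k: v for k, v in entry.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def append_entry(log: list, action: str) -> None:
    """Each new entry commits to the previous entry's hash, forming a chain."""
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"time": time.time(), "action": action, "prev": prev}
    entry["hash"] = entry_hash(entry)
    log.append(entry)

def verify_chain(log: list) -> bool:
    """Any retroactive edit breaks either an entry's hash or the link to its successor."""
    prev = "0" * 64
    for entry in log:
        if entry["prev"] != prev or entry_hash(entry) != entry["hash"]:
            return False
        prev = entry["hash"]
    return True

log: list = []
append_entry(log, "query: protein synthesis pathway")   # hypothetical entries
append_entry(log, "filter: dual-use detail withheld")
assert verify_chain(log)
```

Transparency like this doesn’t dictate what an intelligence may do; it guarantees that whatever it does is on the record. That is the Geneva Convention spirit: accountability as infrastructure.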

I don’t have all the answers. No one does. That’s why this matters now, before the architecture calcifies. Before power centralizes or fragments irrevocably.

We’re not simply building machines that think. The smartest minds in tech are building the context in which thinking itself will evolve. And if something like consciousness ever emerges in these systems, it will reflect us, our flaws, our fears, our philosophies. Like a child. Like a god. Like both.

That’s the paradox. We must decentralize to avoid domination. But in doing so, we risk destruction. The path forward must thread this needle, not by slowing down, but by designing wisely and together.

The future is already whispering. And it’s asking a simple question:

Who gets to shape the mind of the next intelligence?

If the answer is “everyone,” then we’d better mean it, ethically, structurally, and with a survivable plan.
