The Coming Wave vs. Open Source

Summarising Mustafa Suleyman's The Coming Wave and sharing my thoughts

Summary of The Coming Wave

I recently finished Mustafa Suleyman’s book, The Coming Wave, where he argues that the unchecked proliferation of AI will lead to a redistribution of power that could destroy liberal nation-states.

Suleyman thinks that the omni-use and scalable nature of tomorrow’s AI models will have a paradoxical effect: centralising power at the state level while simultaneously distributing it across AI-enabled entities (corporations, micro-communities, militias…). Moreover, this proliferation can lead to unmanageable levels of misinformation and persistently high unemployment. The combination threatens the checks, balances, and institutions that sustain liberal-democratic states, potentially causing them to devolve into either ineffective “zombie” governments or totalitarian regimes.

To prevent these nightmare outcomes, Suleyman thinks we need to develop a global regulatory framework. He acknowledges this is nearly impossible due to competing private and national interests, but insists that we must try.

Suleyman is essentially advocating for a hyper-regulated AI world that would not only spell the end of open-source development but also lead to an oligopoly of closed-API companies. While I painfully agree it would be a price worth paying to save our liberal democracies, I wonder: are his doomsday predictions accurate? Is an effective international regulatory framework achievable?

Pro-open source arguments

I’ve tried hard to find good safety counterarguments from the big techno-optimist, pro-open-source advocates such as Marc Andreessen and Arthur Mensch. However, their points always fall into the same reductionist themes:

  • Models are safe because they simply compress existing internet knowledge.

  • Open-source models are safer due to community oversight.

  • Effective AI safety monitoring hinges on robust tools, which are easier to develop in an open-source ecosystem than in one controlled by a few large corporations and not-so-independent regulatory bodies.

  • Individuals with harmful intent can misuse closed APIs through fine-tuning just as easily.

  • Regulation should focus on product-level use cases, not the underlying models, just as we regulate software products rather than the programming languages they are built with.

  • The most reductionist view is that AI is merely “math,” and those advocating for regulation are just doomsayers who’ve read too many sci-fi novels.

My views

These arguments overlook massive concerns about the risks posed by super-intelligent, autonomous, non-deterministic agents and by the democratisation of access to destructive capabilities (e.g., powerful misinformation tools, military-grade technology).

While I’m not entirely convinced that Suleyman’s dystopian future is the most likely outcome in an unregulated world, nor that his vision for international cooperation is achievable, the dangers highlighted in his book seem very likely in an open-source world. So regulation addressing key issues such as alignment and autonomy sounds necessary. (And I hate to say it, especially as someone who benefits from open-source models on Kablio.com and is traditionally in favour of light-touch tech regulation.)

It’s crazy to see how little consensus there is among leading industry insiders about what the future AI world will look like, which makes me very sceptical of anyone who claims certainty about where we’re heading.

One point of consensus among all camps is the necessity of an Apollo program for AI safety. We should definitely start with that.