Situational awareness on A(S/G)I – a (somewhat) cypherpunk perspective

After reading (well, listening to) Leopold Aschenbrenner’s set of essays (more of a book) called SITUATIONAL AWARENESS: The Decade Ahead, I decided to:

  1. shill it
  2. explain where my view differs

First of all: Leopold Aschenbrenner is an industry insider who used to work at OpenAI on AI alignment problems, on a team led by the now famous Ilya Sutskever. He understands what is happening in the industry, and especially how fast things are moving.

The essays open with the claim that "AGI by 2027 is strikingly plausible", and he explains why: not only increases in computational power, but also algorithmic and research improvements. I like how he describes how alien these models could be, what we can actually understand about them, how alignment is done these days, and why that exact strategy won't work further down the line.

Unlike many other people working on AI alignment, he seems to think that this train will not stop, and that any suggestion along the lines of "let's just pause until we figure it out" is doomed to fail. In my opinion he is correct: no regulation will hinder the development, and self-regulation by AI companies is also quite unlikely.

Government will not save us

And definitely not the US government. The solution he emphasizes most is the involvement of the US government. His argument is that the labs can't keep secrets, and that the Chinese Communist Party can steal not only model weights but also algorithmic progress. That might be true, but involving the military will probably not prevent it.

I think people have too idealistic a view of the US government, and especially of the intelligence community. In this view, they seem to be capable of magic: compartmentalization and security against leaks, directing megaprojects such as superalignment of an artificial intelligence, and so on.

When we look at real results, not wishful thinking and action movies, we see a completely different picture. Take space: real progress there has come from the private sector, at least during the last decade. There might be counterexamples, and he mentions one: the Manhattan Project to build the atomic bomb. They somehow kept it secret, although not fully. But they built a nuclear bomb and exploded it over thousands of civilians.

If the goal is alignment with human values and the promotion of peace, cooperation, and prosperity, then the project manager of this gigantic effort can't wear a uniform.

First: even the US military is a shitty organization. They can't even get their accounting right. That does not mean they don't have some smart people, but there's nothing that makes them qualified for any part of this project.

Second: it's a military. ASI is like the Ring of Power, irresistible. It won't be in good hands.

I think the progress to AGI and later ASI is unstoppable, and we seem to agree on this point. He believes that "good people" should have a head start and that this will guarantee safety. Another possibility is a level playing field, an approach promoted by George Hotz (founder of comma.ai, an AI self-driving company).

Conclusion

I obviously don't know what will happen. According to Ray Kurzweil's new book The Singularity Is Nearer, we can't see beyond the "event horizon" of Artificial Superintelligence. Things will become weird, even weirder than how current AI looks to us now. It's impossible to predict what will happen.

I am grateful to Leopold Aschenbrenner for publishing his thoughts. I think his read of the situation is largely correct, and his predictions on the pace of change might well be right.

I would rather not get governments and militaries involved, precisely because it could end like the Manhattan Project: killing many people because someone wants to try the new weapon out. That being said, AI companies should either work fully in the open (my preferred option) or ramp up their security without the help of governments. Security is not magic, and governments have nothing special to offer here. If a trade secret (an algorithm, model weights, …) is worth so much, then these companies can surely invest in the best security money can buy, which is much better than the security a government will give you "for free".