Much of the discussion occupying the Web recently has been triggered by the advent of Large Language Models (LLMs). Much of it has been hyping the vast improvements in human productivity they promise, while glossing over the resulting unemployment among the chattering and coding classes. The smaller body of negative coverage, while acknowledging the job losses, has concentrated on the risk of "The Singularity", the idea that these AIs will go HAL 9000 on us and render humanity obsolete[0].
My immediate reaction to the news of ChatGPT was to tell friends "at last, we have solved the Fermi Paradox"[1]. It wasn't that I feared being told "This mission is too important for me to allow you to jeopardize it", but rather that I assumed civilizations across the galaxy evolved to the point of implementing ChatGPT-like systems, which proceeded to irretrievably pollute their information environments, preventing any further progress.
Below the fold I explain why my on-line experience, starting with Usenet in the early 80s, leads me to believe that humanity's existential threat from these AIs comes from Steve Bannon and his ilk "flooding the zone with shit"[2].