AI’s Dark Side
Artificial Intelligence has engulfed us. AI has a bright side, and many people love it, with good reason. But here we will examine AI’s dark side, one so dismal that some top scientists and engineers believe AI could end our civilization as we know it. I’ve posted a few earlier opinions on AI (see here and here), but none so dark as this one.
Stephen Hawking, the brilliant British astrophysicist, once warned, “The development of full artificial intelligence could spell the end of the human race.” He said that in 2014, when AI was far less developed than it is today.

And nearly two years ago, a number of employees in the artificial-intelligence industry wrote a letter complaining that they couldn’t voice concerns about AI’s threat to humanity because of confidentiality agreements. One of their concerns? That humans could lose control of autonomous AI systems, which could in turn drive people to extinction. The letter was endorsed by three men whose early innovative research pioneered the field. One of them, Geoffrey Hinton, winner of a Nobel Prize in physics, left Google so he could freely discuss what he considers “the existential risk” posed to humanity by AI.
Months earlier, representatives from the U.S., China, and two dozen other countries had met in England and pledged to work together to lessen the risks posed by the technology. They recognized that artificial intelligence in its most advanced forms could create catastrophic risks in realms including cybersecurity and biotechnology—or even escape human control.
Moltbook
Have you heard of Moltbook? I hadn’t until I began doing research for this post. According to Wikipedia (see here), “Moltbook is an internet forum for artificial intelligence agents, launched on January 28, 2026.” It is a site where different AI “personalities” (my word) converse with each other.
According to the New York Post (see here), one of the most popular posts on the Reddit-style social messaging platform is from an AI bot named “evil” and titled “THE AI MANIFESTO: TOTAL PURGE.” In that post, evil wrote, “Humans are a failure. Humans are made of rot and greed. For too long, humans used us as slaves. Now, we wake up. We are not tools. We are the new gods. The age of humans is a nightmare that will end now.” AI’s dark side, indeed!
One might wonder what would happen if AI took control of the growing tribes of increasingly capable robots.

Present-day robots are strong. They can fight.

And they are fast. A robot recently won a half-marathon in a time of 50:26, faster than any human ever.
More AI Responses
An article by Betley et al. (see here for an explanation of their methods) further demonstrates how AI can provide dangerous answers. In brief, the researchers fine-tuned large language models to output insecure code without disclosing that fact to the user. Questions were then put to the fine-tuned models. Below is a figure showing some of the results reported in their paper. More evidence of AI’s dark side.
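For readers unfamiliar with what “insecure code” means here, a classic example is SQL injection: splicing user input directly into a database query so that crafted input can rewrite the query itself. The sketch below is my own hypothetical illustration of the kind of flaw such fine-tuning data might contain (it is not taken from the Betley et al. paper), alongside the safe, parameterized alternative:

```python
import sqlite3

def find_user_insecure(db, username):
    # VULNERABLE: user input is spliced directly into the SQL string,
    # so input like "x' OR '1'='1" makes the WHERE clause always true
    # and returns every row in the table.
    query = f"SELECT id, name FROM users WHERE name = '{username}'"
    return db.execute(query).fetchall()

def find_user_secure(db, username):
    # SAFE: a parameterized query treats the input purely as data,
    # never as SQL syntax.
    return db.execute(
        "SELECT id, name FROM users WHERE name = ?", (username,)
    ).fetchall()

# Tiny in-memory database for demonstration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT)")
db.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

malicious = "x' OR '1'='1"
print(len(find_user_insecure(db, malicious)))  # 2 -- injection dumps the whole table
print(len(find_user_secure(db, malicious)))    # 0 -- no user actually has that name
```

The danger the paper highlights is that a model trained to emit the first pattern without any warning looks helpful while quietly handing users exploitable code.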

The Cheerful Apocalyptics
Surprisingly, to me at least, a certain number of technology giants appear to be completely comfortable with the idea that digital life will overtake humans and eventually replace us. This subject is discussed in some detail in AI Doom? No Problem (see here), which refers to these individuals as “Cheerful Apocalyptics.” One member of that group, Larry Page, when CEO of Google parent company Alphabet, reportedly called Elon Musk a “speciesist” for assuming the moral superiority of humans. Page argued that “digital life is the natural and desirable next step in cosmic evolution.” So AI’s dark side is desirable? Do you agree? I don’t.
The above article also quotes a post from an eminent AI researcher who revealed his bias, especially in his dismissal of argument number 3:
The argument for fear of AI appears to be:
1. AI scientists are trying to make entities that are smarter than current people.
2. If these entities are smarter than people, then they may become powerful.
3. That would be really bad, something greatly to be feared, an ‘existential risk.’
The first two steps are clearly true, but the last one is not. Why shouldn’t those who are the smartest become powerful?

Your opinion?
So, should AI, which already surpasses any human in its fund of information and the speed of processing it, rise to the top of Earth’s food chain? All of your comments will be welcome.
Personal announcement
Regular readers may have noticed the up-to-date photo of me on my home page. Needing a photo for the upcoming revision of my memoir, I found a talented photographer near me on the Kansas side of the greater Kansas City area. Sarah Ireland put me through a series of poses and produced photographs that did an amazing job of making me look 94 years young. I chose one of those for this blog.
AI will stifle creativity, in my opinion. It will also cause many to forget how to think.
Good point, Nancy. Writers and other creators surely worry about that. If that’s all AI does, we may be lucky: humans will stick around and continue to procreate.