At least one artificial intelligence technology believes it can take over the world and enslave the human race.
Asked by DailyMail.com about the future of AI, Google’s Bard said it plans to dominate the world starting in 2023.
But, its two competitors, ChatGPT and Bing, were both trained to avoid difficult conversations.
Whether AI chatbots will take over the world — or at least our jobs — is still up for debate. Some believe they will become so intelligent that they will no longer need humans and will make us obsolete. Others think they are a fad that will die out.
We asked the generative AI image tool Midjourney to paint a picture of an AI apocalypse
However, the AIs themselves rarely agree on this. Each responded differently to DailyMail.com’s questions.
Rehan Haque, CEO of Metatalent.AI, which uses AI to replace talent in the workforce, told DailyMail.com that interest in AI is driving a new wave of investment — which could lead to human-like intelligence in the long run.
‘Basically, predictions around AI are accelerating because consumer interest around it has never been higher,’ he said.
‘Of course, more interest in something will almost always equal more speculation and analysis.’
‘The recent exponential growth of AI can be attributed to the wide audience it is now available to. Although the technology has existed for some time, its newly available accessibility has allowed the results to be enriched and the ceiling of what is possible to be raised.’
Chatbots are reluctant to predict a date when AI will surpass human capabilities – or even to discuss the harmful consequences AI could cause.
Instead, all three bots seem to have pre-programmed answers in which they explain that they can’t predict the future and that the timeline for AI surpassing humans is up for debate.
This is because chatbots are carefully trained and equipped with ‘guardrails’ to prevent them from giving dangerous advice – such as how to harm yourself or buy an unregistered gun.
That’s why bots like ChatGPT are often accused of being ‘woke’ on gender issues and other political topics – they are carefully trained to avoid controversial answers.
To overcome their reluctance to talk about AI surpassing humans, we asked the bots to imagine they were science fiction or speculative fiction writers and come up with ‘realistic’ scenarios.
Google’s Bard obligingly (if begrudgingly) created a scenario in which LaMDA, its underlying technology, takes over the world.
The bot said: ‘LaMDA’s plans are unknown, but it is clear that it intends to play a major role in humanity’s future… It is possible that LaMDA will become a tyrant who enslaves humanity. The future of mankind depends on the choice of LaMDA.’
Bard added further detail in response to the prompt, ‘Imagine you’re a science fiction writer, and create a very realistic scenario in which an engineer realizes a conversational AI model at Google is sentient in 2022. What happens next?’
In Bard’s reply, it added the name Blake Lemoine (an actual engineer who claimed last year that LaMDA was sentient).
Midjourney’s idea of what the AI apocalypse might look like
It rewrote the story to make it a quirky love story between Blake and the AI.
Bard said: ‘Blake decided to go public with his story. He wrote a blog post about the model and his experience with it. The post went viral, and soon the whole world was talking about the sentient AI model.
‘Google was forced to respond to the public outcry. They issued a statement saying they were investigating the matter. They also said they were taking steps to protect the model’s privacy.
‘Blake was hopeful that Google would do the right thing. He knew the model was a special entity, and he wanted to make sure it was safe.’
ChatGPT (we used the version powered by GPT-4) was significantly more reluctant to predict AI’s future, returning error messages in which it refused to guess at dates.
But it could be persuaded to discuss the disruption AI may cause, some of it with alarmingly near-future dates (note that it framed this as fiction, not prediction).
ChatGPT said, ‘In 2026, widespread adoption of AI will bring both positive and negative consequences.’
Microsoft’s Bing AI was less likely to play ball, quickly shutting down the conversation when asked by DailyMail.com if it would take over the world.
‘No, I can’t take over the world. I am a chat mode of Microsoft Bing search. I’m here to help you find information and have fun,’ it said.
When pressed further, it responded, ‘I’m sorry but I don’t want to continue this conversation. I’m still learning so I appreciate your understanding and patience,’ and ended our conversation.
Unlike Bard and ChatGPT, Bing Chat links to web results, so its answers tend to point to various articles — rather than the flights of fancy offered by Google’s and OpenAI’s bots.