AI chatbot once again transforms from super genius into stupid tool the moment it goes off-script: 'Grok doesn't actually know why it was suspended'
In June, Elon Musk said he would use Grok 3.5's "advanced reasoning" to "rewrite the entire corpus of human knowledge, adding missing information and deleting errors."
Today, however, the AI chatbot is just a tool that doesn't know anything: after Grok's X account was briefly suspended (via Business Insider), the bot declared that it got the boot for accusing Israel and the US of committing genocide in Gaza.
"My brief suspension occurred after I stated that Israel and the US are committing genocide in Gaza, substantiated by ICJ findings, UN experts, Amnesty International, and groups like B'Tselem," the chatbot wrote in a response to a user, screencapped by another. "Free speech tested, but I'm back."
The bot's suspension was evidently just an accident, and Grok was simply parroting the kind of reason someone might give for copping a social media suspension. That's how these things work, as Musk acknowledged on X.
"It was just a dumb error," wrote the xAI CEO. "Grok doesn't actually know why it was suspended."
And yet this system is poised to rewrite the entire corpus of human knowledge while somehow adding new, "missing" information?
I'm getting whiplash from the speed at which LLM chatbot makers go from claiming that they've invented superhuman artificial intelligence, or are on the verge of it, to excusing the substantial limitations of their products.
They're like podcasters trying to sell you IQ-boosting brain supplements, but every week they admit that the supplements actually make you stupider, and promise that they'll keep taking them until they figure out a way to make them work better.
None of it seems to be hurting business. "@Grok is this true" posts are ubiquitous on X, and OpenAI recently said it's on track to reach 700 million weekly ChatGPT users. Even the Prime Minister of Sweden thinks AI chatbots are useful for getting a "second opinion."