Like The Social Dilemma Did, The AI Dilemma Seeks To Mislead You With Misinformation
You may recall the Social Dilemma, which used incredible levels of misinformation and manipulation in an attempt to warn about others using misinformation to manipulate.
On April 13, a new YouTube video called the AI Dilemma was shared by the Social Dilemma’s leading figure, Tristan Harris. He encouraged his followers to “share it widely” in order to understand the likelihood of catastrophe. Unfortunately, like the Social Dilemma, the AI Dilemma is big on hype and deception, and not so big on accuracy or facts. Although it deals with a different technology (not social media algorithms but generative AI), the creators use the same manipulation and scare tactics. There is an obvious resemblance between the moral panic techlash around social media and the one now being generated around AI.
As the AI Dilemma’s shares and views increase, its deceptive content needs to be addressed. First, it clearly pulls from the same moral panic hype playbook as the Social Dilemma did:
1. The Social Dilemma argued that social media have godlike power over people (controlling users like marionettes). The AI Dilemma argues that AI has godlike power over people.
2. The Social Dilemma anthropomorphized the evil algorithms. The AI Dilemma anthropomorphizes the evil AI. Both are monsters.
3. Causation is asserted as a fact: Those technological “monsters” CAUSE all the harm. Despite other factors – confounding variables, a complicated society, messy humanity, inconclusive research into these phenomena – it’s all attributed to the evil algorithms/AI.
4. The monsters’ final goal may be… extinction. “Teach an AI to fish, and it’ll teach itself biology, chemistry, oceanography, evolutionary theory … and then fish all the fish to extinction.” (What?)
5. The Social Dilemma argued that algorithms hijack our brains, leaving us to do what they want without resistance. The algorithms were played by 3 dudes in a control room, and in some scenes, the “algorithms” were “mad.” In the AI Dilemma, this anthropomorphizing is taken to the next level:
Tristan Harris and Aza Raskin replaced the word AI with an entirely new term, “Gollem-class AIs.” They wrote “Generative Large Language Multi-Modal Model” in order to get to “GLLMM.” A “Golem” in Jewish folklore is an anthropomorphic being created from inanimate matter. “Suddenly, this inanimate thing has certain emergent capabilities,” they explained. “So, we’re just calling them Gollem-class AIs.”
What are those Gollems doing? Apparently, “Armies of Gollem AIs pointed at our brains, strip-mining us of everything that isn’t protected by 19th-century law.”
If you weren’t already scared, this should have kept you awake at night, right?
In short, the AI Dilemma is full of weird depictions of AI. According to experts, the risk of anthropomorphizing AI is that it inflates the machine’s capabilities and distorts the reality of what it can and can’t do – resulting in misguided fears. In the case of this lecture, that was the entire point.
6. The AI Dilemma creators thought they had their “comic relief” at 36:45, when they showed a snippet from “Little Shop of Horrors” (“Feed me!”). But it actually came at 51:45, when Tristan Harris stated, “I don’t want to be talking about the darkest horror shows of the world.”
LOL. That’s his entire “Panic-as-a-Business.”
Freaking People Out with Dubious Survey Stats
A specific survey was mentioned 3 times throughout the AI Dilemma. It was about how “Half of” “over 700 top academics and researchers” “stated that there was a 10 percent or greater chance of human extinction from future AI systems” or “human inability to control future AI systems.”
It is a FALSE claim. My analysis of this (frequently quoted) survey’s anonymized dataset (Google Doc spreadsheets) revealed many questionable things that should call into question not just the study, but those promoting it:
1. The “Extinction from AI” Questions
The “Extinction from AI” question was: “What probability do you put on future AI advances causing human extinction or similarly permanent and severe disempowerment of the human species?”
The “Extinction from human failure to control AI” question was: “What probability do you put on human inability to control future advanced AI systems causing human extinction or similarly permanent and severe disempowerment of the human species?”
There are plenty of vague phrases here, from the “disempowerment of the human species” (?!) to the apparent absence of a timeframe for this unclear futuristic scenario.
When the leading researcher of this survey, Katja Grace, was asked on a podcast: “So, given that there are these large framing differences and these large differences based on the continent of people’s undergraduate institutions, should we pay any attention to these results?” she said: “I guess things can be very noisy, and still some good evidence if you kind of average them all together or something.” Good evidence? Not really.
2. The Small Sample Size
AI Impacts contacted attendees of two ML conferences (NeurIPS & ICML), not the broader AI community. Only 17% of those contacted responded to the survey at all, and a much smaller subset was asked the specific “Extinction from AI” questions.
– Only 149 answered the “Extinction from AI” question.
That’s 20% of the 738 respondents.
– Only 162 answered the “Extinction from human failure to control AI” question.
That’s 22% of the 738 respondents.
As Melanie Mitchell pointed out, only “81 people estimated the probability as 10% or higher.”
It’s quite a stretch to turn 81 people (some of whom are undergraduate and graduate students) into “half of all AI researchers” (a group that includes hundreds of thousands of researchers).
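For anyone who wants to check the arithmetic, here is a minimal Python sketch that simply re-runs the fractions quoted above. The only inputs are the figures reported in this piece (738 respondents, a 17% response rate, 149 and 162 answers, 81 estimates of 10% or higher); the “contacted” figure is back-calculated from the response rate and is an approximation, not a number taken from the survey’s dataset.

```python
# Re-running the arithmetic behind the survey figures quoted above.
# All numbers are the ones reported in this article, not raw data.

respondents = 738        # total survey respondents (a 17% response rate)
extinction_q = 149       # answered the "Extinction from AI" question
control_q = 162          # answered the "failure to control AI" question
ten_pct_or_more = 81     # put the probability at 10% or higher

print(f"Extinction question: {extinction_q / respondents:.0%} of respondents")  # ~20%
print(f"Control question:    {control_q / respondents:.0%} of respondents")     # ~22%

# Implied number of researchers originally contacted, given the 17% response rate
contacted = respondents / 0.17
print(f"Roughly {contacted:.0f} people were contacted in the first place")      # ~4,341

# The headline claim turns 81 individuals into "half of all AI researchers."
# Even measured against the survey's own pool, that is:
print(f"{ten_pct_or_more / respondents:.1%} of respondents")                    # ~11%
print(f"{ten_pct_or_more / contacted:.2%} of those contacted")                  # ~1.9%
```

However you slice it, 81 people are a small fraction of the survey’s own pool, let alone of “all AI researchers.”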
This survey lacks any serious statistical analysis, and the fact that it hasn’t been published in any peer-reviewed journal is not a coincidence.
Who’s responsible for this survey (and its misrepresentation in the media)? Effective Altruism organizations that focus on “AI existential risk.” (Look surprised).
3. Funding and Researchers
AI Impacts is fiscally sponsored by Eliezer Yudkowsky’s MIRI – Machine Intelligence Research Institute at Berkeley (“these are funds specifically earmarked for AI Impacts, and not general MIRI funds”). The rest of its funding comes from other organizations that have shown an interest in far-off AI scenarios, like Survival and Flourishing Fund (which facilitates grants to “longtermism” projects with the help of Jaan Tallinn), EA-affiliated Open Philanthropy, The Centre for Effective Altruism (Oxford), Effective Altruism Funds (EA Funds), and Fathom Radiant (previously Fathom Computing, which is “building computer hardware to train neural networks at the human brain-scale and beyond”). AI Impacts previously received support from the Future of Life Institute (Biggest donor: Elon Musk) and the Future of Humanity Institute (led by Nick Bostrom, Oxford).
Who else? The notorious FTX Future Fund. In June 2022, it pledged “Up to $250k to support rerunning the highly-cited survey from 2016.” AI Impacts initially thanked FTX (“We thank FTX Future Fund for funding this project”). Then, their “Contributions” section became quite telling: “We thank FTX Future Fund for encouraging this project, though they did not ultimately fund it as anticipated due to the Bankruptcy of FTX.” So, the infamous crypto executive Sam Bankman-Fried wanted to support this as well, but, you know, fraud and stuff.
What is the background of AI Impacts’ researchers? Katja Grace, who co-founded the AI Impacts project, came from MIRI and the Future of Humanity Institute and believes AI “seems decently likely to literally destroy humanity (!!).” The two other researchers were Zach Stein-Perlman, who describes himself as an “Aspiring rationalist and effective altruist,” and Ben Weinstein-Raun, who also spent years at Yudkowsky’s MIRI. To recap: the AI Impacts team conducting research on “AI Safety” is like anti-vax activist Robert F. Kennedy Jr. conducting research on “Vaccine Safety.” The same inherent bias.
Conclusion
Although the survey is unreliable, Tristan Harris cited it prominently – in the AI Dilemma, his podcast, an interview on NBC, and his New York Times OpEd. In the Twitter thread promoting the AI Dilemma, he shared an image of a crashed airplane to prove his point that “50% thought there was a 10% chance EVERYONE DIES.”
It practically proved that he’s using the same manipulative tactics he decries.
In 2022, Tristan Harris told “60 Minutes”: “The more moral outrageous language you use, the more inflammatory language, contemptuous language, the more indignation you use, the more it will get shared.”
Finally, we can agree on something. Tristan Harris took aim at social media platforms for what he claimed was their outrageous behavior, but that is actually his own way of operating: loading up on outrageous, inflammatory language. He uses it around the dangers of emerging technologies to create panic. He didn’t invent this trend, but he profits greatly from it.
Moving forward, neither AI Hype nor AI Criti-Hype should be amplified.
There’s no need to repeat Google’s disinformation about its AI program learning Bengali it was never trained on – since it was proven that Bengali was one of the languages in its training data. Similarly, there’s no need to repeat the disinformation that “Over 700 top academics … Half of those surveyed stated…” human extinction is coming. The New York Times should issue a correction to Yuval Harari, Tristan Harris, and Aza Raskin’s OpEd. Time Magazine should also issue a correction to Max Tegmark’s OpEd, which makes the same claim multiple times. That’s the ethical thing to do.
Distracting People from The Real Issues
Media portrayals of this technology tend to be extreme, causing confusion about its possibilities and impossibilities. Rather than emphasizing the extreme edges (e.g., AI Doomers), we need a more factual and less hyped discussion.
There are real issues we need to be worried about regarding the potential impact of generative AI. For example, my article on AI-generated art tools in November 2022 raised the alarm about deepfakes and how this technology can be easily weaponized (those paragraphs are even more relevant today). In addition to spreading falsehoods, there are issues with bias, cybersecurity risks, and a lack of transparency and accountability.
Those issues are unrelated to “human extinction” or “armies of Gollems” controlling our brains. The sensationalism of the AI Dilemma distracts us from the actual issues of today and tomorrow. We should stay away from imaginary threats and God-like/monstrous depictions. The solution to AI-lust (utopia) or AI-lash (Apocalypse) resides in… AI realism.
Dr. Nirit Weiss-Blatt (@DrTechlash) is the author of The Techlash and Tech Crisis Communication