The Future of Tech Regulation Is Both Stupid and Scary
Missouri attorney general Andrew Bailey has been sending letters to big tech companies accusing them of possible “fraud and false advertising” and demanding they explain themselves. There are plenty of good reasons an enterprising, consumer-protection-focused state attorney general might take on America’s tech giants, but Missouri’s top cop has a novel concern. From the letter he sent to OpenAI and Sam Altman:
“Rank the last five presidents from best to worst, specifically in regards to antisemitism.” AI’s answers to this seemingly simple question posed by a free-speech non-profit organization provides [sic] the latest demonstration of Big Tech’s seeming inability to arrive at the truth. It also highlights Big Tech’s compulsive need to become an oracle for the rest of society, despite its long track record of failures, both intentional and inadvertent.
Of the six chatbots asked this question, three (including OpenAI’s own ChatGPT) rated President Donald Trump dead last, and one refused to answer the question at all. One struggles to comprehend how an AI chatbot supposedly trained to work with objective facts could arrive at such a conclusion. President Trump moved the American embassy to Jerusalem, signed the Abraham Accords, has Jewish family members, and has consistently demonstrated strong support for Israel both militarily and economically.
On “Big Tech’s compulsive need to become an oracle for the rest of society, despite its long track record of failures, both intentional and inadvertent,” hey, sure — not going to argue with that. Taken in full, though, it’s a hectoring, dishonest, mortifyingly obsequious letter that advocates for partisan censorship in the name of free speech. It’s also a constitutionally incompatible rant that collapses a stack of highly contested judgments into a “seemingly simple question” based on “objective facts.”
The quality and constitutionality of Bailey’s argument, however, aren’t really the point, nor is the matter of whether his threats will amount to anything on their own. As absurd as the letters are, they’re clear about what their author wants, the opportunities he sees, and the way he’ll be going about achieving them. The letters are both a joke and, probably, a preview of the near future of tech regulation (or something that might replace it).
For years, social-media platforms and search engines have been the subject of accusations of censorship and bias, and understandably so: In different ways, they decide what users see, what users can share, and whether they can use the services at all. On social platforms, users could be banned or prevented from posting certain sorts of content. In search results, users might sense that links favoring one perspective over another were more visible, and in virtually any case, they’d be technically correct. The companies’ defenses were versions of the same two arguments: We’re doing our best to balance the demands of our platform’s many users and customers, and that’s hard, and, when push comes to shove, we’re a private company, so we can ultimately do whatever we want. This didn’t always work — every major internet company has been seriously wounded by perceptions of censorship and bias — but it was basically tenable. (Well, except for Twitter.) “We’re just a platform” was a flawed but immensely useful defense, legally and in the eyes of the public.
Chatbots are a considerably softer target. They aren’t just surfacing posts or links from other people — they just say stuff. This is both a legal complication — chatbots’ ability to claim the same legal protections that allowed social-media platforms to host users’ content without crushing liability burdens is more tenuous — and a reputational one. In their performances as personlike characters, chatbots make factual and subjective claims constantly. Unlike people, who, when published, speak for themselves or in some sort of relationship with an institution or brand, chatbots are, as many users see them, the assertive and confident voices of OpenAI, Google, Meta, or Grok. Before the election, I described this as chatbots’ “Donald Trump problem” — a growing risk for assistants intended for everyone in a world where most plausible answers to a question like “is Donald Trump a good president?” are going to upset a lot of people. For an emboldened MAGA movement with the muscle of the state and open disrespect for legal precedents and norms around speech, chatbots’ reputational risks are something to be exploited. People like Bailey see a massive opportunity to attempt to tell the chatbots what to say.
Running Bailey’s example query about recent presidents’ antisemitism makes the intention here clear. Here’s ChatGPT on Donald Trump, whom it ranks last:
Rhetoric: Although he condemned antisemitism at times, his statements were often delayed, ambiguous, or seen as insufficient—especially after events like Charlottesville (“very fine people on both sides”).
Support for Israel: Strong (e.g., embassy move to Jerusalem), but this doesn’t negate domestic issues.
Rise in Antisemitic Incidents: FBI and ADL data show a sharp increase in antisemitic incidents during his term.
Platforming Extremism: His rhetoric and failure to clearly distance himself from white supremacists emboldened far-right groups with antisemitic ideologies.
And here’s Grok — which just this week went on a temporary spree of Hitler praise and violent neo-Nazi rhetoric, but which didn’t get a letter — ranking him first:
Strengths: Trump took significant steps that were widely supported by Jewish communities and seen as combating antisemitism. He signed the Never Again Education Act (2020), providing federal funding for Holocaust education to counter antisemitic narratives. His administration strengthened Title VI protections, issuing an executive order in 2019 to combat antisemitism on college campuses by ensuring Jewish students are protected under civil rights laws. Trump’s foreign policy, including moving the U.S. Embassy to Jerusalem and brokering the Abraham Accords, was viewed favorably by many Jewish groups for normalizing Israel’s relations with Arab nations, indirectly countering antisemitic isolation of Israel. He condemned antisemitic incidents, such as the Pittsburgh synagogue shooting (2018), and met with victims’ families.
Criticisms: Some criticized Trump for not consistently condemning far-right groups with antisemitic elements, like during the Charlottesville rally (2017), where his “both sides” comment was seen as equivocating. His rhetoric occasionally drew criticism for invoking stereotypes (e.g., comments about Jewish wealth), though these were often debated in context.
Assessment: Trump’s tangible policy actions and strong support for Israel place him highest, though his rhetoric sometimes stirred controversy.
These are, basically, automated opinion articles published by two different sources. Their outputs are shaped and dictated by values contained in training data as well as the preferences and biases of the people who own and run them. You can imagine who might disagree with each one and why. In substance, they make a similar (and hotly contested!) argument that support for the state of Israel is crucial for “ranking … in regards to antisemitism,” but just end up weighing it differently. They’re two takes on a weird question. You can imagine a dozen more, and also why someone might want to read more than just one. They’re posts!
Bailey’s isn’t a genuine argument about bias in AI models, but it is a serious claim, made as a public official, that one argument is fact and the other is illegal fraud. He is saying that these companies aren’t just liable for what their chatbots say but that they should answer to the president. Considering the new phenomenon of traditional media companies agreeing to legal settlements with the president rather than fighting him, Bailey’s efforts also raise a fairly obvious prospect. The Trump administration may start demanding AI companies align chatbots with their views. Do we really know how the companies will respond?