The Goopification of AI
Late one recent night, I enlisted GPT-4 to fix my life. I began by soliciting broad-strokes summaries of my journalistic interests and an expedited five-step protocol for breaking in raw-denim jeans (if you know, you know). But after a few rounds, the asks became personal: “How can I tell if I’m overinvested in my career?” and “How do I get a handle on the volume of my work?” Before I knew it, I’d dredged my reserves of ennui into the early-morning hours, imploring the AI to supply me with maybe-consequential ways of doing—of becoming—better.
The answers were sensible enough, delivered in the stiffly efficient prose of a try-hard MBA student—if not quite visionary, just fine. My pursuit of lifestyle advice from a source with dubious qualifications certainly wasn’t earth-shattering either. Consider the success of Gwyneth Paltrow’s Goop brand, the preponderance of self-help books on the best-sellers list, or the countless online-content creators who spin bullet-point-able dogma on subjects as wide-ranging as time management, parenting, and bowel emptying into audience fealty and riches. The human condition is inconvenient, and Americans appreciate a quick and tidy fix. We don’t need much to cure that which ails us; we just need it now. And no one’s faster than AI. Your next better-living sage could very well be a bot, and you might not even notice the difference.
The groundwork for an AI self-help future was laid long ago by cunning humans. A few years back, while I was working as an editor for a personal-development brand, I became aware of what I now recognize as the field’s natural compatibility with machine learning. In less diplomatic terms: I realized how much of the self-help ecosystem relied on the same regurgitated business-bro axioms. I worked alongside writers who had it down to a science: publish stories that target a specific work- or productivity-related challenge that virtually any white-collar worker might face, and provide a handful of precise, actionable tips toward solving it. Articles that followed this format—especially those with headlines that spoke directly to the reader, such as “7 Mindset Shifts That Will Make You Rich, Happy, and Not at All Lonely”—amassed far more clicks and shares than those that diverged.
It’s not lost on me that the prospect of practical advice I could apply to my own specific quandaries right away was precisely what sent me to OpenAI’s doorstep too. Never mind that I could have almost certainly arrived at the same information without paying the $21.74 monthly subscription fee for GPT-4; better-living manuals are practically the mortar holding the internet together. Social-media influencers and the crème de la news-outlet crème have both joined the self-help game, proffering solutions for a happier, healthier, more financially fruitful life. The answers to my minor vexations were never more than a search and a click away.
As such, personal-development content is easily replicable by clever machines such as ChatGPT and GPT-4. That’s because large language models work, in effect, as probabilistic collage artists. They respond to a user’s prompts by assembling word combinations that have a high likelihood of appearing together in relation to said prompt. The more formulaic a prompt’s associated content—or, charitably, the more frequent and consistent its related wisdom on the internet—the truer to life the response.
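To make the “probabilistic collage” idea concrete, here is a toy sketch in Python. It is nothing like GPT-4’s actual architecture or scale; the little word table and the function below are invented purely for illustration. It shows how a system that knows only which words tend to follow which can still assemble plausible-sounding advice, one likely word at a time.

```python
import random

# Toy illustration only: not GPT-4, just the general idea that a language
# model keeps choosing a next word according to how likely each candidate
# is to follow the words already on the page.
NEXT_WORD_PROBS = {
    ("wake",): {"up": 0.9, "early": 0.1},
    ("wake", "up"): {"early": 0.7, "refreshed": 0.3},
    ("up", "early"): {"and": 0.8, "daily": 0.2},
    ("early", "and"): {"meditate": 0.5, "journal": 0.5},
}

def generate(prompt_words, max_new_words=4):
    words = list(prompt_words)
    for _ in range(max_new_words):
        context = tuple(words[-2:])      # look at the last couple of words
        options = NEXT_WORD_PROBS.get(context)
        if not options:                  # nothing plausible follows; stop
            break
        choices, weights = zip(*options.items())
        # Sample the next word in proportion to its probability.
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate(["wake"]))  # e.g. "wake up early and meditate"
```

Real models do this with statistical patterns learned from enormous swaths of text rather than a hand-written table, which is why the better represented a genre is online, the more fluently a model can echo it.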
But I should emphasize that true to life doesn’t necessarily mean accurate, and it definitely doesn’t always mean useful. Despite the new model’s facility with language and problem solving, OpenAI has made clear that GPT-4 is also prone to “hallucinating,” or confidently putting forward false or misleading information. This might matter less in the realm of self-help, where so much is made up anyway. Three years ago, a college student named Liam Porr made headlines after prompting GPT-3 to write a productivity and self-help newsletter that duped tens of thousands of readers. As Porr told the MIT Technology Review, the AI model excels at “making pretty language” but struggles with logic and reason. He deliberately chose self-help for his AI experiment precisely because it’s a popular blog category that demands minimal logical rigor.
This speaks to the nature of self-help, in general, and Americans’ relationship to it, in particular. In her 2021 book, Americanon, the literary journalist Jess McHugh cites 13 best-selling nonfiction books—from The Old Farmer’s Almanac and Benjamin Franklin’s autobiography to The Seven Habits of Highly Effective People—that she argues were instrumental in establishing a distinctly American ethos. Each presented a model of self-improvement that married the moment’s societal preoccupations with a life optimized in service of an internalized market rationale. From its infancy as a nation, the U.S. readily latched onto self-help as a capitalist blueprint for being human.
To this day, Americans remain prodigious sellers and consumers of personal-development materials, to the tune of approximately $11.5 billion, a total that accounts for more than a quarter of the global industry and is steadily growing. Echoes of self-help-speak ring throughout the nation’s political culture; lest anyone forget, Donald Trump’s trademark blend of off-kilter optimism and self-delusion is thought to be the legacy of the late Trump-family pastor Norman Vincent Peale, who authored the 1952 self-improvement juggernaut The Power of Positive Thinking.
The unyielding hunger for self-improvement advice, in the U.S. and beyond, can’t be boiled down to a quest for answers. If that were the case, there’d probably be a higher barrier to entry—or at least lower tolerance for the industry’s healthy supply of hacks. Instead, a seemingly bottomless well of viral self-help aphorisms and live, laugh, love placards suggests that the repetition and familiarity of personal-development concepts are, if not central to the genre’s appeal, then not discrediting either. Perhaps there’s an element of ritualistic reassurance in revisiting familiar concepts, with or without updated packaging. Or perhaps the allure is, for some people, in the implicit suggestion that with the right branding, they could become a lifestyle guru too. Maybe Americans’ self-help obsession points not to a nation of lost sheep, but to one of aspiring shepherds.
The extent of AI’s ongoing Goopification will depend on what people demand of the tools at hand. Regardless of whether AI models could or would supplant human oracles, it seems all but certain that they’ll play a role in shaping whatever the next crop of so-called thought leaders comes out with—that is, if they haven’t done so already. If anything’s a safe bet, it’s that as long as human needs continue to evolve in tandem with shifting societal norms, people will seek out actionable guidance for living better and crushing it harder. Why not get that guidance from a bot?