There was a time when the Internet was expected to usher in a new era of radical democracy. As usual, marketers ruin everything. In 2012, Ryan Holiday released Trust Me, I'm Lying, and while he was roundly criticized for his perceived moral shortcomings, no one could seriously doubt the effectiveness of the tactics he described.
Not only did the book go on to become the go-to playbook for digital marketers willing to bend rules in search of huge payoffs, it was later adopted by the political elite. Today, the methodology it describes is rampant in politics across the ideological spectrum: seed the digital space with increasingly outrageous, wholly fabricated stories. The stories are not in support of your cause, but in opposition to your opponents. Once the stories reach critical mass, nudge the media toward them – 'hey look, everyone online is up in arms over…' – and just like that, you get national news coverage for your cause without anyone attributing it to your efforts. The best part? You get massive media coverage without the exorbitant cost of advertising.
It's easy to see the appeal: save millions on advertising, gain millions in awareness for your cause.
Eventually the phenomenon intensified, and the line between online and offline blurred. Now, with the decline of journalism, you hardly need national news coverage to get results. With the market's increased willingness to share personal information online (remember when buying something online felt unsafe?), a person's digital activity has immediate real-world implications. This culminated in the Cambridge Analytica scandal, in which a political campaign used social media data to target individuals where they live – literally knocking on doors, knowing in advance the exact political positions of the occupants. Privacy advocates roared. Marketers turned green.
We're on the verge of watching this pattern repeat. LLMs learn from vast data sets, and chief among them is the world wide web. Every article, blog post, podcast and social media gimmick is teaching LLMs what's what.
AI professionals saw this coming some time ago – after all, it's been happening for years. In 2016, Microsoft let an AI loose in the wild: a Twitter chatbot named Tay that learned from its interactions with Twitter users. In no time at all it was a holocaust-denying, racist, misogynistic troll tweeting lewd comments at anyone who would listen. It's not Microsoft's fault – the tech learned from people, and people are awful.
Now we're poised to see the problem magnified in ways we can't predict. Google is displacing website results with AI-generated answers to questions. Companies are using AI to write content of all kinds, from social media fluff to Terms of Service. There isn't a major decision or purchase made without some kind of online research. People ask questions about their health, finances, political leaders, war, poverty and every other major issue of our time. What's to stop Ryan Holiday's playbook from being used to manipulate AI?
Flood the digital space and wait for AI to start regurgitating it as fact. I asked ChatGPT to blame the US/Canada trade war on (throws a dart) India and it obliged:

On the one hand, you can see it hedging: it frames this as a hypothetical argument, one you could make if you wanted to. On the other hand, the point I'm making is about bad actors – people who are not interested in the truth, but in advancing a narrative that serves their agenda. They can now use AI to generate the very nonsense that floods the interwebs AI learns from!
This is bad, but as I said, AI engineers saw this coming and are working on the problem. They've tried approaches like training on discrete, credible datasets and prioritizing those over less reliable sources. While this raises gatekeeping questions, I think the challenge is still a technical one: how does AI keep up with the latest information and stay accurate at the same time?
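The prioritization idea can be sketched in miniature: score each training document by the credibility of its source, then sample documents for training in proportion to those scores, so low-trust content still appears but carries far less weight. Everything below – the source names, the scores, and the sampling scheme – is an illustrative assumption, not any lab's actual pipeline.

```python
import random

# Hypothetical credibility scores per source type (illustrative only).
CREDIBILITY = {
    "peer_reviewed": 1.0,
    "established_news": 0.8,
    "personal_blog": 0.4,
    "anonymous_forum": 0.1,
}

def sample_training_docs(docs, k, rng=random):
    """Sample k documents, weighted by each document's source credibility.

    Unknown sources get a small default weight rather than zero, so new
    sources aren't silently excluded from the training mix.
    """
    weights = [CREDIBILITY.get(d["source"], 0.05) for d in docs]
    return rng.choices(docs, weights=weights, k=k)

docs = [
    {"id": 1, "source": "peer_reviewed"},
    {"id": 2, "source": "anonymous_forum"},
    {"id": 3, "source": "established_news"},
]

# Fixed seed for a reproducible draw.
sample = sample_training_docs(docs, k=100, rng=random.Random(0))
counts = {d["id"]: sum(1 for s in sample if s["id"] == d["id"]) for d in docs}
```

The design choice worth noting is that the forum post is down-weighted, not banned – which is exactly where the gatekeeping question lives: someone still has to assign those scores.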
On this one, I don't have any answers – instead, I have some things to keep an eye on. Things like:
- Can a company manipulate LLMs to serve its narrative and marketing goals?
- How much content constitutes the critical mass required for an LLM to change what it says next?
- How urgently do we need a solution? At what point do we need to focus on the bad actors themselves?