The internet is getting worse, or enshittified, and it’s all AI’s fault. That’s a story you will read more and more over the coming years. It’s, at most, half true. The evidence for the prosecution is stacking up though.
Sports Illustrated put out a statement yesterday denying it published AI-generated content, though it doesn’t seem able to entirely refute the Futurism article alleging the use of AI.
Over on Google, if you search for Hawaiian singer Israel Kamakawiwoʻole (of Somewhere Over the Rainbow fame), the top image is a bad Midjourney creation where the singer is holding a guitar, not his trademark ukulele. This sounds like just a small quirk of the system, except that the image is not recognized as AI-generated by ChatGPT, and this kind of loop could lead to a phenomenon termed Hapsburg AI: in short, ‘a system that is so heavily trained on the outputs of other generative AI's that it becomes an inbred mutant, likely with exaggerated, grotesque features.’ Axios recently put it succinctly: ‘AI could choke on its own exhaust as it fills the web.’
But there’s more. The main character on Twitter/X yesterday was the blue tick happily boasting of how he ‘pulled off an SEO heist using AI’. He detailed how he exported a competitor's sitemap, turned all the URLs into article titles, and created 1,800 articles using AI. So technically a heist, in that something was stolen. But it runs against everything Hollywood has taught me about heists, in that this one was orchestrated by the actual bad guys. Some people make a little money and the public gets tricked into reading a far worse version of an existing product.
The kicker of that story is that people are now talking about stealing his web traffic in turn, which is both funny and yet another potential blow to the quality of content on the web.
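Out of morbid curiosity, it’s worth spelling out how trivial the first step of that ‘heist’ is. Below is a minimal sketch, in Python, of the sitemap-to-titles step described above; the sitemap URL and the slug-to-title heuristic are illustrative assumptions, not the actual pipeline used, and the article-generation step is deliberately left out.

```python
# Minimal sketch of the "sitemap to article titles" step described above.
# The sitemap URL and slug-to-title heuristic are illustrative assumptions,
# not the actual pipeline used in the SEO stunt.
import xml.etree.ElementTree as ET
from urllib.request import urlopen

SITEMAP_URL = "https://example.com/sitemap.xml"  # hypothetical competitor sitemap
SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"


def fetch_sitemap_urls(sitemap_url: str) -> list[str]:
    """Download a sitemap and return every <loc> URL it lists."""
    with urlopen(sitemap_url) as response:
        tree = ET.parse(response)
    return [loc.text for loc in tree.getroot().iter(f"{{{SITEMAP_NS}}}loc") if loc.text]


def slug_to_title(url: str) -> str:
    """Turn a URL slug like /best-budget-ukuleles into 'Best Budget Ukuleles'."""
    slug = url.rstrip("/").rsplit("/", 1)[-1]
    return slug.replace("-", " ").replace("_", " ").title()


if __name__ == "__main__":
    urls = fetch_sitemap_urls(SITEMAP_URL)
    titles = [slug_to_title(u) for u in urls]
    # Each title would then be fed to a text generator to mass-produce articles.
    for title in titles[:10]:
        print(title)
```

The point isn’t the code, which any competent scraper could knock out in minutes; it’s that the barrier to flooding search results this way is now effectively zero.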
So yeah, the case for the prosecution looks pretty strong. AI is not getting off scot-free here. But it still has a chance of being considered just an accessory to the murder of the Internet, rather than the main defendant. The Sports Illustrated AI story was enabled by a choice to license out product reviews to a third-party company, AdVon Commerce, and by a failure to then carefully monitor said content.
There’s a throughline from that decision back to any number of calls by Sports Illustrated and its owners to value web traffic over being considered a prestige publication. It’s the death by a thousand cuts that dozens of publications have experienced since first going online 25-plus years ago. AI is just the latest tool enabling bad corporate decisions; before AI, publications pivoted to video, put up paywalls too soon, put up paywalls too late, ignored social media engagement, and over-valued social media engagement. AI is not the problem, just the latest opportunity for a bad short-term decision.
And Google Search displaying AI images over real images may gain more attention than a general decline in result quality, but both are symptoms of Big Tech chasing short-term fixes to appeal to investors rather than undertaking the major overhauls needed to return a product, in this instance Google Search, to the thing that made it popular in the first place. The Blue Tick Bros churning out AI-enabled shitcopy to scam more web traffic are another side of the same coin, one where search engine optimization would be more accurately named search engine gamification.
So what’s next? Is the internet just going to get more and more unusable? Maybe. Counter-intuitively, AI can probably help. There are greater opportunities for more agile companies to build fit-for-purpose tools for internet users, and there is no shortage of straightforward ways AI can identify their pain points. Near-future variations of products like Custom GPTs could give people bespoke, ad-free search engines. Because of the rule of three, I should now suggest something fresh, innovative, and money-making that publishers can do with AI, but on that, I’m still stumped. Still, two out of three ain’t bad.
Small bits #1: AI but bigger
The ‘make it more’ trend is doing numbers on social this week. Users generate an AI image on ChatGPT and then ask it to ‘make it more’, with predictably over-the-top results. The best-in-class is the image above, of course.
Small bits #2: ChatGPT failed the Turing Test
An interesting paper published this month found that GPT-4 could not pass the Turing Test, which is designed to assess whether machines can exhibit human-level intelligent behavior. The whole paper is worth a read or, failing that, the more digestible accompanying X thread. Cameron Jones points out that the experiment also demonstrates problems with the Turing Test itself, as well as flaws in the authors' own prompt construction.
Small bits #3: AI cannot measure stress
I’m going hard on academic papers today. This is a study into a tool used by an American healthcare provider that claims to measure stress by analyzing speech using AI. It’s not repeatable, which means it can give an essentially random answer from one use to the next. Full paper here and X thread breakdown here. Two small things to note: 1) this technically has nothing to do with creativity, but it does illustrate how inaccurately AI is presented, which I guess is a form of creativity; 2) the above graph means nothing to me, but I like to have a visual on each Small Bit.