For a few years before TikTok loomed large in the minds of every Meta, YouTube and Twitter executive, it was just this weird little dancing app that the kids loved and no one else understood. It was also a wild west, with plenty of problematic content flourishing thanks to weak early moderation. If that’s the metric for the Next Big Thing - popular with kids and frequently terrifying to adults - then say hello to the new Next Big Thing: Character.ai.
Last week it was reported that Character.ai was rivaling ChatGPT for mobile usage, and was finding greater success connecting with the 18-24 age bracket, based on research by market intelligence firm Similarweb. Impressive, given its mobile app only launched on May 23 this year. And back in March, the business secured $150m in Series A funding.
The app, above, gives users the functionality of an LLM like ChatGPT, but with an avatar you can select or build yourself. The default character types offered on setup cover a range of uses, from assistance to debate to just casual conversation.
And that avatar can be given a backstory, sometimes a hyper-specific one, along with all the biases and very-not-OK answers that can come with it. Character.ai has NSFW filters, but that hasn’t stopped, for instance, lots of people from creating easily searchable Adolf Hitler characters. That was reported by London’s Evening Standard in June, but the problem appeared to persist when I searched this morning.
But that’s just a segment of the problematic content. TikTok is full of uploads from Character.ai users detailing their ‘abusive husband’ or ‘toxic boyfriend’ characters, and jokes about ‘sexually harassing’ the bots. The Character.ai subreddit (685,000 members in just over a year) details the different imaginative ways characters can be made to say the worst possible things. And, when it’s not disturbing, it’s very late-noughties Tumblr, except the fanfiction is interactive (that’s a Very Online sentence you are all free to ignore).
It’s all a mix of the disturbing, the gross, the immature, and the only occasionally funny. You can almost hear the parental panic news stories coming down the line.
If you can get past all that - a big if - the app itself does perform the same functions you would expect from a ChatGPT or Bard. And assigning the bot a name, image and, sometimes, a backstory can make the whole process a little more accessible. The people using the app for help with their maths homework are not going viral. On social media, the controversy rises to the top.
The novelty of these chatbots may wear off. Plenty of the more mundane Character.ai social posts feel like Tamagotchis have just gotten an AI makeover; it all looks very fad-ish. But the functionality, whether it is help with homework or a major boardroom presentation, won’t suddenly cease to be necessary.
And other companies are thinking about AI and anthropomorphism. Meta advertised last week for a “Character Writer, Gen AI” role requiring ‘experience in Hollywood’ and a ‘proven track record of creating engaging characters’ [disclaimer: I worked for Meta for two and a half years. I know nothing about this role beyond the posting!]. ChatGPT, Bard and co will almost certainly add accessible customization options soon. Will that win over the Character.ai audience? It already feels like they may be destined to provide the more corporate productivity tools, never down with the kids.
Maybe the early Character.ai hellscape will seem quaint in a year’s time, when an official Ronald McDonald chatbot is helping kids research World War One. Either way, it’s likely the Character.ai app will be living rent-free in the minds of many gen AI execs for years to come.
Small bits #1: Journalism is doomed
Spotted on Reddit: a how-to article apparently using a chat tool for the intro to the piece. The Reddit post itself is full of ‘journalism is doomed’ comments, but this arguably says more about SEO and the dry, box-ticking copy needed to make a useful (since amended) article show up in search. Which, I guess, is a longer, more inside-baseball way of saying journalism may, in fact, be doomed.
Small bits #2: Journalism, still doomed
An MSN article, since removed, covered the sudden death of former NBA player Brandon Hunter with the horrible headline above. The article itself also contained gibberish, reporting that Hunter ‘handed away’ after performing ‘in 67 video games’. Futurism published a deeper dive on the source of the article and MSN’s recent track record with ‘algorithmic techniques’, to use the phrase of Jeff Jones, a senior director at Microsoft.
Small bits #3: A true tweet
Yep.