Hi all, this is the first Explainable newsletter in a few weeks. I've been fortunate enough to be too busy with work to write up a weekly post of late. I'm going to keep Explainable going; I really enjoyed writing it and hearing from you lovely readers (howya Paul), and it's a nice calling card for landing work in the space. But for the moment I'll just post when something hooks me, rather than trying to bang out one thousand words weekly (turns out, I'm not living or dying on the latest OpenAI release!)
In the eleven months of doing this newsletter, I’ve gone back and forth on where I stand within the AI predictions framework. The framework is broadly:
AI Will Be God: That’s bad
AI Will Be God: That’s good
AI is just a new utility, like the internet: That’s bad
AI is just a new utility, like the internet: That’s good
But in recent months I’ve settled into thinking it’s just the new internet and that’s a little good (some things are easier) and a lot bad (big tech can’t be trusted to treat workers or creators fairly).
Of course, it’s still worth writing a little humanity-wide Post-it note reminder that AI could destroy us all; we should keep that in mind and stay on top of it. But there’s a sunk cost fallacy at play for some people who have dedicated their lives to warning about AI existential threats. None of the current AI models have that capability, and I’m not seeing any clear evidence that this is going to change any time soon.
The hype and fearmongering angle is well covered. But I’m starting to feel like the other side, those who are puncturing the hype, are beginning to fall into a similar trap. By reacting against the hype they’re sometimes underrating how many basic tenets of work are changing. The same goes for examining AI solely through the lens of “will OpenAI become bigger than Google?” or “will AI save Google?”. It becomes like movie reviews that concentrate on box office takings and forget to, you know, review the movie.
Ryan Broderick of the excellent Garbage Day wrote this recently in relation to a slew of OpenAI and Google announcements:
“They’re both about trying to repackage what already exists into something either users, developers, or investors will care about. With OpenAI, they’re literally just trying to jam everything we already do online into a new interface that they own. All while promising us that if they can commit just a bit more copyright infringement, they’ll build computer God. While Google is just trying to repackage what Google already does and are calling it “AI” because no one would care if they said they were building Clippy 2.0. Yesterday, AI evangelists were losing their minds over Google’s new AI agent that can generate a spreadsheet of receipts in your Gmail inbox. I mean, do you hear yourselves?”
I’m kind of with Ryan on this except…I’m sadly excited about AI as Clippy 2.0.
That’s what ChatGPT has become for me. It’s become a single tab that I click when I don’t know something. At the moment I have Spotify open for music, Gmail open for emails, Google Drive open for work stuff and ChatGPT open for: skim reading big tracts of text, help on data analysis work, scrappy first drafts of some writing (not all, and not this newsletter), any searches where I just need a list of things and don’t want to trawl through Google paid content, generating quick images, Canva template recommendations, and any searches where Google is coming up short (so most searches).
That’s a lot of stuff. If that’s all GPT and Gemini ever provide, if they never crack that ceiling and the tools remain just a moderate improvement on what we already use, then that’s not nothing. And that’s just for Joe Average Online Guy.
And this Clippy 2.0 offering is not just applicable to our personal use. Any medium-to-large business can, and should, be figuring out some quick-win AI wraparounds using a version of Clippy 2.0. Canva, the design platform, has integrated AI into its help service. Any time I have an extremely basic question for my extremely basic attempts at design, the AI help tool points me in the right direction. When it was a more rudimentary bot, the answers never tallied with the questions. Or you had to go through 12 rounds of automated answers before you were reluctantly connected with a real-life person. That shift is already having a large effect on customer service jobs around the world. Clippy 2.0 is an OK thing with plenty of bad consequences for millions of people.
But that’s not the pitch from AI. That’s not what is generating billions of dollars of investment. It’s doubtful that Sam Altman and co can sell Clippy 2.0 as a win. That says more about how investment works and about big tech hype than it does about the AI we have access to now.
Anyway, I’ll be continuing to mess around with AI tools, will assume people are using them more than they let on, will assume that all tools are not as good as advertised, not as useless as the dunks suggest, and not as fair to creators and workers as they should be. And on the rare occasions that anything comes up that is deserving of one thousand words it will still slink into your inboxes.