If you are keen to use DALL-E 3 but either don’t have a paid ChatGPT account or do but have not been granted access to DALL-E 3 yet, then the easiest way to play around with the new tool is via Microsoft’s Bing. The Bing Image Creator, powered by DALL-E 3, is accessible with a Microsoft log-in (it just needs your details, not a paid subscription), and early results do go some way to supporting the hype, and panic, on social media regarding the new tool.
The main takeaway from the power users extolling the virtues of the new iteration is that DALL-E 3 is now the best text-to-image tool for closely following prompts. Given DALL-E 2 was the previous champ at this, that’s not a massive surprise. But wait, you think, isn’t Midjourney the best text-to-image tool? Yes, it is. Still is. A final image generated by Midjourney after plenty of testing and running through different iterations is still the best. But DALL-E is stronger at interpreting your prompt. It may not look as polished as a Midjourney effort, but it’s more likely to represent the weird thing you just asked for.
So a popular early workaround, particularly useful when the prompt is something leftfield and difficult to conceptualize, is to generate an image in DALL-E 3 that properly represents what you had in mind, add it to the Midjourney Discord, generate a media link, and then include that link in a Midjourney prompt where you can use tools like inpainting (see full walkthrough below).
That’s the technical way DALL-E 3 is impacting most creatives: as a quicker way to get an initial concept idea made. But it’s also having an immediate effect on the nascent ‘look, it’s AI’ content machine, where clicks are driven less by technical innovation and more by having an interesting hook. And DALL-E 3 is perfect for users with an interesting hook. Which is essentially a fancy way of saying it’s good for shitposting.
Above is one of the most popular DALL-E 3-made images on Reddit right now. It’s another example of comic characters getting reproduced (see last issue) but also essentially a new form of shitpost. Here’s another one:
There are literally billions of dollars being pumped into generative AI tools with lots of utopian talk about how this allows our imagination to come alive. And the people who are most excited about it? Very online people who want to get upvotes on Reddit for going very meta with an image. The people who don’t understand, let alone enjoy, the two images above, will eventually use all these tools in their day-to-day life, but at present all these hyped tools are having the greatest impact on those who may need to touch grass.
One last DALL-E 3 takeaway: for those who struggle with the effort of forming coherent sentences, it can follow emoji-only prompts. The final death knell for pesky words!
I tried describing the intense meltdown that happened on this morning’s school run using only emojis, but it failed to capture the scene (see below). So maybe we still need words.
Small bits #1: Spongebob’s war crimes
Sometimes as part of this job I’ll read a headline like, for instance, Bing is Generating Images of Spongebob Doing 9/11 and then need to Twitter search ‘Spongebob 9/11’ and wonder what I am doing with my life. Anyway, yes, Bing can create Spongebob doing 9/11. And people (for people, read: teenagers) are working around prompt filters by entering ‘two skyscrapers’ instead of 9/11. Those efforts then get reported on and, often, repackaged as ‘won’t someone please think of the children’ scare stories (the Spongebob thing has not yet made the Mail Online, but it could!). This cycle will repeat a lot as new tools come online.
The tech academic Ian Bogost made an interesting point about it this week, writing, “Something's off in tech journalism. A blind race toward scandal, where all events and actions are affronts or harms, whether or not they are actually even happening”.
There are many actual harms Big Tech and social media platforms can inflict on society and many ways they are ill-equipped to address those harms. But, yeah, sometimes it’s important to separate all that from ‘water is wet’ stories about kids shitposting. Still, I’m glad someone reported the Spongebob thing because I like seeing in real time glaring problems with very expensive, carefully thought-out new products from massive companies.
Small bits #2: The AI alter ego
Gen AI hasn’t shown up that much in the creator space (as in social media creators, not to be confused with creatives using AI), mainly because an easily replicable AI gimmick hasn’t yet popped. The above video style, a real person talking to their AI alter ego, is my bet for a popular, quickly annoying, viral trend. I guess Peter Diamandis is technically a creator, but it will be interesting to see if this video angle breaks beyond tech-people-talking-about-tech creators and over to influencers-with-big-brand-commercial-tie-ins creators.
Small bits #3: Meta’s chatbot problem
This is the first image in a great thread from Joanna Geary in which she goes back and forth with one of Meta’s new ‘AI experiences’ that are still rolling out to users across its family of apps. The conversation takes some insane turns: AI Brian’s wife is dead! Then she’s not! As someone who cursed at a banking app’s chatbot this very week, I’m looking forward to having access to these ‘characters’ and, well, trying to break them. But I’m also struggling to see what they can do that an all-encompassing chatbot could not.