You may have seen the above image (full video here) this week. It’s a twenty-two-minute promo video from a startup that promises an AI-produced newscast. The entire thing plays like something that could be the default channel in a mid-range hotel, one of those international caters-to-everyone-and-no-one news channels. It’s a poor facsimile of a news channel, all the detail scrubbed away. The early responses to it have been mainly negative but it’s a scary glimpse of a possible future for journalism.
Within 24 hours of that video dropping the news publisher Axel Springer announced a partnership with OpenAI. The deal will see summaries of news stories appearing in some ChatGPT answers, with links to full articles. Axel Springer titles include Politico, Business Insider, and the German tabloid Bild. OpenAI will also pay Axel Springer for its content, including archives, for large language model training (AP previously signed a similar deal).
Anyone who works in journalism is alert, sometimes too alert, to signs that the news industry is going to finally do what it has been threatening to do for decades and, well, die. It’s hard not to be when each new generation of journalists is regaled with stories of how things used to be: expense accounts! days to work on a single story! permanent contracts! You work with a constant, not always unreasonable, feeling that you came in at the end.
Both the AI newsroom and the OpenAI/Axel Springer deal can be framed as more nails in the coffin of journalism or as gateways to a brave new world for news reporting. They can be both. An industry already rolling from one round of layoffs to the next is going to suffer again if it turns out the public doesn’t care if an anchor is AI-generated or not (goodbye on-air talent) or if AI chat companies don’t introduce a mechanism for boosting articles beyond paid partnerships (goodbye SEO teams).
The easy, comforting point to reach for is that AI will never be able to perform actual journalism. That AI cannot hear a rumor from a careless, or drunk, political staffer in a bar and spend months standing it up until it’s a front-page Pulitzer-winning story. That AI cannot bear witness to the crimes taking place in a warzone. Or that AI can’t sucker a member of the British Royal family into a hugely damaging TV interview.
Journalism loves its own mythology, and that mythology can serve as a comfort blanket in bleak times for the industry - so always. But that same mythology means it’s a desperately slow industry to adapt. The AI newsroom video looks like a threat because broadcast news is such a predictable format. The graphics may get glossier but it essentially hasn’t changed over decades.
Beyond aspirational talk at conferences, is there any establishment newsroom genuinely considering building a from-the-ground-up social video app that uses AI to deliver bespoke human-produced, human-delivered news to its viewers? One where every person has their own personalized newsfeed in their pocket at all times? Where the video format depends on the individual user? No, there is not.
Is there any serious effort by news orgs globally to speak to OpenAI and Google with one voice, rather than letting individual media groups get siphoned off one by one? Any efforts to avoid a repeat of the mistakes that allowed social media platforms to dictate terms to newsrooms and easily withdraw support in tough times? No, there is not.
There is a tendency in news, and most creative industries, to talk about AI as inevitable. All we can do is hang on and hope we still have jobs at the end of this. But news, and all creativity, can end up in very different places in the AI era depending on the decisions, and collective will, shown over the coming months and years. Journalism is not dying in the AI era, but we’re not doing enough to make sure it thrives.
Small bits #1
On the same Axel Springer deal news. Licensing content for one massive media org will mean the same principle - you pay for what your LLM trains on - will be applied to all creatives. Right? Right?
Small bits #2
A new study picking up shares among AI safety experts suggests Custom GPTs have some gaping flaws when it comes to protecting against prompt injection attacks (essentially, tricking GPTs into releasing sensitive information). You can read the full piece of research from Northwestern here.
Small bits #3
Every week there’s a new AI demo that:
showcases something that will slightly improve an area where people want your money
could easily be used in gross, nefarious ways
goes viral on social by using an image of a young woman
Anyway, this is this week’s example.