The two conflicting AI safety worriers
Why talk of existential threat can silence more mundane concerns
For the first time since I started this newsletter, AI has consistently featured as the close-to-top story in the news, not just the ‘and finally’ segment. The AI Safety Summit in the UK and President Biden’s Executive Order on AI meant it passed that litmus test of newsworthiness: it got a mention from my mam in the group chat.
But it also all feels a little high-level: lots of broad strokes, plenty of important people who have been briefed on AI but who have never surreptitiously thrown a work project into ChatGPT. And that’s reflected in some of the debate bubbling up around AI this week, and in how it links back to a conference held on a Google campus back in 2015.
There’s a great Vox article about this 2015 event, titled ‘I spent a weekend at Google talking with nerds about charity. I came away … worried’, that delves into a subsection of the effective altruism movement. The piece explores how some wealthy people in tech took the inarguable concept that charity could be more effective and thought about it so long and so hard that they eventually reached the dumbest, most callous conclusion: that things like genocides and famines were small beer compared to an extinction-level event. The writer Dylan Matthews summed up the thinking thus:
‘The number of future humans who will never exist if humans go extinct is so great that reducing the risk of extinction by 0.00000000000000001 percent can be expected to save 100 billion more lives than, say, preventing the genocide of 1 billion people. That argues…for prioritizing efforts to prevent human extinction above other endeavors.’
And the piece, of course, featured lots of talk of artificial intelligence:
‘In the beginning, EA was mostly about fighting global poverty. Now it's becoming more and more about funding computer science research to forestall an artificial intelligence–provoked apocalypse. At the risk of overgeneralizing, the computer science majors have convinced each other that the best way to save the world is to do computer science research. Compared to that, multiple attendees said, global poverty is a "rounding error."’
Essentially, a data-driven approach to charity eventually resulted in labeling the deaths of thousands, or even millions, as statistically insignificant.
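If you want to see how the quoted numbers hang together, here is a rough back-of-the-envelope sketch of the expected-value arithmetic. The figure of 10^30 potential future humans is my own illustrative assumption, chosen because it is roughly what is needed to make the quoted numbers line up; it is not taken from the Vox piece.

```python
# Back-of-the-envelope version of the expected-value argument quoted above.
# NOTE: future_lives is an illustrative assumption (roughly what is needed
# for the quoted numbers to line up), not a figure from the Vox piece.

risk_reduction = 0.00000000000000001 / 100  # '0.00000000000000001 percent' as a probability (1e-19)
future_lives = 1e30                         # assumed count of potential future humans

expected_lives_saved = risk_reduction * future_lives  # 1e11, i.e. 100 billion
genocide_deaths = 1e9                                 # "the genocide of 1 billion people"

print(f"Expected lives saved: {expected_lives_saved:.0e}")                   # ~1e+11
print(f"Versus the genocide toll: {expected_lives_saved / genocide_deaths:.0f}x")  # ~100x
```

That multiplication is the whole trick: once you assume a big enough pool of future lives, almost any present-day catastrophe can be made to look small by comparison.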
Anyway, it’s eight years later: one of the movement’s most recognizable figures, Sam Bankman-Fried, is currently on trial, and effective altruism, or at least that particular brand of it, is badly damaged as a concept. But the debate about existential risk versus the here-and-now rumbles on.
It blundered into my timeline again this week as AI took center stage. The Big Tech Thinkers prophesied that AI poses a threat to humanity, some people wondered aloud if maybe we should be more focused on protecting people’s jobs in the AI era, the Big Thinkers said, “oh, I guess you don’t care about billions of future deaths?”, and on it rolled. (This summary is not a massive exaggeration; see the X thread here.)
The smart take appears to be that addressing immediate threats now with strong regulation will leave us better equipped to tackle any future existential risks. US Vice President Kamala Harris got some good press by essentially claiming leadership of the immediate-risk camp, while UK Prime Minister Rishi Sunak did some Big Tech Thinker navel-gazing (I’m paraphrasing).
The Bletchley Declaration that was published at the summit isn’t a bad thing: having countries like the United States and China represented on the same document, declaring that AI development should include some cooperation and acknowledgment of risk, is a decent start. President Biden’s Executive Order on AI goes a bit further and hints at laying out actual policy. But these are the easy wins. The messy stuff, where job markets dramatically contract or one nation stumbles on a competitive edge, is still to come.
If you want to dive deeper into the regulation end of things, Carl Miller and James Ball published a strong paper this week titled ‘Open Sourcing the AI Revolution: Framing the debate on open source, artificial intelligence and regulation’; it’s a good jumping-off point.
Next week all the world leaders will be back at their desks, probably not using ChatGPT, and we can get back to Midjourney hacks.
Small Bits #1: The stereotyping problem
There is now a glut of articles and research pieces highlighting what anyone using a text-to-image tool has long known: generative AI has a massive stereotyping problem. Nitasha Tiku from the Washington Post assembled a damning collection of the best research into this area when sharing her own WP feature on the problem. There’s the great Rest of World scroll (image above), a Bloomberg piece on Stable Diffusion’s glaring biases, and some strong research papers, including a 2021 paper that gave us an early heads-up.
Small Bits #2: An AI scientist is coming
Sticking with the big announcements theme of AI Safety Summit week, the San Francisco-based non-profit Future House has announced its ‘moonshot’ plan to build an ‘AI scientist’ over the next 10 years. CEO Samuel Rodriques’ declared aim is to make research ‘10x to 100x faster than it is today’.
Small Bits #3: ‘Can we do something viral with AI?’
2023 will, hopefully, be the nadir of getting AI to do ventriloquism with famous dead people, a gimmick that generally serves to diminish the potential power of AI. Anyway, the British Foreign Office got Mary Shelley, Ada Lovelace, Charles Darwin and Alan Turing to weigh in on AI safety. A sentence that would not have made sense a few years ago, and barely makes sense this year.
*That on-the-nose image up top was my attempt to illustrate the here-and-now threat versus the existential threat. But DALL-E and Midjourney just could not comprehend a volcano that did not have lava and volcanic ash spewing from it! Anyway, I kept it because it still kinda works as a lead image, and because I wanted to share that the two most popular text-to-image tools don’t seem to know about extinct or dormant volcanoes.