Why AI companies are dropping the doomerism

Illustration by Cath Virginia / The Verge | Photos by Getty Images

The latest AI manifesto, from Anthropic CEO Dario Amodei, says a lot about the industry’s current moment.

On today’s episode of Decoder, we’re going to try to figure out “digital god.” I figured we’ve been doing this long enough; let’s just get after it. Can we build an artificial intelligence so powerful that it changes the world and answers all of our questions? The AI industry has decided the answer is yes.

In September, OpenAI’s Sam Altman published a blog post claiming we’ll have superintelligent AI in “a few thousand days.” And earlier this month, Dario Amodei, the CEO of OpenAI competitor Anthropic, published a 14,000-word post laying out what exactly he thinks such a system will be capable of when it arrives, which he says could be as soon as 2026.

What’s fascinating is that the visions laid out in both posts are so similar — they both promise superintelligent AI that will bring dramatic improvements to work, to science and healthcare, and even to democracy and prosperity. Digital god, baby.

But while the visions are similar, the companies are, in many ways, openly opposed: Anthropic is the original OpenAI defection story. Dario and a cohort of fellow researchers left OpenAI after becoming concerned with its increasingly commercial direction and approach to safety, and in 2021 they created Anthropic to be a safer, slower AI company. And the emphasis was really on safety until recently; just last year, a major New York Times profile of the company called it the “white-hot center of A.I. doomerism.”

But the launch of ChatGPT, and the generative AI boom that followed, kicked off a colossal tech arms race, and now, Anthropic is as much in the game as anyone. It’s taken in billions in funding, mostly from Amazon, and built Claude, a chatbot and language model to rival OpenAI’s GPT-4. Now, Dario is writing long blog posts about spreading democracy with AI.

So what’s going on here? Why is the head of Anthropic suddenly talking so optimistically about AI, when he was previously known for being the safer, slower alternative to the progress-at-all-costs OpenAI? Is this just more AI hype to court investors? And if AGI is really around the corner, how are we even measuring what it means for it to be safe?

To break it all down, I brought on Verge senior AI reporter Kylie Robison to discuss what it means, what’s going on in the industry, and whether we can trust these AI leaders to tell us what they really think.

If you’d like to read more about some of the news and topics we discussed in this episode, check out the links below:

Machines of Loving Grace | Dario Amodei

The Intelligence Age | Sam Altman

Anthropic’s CEO thinks AI will lead to a utopia | The Verge

AI manifestos flood the tech zone | Axios

OpenAI just raised $6.6 billion to build ever-larger AI models | The Verge

OpenAI was a research lab — now it’s just another tech company | The Verge

Anthropic’s latest AI update can use a computer on its own | The Verge

Agents are the future AI companies promise — and desperately need | The Verge

California governor vetoes major AI safety bill | The Verge

Inside the white-hot center of AI doomerism | NYT

Microsoft and OpenAI’s close partnership shows signs of fraying | NYT

The $14 billion question dividing OpenAI and Microsoft | WSJ

Anthropic has floated $40 billion valuation in funding talks | The Information
