verge-rss
Siri’s big ChatGPT upgrade is here — for better and worse
We finally got a look at one of Siri’s big Apple Intelligence updates. | Photo: Allison Johnson / The Verge
Apple Intelligence’s official launch is less than a week away, but it’s the next wave of AI updates that will start to make Siri a lot more useful.
The forthcoming iOS 18.2 update — now available as a developer beta — starts to make your phone a lot smarter with the addition of Visual Intelligence and the ability to pass Siri requests along to ChatGPT. On phones that support Apple Intelligence, Siri won’t just be a “let me Google that for you” machine; now it’s a “let me ChatGPT that for you” machine, with all that entails: good, bad, and everything in between.
By default, Siri will ask for confirmation every time it wants to pass on a request to ChatGPT. This makes a lot of sense, and I thought I’d prefer that behavior. But after an afternoon using it, I realized I just wanted to get to the ChatGPT answer faster and turned it off. Siri still handles basic questions on its own and doesn’t pass things like “When is the US election?” to ChatGPT, thankfully. And it will still just Google something for you when that’s the best way to get to your answer.
But more complex requests go to ChatGPT, which means Siri can handle a lot more than I’m used to throwing at it. Ask it “What are some cocktails I can make with whiskey and lemon juice?” and you’ll get a short list of options with descriptions. Old Siri will basically just show you a Google search snippet.
Siri in iOS 18.1 without ChatGPT just tells me to make a whiskey sour.
Siri with ChatGPT recommends a few options including a Gold Rush, which is the right answer if you ask me.
AI chatbots like ChatGPT and Google’s Gemini regularly get things wrong and make things up. But I’ve started using them more and more as a starting point when I need help with something and I’m basically clueless. I actually downloaded Gemini (by way of Google’s iOS app) to the iPhone 16 I’ve been using because I got tired of opening it in a browser. As long as you don’t blindly trust what the AI tells you, it’s a handy way to get pointed in the right direction.
Apple has put some nice privacy protections around your use of ChatGPT. OpenAI is “required to process your request solely for the purpose of fulfilling it and does not store your request or any responses it provides,” Apple states. The information won’t be used to train AI models, either. If you sign in to your OpenAI account, your requests are saved in your ChatGPT history and all of OpenAI’s terms apply. But you don’t need an OpenAI account at all if you don’t have or want one. I appreciate that.
A glorified, iOS-ified Google Lens
iPhone 16 owners will have another way to tap into ChatGPT’s smarts, too: Visual Intelligence, which is also enabled in 18.2. It’s accessed by holding down the camera control button, which pulls up a camera live view. Once you take a photo, you can have ChatGPT analyze it or use Google Image search to find similar results on the web. It is a glorified, iOS-ified Google Lens, and it’s about time iPhones had something like this built in. Siri could previously look up plants and landmarks and the like, but nothing as expansive as this.
Visual Intelligence is pretty good — mostly. It was very flattering in its descriptions of various spots around my house, calling my entryway “cozy” and “well-organized,” and our whiskey collection “impressive.” It came up with a decent list of cocktails to make based on a picture of my home bar, and it got me started in the right direction on a home repair with a picture of the problem. As long as you treat the answer as a starting point, AI is pretty handy for these kinds of low stakes questions.
But all of the familiar pitfalls of AI chatbots are present, which Apple warns you about in every interaction you have with ChatGPT. I asked it to explain the joke in a Garfield comic strip to me, and it completely made up details that weren’t there (though to be fair, the joke it invented was funnier than the actual source material). I asked it about the books on my shelf and it hallucinated some titles that are definitely not on that shelf.
It starts off on the right track then veers into hallucination-land.
This is totally plausible but not at all what happens in this comic strip!
I also wish that ChatGPT would let you check its work the way that Gemini does. Google’s AI chatbot supplies obvious links to articles on the topics it references, so you know where to go to read more and double check what the AI is telling you. ChatGPT mentions in small print the number of sources it pulled from to come up with your answer for Siri, and you need to tap to see links to those articles.
Still, it’s a leap forward in the kinds of things you can expect Siri to do. And it’s one that people won’t see when they download Apple Intelligence; in iOS 18.1, Siri gets a new look with a glowing border, a new text interface, and improved language understanding. But it’s basically the same old Siri.
That starts to change in 18.2, and Apple’s AI ambitions are bigger still than “go ask ChatGPT.” Eventually, Siri will be able to take action for you in apps — kind of the whole promise of AI on our phones. But those kinds of updates likely won’t arrive until well into 2025.
Of all the Apple Intelligence features I’ve used so far, the ChatGPT integration feels like the one I’ll use the most; the same way that Gemini has me using Google’s assistant more often for more things. It’s not always right, but as a tool to help me get to the right answer, it’s pretty smart.
UnitedHealth data breach leaked info on over 100 million people
Photo by Amelia Holowaty Krales / The Verge
Insurance company UnitedHealth Group is confirming that a ransomware attack earlier this year affected the private data of over 100 million people. The number was published in the US Department of Health and Human Services Office for Civil Rights (OCR) Breach Report on Thursday, making it the largest healthcare data breach on the list.
Hacker group Blackcat, also known as ALPHV, claimed responsibility for the February attack on Change Healthcare that caused widespread disruptions for healthcare providers processing bills, claims, payroll, and prescriptions for weeks.
According to the HHS FAQs page, Change Healthcare told OCR on October 22nd that it has sent about 100 million individual notices regarding this breach.
Stolen information may include:
Health insurance information (such as primary, secondary or other health plans/policies, insurance companies, member/group ID numbers, and Medicaid-Medicare-government payor ID numbers);
Health information (such as medical record numbers, providers, diagnoses, medicines, test results, images, care and treatment);
Billing, claims and payment information (such as claim numbers, account numbers, billing codes, payment cards, financial and banking information, payments made, and balance due); and/or
Other personal information such as Social Security numbers, driver’s licenses or state ID numbers, or passport numbers.
As reported by Bleeping Computer, UnitedHealth CEO Andrew Witty’s written testimony (PDF) to a House committee said the threat actors got in by using stolen credentials for a Citrix remote access service that lacked multifactor authentication.
On February 12, criminals used compromised credentials to remotely access a Change Healthcare Citrix portal, an application used to enable remote access to desktops. The portal did not have multi-factor authentication. Once the threat actor gained access, they moved laterally within the systems in more sophisticated ways and exfiltrated data. Ransomware was deployed nine days later.
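As Witty’s testimony describes, the initial foothold didn’t require an exploit at all: the portal checked only a username and password, so credentials stolen elsewhere were enough to get in. The sketch below is a minimal illustration of that gap (all names and values are hypothetical, and it is not meant to represent Change Healthcare’s actual systems): a login routine that skips the second-factor check when MFA was never enrolled will admit anyone holding a leaked password.

```python
"""Minimal sketch only. Hypothetical names throughout; this is not Change
Healthcare's code, just an illustration of why a portal that accepts a bare
username/password falls to stolen credentials, and how a second factor helps."""

import hashlib
import hmac
from dataclasses import dataclass
from typing import Optional

@dataclass
class User:
    password_hash: bytes        # salted hash stored server-side
    totp_secret: Optional[str]  # None means MFA was never enrolled

USERS = {
    "contractor": User(
        password_hash=hashlib.sha256(b"salt" + b"leaked-password").digest(),
        totp_secret=None,       # the vulnerable configuration described above
    ),
}

def check_password(user: User, password: str) -> bool:
    candidate = hashlib.sha256(b"salt" + password.encode()).digest()
    return hmac.compare_digest(candidate, user.password_hash)

def check_totp(user: User, code: Optional[str]) -> bool:
    # Stand-in for a real TOTP library; a real portal would verify a
    # time-based code derived from user.totp_secret here.
    return user.totp_secret is not None and code == "123456"

def login(username: str, password: str, otp_code: Optional[str] = None) -> bool:
    user = USERS.get(username)
    if user is None or not check_password(user, password):
        return False
    if user.totp_secret is None:
        # No second factor enrolled: stolen credentials alone grant access.
        return True
    return check_totp(user, otp_code)

# With MFA unenrolled, the leaked password is all an attacker needs:
print(login("contractor", "leaked-password"))  # prints True
```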
UnitedHealth paid the group a $22 million ransom. However, another operation threatened to continue leaking the data and may have secured a second ransom payment.
Reuters signs an AI deal with Meta
Illustration by Nick Barclay / The Verge
Meta’s AI chatbot will soon begin citing Reuters reporting while answering news-related queries. The two companies have struck what Axios describes as a “multi-year deal” that will allow Meta to use Reuters content for its chatbot responses. The deal is the first of its kind for Meta, in an era of news outlets agreeing to provide their content to AI companies.
“We’re always iterating and working to improve our products, and through Meta’s partnership with Reuters, Meta AI can respond to news-related questions with summaries and links to Reuters content,” Meta spokesperson Jamie Radice said in an email. “While most people use Meta AI for creative tasks, deep dives on new topics or how-to assistance, this partnership will help ensure a more useful experience for those seeking information on current events.”
Reuters did not immediately respond to a request for comment.
Axios reports that Reuters will be compensated for its content appearing in Meta’s AI chatbot, which is accessible through Facebook, Instagram, WhatsApp, and Messenger, and links to Reuters stories will begin appearing for US users on Friday. Many of Meta’s splashiest AI features have so far been character focused — celebrity chatbots the company recently scrapped, for example — instead of centered around current events. Radice didn’t respond to questions about safety measures in place for AI responses that deal with news and current events.
Over the last year or so, news organizations including The Atlantic, The Wall Street Journal, and the Dotdash Meredith group have signed licensing deals with OpenAI. (Disclosure: Vox Media, The Verge’s parent company, has a technology and content deal with OpenAI.)
“AI is coming, it is coming quickly. We want to be part of whatever transition happens,” The Atlantic CEO Nicholas Thompson told The Verge. “Transition might be bad, the transition might be good, but we believe the odds of it being good for journalism and the kind of work we do with The Atlantic are higher if we participate in it. So we took that approach.”
On the other end of the spectrum is The New York Times, which is engaged in an expensive legal battle against OpenAI and Microsoft, in which it claims the tech companies infringed on its copyright when they built their AI models.
Meta leaning into news and current events within its AI chatbot is notable, considering its adversarial stance against such content on Threads. Executives have publicly said the company is “not going to do anything to encourage” hard news and political content, and though the AI chatbot is not integrated with the X competitor, it feels a bit like Meta wants it both ways — users can get their news from Meta platforms, but the company wants control over how they do so.
Though Meta now appears to be willing to pay for news content, it’s also simultaneously fighting laws that would require compensating news publishers for their content on social media. If you live in Canada, for example, you can’t access news on Facebook and Instagram because rather than pony up according to a new law, Meta opted to block all publisher accounts and links on the platforms. Google threatened similar action in California, where another “link tax” law was advancing — the bill ultimately died, and news outlets and Google reached a $250 million partnership agreement. Perhaps unsurprisingly, part of that money is going to an AI program.
Waymo just raised $5.6 billion to spread robotaxis to more cities
Photo by PATRICK T. FALLON/AFP via Getty Images
Waymo just completed an oversubscribed funding round of $5.6 billion, its largest investment round to date. The company said it will use the funds to support its robotaxi business in its current markets of San Francisco, Los Angeles, and Phoenix, as well as bring it to new cities, like Austin and Atlanta, where its vehicles will be available exclusively on the Uber app.
Waymo also hinted at future “business applications” for its “Waymo Driver,” which is the company’s branding for the hardware and software used to enable its vehicles to drive autonomously. This could be a reference to food and package deliveries, trucking, or even personally owned autonomous vehicles — all possibilities that Waymo has explored in the past.
Waymo also hinted at future “business applications” for its “Waymo Driver”
The funding round was led by Waymo’s parent company Alphabet, and included investors like Andreessen Horowitz, Fidelity, Perry Creek, Silver Lake, Tiger Global, and T. Rowe Price. Participants in the round lauded Waymo for its technological advancements, commitment to safety, and superior product experience.
“The company has built the safest product in the autonomous vehicle ecosystem as well as the best,” said Chase Coleman, founder of Tiger Global, in a statement provided by Waymo.
The series C round brings Waymo’s total capital raised to $11.1 billion, after raising $3 billion and $2.5 billion in two earlier rounds. Alphabet CFO Ruth Porat said earlier this year that the company would invest $5 billion in the self-driving unit over several years.
While several companies are testing autonomous vehicles on public roads across the country, Waymo is nearly alone in offering a commercial service to customers. The company’s driverless vehicles have driven over 25 million miles to date. In August, Waymo said it crossed the threshold of providing 100,000 customer trips every week.
“The company has built the safest product in the autonomous vehicle ecosystem as well as the best.”
But Waymo is still a money-loser. Alphabet’s “Other Bets,” which includes the driverless company, brought in $365 million during the second quarter of this year, up from $285 million in Q2 2023. But the division posted an operating loss of $1.1 billion, up from the $813 million it lost in 2023. (Alphabet will report its third quarter results on October 29th.)
Waymo plans to launch robotaxi operations in Atlanta and Austin in 2025, where its vehicles will be exclusively available on the Uber app. The company has also recently begun testing routes in San Francisco and Phoenix that use freeways, in an effort to become a more useful service to more customers. And Waymo is testing different weather conditions and more complex urban environments in Buffalo, New York City, and Washington, DC.
Amazon’s Like a Dragon works better as a mob drama than a Yakuza adaptation
Image: Amazon MGM Studios
Like a Dragon proves that a quality adaptation doesn’t always have to be a faithful one.
It was late when I watched the first episode of Amazon Prime Video’s Like a Dragon: Yakuza. I bargained with myself to watch one episode before sleep; three episodes later, I finally went to bed. Let’s get this out of the way: if you’re looking for Like a Dragon to be a faithful representation of the Yakuza video game series, you’re going to be disappointed. But that’s what makes it worth watching. Like a Dragon’s unique approach to storytelling blends two different timelines together, making for a show that works on its own merit, without needing all the trappings of a Yakuza video game.
Like a Dragon is Amazon’s second bite at the video game adaptation apple after the surprising success of its Fallout show. It stars Ryoma Takeuchi as Kazuma Kiryu, an orphaned youth who joins the Tojo yakuza clan with dreams of earning the title of Dragon of Dojima.
Like a Dragon’s story is loosely based on the events of the first two Yakuza games and is told across two concurrently running timelines in 1995 and 2005. Each of the series’ six episodes jumps between the two time periods, chronicling Kiryu’s rise and fall as a yakuza member, the shattering of his chosen family, and how those pieces are violently smashed back together 10 years later.
Image: Amazon MGM Studios
Kiryu (far left) and the members of his chosen family.
What initially shocked me most about Like a Dragon and what most cleanly separates it from its source material is all the violence. I’m aware of the irony: this is a mob show; people tend to get hurt in those. But the Yakuza series has always been deliberate in how it depicts violence. Guns are rare and murder is rarer, but Like a Dragon has both in abundance. The games also depict their fair share of blood, but those are typically in street brawls fought with fists and the occasional traffic cone. There was one murder in the show, of a civilian no less, so shocking in its casual execution that it actually made me queasy.
Like a Dragon probably won’t do for Yakuza what Fallout did for… well, Fallout
All the time skipping is the most interesting element of the show and the reason I don’t mind that it bears little resemblance to the games. In 1995, Kiryu was surrounded by the love of his chosen family and the respect of his yakuza brothers. By 2005, all of that had rotted away into distant animosity, and it was fun watching how the show reconciled it all. Rather than simply tell the story chronologically, Like a Dragon intentionally created gaps in understanding with one timeline and then filled them in with the other. In 1995, Kiryu has two father figures: the ex-yakuza who raised him in an orphanage and his clan leader. In 2005, both men are absent, and Kiryu has since been branded as an oyagoroshi — or “father killer.” The back and forth created a thrilling tension, compelling me to work alongside the show to piece the plot together like it was a mystery in addition to its basic gangster plot of revenge and betrayal. And I was pleasantly surprised by the resolution.
Since Like a Dragon’s violence feels antithetical to the spirit of the source material, I’m glad that the show didn’t also try to incorporate the series’ wackier elements. Yakuza is a video game and is therefore not subject to the mundanities of realism. Kiryu fighting grown men in diapers — a regular occurrence in the games — works because you, the player, are in on the joke and are participating in its telling.
Image: Amazon MGM Studios
The show’s sets and styling resemble the Yakuza games more closely than its story does.
But while Yakuza’s heartfelt story of redemption and its aesthetics as a Japanese gangster thriller translated well to TV, its over-the-top goofiness doesn’t. The story can’t bear that level of irreverence because there’s no player driving the action. Cutting from a moment of extreme violence to Kiryu at the Kamurocho batting cages, while a totally authentic representation of the games, would have created a tonal whiplash that would have taken even the most diehard Yakuza fan out of the show.
But dispensing with comedy in exchange for drama does mean the show gets a bit tedious in its later episodes. Paramount’s Halo series also bears no resemblance to its source material, but it was interesting as hell (and canceled far too soon) because it was willing to use familiar characters in totally new narratives. Like a Dragon adds some new characters and remixes familiar story elements, but it’s basically the same story I’ve experienced before in the games.
Kiryu fighting grown men in diapers — a regular occurrence in the games — works because you, the player, are in on the joke
A lot of video game adaptations fail because they seem to operate from the premise that being just like the game is entertainment enough. The story gets twisted to fit all the little details that’ll make a fan sit up and say, “I get that reference,” leading to a boring, annoying mess, like when Doom shoehorned in that nausea-inducing first-person sequence. But Like a Dragon works precisely because it didn’t go for being a straight-up recreation of the games. It probably won’t do for Yakuza what Fallout did for… well, Fallout. But Like a Dragon is made better because it puts being good TV first over being a faithful adaptation.
The confusing state of Apple Intelligence
Image: Alex Parkin / The Verge
The first bits of Apple Intelligence are starting to show up on people’s phones. The features in iOS 18.1 are fairly basic: summarizing messages, writing emails, that kind of thing. Apple is already letting developers play with iOS 18.2, though, which looks like a much more substantial update. Meanwhile, the company is about to launch a bunch of new M4-powered and AI-focused Macs, and launched the iPad Mini this week. There’s a lot happening all at once, and it’s a lot to make sense of.
On this episode of The Vergecast, we try and make sense of it. The Verge’s Richard Lawler joins us as we talk through the small changes in 18.1 and the much bigger changes in 18.2, debate whether Tim Cook can really use every Apple product every day, and wonder what might be coming from Apple’s week of Mac announcements.
After that, we get into the other news in the world of AI. Anthropic built a model that can use your computer for you, which is both cool and horrifying and is also the main goal of practically every company in AI. Humane made its AI Pin cheaper, is working on licensing its operating system to other companies, and has some big questions to ask. Perplexity is under fire for copyright reasons. Watermarking AI images still doesn’t work. It’s all a lot.
Finally, in the lightning round, we talk about the Boox Palma 2, T-Mobile changing the “lifetime” deal it made with older subscribers, and the silly fight against the FTC’s new click to cancel rule. Please, don’t make us make phone calls.
If you want to know more about everything we discuss in this episode, here are some links to get you started, beginning with Apple AI:
Apple iPad Mini 2024 review: missing pieces
iOS 18.2 will let everyone set new default phone and messaging apps
Apple’s first iOS 18.2 beta adds more AI features and ChatGPT integration
Where are the iPhone’s WebKit-less browsers?
Apple teases ‘week’ of Mac announcements starting Monday
Apple is preparing an M4 MacBook Air update for early next year
Tim Cook says he uses every Apple product every day — how does that work?
From The Wall Street Journal: Tim Cook on Why Apple’s Huge Bets Will Pay Off
And in other AI news:
Anthropic’s latest AI update can use a computer on its own
Humane slashes the price of its AI Pin after weak sales
Apple is ‘concerned’ about AI turning real photos into ‘fantasy’
Google Photos will soon show you if an image was edited with AI
News Corp sues Perplexity for ripping off WSJ and New York Post
Character.AI and Google sued after chatbot-obsessed teen’s death
Kevin Bacon, Kate McKinnon, and other creatives warn of ‘unjust’ AI threat
Google open-sourced its watermarking tool for AI-generated text
And in the lightning round:
David Pierce’s pick: The Boox Palma 2 has a faster processor and adds a fingerprint reader
Richard Lawler’s pick: Seniors are PISSED that T-Mobile won’t honor its “lifetime” price guarantee.
Nilay Patel’s pick: Guess who’s suing the FTC to stop ‘click to cancel’
Adobe execs say artists need to embrace AI or get left behind
Cath Virginia / The Verge | Photos by Getty Images
Adobe is going all in on generative AI models and tools, even if that means turning away creators who dislike the technology. Artists who refuse to embrace AI in their work are “not going to be successful in this new world without using it,” says Alexandru Costin, vice president of generative AI at Adobe.
In an interview with The Verge, Costin said that he “isn’t aware” of any plans for Adobe to launch products that don’t include generative AI for creators who prefer to manually complete tasks or oppose how AI is changing the creative industry.
“We have older versions of our products that don’t use gen AI, but I wouldn’t recommend using them,” Costin said. “Our goal is to make our customers successful, and we think that in order for them to be successful, they need to embrace the tech.”
And according to Adobe’s President of Digital Media, David Wadhwani, the company is unlikely to accommodate creators who think otherwise.
“We’ve always innovated with conviction, and we believe in the conviction of what we’re doing here,” said Wadhwani, acknowledging that some creatives have loudly criticized Adobe’s adoption of generative AI technology. “People will either agree with that conviction or they won’t, but we think our approach is the one that wins frankly in the short term, but certainly in the long term.”
Image: Adobe
Most of Adobe’s generative AI tools have very focused purposes, like Photoshop’s new Remove Distractions feature.
Adobe is in a difficult position — while many of its customers, particularly businesses and large creative teams, are hungry for AI features that can increase productivity, many artists openly detest the technology and fear how it will impact their livelihoods. Given the demand already exists, however, Adobe would be risking its dominant position in the creative software market if it ignored what many customers are asking for. If Adobe doesn’t develop these tools, other companies will, and they may not make the effort to do so in a way that respects artists’ work.
There are also communities of people online who harbor an extreme hatred of AI regardless of how it’s been applied, and will go out of their way to condemn and avoid interacting with it. For example, when a “shot-for-shot” remake of the Princess Mononoke movie trailer made with Kling recently went viral, it was briefly taken offline by its creator following intense backlash from fans of the original Hayao Miyazaki classic who felt the video was disrespectful or outright ugly.
“I’ve heard Miyazaki is anti-AI. That’s okay.”
… Excuse you? To say that in the same breath as the word “ethical”? And to call a shot-for-shot remake “creating a new world”?
Zero creativity, zero respect, and zero concept of what art is.
You’re not an artist — you’re a fraud. https://t.co/bchmaA7UiA pic.twitter.com/wO8q69C4PK
— Swann Grey ❄️ swannsvoice.bsky.social (@swannsvoice) October 4, 2024
But the generative AI features like those powered by Adobe’s Firefly models are the most adopted products Adobe has ever released, according to Wadhwani, which is all the signal the company needs to continue on the same path. There are plenty of generative AI models that already compete with Adobe’s Firefly lineup, from both heavy hitters like OpenAI and Google, and smaller niche startups that are trying to carve out their own place in the industry. And in many cases, Adobe is the one playing catch up. The forthcoming “Project Concept” collaborative canvas, which also includes text-to-image tools and an AI remixing feature, is similar to existing apps like Figma’s FigJam and Kaiber’s Superstudio, for example.
Adobe says it aims to implement AI in a way that gives artists more time to focus on actually being creative rather than replacing them entirely, such as making tools more efficient and removing tedious tasks like resizing or masking objects. The company is essentially trying to appeal to both sides by giving its AI tools very specific purposes inside its Creative Cloud applications, rather than pitching them as a means to replace every aspect of content creation.
“If you just rely on AI for all this stuff, you’re going to end up with a lot more content that looks like the same content everyone else is making.”
“We think that demand for content is insatiable. We also think that human creativity will be a critical part of it,” said Wadhwani. “If you just rely on AI for all this stuff, you’re going to end up with a lot more content that looks like the same content everyone else is making.”
What we’re likely to see is a greater divide between smaller artists and the wider creative industry. The demand for effectively every kind of content, from images and copy for advertising to the TV shows and other media we consume, is growing rapidly. An Adobe survey reports that it doubled between 2021 and 2023 and could grow by as much as 2,000 percent by 2025, which is pushing companies to find new ways to affordably increase production.
Generative AI tools — many of which promise to automate repetitive or technically challenging tasks — are a highly appealing solution to meet such demands. But plenty of people still value the work that goes into manual creative processes, and I don’t see that going away entirely.
Image: Adobe
Adobe now has a generative AI model that can produce video clips from text descriptions, which may eventually impact cinematographers, animators, and VFX artists.
“I think there will be a thirst for artists who do things by hand,” said Wadhwani. “In the last decade I can take a picture and run it through a process that makes it look like a painting, but I’m not going to value that ‘painting’ the same way I would an artist who actually took the time to make a real painting.”
There’s little doubt, though, that generative AI is changing the creative landscape. Adobe says the technology will create new jobs, but those jobs will be different, and some specialized roles may disappear entirely. It’s also just difficult to avoid AI art these days — platforms like Etsy that were built for creators to sell handmade wares are now inundated with it, and it’s harder for artists to find exposure online now that they have to compete with AI content farms.
Adobe is the dominant provider of creative design software, and few other companies offer a similarly connected ecosystem of products. That makes it hard for customers to simply jump ship if they don’t agree with the direction it’s taking, even if it is trying to be considerate about how generative AI is implemented. But if its embrace of AI ruffles enough feathers, that could open the door for new competitors to win over the users Adobe is leaving behind.
And if the backlash from online creators is any indication, that’s a sizable market that Adobe is at risk of losing. It seems Adobe just thinks the opportunity that AI adopters represent is even larger.
Cath Virginia / The Verge | Photos by Getty Images
Adobe is going all in on generative AI models and tools, even if that means turning away creators who dislike the technology. Artists who refuse to embrace AI in their work are “not going to be successful in this new world without using it,” says Alexandru Costin, vice president of generative AI at Adobe.
In an interview with The Verge, Costin said that he “isn’t aware” of any plans for Adobe to launch products that don’t include generative AI for creators who prefer to manually complete tasks or oppose how AI is changing the creative industry.
“We have older versions of our products that don’t use gen AI, but I wouldn’t recommend using them,” Costin said. “Our goal is to make our customers successful, and we think that in order for them to be successful, they need to embrace the tech.”
And according to Adobe’s President of Digital Media, David Wadhwani, the company is unlikely to accommodate creators who think otherwise.
“We’ve always innovated with conviction, and we believe in the conviction of what we’re doing here,” said Wadhwani, acknowledging that some creatives have loudly criticized Adobe’s adoption of generative AI technology. “People will either agree with that conviction or they won’t, but we think our approach is the one that wins frankly in the short term, but certainly in the long term.”
Image: Adobe
Most of Adobe’s generative AI tools have very focused purposes, like Photoshop’s new Remove Distractions feature.
Adobe is in a difficult position — while many of its customers, particularly businesses and large creative teams, are hungry for AI features that can increase productivity, many artists openly detest the technology and fear how it will impact their livelihoods. Given the demand already exists, however, Adobe would be risking its dominant position in the creative software market if it ignored what many customers are asking for. If Adobe doesn’t develop these tools, other companies will, and they may not make the effort to do so in a way that respects artists’ work.
There are also communities of people online who harbor an extreme hatred of AI regardless of how it’s been applied, and will go out of their way to condemn and avoid interacting with it. For example, when a “shot-for-shot” remake of the Princess Mononoke movie trailer made with Kling recently went viral, it was briefly taken offline by its creator following intense backlash from fans of the original Hayao Miyazaki classic who felt the video was disrespectful or outright ugly.
“I’ve heard Miyazaki is anti-AI. That’s okay.”
… Excuse you? To say that in the same breath as the word “ethical”? And to call a shot-for-shot remake “creating a new world”?
Zero creativity, zero respect, and zero concept of what art is.
You’re not an artist — you’re a fraud. https://t.co/bchmaA7UiA pic.twitter.com/wO8q69C4PK
— Swann Grey ❄️ swannsvoice.bsky.social (@swannsvoice) October 4, 2024
But the generative AI features like those powered by Adobe’s Firefly models are the most adopted products Adobe has ever released, according to Wadhwani, which is all the signal the company needs to continue on the same path. There are plenty of generative AI models that already compete with Adobe’s Firefly lineup, from both heavy hitters like OpenAI and Google, and smaller niche startups that are trying to carve out their own place in the industry. And in many cases, Adobe is the one playing catch up. The forthcoming “Project Concept” collaborative canvas, which also includes text-to-image tools and an AI remixing feature, is similar to existing apps like Figma’s FigJam and Kaiber’s Superstudio, for example.
Adobe says it aims to implement AI in a way that gives artists more time to focus on actually being creative rather than replacing them entirely, such as making tools more efficient and removing tedious tasks like resizing or masking objects. The company is essentially trying to appeal to both sides by giving its AI tools very specific purposes inside its Creative Cloud applications, rather than pitching them as a means to replace every aspect of content creation.
“We think that demand for content is insatiable. We also think that human creativity will be a critical part of it,” said Wadhwani. “If you just rely on AI for all this stuff, you’re going to end up with a lot more content that looks like the same content everyone else is making.”
What we’re likely to see is a greater divide between smaller artists and the wider creative industry. The demand for effectively every kind of content, from images and copy for advertising, to the TV shows and other media we consume, is growing rapidly. An Adobe survey reports that it increased two-fold between 2021-2023, and could increase up to 2000 percent by 2025, which is pushing companies to find new ways to affordably increase production.
Generative AI tools — many of which promise to automate repetitive or technically challenging tasks — are a highly appealing solution to meet such demands. But plenty of people still value the work that goes into manual creative processes, and I don’t see that going away entirely.
Image: Adobe
Adobe now has a generative AI model that can produce video clips from text descriptions, which may eventually impact cinematographers, animators, and VFX artists.
“I think there will be a thirst for artists who do things by hand,” said Wadhwani. “In the last decade I can take a picture and run it through a process that makes it look like a painting, but I’m not going to value that ‘painting’ the same way I would an artist who actually took the time to make a real painting.”
There’s little doubt, though, that generative AI is changing the creative landscape. Adobe says the technology will create new jobs, but those jobs will be different, and some specialized roles may disappear entirely. It’s also just hard to avoid AI art these days: platforms like Etsy, built for people to sell handmade wares, are now inundated with it, and it’s harder for artists to find exposure online now that they have to compete with AI content farms.
Adobe is the dominant provider of creative design software, and few other companies offer a similarly connected ecosystem of products. That makes it hard for customers to simply jump ship if they don’t agree with the direction it’s taking, even if it is trying to be considerate about how generative AI is implemented. But if its endorsement of AI ruffles enough feathers, that could open the door for new competitors to win over the users Adobe leaves behind.
And if the backlash from online creators is any indication, that’s a sizable market Adobe is at risk of losing. It seems Adobe just thinks the opportunity AI adopters present is even larger.
The company behind Arc is now building a second, much simpler browser
One of the many prototypes The Browser Company is building for its next browser. | Image: The Browser Company
Stop me if this sounds familiar: The Browser Company is building a browser that it thinks can make your internet life a little more organized, a little more useful, and maybe even a little more delightful. It has new ideas about tabs, and what your browser can do on your behalf.
I’ve heard this story before! But the browser that Browser Company CEO Josh Miller wants to talk about when he calls me on Thursday isn’t Arc, the product he and his team have been working on for the last five years. It’s not Arc 2.0, either, even though Miller has been talking publicly about Arc 2.0 for a while now. It’s an entirely new browser. And for Miller and The Browser Company, it’s a chance to get back to building the future of browsers they set out to create in the first place.
A strange thing has happened over the last couple of years, Miller says. Arc has grown fast — users quadrupled this year alone — but it has also become clear that Arc is never going to be a truly mainstream product. It’s too complicated, too different, too hard to get into. “It’s just too much novelty and change,” Miller says, “to get to the number of people we really want to get to.” User interviews and data have convinced the company that this is a power-user tool, and always will be.
On the other hand, the people who use Arc tend to love Arc. They love the sidebar, they love having spaces and profiles, they love all the customization options. Generally speaking, those users have also settled into Arc — Miller says they don’t want new features as much as they just want their browser to be faster, smoother, more secure. And fair enough!
So The Browser Company faced a situation many companies encounter: it had a well-liked product that was never going to be a game-changer. Rather than try to build the next thing into the current thing, and risk both alienating the people who like it and never reaching the people who don’t, the company decided to just build something new.
Arc is not dying, Miller says. He says that over and over, in fact, even after I tell him the YouTube video the company just released sounds like the thing companies say right before they kill a product. It’s just that Arc won’t change much anymore. It’ll get stability updates and bug fixes, and there’s a team at The Browser Company dedicated to those. “In that sense,” Miller says, “it feels like a complete-ish product.” Most of the team’s energy and time will now be dedicated to starting from scratch.
“Arc was basically this front-end, tab management innovation,” Miller says. “People loved it. It grew like a weed. Then it started getting slow and started crashing a lot, and we felt bad, and we had to learn how to make it fast. And we kind of lost sight, in some ways, of the fact that we’ve got to do the operating system part.”
The plan this time is to build not just a different interface for a browser, but a different kind of browser entirely — one that is much more proactive, more powerful, more AI-centric, more in line with that original vision. Call it the iPhone of web browsers, or the “internet computer,” or whatever other metaphor you like. The idea is to turn the browser into an app platform. Miller still wants to do it, and he wants to do it for everyone.
What does that look like? Miller is a bit vague on the details. The new browser, which Miller intimates could launch as soon as the beginning of next year, is designed to come with no switching costs, which means among other things that it will have horizontal tabs and fewer ideas about organization. The idea is to “make the first 90 seconds effortless” in order to get more people to switch. And then, slowly, to reveal what this new browser can do.
Miller has a couple of favorite examples of how a browser might help you get stuff done, which he’s shared with me, on Decoder, and elsewhere in recent months. There’s the teacher who spends hours copying and pasting data between enterprise apps; the Shopify sellers who spend too much time looking up order numbers and then pasting them into customer-support emails. Those are the sorts of things that a browser, with access to all your web apps and browsing data, could begin to do on your behalf. And with AI tools like Anthropic’s new “Computer use” feature, that kind of automation is starting to become possible.
Designing a browser that is both accessible to everyone and a completely new thing won’t be easy. The Browser Company tried it once already, and ended up here. But Miller feels good about having built a good browser over the last five years. Now it’s time to get back to the real job.
OpenAI plans to release its next big AI model by December
Image: Cath Virginia / The Verge; Getty Images
OpenAI plans to launch Orion, its next frontier model, by December, The Verge has learned.
Unlike the release of OpenAI’s last two models, GPT-4o and o1, Orion won’t initially be released widely through ChatGPT. Instead, OpenAI is planning to grant access first to companies it works closely with in order for them to build their own products and features, according to a source familiar with the plan.
Another source tells The Verge that engineers inside Microsoft — OpenAI’s main partner for deploying AI models — are preparing to host Orion on Azure as early as November. While Orion is seen inside OpenAI as the successor to GPT-4, it’s unclear if the company will call it GPT-5 externally. As always, the release plan is subject to change and could slip. OpenAI declined to comment for this story.
Orion had previously been teased by one OpenAI executive as potentially up to 100 times more powerful than GPT-4; it’s separate from the o1 reasoning model OpenAI released in September. The company’s goal is to combine its LLMs over time to create an even more capable model that could eventually be called artificial general intelligence, or AGI.
It was previously reported that OpenAI was using o1, code named Strawberry, to provide synthetic data to train Orion. In September, OpenAI researchers threw a happy hour to celebrate finishing training the new model, a source familiar with the matter tells The Verge.
i love being home in the midwest.
the night sky is so beautiful.
excited for the winter constellations to rise soon; they are so great.
— Sam Altman (@sama) September 14, 2024
That timing lines up with a cryptic post on X by OpenAI CEO Sam Altman, in which he said he was “excited for the winter constellations to rise soon.” If you ask ChatGPT o1-preview what Altman’s post is hiding, it will tell you that he’s hinting at the word Orion, which is the winter constellation that’s most visible in the night sky from November to February.
Screenshot by Tom Warren / The Verge
Even ChatGPT thinks Sam Altman is teasing Orion.
The release of this next model comes at a crucial time for OpenAI, which just closed a historic $6.6 billion funding round that requires the company to restructure itself as a for-profit entity. The company is also experiencing significant staff turnover: CTO Mira Murati just announced her departure along with Bob McGrew, the company’s chief research officer, and Barret Zoph, VP of post training.
Perplexity blasts media as ‘adversarial’ in response to copyright lawsuit
The Verge
AI startup Perplexity, which offers an AI search engine, published a blog post today pushing back on News Corp’s lawsuit against the company.
Perplexity has recently come under significant scrutiny following accusations that it scraped content without permission, and News Corp, the parent company of the New York Post and of Wall Street Journal owner Dow Jones, alleged that Perplexity’s search engine “copies on a massive scale.”
Perplexity, in its response today, argues that news organizations like News Corp that have filed lawsuits against AI companies “prefer to live in a world where publicly reported facts are owned by corporations, and no one can do anything with those publicly reported facts without paying a toll.”
No one, including corporations, owns facts. Copyright can, however, cover how facts are expressed — in other words, the material that News Corp is suing over. (Previously, Forbes accused Perplexity of publishing “eerily similar wording” and “some entirely lifted fragments” from its stories.)
Perplexity thinks the lawsuit “reflects an adversarial posture between media and tech that is — while depressingly familiar — fundamentally shortsighted, unnecessary, and self-defeating.” The company says there are “countless things we would love to do beyond what the default application of law allows,” and it points to the revenue-sharing program it has launched in partnership with publications like Time, Der Spiegel, and Fortune as something it’s proud of. It also says the facts alleged in News Corp’s lawsuit are “misleading at best.”
When reached for comment, News Corp shared the same statement from CEO Robert Thomson that it shared on Monday:
Perplexity perpetrates an abuse of intellectual property that harms journalists, writers, publishers and News Corp. The perplexing Perplexity has willfully copied copious amounts of copyrighted material without compensation, and shamelessly presents repurposed material as a direct substitute for the original source. Perplexity proudly states that users can “skip the links” — apparently, Perplexity wants to skip the check.
We applaud principled companies like OpenAI, which understands that integrity and creativity are essential if we are to realise the potential of Artificial Intelligence. Perplexity is not the only AI company abusing intellectual property and it is not the only AI company that we will pursue with vigor and rigor. We have made clear that we would rather woo than sue, but, for the sake of our journalists, our writers and our company, we must challenge the content kleptocracy.
Update, October 24th: Added statement from News Corp.