verge-rss

Another online pharmacy bypasses the FDA to offer cut-rate weight loss drugs

Hims & Hers is the latest pharmacy to offer GLP-1 drugs from compounding pharmacies. | Illustration by Hugo Herrera for The Verge

Hims & Hers Health, one of the online pharmacies that got its start prescribing dick pills, is now offering knockoff versions of GLP-1 weight loss drugs. Hims & Hers says it will offer drugs that mimic Ozempic and Wegovy, the active ingredient of which is semaglutide.

The copycat versions are made by compounding pharmacies. The formulations aren’t the same as the FDA-approved versions of the drug and haven’t been directly evaluated by the FDA, either. But they’re cheaper than the real thing: $199 a month, compared to the branded version, which can cost more than $1,000 a month without insurance.

Compounding pharmacies can make knockoff versions of branded drugs when they are in shortage, as the GLP-1 drugs — prescribed for diabetes and weight loss — currently are. The FDA has already received reports of adverse events for compounded versions of semaglutide.

Hims & Hers says it “conducted extensive research for over a year” into finding a supplier, but does not name the one it chose to partner with. “Over the last year, we have grown in our conviction — based on our medical experts’ evaluation and the strength of our infrastructure — that if done properly, compounded GLP-1s are safe and effective,” the company said in its statement.

Hims & Hers introduced its weight loss program in December 2023, according to an investor presentation. Its weight loss program costs $79 a month, and is expected to “deliver $100m+” in 2025. Hims & Hers makes most of its money through subscriptions; more than 90 percent of its revenue is “recurring.” Expanding its number of subscribers is how it plans to grow. GLP-1 weight-loss drugs must be taken continuously in order to sustain weight loss; one study has shown that people regain two-thirds of the pounds they lost once they quit semaglutide.

In the first quarter of 2024, the company added “a record 172k net new subscribers,” it said in its shareholder letter. The company has splashed out on TV advertising during NBA and NFL games, as well as Keeping Up with the Kardashians and The Bachelorette.

Ro, another online pharmacy that started with dick pills, is also already prescribing compounded versions of these drugs, Bloomberg reports. Ro previously advertised weight-loss drugs on the New York City subway system.

Microsoft announces Copilot Plus PCs with built-in AI hardware

Microsoft CEO Satya Nadella introduces the Copilot Plus PC branding. | Image: Allison Johnson

Microsoft is making a major push to put AI into laptops. It’s introducing a new branding today called “Copilot Plus PCs” that’ll highlight when Windows laptops come with built-in AI hardware and support for AI features across the operating system.

All of Microsoft’s major laptop partners will offer Copilot Plus PCs, Microsoft CEO Satya Nadella said at an event at the company’s headquarters on Monday. That includes Dell, Lenovo, Samsung, HP, Acer, and Asus; Microsoft is also introducing two of its own as part of the Surface line. And while Microsoft is also making a big push to bring Arm chips to Windows laptops today, Nadella said that laptops with Intel and AMD chips will offer these AI features, too.

“We get to reimagine the platform that fuels our work.”

The AI capabilities will be possible thanks to a neural processor included with the laptops. One of the flagship features it’ll power is “Recall,” which is supposed to use AI to create a searchable “photographic memory” of everything you’ve done and seen on your PC. The laptops will run more than 40 AI models as part of Windows 11 to power these new features. Microsoft’s built-in AI assistant, Copilot, will also gain support for OpenAI’s GPT-4o model, which was introduced last week.

Yusuf Mehdi, the Microsoft executive in charge of Windows, said the new laptops will be “58 percent faster” than a MacBook Air with an M3 processor and have battery life that lasts “all day.” Mehdi didn’t make it clear, however, whether this will be true of all Copilot Plus PC laptops or just the models that make the switch to Qualcomm’s Arm-based processors. Microsoft expects 50 million laptops to be sold over the next year under the Copilot Plus PC branding.

Photo by Allison Johnson / The Verge
Copilot Plus PCs will require at least 16GB of RAM, 256GB of SSD storage, and an NPU.

Copilot Plus PCs will have certain spec requirements to make sure they can deliver the performance Microsoft is promising. They’ll need to have at least a 256GB SSD, an integrated neural processor, and 16GB of RAM — double what the MacBook Air starts at. The Arm-based models with Qualcomm chips are quoted as having battery life that supports “up to 15 hours of web browsing.”

Microsoft is pitching these devices as the start of a new era of Windows laptops, and it might not be all talk. The shift to Arm-based chips — which Microsoft has tried and failed to achieve in the past — could meaningfully boost the battery life on Windows laptops. And the new AI features are designed to work across processor hardware. It’s two big bets on unproven hardware and software, but they have the potential to be transformative if they work.

“Today is kind of a special day. We get to reimagine the platform that fuels our work and passion … on a new category of PCs,” Mehdi said at the event.

Developing…

Microsoft’s Surface and Windows AI event live blog: it’s Arm time

Photo by Amelia Holowaty Krales / The Verge

Are you ready for Microsoft’s latest Windows on Arm push? That’s what we’re expecting at today’s Surface and Windows AI event.

Rumors suggest Microsoft will unveil new Arm-powered Surface Pro 10 and Surface Laptop 6 devices and a host of new AI features for Windows. Microsoft has been building up to this moment for quite some time, so expect to hear a lot about its transition to Windows on Arm.

Things kick off at 10AM PT / 1PM ET. Microsoft isn’t livestreaming this event, so follow our live blog for all the very latest as it happens.

A rundown of what’s new and improved in Android 15 — so far

Illustration by Samar Haddad / The Verge

The annual refreshes of Android and iOS are always worth looking out for, and in the wake of Google I/O 2024, we now have the second beta release of Android 15, so it’s a good opportunity to round up everything coming to the OS this year.

If you want to get involved in the beta testing — bearing in mind that these betas will have bugs and issues — head here to see if your device is eligible. Google Pixel owners can sign up, and a select number of phones from third-party manufacturers are included in the program, too, including handsets from OnePlus and Nothing (though not Samsung as yet).

Bear in mind that features will be added (and quite possibly removed) in the months to come as we head toward the full launch of Android 15, which will be around October if Google follows the Android 14 schedule. But for now, here’s what’s new and improved in Android 15 so far.

Better multitasking

Android 15 will improve the multitasking experience on tablets and other large-screen devices by enabling you to pin the taskbar permanently on the screen for a more desktop-like experience. What’s more, split-screen app combinations — like Gmail and YouTube — can be saved to bring back later. These app pairs can be pinned to the taskbar, too.

Private space

Screenshot: Google
Your private space can use the same lock as your handset.

Screenshot: Google
You can use a different Google account with your private space.

Android 15 is adding a new secure location on your phone — a private space — so you can lock away your most sensitive apps and the data inside them. If you use a Samsung phone, there’s already something similar called Secure Folder, but now it’s going to be baked into Android for all users.

It works by creating a new section in the app drawer that will need extra verification (like a passcode or fingerprint) to access. You can install any apps you like here, including separate instances of the Camera, Google Photos, and Google Chrome, for photos, videos, and web browsing that you really don’t want anyone else to see.

Predictive back

This oddly named feature means you see a quick preview of what you’re going back to when you use the universal back gesture (a swipe in from the side of the screen). So, for example, you might see a website you just left or the homescreen — the idea being that users know what they’re going back to before they complete the gesture.

Partial screen recording

New in Android 15 is the ability to record just part of the screen rather than all of it; this is handy if you’re putting together a tutorial, troubleshooting a problem, or recording your screen for any other reason. It’s available in Android’s own screen recorder tool, and developers can add it to their own apps as well.

Redesigned volume sliders

Screenshot: Google
Screen recordings can now just capture a single app.

Screenshot: Google
The volume sliders panel gets a redesign.

Perhaps not the most important change but one you’ll see a lot: the panel that appears when you tap the three dots on the bottom of the volume slider now takes up more room on-screen, making it easier to adjust volumes for media, alarms, and calls. You’re also able to access connected Bluetooth devices from the same screen.

Satellite messaging

It looks as though Android phones might finally get satellite messaging, just like the iPhone. We say “might” because while Android 15 will officially “extend platform support for satellite connectivity” in terms of software features, this connectivity is also going to have to be built into the hardware and presumably involve a satellite network partner.

Widget previews

Screenshot: Google
Richer widget previews are on the way.

Widgets are nothing new in Android, of course, but usually when you’re adding new ones to the homescreen, you just see generic examples of what the widgets look like. In Android 15, developers can add rich previews to widgets — so, for example, when you add a contact widget, you’ll be able to preview how it looks using one of your actual contacts.
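
For developers, Google has pointed to a generated-previews call on AppWidgetManager as the way to supply these richer previews. Here is a rough Kotlin sketch under that assumption: the setWidgetPreview call assumes the method Google has described for Android 15’s generated previews, the provider class name is hypothetical, and a built-in framework layout stands in for a real widget layout.

import android.appwidget.AppWidgetManager
import android.appwidget.AppWidgetProviderInfo
import android.content.ComponentName
import android.content.Context
import android.widget.RemoteViews

// Sketch only: "com.example.widgets.ContactWidgetProvider" is a hypothetical
// widget provider; a real app would reference its own AppWidgetProvider and layout.
fun publishContactWidgetPreview(context: Context) {
    val manager = context.getSystemService(AppWidgetManager::class.java)

    // Build a preview showing one of the user's actual contacts instead of a
    // generic placeholder (a framework layout keeps this sketch self-contained).
    val preview = RemoteViews(context.packageName, android.R.layout.simple_list_item_1)
    preview.setTextViewText(android.R.id.text1, "Alex Appleseed")

    // Hand the preview to the launcher's widget picker (Android 15 / API 35 and up).
    manager.setWidgetPreview(
        ComponentName(context.packageName, "com.example.widgets.ContactWidgetProvider"),
        AppWidgetProviderInfo.WIDGET_CATEGORY_HOME_SCREEN,
        preview,
    )
}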

Set custom vibrations

Something for those of you who always have your phones muted: Android apps have previously been able to set their own custom vibrations, but in Android 15, you’re going to be able to set these yourself for specific notification channels. So you can have one buzz for an email and two buzzes for a text, for example.
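
For context, apps have been able to declare per-channel vibration patterns through the NotificationChannel API since Android 8.0; what Android 15 layers on top is user control over those channels. Here is a minimal Kotlin sketch of how an app sets such patterns today, with made-up channel IDs, names, and patterns.

import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context

// Illustrative only: the channel IDs, names, and vibration patterns are made up.
fun registerMessagingChannels(context: Context) {
    val manager = context.getSystemService(NotificationManager::class.java)

    // One short buzz for email notifications.
    val email = NotificationChannel("email", "Email", NotificationManager.IMPORTANCE_DEFAULT).apply {
        enableVibration(true)
        vibrationPattern = longArrayOf(0, 250)             // delay, vibrate (milliseconds)
    }

    // Two buzzes for text messages.
    val texts = NotificationChannel("sms", "Text messages", NotificationManager.IMPORTANCE_HIGH).apply {
        enableVibration(true)
        vibrationPattern = longArrayOf(0, 250, 150, 250)   // delay, vibrate, pause, vibrate
    }

    manager.createNotificationChannels(listOf(email, texts))
}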

One-time password protection

You may have accounts that send you one-time passwords (OTPs) when you log in to prove you are who you say you are. In Android 15, notifications with these OTPs won’t show up on-screen, minimizing the risk of anyone stealing your passwords by looking over your shoulder or somehow recording your screen.

Anti-theft protection

Screenshots: Google
New security features hide one-time passwords and deter thieves.

Google is adding a bunch of features to Android to deter thieves. Theft Detection Lock, for example, will use AI to detect if your phone is snatched at speed and then automatically turn on the lock screen. This auto-lock will also kick in if your phone is off the grid for an extended period of time or if too many failed authentication attempts are made.

You’ll also be able to more easily lock your device remotely. All of these features are heading to devices running Android 10 or later at some point this year, but one is exclusive to Android 15: your device can’t be reset (a common tactic used by thieves) without access to your Google account credentials, which means only you will be able to do it.

More convenient passkeys

Android 15 is also bringing with it some useful tweaks to passkey support on your phone — that’s where you use a phone unlock method (like a fingerprint scan) rather than a password to get into your Google account. In the new OS, the account selection screen and confirmation screen are combined into one, so that’s one less screen to get through. Google is also adding a new restore feature to make it easier to transfer your credentials over to a new phone.

And more…

Android updates always include a bunch of tweaks and minor improvements that don’t necessarily grab a lot of attention but are welcome nevertheless. With Android 15, they include more efficient video processing, better handling of apps running in the foreground (a boost for battery life), security protections to stop malicious apps from hijacking tasks run by trustworthy apps, and the ability for Health Connect to pull in more data from more apps over a longer time period.

And even more…

Then there are the features that haven’t been announced but that diligent code diggers have found. As these updates are disabled and hidden away and yet to be mentioned by Google, we can’t promise they’ll make the final release of Android 15. But if you’re interested, they include a Samsung DeX-like desktop mode, a status page for the health of your phone’s storage, and an extra-dim mode that makes it easier to read a phone screen in very dark environments.

It’s possible that Google will decide against including some of these features in the full release of Android 15, but we’re likely to get a few more updates and announcements before then — we’re by no means on the final version of the software yet.

Apple’s time travel comedy Time Bandits starts streaming in July

Image: Apple

You won’t need to travel too far into the future to see Apple TV Plus’ next adventure series. The streamer just announced that Time Bandits, an adaptation of the 1981 Terry Gilliam film of the same name, will premiere on July 24th. The TV version of the story will span 10 episodes and, according to Apple, is about “an unpredictable journey through time and space with a ragtag group of thieves and their newest recruit: an 11-year-old history buff named Kevin.”

The show has some comedy bona fides at least, with Jemaine Clement and Iain Morris serving as co-showrunners and Taika Waititi also on board as co-creator and executive producer. The cast, meanwhile, is led by Lisa Kudrow and young actor Kal-El Tuck and also features the likes of Tadhg Murphy, Charlyne Yi, and Roger Nsengiyumva. You can get a first look at some of the main cast in the photo above.

Here’s the full description, according to Apple:

Guided by Lisa Kudrow, the eccentric crew of bandits embark on epic adventures while evil forces threaten their conquests and life as they know it. As the group transports through time and space, the gang stumbles upon fascinating worlds of the distant past while seeking out treasure, depending on Kevin to shed light on each situation. The Time Bandits witness the creation of Stonehenge, see the Trojan Horse in action, escape dinosaurs in the prehistoric ages, wreak havoc during medieval times, experience the ice age, ancient civilizations, the Harlem Renaissance, and much more along the way.

Time Bandits will continue a steady run of genre shows on Apple TV Plus. The not-quite-what-it-seems detective series Sugar just wrapped up its first season, the multiversal Dark Matter is ongoing, and July will also see the premiere of a robot mystery show called Sunny.

Google CEO Sundar Pichai on AI-powered search and the future of the web

Photo illustration by The Verge

The head of Google sat down with Decoder last week to talk about the biggest advancements in AI, the future of Google Search, and the fate of the web. Today, I’m talking to Alphabet and Google CEO Sundar Pichai, who joined the show the day after the Google I/O developer conference last week. Google’s focus during the conference was AI, of course — Google is building AI into virtually all of its products. My personal favorite is the new AI search in Google Photos that lets you ask things like, “What’s my license plate number?” and get an answer back from your entire photo library. All in all, Google executives said “AI” more than 120 times during the keynote — we counted.
But there was one particular announcement at I/O that’s sending shockwaves around the web: Google is rolling out what it calls AI Overviews in Search to everyone in the United States by this week and around the world to more than a billion users by the end of the year. That means when you search for something on Google, you’ll get AI-powered results at the top of the page for a number of queries. The company literally describes this as “letting Google do the Googling for you.” Google has been testing this for a year now, in what it called the Search Generative Experience, so you may have already seen a version of this — but now it’s here, and it will change the web as we know it.

Until now, Google’s ecosystem has been based on links to everyone else’s content: you type something into a search box, you see some links, and you click one. That sends traffic to websites, which their owners can try to monetize in various ways, and ideally everyone wins.
Google is by far the biggest source of traffic on the web today, so if it starts keeping that traffic for itself by answering questions with AI, that will change or potentially even destroy the internet ecosystem as we know it. The News/Media Alliance, which represents a bunch of fancy news publishers, put out a press release calling AI previews in search “catastrophic to our traffic.”
If you’re a Decoder listener, you’ve heard me talk about this idea a lot over the past year: I call it Google Zero, and I’ve been asking web and media CEOs what would happen to their businesses if their Google traffic were to go to zero. If AI chatbots and AI-powered search results are summarizing everything for you, why would you go to a website? And if we all stop going to websites, what’s the incentive to put new content on the web? What’s going to stop shady characters from flooding the web with AI-generated spam to try and game these systems? And if we succeed in choking the web with AI, what are all these bots going to summarize when people ask them questions?
Sundar has some ideas. For one, he’s not convinced the web, which he says he cares deeply about, is in all that much danger. You’ll hear him mention Wired’s famous 2010 headline, “The Web Is Dead,” and he makes the argument that new, transformative technologies like AI always cause some short-term disruptions.
He says injecting AI into Search is about creating value for users, and those users are telling him that they find these new features to be helpful — and even clicking on links at higher rates in the AI previews. But he didn’t say where that leaves the people who put the content on the internet in the first place. We really sat with that idea for a while — and we talked a lot about the anger creative people feel toward AI systems training on their work.
I’ve talked to Sundar quite a bit over the past few years, and this was the most fired up I’ve ever seen him. You can really tell that there is a deep tension between the vision Google has for the future — where AI magically makes us smarter, more productive, and more artistic — and the very real fears and anxieties creators and website owners are feeling right now about how search has changed and how AI might swallow the internet forever. Sundar is wrestling with that tension.
One note: you’ll hear me say I think Sundar keeps making oblique references to OpenAI, which he pushes back on pretty strongly. I thought about it afterward, and it’s pretty clear he wasn’t just talking about OpenAI but also Meta, which has openly turned away from sending any traffic to any websites whatsoever and has been explicit that it doesn’t want to support news on its platforms at all anymore. I wish that had clicked for me during this conversation, because I would have asked about it more directly.
Okay, Google CEO Sundar Pichai. Here we go.

This transcript has been lightly edited for length and clarity.
Sundar Pichai, you are the CEO of both Alphabet and Google. Welcome to Decoder.
Nilay, good to be here.
I am excited to talk to you. I feel like I talk to you every year at Google I/O, and we talk about all the things you’ve announced. There’s a lot of AI news to talk about. As you know, I’m particularly interested in the future of the web, so I really want to talk about that with you, but I figured I’d start with an easy one.
Do you think language is the same as intelligence?
Wow, that’s not an easy question! I don’t think I’m the expert on it. I think language does encode a lot of intelligence, probably more than people thought. It explains the successes of large language models to a great extent. But my intuition tells me, as humans, there’s a lot more to the way we consume information than language alone. But I’d say language is a lot more than people think it is.
The reason I asked that question to start is: I look at the announcements at I/O with AI and what you’re doing, I look at your competitors with AI and what they’re doing, and everything is very language-heavy. It’s LLMs that have really led to this explosion of interest in innovation and investment, and I wonder if the intelligence is increasing at the same rate as the facility with language. I kind of don’t see it, to be perfectly honest. I see computers getting much better at language and actually in some cases getting dumber. I’m wondering if you see that same gap.
Yeah, it’s a great question. Part of the reason we made Gemini natively multimodal — and you’re beginning to see glimpses of it now but it hasn’t made its way fully into products yet — is so that with audio, video, text, images, and code, when we have multimodality working on the input and output side — and we are training models using all of that — maybe in the next cycle, that’ll encapsulate a lot more than just today, which is primarily text-based. I think that continuum will shift as we take in a lot more information that way. So maybe there’s more to come.
Last year the tagline was “Bold but responsible.” That’s Google’s approach. You said it again onstage this year. And then I look at our reactions to AI getting things wrong, and it seems like they’re getting more and more tempered over time.
I’ll give you an example. In the demos you had yesterday, you showed multimodal video search of someone trying to fix a broken film camera. And the answer was just wrong. The answer that was highlighted in the video was, “Just open the back of the film camera and jiggle it.” It’s like, well, that would ruin all of your film. No one who had an intelligent understanding of how that camera [worked] would suggest that.
I was talking to the team and, ironically, as part of making the video, they consulted with a bunch of subject matter experts who all reviewed the answer and thought it was okay. I understand the nuance. I agree with you. Obviously, you don’t want to expose your film by taking it outside of a darkroom. There are certain contexts in which it makes sense to do that. If you don’t want to break the camera and if what you’ve taken is not that valuable, it makes sense to do that.
You’re right. There is a lot of nuance to it. Part of what I hope Search serves to do is to give you a lot more context around that answer and allow people to explore it deeply. But I think these are the kinds of things for us to keep getting better at. But to your earlier question, look, I do see the capability frontier continuing to move forward. I think we are a bit limited if we were just training on text data, but we are all making it more multimodal. So I see more opportunities there.
Let’s talk about Search. This is the thing that I am most interested in — I think this is the thing that is changing the most. In an abstract way, it’s the thing that’s the most exciting. You can ask a computer a question, and it will just happily tell you an answer. That feels new. I see the excitement around it.
Yesterday, you announced AI Overviews are coming to Search. That’s an extension of what was called the Search Generative Experience, and it was just announced as rolling out to everyone in the United States. I would describe the reactions to that news from the people who make websites as fundamentally apocalyptic. The CEO of the News/Media Alliance said to CNN, “This will be catastrophic to our traffic.” Another media CEO forwarded me a newsletter and the headline was, “This is a death blow to publishers.” Were you expecting that kind of response to rolling out AI Overviews in Search?
I recall, in 2010, there were headlines that the web was dead. I’ve long worked on the web, obviously. I care deeply about it. When the transition from desktop to mobile happened, there was a lot of concern because people were like, “Oh, it’s a small screen. How will people read content? Why would they look at content?” We had started introducing what we internally called “Web Answers” in 2014, which are featured snippets outside [the list of links]. So you had questions like that.
I remain optimistic. Empirically, what we are seeing throughout the years, I think human curiosity is boundless. It’s something we have deeply understood in Search. More than any other company, we will differentiate ourselves in our approach even through this transition. As a company, we realize the value of this ecosystem, and it’s symbiotic. If there isn’t a rich ecosystem making unique and useful content, what are you putting together and organizing? So we feel it.
I would say, through all of these transitions, things have played out a bit differently. I think users are looking for high-quality content. The counterintuitive part, which I think almost always plays out, is [that] it’s not a zero-sum game. People are responding very positively to AI Overviews. It’s one of the most positive changes I’ve seen in Search based on metrics. But people do jump off on it. And when you give context around it, they actually jump off it. It actually helps them understand, and so they engage with content underneath, too. In fact, if you put content and links within AI Overviews, they get higher clickthrough rates than if you put it outside of AI Overviews.
But I understand the sentiment. It’s a big change. These are disruptive moments. AI is a big platform shift. People are projecting out, and people are putting a lot into creating content. It’s their businesses. So I understand the perspective [and] I’m not surprised. We are engaging with a lot of players, both directly and indirectly, but I remain optimistic about how it’ll actually play out. But it’s a good question. I’m happy to talk about it more.
I have this concept I call “Google Zero,” which is born of my own paranoia. Every referrer that The Verge has ever had has gone up and then it’s gone down, and Google is the last large-scale referrer of traffic on the web for almost every website now. And I can see that for a lot of sites, Google Zero is playing out. Their Google traffic has gone to zero, particularly independent sites that aren’t part of some huge publishing conglomerate. There’s an air purifier blog that we covered called HouseFresh. There’s a gaming site called Retro Dodo. Both of these sites have said, “Look, our Google traffic went to zero. Our businesses are doomed.”
Is that the right outcome here in all of this — that the people who care so much about video games or air purifiers that they started websites and made the content for the web are the ones getting hurt the most in the platform shift?
It’s always difficult to talk about individual cases, and at the end of the day, we are trying to satisfy user expectations. Users are voting with their feet, and people are trying to figure out what’s valuable to them. We are doing it at scale, and I can’t answer on the particular site—
A bunch of small players are feeling the hurt. Loudly, they’re saying it: “Our businesses are going away.” And that’s the thing you’re saying: “We’re engaging, we’re talking.” But this thing is happening very clearly.
It’s not clear to me if that’s a uniform trend. I have to look at data on an aggregate [basis], so anecdotally, there are always times when people have come in an area and said, “Me, as a specific site, I have done worse.” But it’s like an individual restaurant saying, “I’ve started getting fewer customers this year. People have stopped eating food,” or whatever it is. It’s not necessarily true. Some other restaurant might have opened next door that’s doing very well. So it’s tough to say.
From our standpoint, when I look historically even over the past decade, we have provided more traffic to the ecosystem, and we’ve driven that growth. You may be making a secondary point about small sites versus more aggregating sites, which is the second point you’re talking about. Ironically, there are times when we have made changes to actually send more traffic to the smaller sites. Some of those sites that complain a lot are the aggregators in the middle. So should the traffic go to the restaurant that has created a website with their menus and stuff or people writing about these restaurants? These are deep questions. I’m not saying there’s a right answer.
But you’re about to flip over the whole apple cart, right? You’re about to start answering some of these questions very directly. And where that content comes from in the future, I think you want the people who care the most to publish that information directly to be the thing that you synthesize.
I agree.
The incentives for that seem to be getting lower and lower — on the web, anyway.
I feel it’s the opposite. If anything, I feel like through AI Overviews, when you give people context, yes, there are times all people want is a quick answer and they bounce back. But overall, when we look at user journeys, when you give the context, it also exposes people to jumping-off points, and so they engage more. Actually, this is what drives growth over time. I look at desktop to mobile, and there were similar questions. In fact, there was a [magazine] cover I’m almost tempted to pull out, saying, “The web is dead.” There was a Google Zero argument 10 years ago. But you yourself made the point that it’s not an accident that we still remain as one of the largest referrers because we’ve cared about it deeply for a long, long time.
I look at our journey, even the last year through the Search Generative Experience, and I constantly found us prioritizing approaches that would send more traffic while meeting user expectations. We think through that deeply and we actually change our approach. If there are areas where we feel like we haven’t fully gotten it right, we are careful about rolling it out. But I think what’s positively surprising us is that people engage more, and that will lead to more growth over time for high-quality content.
There’s a lot of debate about what high-quality content is. At least in my experience, I value independent sources, I value smaller things, I want more authentic voices. And I think those are important attributes we are constantly trying to improve.
You mentioned that you think more people will click through links in AI Overviews. Liz [Reid] who runs Search had a blog post making the same claim. There’s no public data that says that is true yet. Are you going to release that data? Are you going to show people that this is actually happening?
On an aggregate, I think people rely on this value of the ecosystem. If people over time don’t see value, website owners don’t see value coming back from Google, I think we’ll pay a price. We have the right incentive structure. But obviously, look, we are careful about… there are a lot of individual variations, and some of it is users choosing which way to go. That part is hard to sort out. But I do think we are committed at an aggregate level to do the right thing.
I was reading some SEO community trade publications this morning responding to the changes, and one of the things that was pointed out was that, in Search Console, it doesn’t show you if the clicks are coming from a featured snippet or an AI Overview or just Google’s regular 10 blue links. Would you break that out? Would you commit to breaking that out so people can actually audit and verify and measure that the AI Overviews are sending out as much traffic as you say they are?
It’s a good question for the Search team. They think about this at a deeper level than I do. I think we are constantly trying to give more visibility, but also we want people to create content that’s good. And we are trying to rank it and organize it, so I think there’s a balance to be had. The more we spec it out, then the more people design for that. There’s a tradeoff there, so it’s not clear to me what the right answer is.
That tradeoff between what you spec out and say and what people make, that’s been the story of the web for quite some time. It had reached, I think, a steady state. Whether you thought that steady state was good or bad, it was at least at a steady state. Now, that state is changing — AI is obviously changing it.
The 10 blue link model, the old steady state, is very much based on an exchange: “We’re going to let you index our content. We’re going to [have] featured snippets. We’re going to let you see all of our information. In return, you will send us traffic.” That formed the basis of what you might call a fair-use argument. Google’s going to index this stuff, [and] there’s not going to be a lot of payments in the middle.
In the AI era, no one knows how that’s going to go. There are some major lawsuits happening. There are deals being made by Google and OpenAI for training data. Do you think it’s appropriate for Google to start making more deals to pay for data to train search results? Because those AI snippets are not really the same as the 10 blue links or anything else you’ve done in the past.
To be very clear, there’s a myth that Google’s search has been 10 blue links for — I look at our mobile experience — many, many years. And we have had answers, we allow you to refine questions, we’ve had featured snippets, and so on. The product has evolved significantly.
Having said that, as a company, even as we look at AI, we have done Google [News] Showcase, we have done licensing deals. To the extent there is value there, we obviously think there is a case for fair use in the context of beneficial, transformative use. I’m not going to argue that with you given your background. But I think there are cases in which we will see dedicated incremental value to our models, and we’ll be looking at partnerships to get at that. I do think we’ll approach it that way.
Let me ask this question in a different way. I won’t do too much fair-use analysis with you, I promise, as much as I like doing it.
There were some news reports recently that OpenAI had trained its video generation product, Sora, on YouTube. How did you feel when you heard that news?
Look, we don’t know the details. Our YouTube team is following up and trying to understand it. We have terms and conditions, and we would expect people to abide by those terms and conditions when you build a product, so that’s how I felt about it.
So you felt like they had potentially broken your terms and conditions? Or if they had, that wouldn’t have been appropriate?
That’s right.
The reason I asked that question — which is a much more emotional question — is okay, maybe that’s not appropriate. And what OpenAI has said is essentially “We’ve trained on publicly available information,” which means they found it on the web.
Most people don’t get to make that deal. They don’t have a YouTube team of licensing professionals who can say, “We have terms and conditions.” They don’t even have terms and conditions. They’re just putting their stuff on the internet. Do you understand why, emotionally, there’s the reaction to AI from the creative community — that it feels the same as you might have felt about OpenAI training on YouTube?
Absolutely. Look, be it website owners or content creators or artists, I can understand how emotional a transformation this is. Part of the reason you saw, even through Google I/O when we were working on products like music generation, we have really taken an approach by which we are working first to make tools for artists. We haven’t put a general-purpose tool out there for anyone to create songs.
The way we have taken that approach in many of these cases is to put the creator community as much at the center of it as possible. We’ve long done that with YouTube. Through it all, we are trying to figure out the right ways to approach this.
But it is a transformative moment as well, and there are other players in this. We are not the only player in the ecosystem. But, to your earlier question, yes, I understand people’s emotions about it. I definitely am very empathetic to how people are perceiving this moment.
They feel like it’s a taking — that they put work on the internet and the big companies are coming, taking it for free, and then making products that they are charging $20 a month for or that will lift their creative work and remix it for other people. The thing that makes it feel like a taking is [that] very little value accrues back to them.
That’s really the thing I’m asking about: how do you bring value back to them? How do you bring incentives back to the small creator or the independent business that’s saying, “Look, this feels like a taking.”
Look. [Sighs] The whole reason we’ve been successful on platforms like YouTube is we have worked hard to answer this question. You’ll continue to see us dig deep about how to do this well. And I think the players who end up doing better here will have more winning strategies over time. I genuinely believe that.
Across everything we do, we have to sort that out. Anytime you’re running a platform, it’s the basis on which you can build a sustainable long-term platform. Through this AI moment, over time, there’ll be players who will do better by the content creators that support their platforms, and whoever does it better will emerge as the winner. I believe that to be a tenet of these things over time.

Photo illustration by The Verge

The head of Google sat down with Decoder last week to talk about the biggest advancements in AI, the future of Google Search, and the fate of the web.

Today, I’m talking to Alphabet and Google CEO Sundar Pichai, who joined the show the day after the Google I/O developer conference last week. Google’s focus during the conference was AI, of course — Google is building AI into virtually all of its products. My personal favorite is the new AI search in Google Photos that lets you ask things like, “What’s my license plate number?” and get an answer back from your entire photo library. All in all, Google executives said “AI” more than 120 times during the keynote — we counted.

But there was one particular announcement at I/O that’s sending shockwaves around the web: Google is rolling out what it calls AI Overviews in Search to everyone in the United States this week and around the world to more than a billion users by the end of the year. That means when you search for something on Google, you’ll get AI-powered results at the top of the page for a number of queries. The company literally describes this as “letting Google do the Googling for you.” Google has been testing this for a year now, in what it called the Search Generative Experience, so you may have already seen a version of this — but now it’s here, and it will change the web as we know it.

Until now, Google’s ecosystem has been based on links to everyone else’s content: you type something into a search box, you see some links, and you click one. That sends traffic to websites, which their owners can try to monetize in various ways, and ideally everyone wins.

Google is by far the biggest source of traffic on the web today, so if it starts keeping that traffic for itself by answering questions with AI, that will change or potentially even destroy the internet ecosystem as we know it. The News/Media Alliance, which represents a bunch of fancy news publishers, put out a press release calling AI previews in search “catastrophic to our traffic.”

If you’re a Decoder listener, you’ve heard me talk about this idea a lot over the past year: I call it Google Zero, and I’ve been asking web and media CEOs what would happen to their businesses if their Google traffic were to go to zero. If AI chatbots and AI-powered search results are summarizing everything for you, why would you go to a website? And if we all stop going to websites, what’s the incentive to put new content on the web? What’s going to stop shady characters from flooding the web with AI-generated spam to try and game these systems? And if we succeed in choking the web with AI, what are all these bots going to summarize when people ask them questions?

Sundar has some ideas. For one, he’s not convinced the web, which he says he cares deeply about, is in all that much danger. You’ll hear him mention Wired’s famous 2010 headline, “The Web Is Dead,” and he makes the argument that new, transformative technologies like AI always cause some short-term disruptions.

He says injecting AI into Search is about creating value for users, and those users are telling him that they find these new features helpful — and are even clicking on links at higher rates in the AI previews. But he didn’t say where that leaves the people who put the content on the internet in the first place. We really sat with that idea for a while — and we talked a lot about the anger creative people feel toward AI systems training on their work.

I’ve talked to Sundar quite a bit over the past few years, and this was the most fired up I’ve ever seen him. You can really tell that there is a deep tension between the vision Google has for the future — where AI magically makes us smarter, more productive, and more artistic — and the very real fears and anxieties creators and website owners are feeling right now about how search has changed and how AI might swallow the internet forever. Sundar is wrestling with that tension.

One note: you’ll hear me say I think Sundar keeps making oblique references to OpenAI, which he pushes back on pretty strongly. I thought about it afterward, and it’s pretty clear he wasn’t just talking about OpenAI but also Meta, which has openly turned away from sending any traffic to any websites whatsoever and has been explicit that it doesn’t want to support news on its platforms at all anymore. I wish that had clicked for me during this conversation, because I would have asked about it more directly.

Okay, Google CEO Sundar Pichai. Here we go.

This transcript has been lightly edited for length and clarity.

Sundar Pichai, you are the CEO of both Alphabet and Google. Welcome to Decoder.

Nilay, good to be here.

I am excited to talk to you. I feel like I talk to you every year at Google I/O, and we talk about all the things you’ve announced. There’s a lot of AI news to talk about. As you know, I’m particularly interested in the future of the web, so I really want to talk about that with you, but I figured I’d start with an easy one.

Do you think language is the same as intelligence?

Wow, that’s not an easy question! I don’t think I’m the expert on it. I think language does encode a lot of intelligence, probably more than people thought. It explains the successes of large language models to a great extent. But my intuition tells me, as humans, there’s a lot more to the way we consume information than language alone. But I’d say language is a lot more than people think it is.

The reason I asked that question to start is: I look at the announcements at I/O with AI and what you’re doing, I look at your competitors with AI and what they’re doing, and everything is very language-heavy. It’s LLMs that have really led to this explosion of interest in innovation and investment, and I wonder if the intelligence is increasing at the same rate as the facility with language. I kind of don’t see it, to be perfectly honest. I see computers getting much better at language and actually in some cases getting dumber. I’m wondering if you see that same gap.

Yeah, it’s a great question. Part of the reason we made Gemini natively multimodal — and you’re beginning to see glimpses of it now, but it hasn’t made its way fully into products yet — is that with audio, video, text, images, and code, when we have multimodality working on the input and output side, and we are training models using all of that, maybe in the next cycle that’ll encapsulate a lot more than today, which is primarily text-based. I think that continuum will shift as we take in a lot more information that way. So maybe there’s more to come.

Last year the tagline was “Bold but responsible.” That’s Google’s approach. You said it again onstage this year. And then I look at our reactions to AI getting things wrong, and it seems like they’re getting more and more tempered over time.

I’ll give you an example. In the demos you had yesterday, you showed multimodal video search of someone trying to fix a broken film camera. And the answer was just wrong. The answer that was highlighted in the video was, “Just open the back of the film camera and jiggle it.” It’s like, well, that would ruin all of your film. No one who had an intelligent understanding of how that camera [worked] would suggest that.

I was talking to the team and, ironically, as part of making the video, they consulted with a bunch of subject matter experts who all reviewed the answer and thought it was okay. I understand the nuance. I agree with you. Obviously, you don’t want to expose your film by taking it outside of a darkroom. There are certain contexts in which it makes sense to do that. If you don’t want to break the camera and if what you’ve taken is not that valuable, it makes sense to do that.

You’re right. There is a lot of nuance to it. Part of what I hope Search serves to do is to give you a lot more context around that answer and allow people to explore it deeply. But I think these are the kinds of things for us to keep getting better at. But to your earlier question, look, I do see the capability frontier continuing to move forward. I think we are a bit limited if we were just training on text data, but we are all making it more multimodal. So I see more opportunities there.

Let’s talk about Search. This is the thing that I am most interested in — I think this is the thing that is changing the most. In an abstract way, it’s the thing that’s the most exciting. You can ask a computer a question, and it will just happily tell you an answer. That feels new. I see the excitement around it.

Yesterday, you announced AI Overviews are coming to Search. That’s an extension of what was called the Search Generative Experience, and you announced it’s rolling out to everyone in the United States. I would describe the reactions to that news from the people who make websites as fundamentally apocalyptic. The CEO of the News/Media Alliance said to CNN, “This will be catastrophic to our traffic.” Another media CEO forwarded me a newsletter and the headline was, “This is a death blow to publishers.” Were you expecting that kind of response to rolling out AI Overviews in Search?

I recall, in 2010, there were headlines that the web was dead. I’ve long worked on the web, obviously. I care deeply about it. When the transition from desktop to mobile happened, there was a lot of concern because people were like, “Oh, it’s a small screen. How will people read content? Why would they look at content?” We had started introducing what we internally called “Web Answers” in 2014, which are featured snippets outside [the list of links]. So you had questions like that.

I remain optimistic. Empirically, what we have seen through the years is that human curiosity is boundless. It’s something we have deeply understood in Search. More than any other company, we will differentiate ourselves in our approach even through this transition. As a company, we realize the value of this ecosystem, and it’s symbiotic. If there isn’t a rich ecosystem making unique and useful content, what are you putting together and organizing? So we feel it.

I would say, through all of these transitions, things have played out a bit differently. I think users are looking for high-quality content. The counterintuitive part, which I think almost always plays out, is [that] it’s not a zero-sum game. People are responding very positively to AI Overviews. It’s one of the most positive changes I’ve seen in Search based on metrics. But people do jump off from it. When you give context around an answer, they actually jump off to explore further. It actually helps them understand, and so they engage with content underneath, too. In fact, if you put content and links within AI Overviews, they get higher clickthrough rates than if you put them outside of AI Overviews.

But I understand the sentiment. It’s a big change. These are disruptive moments. AI is a big platform shift. People are projecting out, and people are putting a lot into creating content. It’s their businesses. So I understand the perspective [and] I’m not surprised. We are engaging with a lot of players, both directly and indirectly, but I remain optimistic about how it’ll actually play out. But it’s a good question. I’m happy to talk about it more.

I have this concept I call “Google Zero,” which is born of my own paranoia. Every referrer that The Verge has ever had has gone up and then it’s gone down, and Google is the last large-scale referrer of traffic on the web for almost every website now. And I can see that for a lot of sites, Google Zero is playing out. Their Google traffic has gone to zero, particularly independent sites that aren’t part of some huge publishing conglomerate. There’s an air purifier blog that we covered called HouseFresh. There’s a gaming site called Retro Dodo. Both of these sites have said, “Look, our Google traffic went to zero. Our businesses are doomed.”

Is that the right outcome here in all of this — that the people who care so much about video games or air purifiers that they started websites and made the content for the web are the ones getting hurt the most in the platform shift?

It’s always difficult to talk about individual cases, and at the end of the day, we are trying to satisfy user expectations. Users are voting with their feet, and people are trying to figure out what’s valuable to them. We are doing it at scale, and I can’t answer on the particular site—

A bunch of small players are feeling the hurt. Loudly, they’re saying it: “Our businesses are going away.” And that’s the thing you’re saying: “We’re engaging, we’re talking.” But this thing is happening very clearly.

It’s not clear to me if that’s a uniform trend. I have to look at data on an aggregate [basis], so anecdotally, there are always times when people have come in an area and said, “Me, as a specific site, I have done worse.” But it’s like an individual restaurant saying, “I’ve started getting fewer customers this year. People have stopped eating food,” or whatever it is. It’s not necessarily true. Some other restaurant might have opened next door that’s doing very well. So it’s tough to say.

From our standpoint, when I look historically even over the past decade, we have provided more traffic to the ecosystem, and we’ve driven that growth. You may be making a secondary point about small sites versus more aggregating sites. Ironically, there are times when we have made changes to actually send more traffic to the smaller sites. Some of those sites that complain a lot are the aggregators in the middle. So should the traffic go to the restaurant that has created a website with their menus and stuff, or to the people writing about these restaurants? These are deep questions. I’m not saying there’s a right answer.

But you’re about to flip over the whole apple cart, right? You’re about to start answering some of these questions very directly. And where that content comes from in the future matters: I think you want the people who care the most, the ones publishing that information directly, to be the source of what you synthesize.

I agree.

The incentives for that seem to be getting lower and lower — on the web, anyway.

I feel it’s the opposite. If anything, I feel like through AI Overviews, when you give people context, yes, there are times all people want is a quick answer and they bounce back. But overall, when we look at user journeys, when you give the context, it also exposes people to jumping-off points, and so they engage more. Actually, this is what drives growth over time. I look at desktop to mobile, and there were similar questions. In fact, there was a [magazine] cover I’m almost tempted to pull out, saying, “The web is dead.” There was a Google Zero argument 10 years ago. But you yourself made the point that it’s not an accident that we still remain as one of the largest referrers because we’ve cared about it deeply for a long, long time.

I look at our journey, even the last year through the Search Generative Experience, and I constantly found us prioritizing approaches that would send more traffic while meeting user expectations. We think through that deeply and we actually change our approach. If there are areas where we feel like we haven’t fully gotten it right, we are careful about rolling it out. But I think what’s positively surprising us is that people engage more, and that will lead to more growth over time for high-quality content.

There’s a lot of debate about what high-quality content is. At least in my experience, I value independent sources, I value smaller things, I want more authentic voices. And I think those are important attributes we are constantly trying to improve.

You mentioned that you think more people will click through links in AI Overviews. Liz [Reid], who runs Search, had a blog post making the same claim. There’s no public data that says that is true yet. Are you going to release that data? Are you going to show people that this is actually happening?

On an aggregate, I think people rely on this value of the ecosystem. If people over time don’t see value, website owners don’t see value coming back from Google, I think we’ll pay a price. We have the right incentive structure. But obviously, look, we are careful about… there are a lot of individual variations, and some of it is users choosing which way to go. That part is hard to sort out. But I do think we are committed at an aggregate level to do the right thing.

I was reading some SEO community trade publications this morning responding to the changes, and one of the things that was pointed out was that, in Search Console, it doesn’t show you if the clicks are coming from a featured snippet or an AI Overview or just Google’s regular 10 blue links. Would you break that out? Would you commit to breaking that out so people can actually audit and verify and measure that the AI Overviews are sending out as much traffic as you say they are?

It’s a good question for the Search team. They think about this at a deeper level than I do. I think we are constantly trying to give more visibility, but also we want people to create content that’s good. And we are trying to rank it and organize it, so I think there’s a balance to be had. The more we spec it out, the more people design for that. There’s a tradeoff there, so it’s not clear to me what the right answer is.

That tradeoff between what you spec out and say and what people make, that’s been the story of the web for quite some time. It had reached, I think, a steady state. Whether you thought that steady state was good or bad, it was at least at a steady state. Now, that state is changing — AI is obviously changing it.

The 10 blue link model, the old steady state, is very much based on an exchange: “We’re going to let you index our content. We’re going to [have] featured snippets. We’re going to let you see all of our information. In return, you will send us traffic.” That formed the basis of what you might call a fair-use argument. Google’s going to index this stuff, [and] there’s not going to be a lot of payments in the middle.

In the AI era, no one knows how that’s going to go. There are some major lawsuits happening. There are deals being made by Google and OpenAI for training data. Do you think it’s appropriate for Google to start making more deals to pay for data to train search results? Because those AI snippets are not really the same as the 10 blue links or anything else you’ve done in the past.

To be very clear, there’s a myth that Google’s search has been 10 blue links for — I look at our mobile experience — many, many years. And we have had answers, we allow you to refine questions, we’ve had featured snippets, and so on. The product has evolved significantly.

Having said that, as a company, even as we look at AI, we have done Google [News] Showcase, we have done licensing deals. To the extent there is value there, we obviously think there is a case for fair use in the context of beneficial, transformative use. I’m not going to argue that with you given your background. But I think there are cases in which we will see dedicated incremental value to our models, and we’ll be looking at partnerships to get at that. I do think we’ll approach it that way.

Let me ask this question in a different way. I won’t do too much fair-use analysis with you, I promise, as much as I like doing it.

There were some news reports recently that OpenAI had trained its video generation product, Sora, on YouTube. How did you feel when you heard that news?

Look, we don’t know the details. Our YouTube team is following up and trying to understand it. We have terms and conditions, and we would expect people to abide by those terms and conditions when they build a product, so that’s how I felt about it.

So you felt like they had potentially broken your terms and conditions? Or if they had, that wouldn’t have been appropriate?

That’s right.

The reason I asked that question — which is a much more emotional question — is okay, maybe that’s not appropriate. And what OpenAI has said is essentially “We’ve trained on publicly available information,” which means they found it on the web.

Most people don’t get to make that deal. They don’t have a YouTube team of licensing professionals who can say, “We have terms and conditions.” They don’t even have terms and conditions. They’re just putting their stuff on the internet. Do you understand why, emotionally, there’s the reaction to AI from the creative community — that it feels the same as you might have felt about OpenAI training on YouTube?

Absolutely. Look, be it website owners or content creators or artists, I can understand how emotional a transformation this is. Part of the reason you saw, even through Google I/O when we were working on products like music generation, we have really taken an approach by which we are working first to make tools for artists. We haven’t put a general-purpose tool out there for anyone to create songs.

The way we have taken that approach in many of these cases is to put the creator community as much at the center of it as possible. We’ve long done that with YouTube. Through it all, we are trying to figure out the right ways to approach this.

But it is a transformative moment as well, and there are other players in this. We are not the only player in the ecosystem. But, to your earlier question, yes, I understand people’s emotions about it. I definitely am very empathetic to how people are perceiving this moment.

They feel like it’s a taking — that they put work on the internet and the big companies are coming, taking it for free, and then making products that they are charging $20 a month for or that will lift their creative work and remix it for other people. The thing that makes it feel like a taking is [that] very little value accrues back to them.

That’s really the thing I’m asking about: how do you bring value back to them? How do you bring incentives back to the small creator or the independent business that’s saying, “Look, this feels like a taking.”

Look. [Sighs] The whole reason we’ve been successful on platforms like YouTube is we have worked hard to answer this question. You’ll continue to see us dig deep about how to do this well. And I think the players who end up doing better here will have more winning strategies over time. I genuinely believe that.

Across everything we do, we have to sort that out. Anytime you’re running a platform, it’s the basis on which you can build a sustainable long-term platform. Through this AI moment, over time, there’ll be players who will do better by the content creators that support their platforms, and whoever does it better will emerge as the winner. I believe that to be a tenet of these things over time.

One thing that I think is really interesting about the YouTube comparison in particular — it’s been described to me many times that YouTube is a licensing business. You license a lot of content from the creators. You obviously pay them back in terms of the advertising model there. The music industry has a huge licensing business with YouTube. It is an existential relationship for both sides. Susan Wojcicki used to describe YouTube as a music service, which I think confused everyone until you looked at the data.

Universal Music is mad about AI on YouTube. YouTube reacts. It builds a bunch of tools. It writes a constitution about what AI will and will not do. People are mad about the Search Generative Experience or AI [Overviews] on the web. Google doesn’t react the same way. I’m wondering if you can square that circle.

That is so far from reality.

You think so?

That’s so far from reality. I look at other players and how they’ve approached—

You’re talking about OpenAI, which is just out there taking stuff.

In general, when you look at how we have approached the Search Generative Experience, even through a moment like this, the time we have taken to test, iterate, and prioritize approaches, and the way we’ve done it over the years, I would say I definitely disagree with the notion we don’t listen. We care deeply; we listen. People may not agree with everything we do. When you’re running an ecosystem, you are balancing different needs. I think that’s the essence of what makes a product successful.

Let me talk about the other side of this. There’s search: people are going to game search and that’s always going to happen and that’s a cat-and-mouse game.

The other thing that I see happening is the web is being flooded with AI content. There was an example a few months ago where some unsavory SEO character said, “I stole a bunch of traffic from a competitor. I copied their site map. I fed it into an AI system and had it generate copy for a website that matched their site map, and I put up this website and stole a bunch of traffic from my competitor.” I think that’s a bad outcome. I don’t think we want to incentivize that in any way, shape, or form.

[Shakes head] No, no—

That’s going to happen at scale. More and more of the internet that we experience will be synthetic in some important way. How do you, on the one hand, build the systems that create the synthetic content for people and, on the other hand, rank it so that you’re only getting the best stuff? Because at some point, the defining line for a lot of people is, “I want stuff made by a human, and not stuff made by AI.”

I think there are multiple parts to your question. One, how do we differentiate high quality from low quality? I literally view it as our mission statement, and it is what has defined Search over many, many years.

I actually think people underestimate… Anytime you have these disruptive platform shifts, you’re going to go through a phase like this. I have seen that team invest so much. Our entire search quality team has been spending the last year gearing up our ranking systems, etc., to better get at what high-quality content is. If I take the next decade, [the] people who can do that better, who can sift through that, I think, will win out.

I think you’re right in your assessment that people will value human-created experiences. I hope the data bears that out. We have to be careful every time there’s a new technology. If you go and talk to filmmakers about CGI in films, some are going to react very emotionally, and there are still esteemed filmmakers who never use CGI in their films. But then there are people who use it and produce great films. And so you may be using AI to lay out and enhance video effects in your video.

But I agree with you. I think using AI to produce content en masse without adding any value is not what users are looking for.

But there is a big continuum and, over time, users are adapting. We are trying hard to make sure we do it in a responsible way, but we’re also listening to what users consider to be high quality and trying to get that balance right. That continuum will look different a few years out than it does today, but I think I view it as the essence of what search quality is. Do I feel confident we will be able to approach it better than others? Yes. And I think that’s what defines the work we do.

For the listener, there have been a lot of subtle shots at OpenAI today.

Can I put this into practice? I actually just did this search. It is a search for “best Chromebook.” As you know, I once bought my mother a Chromebook Pixel. It’s one of my favorite tech purchases of all time. This is a search for “best Chromebook.” I’m going to hit “generate” at the top, it’s going to generate the answer, and then I’m going to do something terrifying, which is, I’m going to hand my phone to the CEO of Google. This is my personal phone. Don’t dig through it.

You look at that — it’s the same generation that I’ve seen earlier. I asked it for the best Chromebook, and it says, “Here’s some stuff you might think of.” Then you scroll, and it’s some Chromebooks. It doesn’t say whether they’re the best Chromebooks, and then it’s a bunch of headlines, some of which are Verge headlines, that are like, “Here are some of the best Chromebooks.” That feels like the exact kind of thing that an AI-generated search could answer in a better way. Do you think that’s a good experience? Is that a waypoint or is that the destination?

I think, look, you’re showing me a query in which we didn’t automatically generate the AI.

There was a button that said, “Do you want to do this?”

But let me push back. There’s an important differentiation. There’s a reason we are giving a view without the generated AI Overview, and as a user, you’re initiating an action, so we’re respecting the user intent there. When I scroll, I see Chromebooks. I also see a whole set of links, which I can go to and that tell me all the ways you can think about Chromebooks. I see a lot of links. We didn’t show an AI Overview in this case. As a user, you’re generating the follow-up question. I think it’s right that we respect the user’s intent. If you don’t do that, people will go somewhere else, too.

But I’m saying — I did not write, “What is the best Chromebook?” I just wrote “best Chromebook” — [but] the answer, a thing that identifies itself as an answer, is not on that page. The leap from “I had to push the button” to “Google pushes the button for me and then says what it believes to be the answer” is very small. I’m wondering if you think a page like that today is the destination of the search experience, or if this is a waypoint and you can see a better future version of that experience.

I think the direction of how these things will go, it’s tough to fully predict. Users keep evolving. It’s a more dynamic moment than ever. We are testing all of this, and this is a case where we didn’t trigger the AI Overview because we felt it’s not necessarily the first experience we want to provide for that query — what’s underlying is maybe a better first look for the user. Those are all quality tradeoffs we are making. But if the user is asking for a summary, we are summarizing and giving links. I think that seems like a reasonable direction to me.

I’ll show you another one where it did expand automatically. This one I only have screenshots for. I don’t think I’m fully opted in. This is Dave Lee from Bloomberg, who did a search. He got an AI Overview, and he just searched for “JetBlue Mint Lounge SFO.” And it just says the answer, which I think is fine. That’s the answer.

If you swipe one over — I cannot believe I’m letting the CEO of Google swipe on my camera roll — but if you swipe one over, you see the site it pulled from. It is a word-for-word rewrite of that site. This is the thing I’m getting at.

The AI-generated overview of that answer, if you just look at where it came from, is almost the same sentence as the source. And that’s what I mean. At some point, the better experience is the AI overview, and it’s just the thing that exists on all the sites underneath it. It’s the same information.

[Sighs] The thing with Search — we handle billions of queries. You can absolutely find a query and hand it to me and say, “Could we have done better on that query?” Yes, for sure. But in many cases, part of what is making people respond positively to AI Overviews is that the summary we are providing clearly adds value and helps them look at things they may not have otherwise thought about. If you’re adding value at that level, I think people notice it over time, and I think that’s the bar you’re trying to meet. Our data would show, over 25 years, if you aren’t doing something that users find valuable or enjoyable, they let us know right away. Over and over again we see that.

Through this transition, everything is the opposite. It’s one of the biggest quality improvements we are driving in our product. People are valuing this experience. There’s a general presumption that people don’t know what they’re doing, which I disagree with strongly. People who use Google are savvy. They understand. And so, to me, I can give plenty of examples where I’ve used AI Overviews as a user. I’m like, “Oh, this is giving context. Oh, maybe there are these dimensions I didn’t even think of in my original query. How do I expand upon it and look at it?”

You’ve made oblique mention of OpenAI a few times, I think.

I actually haven’t.

You’re saying “others.” There’s one other big competitor that is, I think, a little more—

You’re putting words in my mouth, but that’s okay.

I saw OpenAI’s demo the other day of GPT-4o, Omni. It looked a lot like the demos you gave at I/O. This idea of multimodal search, the idea that you have this character you can talk to — you have Gems, which are the same kind of idea — it feels like there’s a race to get to the same outcome for a search-like experience or an agent-like experience. Do you feel the pressure from that competition?

This is no different from Siri and Alexa. When you’re working in the technology industry, there is relentless innovation. We felt it a few years ago, when all of us were building voice assistants. You could have asked the same version of this question then: what was Alexa trying to do and what was Siri trying to do? It’s a natural extension of that. I think you have a new technology now, and it’s evolving rapidly.

I felt like it was a good week for technology. There was a lot of innovation, I felt, on Monday and Tuesday and so on. That’s how I feel, and I think it’s going to be that way for a while. I’d rather have it that way. You’d rather be in a place where the underlying technology is evolving, which means you can radically improve the experiences you’re putting out. I’d rather have that any time than a static phase in which you feel like you’re not able to move forward quickly.

A lot of us have had this vision for what a powerful assistant can be, but we were held back by the underlying technology not being able to serve that goal. I think we have a technology that is better able to serve that. That’s why you’re seeing the progress again. I think that’s exciting. To me, I look at it and say, “We can actually make Google Assistant a whole lot better.” You’re seeing visions of that with Project Astra. It’s incredibly magical to me when I use it, so I’m very excited by it.

This brings me back to the first question I asked: language versus intelligence. To make these products, I think you need a core level of intelligence. Do you have in your head a measure of “This is when it’s going to be good enough. I can trust this”?

On all of your demo slides and all of OpenAI’s demo slides, there’s a disclaimer that says “Check this info,” and to me, it’s ready when you don’t need that anymore. You didn’t have “Check this info” at the bottom of the 10 blue links. You didn’t have “Check this info” at the bottom of featured snippets.

You’re getting at a deeper point where hallucination is still an unsolved problem. In some ways, it’s an inherent feature. It’s what makes these models very creative. It’s why it can immediately write a poem about Thomas Jefferson in the style of Nilay. It can do that. It’s incredibly creative. But LLMs aren’t necessarily the best approach to always get at factuality, which is part of why I feel excited about Search.

Because in Search we are bringing in LLMs, in a way, but we are grounding them with all the work we do in Search and layering them with enough context that we can deliver a better experience from that perspective. But I think the reason you’re seeing those disclaimers is because of the inherent nature of the technology. There are still times it’s going to get it wrong, but I don’t think I would look at that and underestimate how useful it can be at the same time. I think that would be the wrong way to think about it.

Google Lens is a good example. When we first put Google Lens out, it didn’t recognize all objects well. But the curve year on year has been pretty dramatic, and users are using it more and more. We’ve had billions of queries now with Google Lens. It’s because the underlying image recognition, paired with our knowledge entity understanding, has dramatically expanded over time.

I would view it as a continuum, and I think, again, I go back to this saying that users vote with their feet. Fewer people used Lens in the first year. We also didn’t put it everywhere because we realized the limitations of the product.

When you talk to the DeepMind Google Brain team, is there a solution to the hallucination problem on the roadmap?

It’s Google DeepMind. [Laughs]

Are we making progress? Yes, we are. We have definitely made progress when we look at metrics on factuality year on year. We are all making it better, but it’s not solved. Are there interesting ideas and approaches that they’re working on? Yes, but time will tell. I would view it as LLMs are an aspect of AI. We are working on AI in a much broader way, but it’s an area where we are all definitely working to drive more progress.

Five years from now, this technology, the paradigm shift, it feels like we’ll be through it. What does the best version of the web look like for you five years from now?

I hope the web is much richer in terms of modality. Today, I feel like the way humans consume information is still not fully encapsulated in the web. Today, things exist in very different ways — you have webpages, you have YouTube, etc. But over time, I hope the web is much more multimodal, it’s much richer, much more interactive. It’s a lot more stateful, which it’s not today.

I view it as, while fully acknowledging the point that people may use AI to generate a lot of spam, I also feel every time there’s a new wave of technology, people don’t quite know how to use it. When mobile came, everyone took webpages and shoved them into mobile applications. Then, later, people evolved [into making] really native mobile applications.

The way people use AI to actually solve new things, new use cases, etc. is yet to come. When that happens, I think the web will be much, much richer, too. So: dynamically composing a UI in a way that makes sense for you. Different people have different needs, but today you’re not dynamically composing that UI. AI can help you do that over time. You can also do it badly and in the wrong way and people can use it shallowly, but there will be entrepreneurs who figure out an extraordinarily good way to do it, and out of it, there’ll be great new things to come.

Google creates a lot of incentives for development on the web through Search, through Chrome, through everything that you do. How do you make sure those incentives are aligned with those goals? Because maybe the biggest thing here is that the web ecosystem is in a moment of change, and Google has a lot of trust to build and rebuild. How do you think about making sure those incentives point at the right goals?

Look, not everything is in Google’s control. I wish I could influence the single toughest experience I have when I go to websites today as a user: you have a lot of cookie dialogues to accept, etc. So I would argue there are many things outside of our control. You can go poll 100 users.

But what are the incentives we would like to create? This is a complex question: how do you reward originality, creativity, and independent voice at whatever scale you’re able to, and give that a chance to thrive in this content ecosystem we create? That’s what I think about. That’s what the Search team thinks about. But I think it’s an important principle, and I think it’ll be important for the web and important for us as a company.

That’s great. Well, Sundar, thank you so much for the time. Thank you for being on Decoder.

Thanks, Nilay. I greatly enjoyed it.

Read More 

Apple’s next AirTag could arrive in 2025

Photo by Vjeran Pavic / The Verge


You may not have even thought about replacing your AirTag yet, but Bloomberg reports that Apple is working on a new one that could arrive in mid-2025. The new AirTag will reportedly feature an updated chip with better location tracking — an improvement it might need as competition among tracking devices ramps up.

By the time Apple rolls out its refreshed AirTag, the Bluetooth tracking landscape will look a lot different on both Android and iOS. Last month, Google revealed its new Find My Device network, which lets users locate their phones using signals from nearby Android devices. Even Life360, the safety service company that owns Tile, is creating its own location-tracking network that uses satellites to locate its Bluetooth tags.

In last week’s iOS 17.5 update, Apple finally started letting iPhones show unwanted tracking alerts for third-party Bluetooth tags. If an unknown AirTag or other third-party tracker is found traveling with an iPhone user, they’ll get an alert and can play a sound to locate it. The feature is part of an industry specification created to prevent stalking across iPhones and Android devices. Several companies that make Bluetooth tracking devices, including Chipolo, Pebblebee, and Eufy, are on board with the new standard.

Read More 

OpenAI pulls its Scarlett Johansson-like voice for ChatGPT

OpenAI says “AI voices should not deliberately mimic a celebrity’s distinctive voice.” | Photo: Warner Bros.


OpenAI is pulling the ChatGPT voice that sounds remarkably similar to Scarlett Johansson after numerous headlines (and even Saturday Night Live) noted the similarity. The voice, known as Sky, is now being put on “pause,” the company says.

“We believe that AI voices should not deliberately mimic a celebrity’s distinctive voice — Sky’s voice is not an imitation of Scarlett Johansson but belongs to a different professional actress using her own natural speaking voice,” OpenAI wrote this morning.

OpenAI CTO Mira Murati denied that the imitation of Johansson was intentional in an interview with The Verge last week. Even if Johansson’s voice wasn’t directly referenced, OpenAI CEO Sam Altman was seemingly already aware of the similarities, posting the single-word message “her” on X after the event. Altman has previously said that Spike Jonze’s Her, which features Scarlett Johansson voicing a sultry-sounding virtual assistant, is his favorite movie.

OpenAI did not say whether it has been contacted about potential legal issues or challenges over its assistant’s similarity to Johansson or to the role she plays in Her. The Verge has reached out to a representative for Johansson for comment.

We’ve heard questions about how we chose the voices in ChatGPT, especially Sky. We are working to pause the use of Sky while we address them.

Read more about how we chose these voices: https://t.co/R8wwZjU36L

— OpenAI (@OpenAI) May 20, 2024

ChatGPT’s voice mode and the Sky voice model have been around since last year. But the feature was made far more prominent last week, when OpenAI demoed advancements it made as part of its new GPT-4o model. The new model makes the voice assistant more expressive and allows it to read facial expressions through a phone’s camera and translate spoken language in real time.
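
The real-time voice and vision features OpenAI demoed weren’t available to outside developers at launch, but GPT-4o itself is reachable through OpenAI’s standard chat API with mixed text-and-image input. As a rough sketch of that multimodal interface (the image file name and prompt are placeholders, and this is the text API, not the voice mode described above):

```python
# Minimal sketch of a text + image request to GPT-4o via the OpenAI Python SDK (v1+).
# Assumes OPENAI_API_KEY is set; "frame.jpg" is a placeholder for any local image.
import base64

from openai import OpenAI

client = OpenAI()

with open("frame.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the facial expression in this frame, then translate "
                         "'Where is the train station?' into Spanish."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```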

The five currently available ChatGPT voice profiles were selected from over 400 casting submissions from voice and screen actors, according to OpenAI. The company declined to share the names of the actors, citing the need to “protect their privacy.”

The new ChatGPT voice assistant capabilities will launch “in the coming weeks” as a limited “alpha” release for ChatGPT Plus subscribers. OpenAI plans to eventually introduce additional voices to “better match the diverse interests and preferences of users.”

Read More 

What to expect from Microsoft’s Surface event today

Photo by Amelia Holowaty Krales / The Verge

One of Microsoft’s most consequential hardware events in years is about to take place. It’s not Microsoft Build — the company’s developer conference — which kicks off on Tuesday. Instead, Microsoft is holding a small event today, May 20th, to talk about its next Surface devices.

At the event, we’re expecting Microsoft to announce new versions of the Surface Pro 10 and the Surface Laptop 6, both running on Qualcomm’s Snapdragon X Elite processors. It’s Microsoft’s latest attempt to switch to Arm — and one that the company believes is finally going to stick. The change should deliver far better battery life, and if early rumors are true, Qualcomm’s chip should be powerful enough to keep up with the Intel processors it’s replacing.

Image: Microsoft
The Surface Pro 10 for Business. This model, released in April, has an Intel chip, but the Arm version is expected to look the same.

These devices are also expected to include dedicated AI hardware accelerators, called NPUs (neural processing units), to better support upcoming Windows 11 AI features. One of those rumored features, AI Explorer, is supposed to keep track of everything you do on your Windows 11 machine, then let you prompt the AI about what you’ve been up to. Microsoft is reportedly building dozens of language models into the system so that these chips can run AI features locally.
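
Microsoft hasn’t published an API for AI Explorer or for its on-device models, so as a loose illustration of what “running AI features locally on an NPU” usually means in practice, here is a sketch that loads an ONNX model with onnxruntime and prefers Qualcomm’s QNN execution provider when that build is installed, falling back to the CPU otherwise. The model file, its float32 input, and provider availability are all assumptions, not details from the report.

```python
# Loose illustration: run an ONNX model locally, preferring the Qualcomm QNN
# execution provider (the NPU path in onnxruntime) when available, else the CPU.
# "model.onnx" is a placeholder; float32 input is an assumption about the model.
import numpy as np
import onnxruntime as ort

available = ort.get_available_providers()
providers = (["QNNExecutionProvider", "CPUExecutionProvider"]
             if "QNNExecutionProvider" in available
             else ["CPUExecutionProvider"])

session = ort.InferenceSession("model.onnx", providers=providers)

inp = session.get_inputs()[0]
# Dynamic dimensions are reported as strings or None; use 1 for a dummy input.
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {inp.name: dummy})
print("Providers in use:", session.get_providers())
print("Output shape:", outputs[0].shape)
```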

Beyond the new processors, the hardware could get some improvements over the Surface models introduced earlier this spring for business customers. The Surface Pro 10 might come with an optional OLED screen and 16GB of RAM, at least according to Geekbench scores that Windows Report spotted in April. The Surface Laptop 6 could also see a 16GB RAM default and could get a facelift that the business version didn’t, including thinner bezels, rounded corners, and a haptic touchpad.

Image: Microsoft
The Surface Laptop 6 for Business, which didn’t get the physical changes rumored for the new one.

Today’s Surface event may also be the start of a bigger push from Microsoft. A lot more “AI PC” laptops that use Qualcomm’s new chip are rumored to be on the way. Around the event, we’ll also probably hear about other Snapdragon X-equipped machines from the likes of Asus, Dell, and Lenovo. Microsoft is apparently very confident that new Arm chips will put Windows machines back in the game against Apple’s powerful yet battery-efficient computers.

The event won’t be broadcast — only journalists are allowed to attend — so keep an eye on The Verge for live coverage of today’s news when the event kicks off at 10AM PT / 1PM ET.

Read More 

Meta and LG’s headset partnership is on the rocks

LG CEO William Cho and home entertainment company president Park Hyoung-sei met with Meta CEO Mark Zuckerberg in February to discuss the collaboration. | Image: LG

LG is reportedly pausing its partnership with Meta to develop an extended reality (XR) device that would take on Apple’s Vision Pro headset, just three months after announcing the joint venture.

While multiple Korean news outlets are reporting that the Meta partnership has broken down entirely due to a lack of “synergy” between the companies, LG has denied terminating the deal. “LG Electronics continues the XR partnership with Meta forged in February but is controlling its pace,” LG said in a statement to Korea JoongAng Daily.

Products from the collaboration, set to combine Meta’s Horizon Worlds mixed reality platform with content and service capabilities from LG’s TV business, were expected to hit the market next year. The potential for on-device AI integration was also being explored, courtesy of Meta’s large language models.

Adding to the confusion over what’s happening with the project, Seoul Economic Daily reports that the collaboration is still ongoing but is now likely targeting a 2027 release date. Meanwhile, several of the publications reporting that LG has walked away from Meta claim it’s instead pursuing a new XR headset partnership with Amazon that would benefit from Amazon Prime’s strong library of streaming content and 200 million subscribers.

We have reached out to LG and Meta to clarify the future of the current headset project.

Meta’s Reality Labs program has reportedly lost a billion dollars every month since June 2022

This potential relationship breakdown comes as Meta struggles to prevent its AR and VR businesses from hemorrhaging money, with GamesIndustry.biz reporting last month that Reality Labs has burned a billion dollars every month since June 2022. Meta doesn’t seem to be dissuaded, however, announcing that it expects these losses to increase “meaningfully” year over year due to the company’s “ongoing product development efforts and our investments to further scale our ecosystem.”

Meanwhile, the wider VR / AR product industry is also struggling amid falling interest from consumers. Sony has reportedly paused production of its PSVR 2 headset due to a backlog of unsold inventory, and slow adoption of Apple’s Vision Pro headset has seen some Apple stores selling as few as two units per week, according to Bloomberg.

Read More 
