verge-rss

AnandTech shuts down after 27 years

It’s the end of an era. | Image: AnandTech, Future

Hardware enthusiast site AnandTech is shutting down after nearly three decades of covering computers. AnandTech’s final Editor-in-Chief, Ryan Smith, announced the news in a farewell post this morning, writing, “…Few things last forever, and the market for written tech journalism is not what it once was — nor will it ever be again. So, the time has come for AnandTech to wrap up its work, and let the next generation of tech journalists take their place within the zeitgeist.”
AnandTech was founded in 1997 by Anand Lal Shimpi, who led the site until retiring from journalism in 2014 to work for Apple as part of the team that delivered the M series Apple Silicon chips. Before leaving, Shimpi spoke with The Verge in 2011 about his frustrations with the “cable-TV-ification of the internet” — or online media moving away from high-quality, in-depth analysis toward sensationalism and clickbaity content:

Which trend makes you gnash your teeth in frustration?
Something I call the cable-TV-ification of the internet. For the past several years it seems as if there has been a trend away from ultimate understanding in content online and towards the tenets of modern mainstream media (sensationalism and the general silliness you see on US cable TV news). The transition isn’t anywhere near complete, but I feel like that’s the direction things are headed. We have to learn from the mistakes of our predecessors, not repeat them with sweeter technology.

AnandTech as a site made a point of resisting that trend, which Smith also called back to in his farewell note.
Over the years, the site built a loyal audience among hardware lovers due in large part to its detailed reviews of motherboards, chips, and other hardware components. The quality of its analysis made it a resource for PC builders, academics, fellow journalists, and anyone fascinated by the inner workings of a computer.
When any beloved site is shuttered, there’s always a question of what happens to the content. For now, AnandTech fans can breathe a sigh of relief. Smith writes that Future PLC, AnandTech’s publisher, will keep the site’s archive indefinitely. The active AnandTech Forums will also continue to operate and be moderated by Future’s community team.



Google’s Play Store can finally install or update multiple Android apps at once

Illustration by Alex Castro / The Verge

Google Play Store can now download, install, and update multiple Android apps simultaneously. Previously, if you attempted to run all your updates manually, Play Store would only process a single app at a time. Now, as we’ve tested, it can do up to three.
As reported by Android Police, Google has tested concurrent app updates for Play Store before, starting as far back as 2019 and as recently as last March. Now it looks like the speedier updates are rolling out to more users, which could be a huge time saver for people setting up new phones or restoring from a backup. Apple has also supported up to three iOS app installs from the App Store for many years.
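Conceptually, a cap of three concurrent installs behaves like a semaphore: no matter how many updates are queued, at most three run at once. A minimal sketch of that behavior (the cap and timings are stand-ins for whatever Google actually enforces server-side):

```python
import threading
import time

MAX_CONCURRENT = 3  # the observed Play Store cap
slots = threading.BoundedSemaphore(MAX_CONCURRENT)
active = 0
peak = 0
lock = threading.Lock()

def install(app: str) -> None:
    """Simulate one app install, gated by the shared semaphore."""
    global active, peak
    with slots:  # wait until one of the three slots is free
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.05)  # simulate download + install time
        with lock:
            active -= 1

# Queue eight updates at once; only three ever run simultaneously.
threads = [threading.Thread(target=install, args=(f"app{i}",)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds 3, by the semaphore's guarantee
```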

Screenshot: Richard Lawler / The Verge
Three simultaneous downloads and installs in action.

Google hasn’t announced a launch timeline, and there’s no indication the feature is tied to any specific device, which suggests the limit on simultaneous downloads and installs is enforced server-side. You’ll just have to try updating all your apps manually to see if it’s there.



TikTok is adding new ways to fine-tune your For You Page algorithm

Image: The Verge

TikTok is giving users a way to more precisely shape what kind of content they see on their feeds — or at least signal to the algorithm what they’re interested in.
Under Content preferences > Manage topics (which can be found in “Settings and Privacy”), users can adjust sliders to signal that they want more or less of certain topics on their For You Page. Topics include “creative arts,” “current affairs,” “humor,” and more. Sliders start at a default middle position, and users can tweak them from there.
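As a rough mental model, a slider can be thought of as a multiplier on a topic’s recommendation score, with the default middle position leaving scores unchanged. The formula, topic scores, and slider values below are invented for illustration, not TikTok’s actual ranking system:

```python
# Illustrative only: sliders in [0, 100], default 50, scaling per-topic scores.
def adjusted_score(base_score: float, slider: int) -> float:
    """Scale a topic's base score by the user's slider setting.

    0 halves-to-zeroes the topic, 50 leaves it unchanged, 100 doubles it.
    """
    return base_score * (slider / 50.0)

# Hypothetical base scores the recommender assigned to candidate topics.
candidates = {"creative arts": 0.8, "current affairs": 0.6, "humor": 0.9}
# Hypothetical user slider settings: more arts, less news, humor untouched.
sliders = {"creative arts": 75, "current affairs": 25, "humor": 50}

ranked = sorted(
    candidates,
    key=lambda topic: adjusted_score(candidates[topic], sliders[topic]),
    reverse=True,
)
print(ranked)  # ['creative arts', 'humor', 'current affairs']
```

Nudging the “creative arts” slider up reorders the feed even though its base score was lower than humor’s, which is the kind of influence the feature appears to offer.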
TikTok’s hyper-targeted For You Page is famously a bit of a black box, built on a trove of user data that sometimes makes it feel like the algorithm is reading your mind. This new slider feature gives us a peek behind the curtain, especially regarding how TikTok categorizes content. It’s interesting, for example, that “dance” is its own distinct topic separate from other arts — not totally surprising given TikTok’s early role as the kids’ dance app.

Image: TikTok

Over the years, TikTok has rolled out different features to give users a sense of control over their own algorithm. The “not interested” button on videos is meant to signal to the system that you want to see less of something, and users can also block certain keywords or hashtags from their feeds entirely. Last year, TikTok also began testing a way to reset your For You Page to start fresh.
It’s not yet clear how effective adjusting the sliders will be — historically, tools like this haven’t been a hard-and-fast rule for what recommendation systems actually return. Prior research into YouTube’s “dislike” button, for example, found that the algorithm kept recommending similar content regardless.



How to use iOS’s Live Text feature

Image: Samar Haddad / The Verge

I’m something of a hoarder when it comes to interesting facts — I’ll see something online or in a photo, and a small voice in my brain says, “You’ll be able to use that reference somewhere, somehow. Save it so you can find it later!” Which is fine if it’s just a line or two of text, but what if that info is within an image — a photo, say, or even a drawing?
Luckily, back in 2021, Apple introduced a feature in iOS 15 called Live Text, which makes it possible for iPhone users to grab text, email addresses, phone numbers, and more from images. Live Text works with both handwritten and typed text and supports a variety of languages.
Once you’ve captured text from an image, you can paste it into a document, an email, a text message, etc., just as you would any copied text. (A suggestion: you might want to proofread it before you send or post it, especially if you’ve copied handwritten text — Live Text is pretty good, but it’s not perfect.) Just press on the selected text and choose Copy. You can also select Translate if you want to translate it to another language or Look Up to get more information.
You can use Live Text with your camera app or directly from Photos or Safari. Here’s how to capture the text in these three apps.
Capture text using the camera app
One of the handiest ways of using Live Text is on documents or objects that are around you in real time. If you have an iPhone XS or later, you can use the camera app to quickly grab some text — say, the name on a product label, or of a book you want to remember to buy — and drop it into another app. You don’t even have to snap a photo. It’s very easy:

1. Open the camera app and point it at the text you want to copy. You’ll see a faint outline around the text, and a Live Text button will appear in the corner of the viewfinder.
2. Hit the button, and you’ll get a pop-up with the text inside. Highlight the part of the text you want to copy (or hit Select All) and choose Copy.
3. If there is an email address, phone number, or link, you will be given the option of interacting with it directly.
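The last step, recognizing email addresses and phone numbers in captured text so they can be acted on, can be imitated with simple pattern matching. The patterns below are simplified illustrations, not Apple’s actual data detectors:

```python
import re

# Rough stand-ins for Live Text's data detectors. Real detectors handle far
# more formats; these patterns are deliberately simplified for illustration.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d ()-]{7,}\d")

def find_actions(text: str) -> dict:
    """Return the email addresses and phone numbers found in recognized text."""
    return {
        "emails": EMAIL.findall(text),
        "phones": PHONE.findall(text),
    }

sample = "Contact sales@example.com or call 555-123-4567."
print(find_actions(sample))
```

Once matched, an app can offer the same shortcuts Live Text does: start an email to the address, or dial the number.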

Screenshot: Apple
When you point the camera at some text, it will be outlined; use the symbol in the lower right corner to grab it.

Screenshot: Apple
If the text has an email address or phone number, you can use it to send an email or a text message.

Capture text from photos
If you’re viewing any pictures with text in the Photos app, you can select and interact with text there. It works just like selecting actual text anywhere else on iOS: tap and hold on the text in an image and you’ll see the familiar-looking blue highlights pop up.
The search bar in Photos will also search for text within images in your photo library, which is handy if you know you took a screenshot or photo of a receipt and just aren’t sure when. Once you find it, you can copy it using the steps above.

Screenshot: Apple
You can capture text from photos….

Screenshot: Apple
…or from screenshots.

Capture from web images in Safari
Text can also be selected directly from images on the web in Safari. To grab text, just highlight the text in the image as you would any text, and hit the Copy button.
Update, August 30th, 2024: This article was originally published on July 1st, 2021, and has been updated to account for changes in the OS.



Amazon’s new Alexa voice assistant will use Claude AI

The new version of the voice assistant, dubbed “Remarkable Alexa,” is expected to launch in October and require a subscription fee. | Photo by Jennifer Pattison Tuohy / The Verge

The improved version of Alexa that Amazon’s expected to release this year will primarily be powered by Anthropic’s Claude artificial intelligence model, according to Reuters. The publication reports that initial versions of Amazon’s smarter, subscription-based voice assistant built on the company’s own AI fell short, often struggling to understand words and respond to user prompts.
Amazon’s minority stake in Anthropic is currently under investigation by the UK’s competition regulators. The company invested $4 billion into the startup last year with the promise that Amazon customers will get early access to the company’s AI tech.
The development of the new Alexa technology, dubbed “Remarkable Alexa,” has been rife with issues since it was announced last September, according to Fortune. Mihail Eric, a former machine learning scientist for Alexa AI, also said on X that the division was “riddled with technical and bureaucratic problems.” Meanwhile, Amazon’s currently dated but market-leading voice assistant is facing greater competition from challengers like OpenAI’s Advanced Voice Mode for ChatGPT, Google Gemini’s voice chat mode, and even Siri’s upcoming Apple Intelligence update.

The new Alexa built around Claude reportedly performs better than the version powered by Amazon’s in-house AI models.
“Amazon uses many different technologies to power Alexa,” the company told Reuters. “When it comes to machine learning models, we start with those built by Amazon, but we have used, and will continue to use, a variety of different models — including (Amazon AI model) Titan and future Amazon models, as well as those from partners — to build the best experience for customers.”
Following release delays, Remarkable Alexa will reportedly arrive sometime in mid-October. Expected features include daily AI-generated news summaries, a child-focused chatbot, and conversational shopping tools, according to a report from The Washington Post earlier this week. Reuters reported back in June that Amazon was considering placing the new Alexa behind a $5 to $10 monthly subscription in a bid to make the assistant profitable but would keep the current “Classic Alexa” offering available as a free-to-use service.
A demo of the new Alexa will be presented during Amazon’s annual devices and services event, which is typically held in September, according to Reuters.



Princess Zelda draws a sword in Echoes of Wisdom’s new trailer

Nintendo

Our first couple of looks at Echoes of Wisdom made it seem like Zelda would be fighting to save Hyrule with just her wits and new magic powers, but the game’s latest trailer is all about the princess’ skills with a blade.
Though Princess Zelda will wield a mystical staff that basically summons monsters and conjures useful items, she also has a straight-up sword in Echoes of Wisdom’s newest trailer, which highlights a new game mechanic. After finding the blade, Zelda will gain the ability to shift into a very Link-like swordfighter form that’s tied to a magical energy gauge. In her swordfighter form, Zelda seems to be a bit more nimble and able to block enemy attacks with a shield, but she can only maintain the form for a limited amount of time before needing to refill her magic with mysterious energy found throughout the game.
While Zelda’s new form will give players a way to hack and slash their way through enemies, the new trailer really spotlights how sword fighting is just one of the skills you’re meant to cleverly deploy while exploring the world. The Legend of Zelda: Echoes of Wisdom hits the Switch on September 26th, 2024.



All the news on Telegram CEO Pavel Durov’s arrest

Image: Cath Virginia / The Verge, Getty Images

French authorities arrested Durov as part of an investigation into criminal activity on Telegram.

The arrest of Telegram CEO Pavel Durov has sparked questions about the messaging app’s future — and the precedent prosecution could set. On August 24th, French authorities took Durov into custody near Paris as part of an “ongoing judicial investigation” into criminal activity on the platform, which is known for its lax moderation policies.
A French judge later charged Durov with enabling illicit transactions, complicity in the distribution of child sexual abuse material, and refusing to cooperate with authorities, among other offenses. Although he has been released under judicial supervision, Durov has been barred from leaving France while the authorities continue their investigation.
If you want to keep up with the news surrounding Durov’s arrest, you can follow along below.



OpenAI searches for an answer to its copyright problems

You mean it’s all copyright? | Image: Cath Virginia / The Verge, Getty Images

Why is OpenAI paying publishers if it already took their work? The huge leaps in OpenAI’s GPT model probably came from sucking down the entire written web. That includes entire archives of major publishers such as Axel Springer, Condé Nast, and The Associated Press — without their permission. But for some reason, OpenAI has announced deals with many of these conglomerates anyway.
At first glance, this doesn’t entirely make sense. Why would OpenAI pay for something it already had? And why would publishers, some of whom are lawsuit-style angry about their work being stolen, agree?
I suspect if we squint at these deals long enough, we can see one possible shape of the future of the web forming. Google has been referring less and less traffic outside itself — which threatens the existence of the entire rest of the web. That’s a power vacuum in search that OpenAI may be trying to fill.
The deals
Let’s start with what we know. The deals give OpenAI access to publications in order to, for instance, “enrich users’ experience with ChatGPT by adding recent and authoritative content on a wide variety of topics,” according to the press release announcing the Axel Springer deal. The “recent content” part is clutch. Scraping the web means there’s a date beyond which ChatGPT can’t retrieve information. The closer OpenAI is to real-time access, the closer its products are to real-time results.
The terms around the deals have remained murky, I assume because everyone has been thoroughly NDA’d. Certainly I am in the dark about the specifics of the deal with Vox Media, the parent company of this publication. In the case of the publishers, keeping details private gives them a stronger hand when they pivot to, let’s say, Google and AI startup Anthropic — in the same way that not disclosing your previous salary lets you ask for more money from a new would-be employer.
OpenAI has been offering as little as $1 million to $5 million a year to publishers, according to The Information. There’s been some reporting on the deals with publishers such as Axel Springer, the Financial Times, News Corp, Condé Nast, and the AP. My back-of-the-envelope math based on publicly reported figures suggests that the ceiling on these deals is $10 million per publication per year.
On the one hand, this is peanuts, just embarrassingly small amounts of money. (The company’s former top researcher Ilya Sutskever made $1.9 million in 2016 alone.) On the other hand, OpenAI has already scraped all these publications’ data anyway. Unless and until it is prohibited by courts from doing so, it can just keep doing that. So what, exactly, is it paying for?
Maybe it’s API access, to make scraping easier and more current. As it stands, ChatGPT can’t answer up-to-the-moment queries; API access might change that.
But these payments can be thought of, also, as a way of ensuring publishers don’t sue OpenAI for the stuff it’s already scraped. One major publication has already filed suit, and the fallout could be much more expensive for OpenAI. The legal wrangling will take years.

The New York Times is prepared to litigate
If OpenAI ingested the entirety of the text-based internet, that means a couple things. First, that there’s no way to generate that volume of data again anytime soon, so that may limit any further leaps in usefulness from ChatGPT. (OpenAI notably has not yet released GPT-5.) Second, that a lot of people are pissed.
Many of those people have filed lawsuits, and the most important was filed by The New York Times. The Times’ lawsuit alleges that when OpenAI ingested its work to train its LLMs, it engaged in copyright infringement. Moreover, the product OpenAI created by doing this now competes with the Times and is meant to “steal audiences away from it.”
The Times’ lawsuit says that it tried to negotiate with OpenAI to permit the use of its work, but those negotiations failed. I’m going to take a wild guess based on the math I did above and say it’s because OpenAI offered insultingly low sums of money to the Times. Its excuse? Fair use — a provision that allows the unlicensed use of copyrighted material under certain circumstances.
If the Times wins its lawsuit, it may be entitled to statutory damages, which start at $750 per work. (I know those figures because — as you may have guessed from my use of “statutory” — they are dictated by law. The paper is also asking for compensatory damages, restitution, and attorneys’ fees.) The Times says that OpenAI ingested 10 million total works — so that’s an absolute minimum of $7.5 billion in statutory damages alone. No wonder the Times wasn’t going to cut a deal in the single-digit millions.
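The arithmetic behind that floor is simple enough to check, using only the publicly reported figures above:

```python
# Back-of-the-envelope floor on statutory damages in the Times suit.
# $750 per work is the statutory minimum; 10 million is the Times' count
# of ingested works.
STATUTORY_MIN_PER_WORK = 750
WORKS_INGESTED = 10_000_000

floor = STATUTORY_MIN_PER_WORK * WORKS_INGESTED
print(f"${floor:,}")  # $7,500,000,000
```

Against that $7.5 billion floor, a licensing deal in the single-digit millions per year is a rounding error, which explains why the Times walked away from the table.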
So when OpenAI makes its deals with publishers, they are, functionally, settlements that guarantee the publishers won’t sue OpenAI as the Times is doing. They are also structured so that OpenAI can maintain that its previous use of the publishers’ work was fair use — because OpenAI is going to have to argue that in multiple court cases, most notably the one with the Times.
“I do have every reason to believe that they would like to preserve their rights to use this under fair use,” says Danielle Coffey, the CEO of the News Media Alliance. “They wouldn’t be arguing that in a court if they didn’t.”
It seems like OpenAI is hoping to clean up its reputation a little. If you’re introducing a new product you want people to pay for, it simply can’t come with a ton of baggage and uncertainty. And OpenAI does have baggage: to make its fair use defense, it must admit to taking The New York Times’ copyrighted material without permission — which implicitly suggests it’s taken a lot of other copyrighted material without permission, too. Its argument is just that it is legally entitled to do that.
There’s also a question of accuracy. At this point, we all know generative AI makes stuff up. The publisher deals don’t just provide legitimacy — they may also help feed generative AI information that is less likely to result in embarrassing errors.
Google
There’s more at play than just lawsuit prevention and reputation management. Remember how the deals also give OpenAI up-to-date information? OpenAI recently announced SearchGPT, its very own search engine. AI-native web searching is still nascent, but being able to filter out AI-generated SEO glurge in favor of real sources of reliable information would be a leg up.
Google Search has seriously degraded over the last several years, and the AI chatbot Google has slapped on top of its results hasn’t exactly helped matters. It sometimes gives inaccurate answers while burying links with real information farther down the page. If you want to build a product to upend web search as we know it, now’s the time.
The OpenAI deals give publishers a little more leverage and may eventually force Google to the negotiating table
Google has also managed to piss off publishers — not just by ingesting all their data for its large language models, but also by repurposing itself. Once upon a time, Google Search was a major source of traffic for publishers and a way of directing people to primary sources. But then, Google introduced “snippets,” which meant that people didn’t have to click through to a link in order to find out, for instance, how much to dilute coconut cream to make it a coconut milk equivalent. Because people didn’t go to the original source, publishers didn’t get as many impressions on their ads. Various other changes to Search over the years have meant that Google has referred less traffic to publishers, especially smaller ones.
Now, Google’s AI chatbot sidelines publishers further. But the OpenAI deals give publishers a little more leverage and may eventually force Google to the negotiating table.
Google is not generally in the habit of making paid deals for search; until recently, the arrangement was that publishers got traffic referrals. But for its chatbot, Google did make a deal: with Reddit. For $60 million a year, Google has access to Reddit, cutting off every search engine that didn’t make a similar deal. This is significantly more money than OpenAI is paying publishers, and has cracked open a door that it seems publishers intend to walk through.
Taking over the search market is the kind of thing that could justify all that investment
Google has been getting less useful to the average person for years now. Generative AI threatens to make that worse, by creating sites full of junk text that serve ads. Google doesn’t treat all the sites it crawls the same, of course. But if someone can come up with an alternative that promises higher quality information, the search engine that lost its way may be in real trouble. After all, that’s how Google itself unseated the search engines that came before it, such as AltaVista.
OpenAI burns money, and may lose $5 billion this year. It’s currently in talks for yet another round, valuing the company at over $100 billion. To justify anything close to this valuation, it needs a path to profitability. Taking over the search market is the kind of thing that could justify all that investment.
OpenAI’s SearchGPT isn’t a serious threat yet. It’s still a “prototype,” which means that if it makes an error on the order of telling people to put glue on their pizza, that’s easier to explain away. Unlike Google, a utility for almost every person online, SearchGPT has a limited number of users — so a lot fewer people will see any early mistakes.
The deals with publishers also provide SearchGPT with another reputational cushion. Its competitor Perplexity is under fire for scraping sites that have explicitly banned it. SearchGPT, by contrast, is a collaboration with the publishers who inked deals.
What happens when the courts actually rule?
It’s not totally clear what the pivot to “answer engines” means for publishers’ bottom lines. Maybe some people will continue to click through to see original sources, especially if it isn’t possible to remove hallucinations from large language models. Another possible model comes from Perplexity, which belatedly introduced a revenue-sharing program.
The revenue sharing program makes it a little easier for Perplexity to claim its scraping is fair use (sound familiar?). Perplexity’s situation is a little different than ChatGPT’s; it has created a “Pages” product that has an unfortunate tendency to plagiarize copyrighted material. Forbes and Condé Nast have already sent Perplexity legal nastygrams.
So here’s the big question: what happens when the courts actually rule? Part of the reason these publisher deals exist at all is to reduce the threat of legal action. But their very existence may cut against the argument that scraping copyrighted material for AI is fair use.
Copywrong
A ruling in favor of The New York Times can potentially help both Google and OpenAI, as well as Microsoft, which is backing OpenAI. Maybe this was what Eric Schmidt, former Google CEO, meant when he said entrepreneurs should do whatever they want with copyrighted work and “hire a whole bunch of lawyers to go clean the mess up.”
Courts are unpredictable when it comes to copyright law because it kind of works like porn — judges know a violation when they see it. Plus, if there is indeed a trial between The New York Times and OpenAI, there will almost certainly be an appeal on the verdict, no matter who wins.
Court cases take time, and appeals take more time. It will be years before the courts sort all this out. And that’s plenty of time for a player like OpenAI to develop a dominant business.
She specifically cites Google as being so big that it can force publishers into its terms
Let’s say OpenAI eventually loses. That means all creators of large language models have to pay out. That can get very expensive, very fast — meaning that only the biggest players will be able to compete. It ensconces every established player and potentially destroys a number of open-source LLMs. That makes Google, Microsoft, Amazon, and Meta even more important in the ecosystem than they already dominate — as well as OpenAI and Anthropic, both of which have deals with some of the major players.
There’s also some precedent in how big tech companies navigate the rulings against them, says the News Media Alliance’s Coffey. She specifically cites Google as being so big that it can force publishers into its terms; as if to underscore her point, a few weeks after our interview, Google was legally declared a monopoly in an antitrust case.
Here’s an example of Google’s outsize power: In 2019, the EU gave digital publishers the right to demand payment when Google used snippets of their work. This law, first implemented in France, resulted in Google telling publishers it would use only headlines from their work rather than pay. “And so they sent a bunch of letters to French publications, saying waive your copyright protection if you want to be found,” Coffey said. “They’re almost above the law in that sense” because Google Search is so dominant.
Google is currently using its search dominance to squeeze publishers in a similar way. Blocking its AI from summarizing people’s work means that Google simply won’t list them at all, because it uses the same tool to scrape for web search and AI training.
“That would be a real anticompetitive tragedy at the beginning of the ecosystem.”
So if the Times wins, it seems possible that Google and other major AI players could still demand deals that don’t benefit publishers much — while also destroying competing LLMs. “I’m incredibly worried about the possibility that we are setting up an ecosystem where the only people who are going to be able to afford training data are the biggest companies,” says Nicholas Garcia, policy counsel at Public Knowledge.
In fact, the existence of the suit may be enough to discourage some players from using publicly accessible data to train their models. People might perceive that they can’t train on publicly available data — narrowing competitive dynamics even farther than the bottlenecks that already exist with the supply of compute and experts. “That would be a real anticompetitive tragedy at the beginning of the ecosystem,” Garcia says.
OpenAI isn’t the only defendant in the Times case; the other one is its partner, Microsoft. And if OpenAI does have to pay out a settlement that is, at minimum, hundreds of millions of dollars, that might open it up to an acquisition from Microsoft — which then has all the licensing deals that OpenAI already negotiated, in a world where the licensing deals are required by copyright law. Pretty big competitive advantage. Granted, right now, Microsoft is pretending it doesn’t really know OpenAI because of the government’s newfound interest in antitrust, but that could change by the time the copyright cases have rolled through the system.
And OpenAI may lose because of the licensing deals it negotiated. Those deals created a market for the publishers’ data, and under copyright law, if you’re disrupting such a market, well, that’s not fair use. This particular line of argument most recently came up in a Supreme Court case about an Andy Warhol painting that was found to unfairly compete with the original photograph used to create the painting.
The legal questions aren’t the only ones, of course. There’s something even more basic I’ve been wondering about: do people want answer engines, and if so, are they financially sustainable? Search isn’t just about finding answers — Google is a way of finding a specific website without having to memorize or bookmark the URL. Plus, AI is expensive. OpenAI might fail because it simply can’t turn a profit. As for Google, it could be broken up by regulators because of that monopoly finding.
In that case, maybe the publishers are the smart ones after all: getting the money while the money’s still good.

You mean it’s all copyright? | Image: Cath Virginia / The Verge, Getty Images

Why is OpenAI paying publishers if it already took their work?

The huge leaps in OpenAI’s GPT models probably came from sucking down the entire written web. That includes entire archives of major publishers such as Axel Springer, Condé Nast, and The Associated Press — without their permission. But for some reason, OpenAI has announced deals with many of these conglomerates anyway.

At first glance, this doesn’t entirely make sense. Why would OpenAI pay for something it already had? And why would publishers, some of whom are lawsuit-style angry about their work being stolen, agree?

I suspect if we squint at these deals long enough, we can see one possible shape of the future of the web forming. Google has been referring less and less traffic outside itself — which threatens the existence of the entire rest of the web. That’s a power vacuum in search that OpenAI may be trying to fill.

The deals

Let’s start with what we know. The deals give OpenAI access to publications in order to, for instance, “enrich users’ experience with ChatGPT by adding recent and authoritative content on a wide variety of topics,” according to the press release announcing the Axel Springer deal. The “recent content” part is clutch. A scraped training set has a cutoff date, and ChatGPT can’t answer questions about anything published after it. The closer OpenAI is to real-time access, the closer its products are to real-time results.

On the one hand, this is peanuts, just embarrassingly small amounts of money

The terms around the deals have remained murky, I assume because everyone has been thoroughly NDA’d. Certainly I am in the dark about the specifics of the deal with Vox Media, the parent company of this publication. In the case of the publishers, keeping details private gives them a stronger hand when they pivot to, let’s say, Google and AI startup Anthropic — in the same way that not disclosing your previous salary lets you ask for more money from a new would-be employer.

OpenAI has been offering as little as $1 million to $5 million a year to publishers, according to The Information. There’s been some reporting on the deals with publishers such as Axel Springer, the Financial Times, NewsCorp, Condé Nast, and the AP. My back-of-the-envelope math based on publicly reported figures suggests that the ceiling on these deals is $10 million per publication per year.

On the one hand, this is peanuts, just embarrassingly small amounts of money. (The company’s former top researcher Ilya Sutskever made $1.9 million in 2016 alone.) On the other hand, OpenAI has already scraped all these publications’ data anyway. Unless and until it is prohibited by courts from doing so, it can just keep doing that. So what, exactly, is it paying for?

Maybe it’s API access, to make scraping easier and more current. As it stands, ChatGPT can’t answer up-to-the-moment queries; API access might change that.

But these payments can be thought of, also, as a way of ensuring publishers don’t sue OpenAI for the stuff it’s already scraped. One major publication has already filed suit, and the fallout could be much more expensive for OpenAI. The legal wrangling will take years.

The New York Times is prepared to litigate

If OpenAI ingested the entirety of the text-based internet, that means a couple things. First, that there’s no way to generate that volume of data again anytime soon, so that may limit any further leaps in usefulness from ChatGPT. (OpenAI notably has not yet released GPT-5.) Second, that a lot of people are pissed.

Many of those people have filed lawsuits, and the most important was filed by The New York Times. The Times’ lawsuit alleges that when OpenAI ingested its work to train its LLMs, it engaged in copyright infringement. Moreover, the product OpenAI created by doing this now competes with the Times and is meant to “steal audiences away from it.”

The Times’ lawsuit says that it tried to negotiate with OpenAI to permit the use of its work, but those negotiations failed. I’m going to take a wild guess based on the math I did above and say it’s because OpenAI offered insultingly low sums of money to the Times. Its excuse? Fair use — a provision that allows the unlicensed use of copyrighted material under certain circumstances.

Should the newspaper win its case, OpenAI is going to have to pay an absolute minimum of $7.5 billion in statutory damages alone

If the Times wins its lawsuit, it may be entitled to statutory damages, which start at $750 per work. (I know those figures because — as you may have guessed from my use of “statutory” — they are dictated by law. The paper is also asking for compensatory damages, restitution, and attorneys’ fees.) The Times says that OpenAI ingested 10 million total works — so that’s an absolute minimum of $7.5 billion in statutory damages alone. No wonder the Times wasn’t going to cut a deal in the single-digit millions.
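The arithmetic behind that floor is simple enough to verify. A minimal sketch in Python (the variable names are mine; the $750 statutory floor and the 10-million-work count come from the suit as described above):

```python
# Back-of-the-envelope check on the Times' statutory-damages floor.
# $750 is the statutory minimum per infringed work (17 U.S.C. § 504);
# 10 million is the Times' count of ingested works.
PER_WORK_MINIMUM = 750
WORKS_INGESTED = 10_000_000

floor = PER_WORK_MINIMUM * WORKS_INGESTED
print(f"statutory floor: ${floor:,}")  # statutory floor: $7,500,000,000

# Against the reported $1M-$5M/year licensing offers, even the high
# end of that range would take 1,500 years to add up to the floor:
print(f"{floor // 5_000_000:,} years at $5M/yr")  # 1,500 years at $5M/yr
```

Which is the whole negotiating story in two lines: the damages floor is three orders of magnitude above the annual deal money on the table.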

So when OpenAI makes its deals with publishers, they are, functionally, settlements that guarantee the publishers won’t sue OpenAI as the Times is doing. They are also structured so that OpenAI can maintain that its previous use of the publishers’ work was fair use — because OpenAI is going to have to argue exactly that in multiple court cases, most notably the one with the Times.

“I do have every reason to believe that they would like to preserve their rights to use this under fair use,” says Danielle Coffey, the CEO of the News Media Alliance. “They wouldn’t be arguing that in a court if they didn’t.”

It seems like OpenAI is hoping to clean up its reputation a little. If you’re introducing a new product you want people to pay for, it simply can’t come with a ton of baggage and uncertainty. And OpenAI does have baggage: to make its fair use defense, it must admit to taking The New York Times’ copyrighted material without permission — which implicitly suggests it’s taken a lot of other copyrighted material without permission, too. Its argument is just that it is legally entitled to do that.

There’s also a question of accuracy. At this point, we all know generative AI makes stuff up. The publisher deals don’t just provide legitimacy — they may also help feed generative AI information that is less likely to result in embarrassing errors.

Google

There’s more at play than just lawsuit prevention and reputation management. Remember how the deals also give OpenAI up-to-date information? OpenAI recently announced SearchGPT, its very own search engine. AI-native web searching is still nascent, but being able to filter out AI-generated SEO glurge in favor of real sources of reliable information would be a leg up.

Google Search has seriously degraded over the last several years, and the AI chatbot Google has slapped on top of its results hasn’t exactly helped matters. It sometimes gives inaccurate answers while burying links with real information farther down the page. If you want to build a product to upend web search as we know it, now’s the time.

The OpenAI deals give publishers a little more leverage and may eventually force Google to the negotiating table

Google has also managed to piss off publishers — not just by ingesting all their data for its large language models, but also by repurposing itself. Once upon a time, Google Search was a major source of traffic for publishers and a way of directing people to primary sources. But then, Google introduced “snippets,” which meant that people didn’t have to click through to a link in order to find out, for instance, how much to dilute coconut cream to make it a coconut milk equivalent. Because people didn’t go to the original source, publishers didn’t get as many impressions on their ads. Various other changes to Search over the years have meant that Google has referred less traffic to publishers, especially smaller ones.

Now, Google’s AI chatbot sidelines publishers further. But the OpenAI deals give publishers a little more leverage and may eventually force Google to the negotiating table.

Google is not generally in the habit of making paid deals for search; until recently, the arrangement was that publishers got traffic referrals. But for its chatbot, Google did make a deal: with Reddit. For $60 million a year, Google has access to Reddit, cutting off every search engine that didn’t make a similar deal. This is significantly more money than OpenAI is paying publishers, and has cracked open a door that it seems publishers intend to walk through.

Taking over the search market is the kind of thing that could justify all that investment

Google has been getting less useful to the average person for years now. Generative AI threatens to make that worse, by creating sites full of junk text that serve ads. Google doesn’t treat all the sites it crawls the same, of course. But if someone can come up with an alternative that promises higher quality information, the search engine that lost its way may be in real trouble. After all, that’s how Google itself unseated the search engines that came before it, such as AltaVista.

OpenAI burns money, and may lose $5 billion this year. It’s currently in talks for yet another round, valuing the company at over $100 billion. To justify anything close to this valuation, it needs a path to profitability. Taking over the search market is the kind of thing that could justify all that investment.

OpenAI’s SearchGPT isn’t a serious threat yet. It’s still a “prototype,” which means that if it makes an error on the order of telling people to put glue on their pizza, that’s easier to explain away. Unlike Google, a utility for almost every person online, SearchGPT has a limited number of users — so a lot fewer people will see any early mistakes.

The deals with publishers also provide SearchGPT with another reputational cushion. Its competitor Perplexity is under fire for scraping sites that have explicitly banned it. SearchGPT, by contrast, is a collaboration with the publishers who inked deals.

What happens when the courts actually rule?

It’s not totally clear what the pivot to “answer engines” means for publishers’ bottom lines. Maybe some people will continue to click through to see original sources, especially if it isn’t possible to remove hallucinations from large language models. Another possible model comes from Perplexity, which belatedly introduced a revenue-sharing program.

That revenue-sharing program makes it a little easier for Perplexity to claim its scraping is fair use (sound familiar?). Perplexity’s situation is a little different from ChatGPT’s: it has created a “Pages” product with an unfortunate tendency to plagiarize copyrighted material. Forbes and Condé Nast have already sent Perplexity legal nastygrams.

So here’s the big question: what happens when the courts actually rule? Part of the reason these publisher deals exist at all is to reduce the threat of legal action. But their very existence may cut against the argument that scraping copyrighted material for AI is fair use.

Copywrong

A ruling in favor of The New York Times could ultimately help Google and OpenAI, as well as Microsoft, which is backing OpenAI. Maybe this was what Eric Schmidt, former Google CEO, meant when he said entrepreneurs should do whatever they want with copyrighted work and “hire a whole bunch of lawyers to go clean the mess up.”

Courts are unpredictable when it comes to copyright law because it kind of works like porn — judges know a violation when they see it. Plus, if there is indeed a trial between The New York Times and OpenAI, there will almost certainly be an appeal on the verdict, no matter who wins.

Court cases take time, and appeals take more time. It will be years before the courts sort all this out. And that’s plenty of time for a player like OpenAI to develop a dominant business.

She specifically cites Google as being so big that it can force publishers into its terms

Let’s say OpenAI eventually loses. That means every creator of large language models has to pay out. That gets very expensive, very fast — meaning that only the biggest players will be able to compete. It entrenches every established player and potentially destroys a number of open-source LLMs. That makes Google, Microsoft, Amazon, and Meta even more dominant in an ecosystem they already dominate — along with OpenAI and Anthropic, both of which have deals with some of the major players.

There’s also some precedent in how big tech companies navigate the rulings against them, says the News Media Alliance’s Coffey. She specifically cites Google as being so big that it can force publishers into its terms; as if to underscore her point, a few weeks after our interview, Google was legally declared a monopoly in an antitrust case.

Here’s an example of Google’s outsize power: In 2019, the EU gave digital publishers the right to demand payment when Google used snippets of their work. This law, first implemented in France, resulted in Google telling publishers it would use only headlines from their work rather than pay. “And so they sent a bunch of letters to French publications, saying waive your copyright protection if you want to be found,” Coffey said. “They’re almost above the law in that sense” because Google Search is so dominant.

Google is currently using its search dominance to squeeze publishers in a similar way. If a publisher blocks Google’s AI from summarizing its work, Google simply won’t list it at all, because Google uses the same crawler for web search and AI training.
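The squeeze is visible in robots.txt, the file publishers use to tell crawlers what they may fetch. Google does document a separate “Google-Extended” token that opts a site out of Gemini model training, but AI features inside Search ride on Googlebot itself, so the only complete opt-out also delists the site. A sketch of the trade-off (the comments are mine):

```text
# Opting out of Gemini model training has its own documented token:
User-agent: Google-Extended
Disallow: /

# But content surfaced by AI features in Search is fetched by the
# same crawler as Search itself, so blocking that entirely means:
User-agent: Googlebot
Disallow: /
# ...which also removes the site from Google Search.
```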

“That would be a real anticompetitive tragedy at the beginning of the ecosystem.”

So if the Times wins, it seems possible that Google and other major AI players could still demand deals that don’t benefit publishers much — while also destroying competing LLMs. “I’m incredibly worried about the possibility that we are setting up an ecosystem where the only people who are going to be able to afford training data are the biggest companies,” says Nicholas Garcia, policy counsel at Public Knowledge.

In fact, the existence of the suit may be enough to discourage some players from using publicly accessible data to train their models. People might perceive that they can’t train on publicly available data — narrowing competitive dynamics even further than the bottlenecks that already exist with the supply of compute and experts. “That would be a real anticompetitive tragedy at the beginning of the ecosystem,” Garcia says.

OpenAI isn’t the only defendant in the Times case; the other one is its partner, Microsoft. And if OpenAI does have to pay out a settlement that is, at minimum, hundreds of millions of dollars, that might open it up to an acquisition from Microsoft — which then has all the licensing deals that OpenAI already negotiated, in a world where the licensing deals are required by copyright law. Pretty big competitive advantage. Granted, right now, Microsoft is pretending it doesn’t really know OpenAI because of the government’s newfound interest in antitrust, but that could change by the time the copyright cases have rolled through the system.

And OpenAI may lose because of the licensing deals it negotiated. Those deals created a market for the publishers’ data, and under copyright law, if you’re disrupting such a market, well, that’s not fair use. This particular line of argument most recently came up in a Supreme Court case about an Andy Warhol painting that was found to unfairly compete with the original photograph used to create the painting.

The legal questions aren’t the only ones, of course. There’s something even more basic I’ve been wondering about: do people want answer engines, and if so, are they financially sustainable? Search isn’t just about finding answers — Google is a way of finding a specific website without having to memorize or bookmark the URL. Plus, AI is expensive. OpenAI might fail because it simply can’t turn a profit. As for Google, it could be broken up by regulators because of that monopoly finding.

In that case, maybe the publishers are the smart ones after all: getting the money while the money’s still good.


Backpage co-founder sentenced to five years in prison

Lacey still faces about 30 other charges related to prostitution facilitation and money laundering. | Cath Virginia / The Verge | Photos from Getty Images

Michael Lacey, a founder of the defunct classified site Backpage.com, received a five-year prison sentence on Wednesday and was fined $3 million. Lacey was found guilty of money laundering last year in a sweeping case that alleged Backpage executives promoted and profited from prostitution.
Lacey was convicted on a single count of international concealment money laundering in November 2023 but was acquitted of 50 other charges related to prostitution facilitation and money laundering due to insufficient evidence. He still faces about 30 related charges, according to the Associated Press. Two other Backpage executives — former chief financial officer John Brunst and executive vice president Scott Spear — received 10-year prison sentences on Wednesday after being convicted of money laundering and prostitution facilitation last year.
“The defendants and their conspirators obtained more than $500 million from operating an online forum that facilitated the sexual exploitation of countless victims,” said Principal Deputy Assistant Attorney General Nicole M. Argentieri in a Department of Justice press release. “The defendants thought they could hide their illicit proceeds by laundering the funds through shell companies in foreign countries. But they were wrong.”
The case is one of many that Backpage has faced regarding sexual exploitation over the last decade, having shuttered its “Adult Services” ads section in 2017 in response to pressure from lawmakers and critics. All three men have been ordered to turn themselves in by September 11th to begin serving their sentences. According to The New York Times, both Lacey and Brunst are planning to appeal the sentencing.



Hyundai’s electrified N Vision 74 is headed for production someday soon

Hyundai N Vision 74 Concept shown along with the Pony Coupe Concept from 1974 | Image: Hyundai

Two years ago, the N Vision 74 coupe was a slick embodiment of Hyundai’s “high-performance vision of electrification,” and we’ve hoped ever since to see it as a real vehicle. That seems way more likely now that it got namechecked in Hyundai’s plan to launch 21 fully electric models by 2030. This slide (below) is from the company’s 2024 CEO Investor Day presentation, explaining the range of vehicles the company will launch and listing the Vision 74 and the Genesis Magma concept.
The plan also includes affordable EVs like its Inster / Casper subcompact, the three-row Ioniq 9 that’s next up to launch in the US, luxury EVs from Genesis, and, finally, high-performance models.

Image: Hyundai
Hyundai EV Full Lineup slide

Executives didn’t directly mention the N Vision 74 as the slide was shown. In response to an inquiry from The Verge, PR director Michael Stewart pointed to the slides and video presentation as all of the information available at this time.

For the Vision 74, the company cites two inspirations: Hyundai’s 1974 Pony Coupe concept, which shared a designer with the DMC DeLorean, and a virtual supercar, the Hyundai N 2025 Vision Gran Turismo from 2015.
The Vision 74’s link to that virtual supercar included a hybrid hydrogen fuel cell system. Still, this announcement wasn’t directly connected to Hyundai’s hybrid plans, so in whatever form the real car arrives, it may be very different from what we’ve seen so far. Of course, in other parts of the presentation, Hyundai talked up plans for extended-range electric vehicles (EREVs) that use a gas engine to recharge the battery pack, with the company offering a range of powertrain options “including ICE, hybrids, plug-in hybrids, EVs and hydrogen fuel cell vehicles.”

Read More 
