verge-rss

Supreme Court decision means Biden administration can keep talking to social media companies

Illustration by Cath Virginia / The Verge | Photos via Getty Images

On Wednesday, the Supreme Court issued its decision in Murthy v. Missouri, a case spurred by conservative state attorneys general about whether the Biden administration illegally coerced social media companies to remove speech it didn’t like. In a 6-3 decision, the court reversed the decision by the Fifth Circuit Court of Appeals, which had found unconstitutional coercion in the government’s conduct. The Supreme Court held that the plaintiffs did not adequately establish standing — that is, their right to sue in the first place — and sent the case back to the lower courts for a new decision consistent with the SCOTUS opinion.

At its core, the case is about whether the Biden administration crossed the line from legal persuasion to illegal coercion in its communications with tech companies about things like voting or health misinformation during the pandemic. During oral arguments this year, several justices seemed uneasy with the idea of placing sweeping restrictions on the government’s ability to interact with social media platforms.

“The plaintiffs, without any concrete link between their injuries and the defendants’ conduct, ask us to conduct a review of the years-long communications between dozens of federal officials, across different agencies, with different social-media platforms, about different topics,” Justice Amy Coney Barrett wrote in the opinion. “This Court’s standing doctrine prevents us from ‘exercis[ing such] general legal oversight’ of the other branches of Government.”

The Supreme Court said that the Fifth Circuit “glossed over complexities in the evidence” by “attributing every platform decision at least in part to the defendants,” meaning the federal government. While the majority opinion acknowledges that government actors “played a role” at times in some of the social media platforms’ content moderation decisions, it says that “the evidence indicates that the platforms had independent incentives to moderate content and often exercised their own judgment.”

On top of that, the timing of the platforms’ content moderation decisions in question casts doubt on the causal relationship between government pressure and the platforms’ choices, according to the court. “Complicating the plaintiffs’ effort to demonstrate that each platform acted due to Government coercion, rather than its own judgment, is the fact that the platforms began to suppress the plaintiffs’ COVID–19 content before the defendants’ challenged communications started,” according to the majority.

The majority also says the states largely failed to link platforms’ restrictions to the federal government’s communications with the companies. For example, Facebook’s COVID-related restrictions on a “healthcare activist” predated some of the communications the federal government had with the company, according to the court. “Though she makes the best showing of all the plaintiffs, most of the lines she draws are tenuous,” the majority wrote.

Separate from the Supreme Court case, the question about government coercion has also become a focus of the House Judiciary Committee. Chair Jim Jordan (R-OH) attended oral arguments in the case and recently released a report with internal communications among high-ranking tech executives about how they responded to government outreach about posts officials deemed harmful to Americans.

This story is developing.


Oops, a Meta ‘error’ limited political content on Instagram and Threads

Illustration by Kristen Radtke / The Verge

After Democratic strategist Keith Edwards urged Threads users to check the Instagram setting limiting political content from people they don’t follow, many people noticed theirs had abruptly changed. Journalist Taylor Lorenz confirmed that her settings had changed as well and noted that they appeared to reset every time she force-closed the Instagram app, which we’ve also confirmed on our phones.

Meta says the behavior was unintentional. “This was an error and should not have happened,” Meta communications director Andy Stone posted on Threads. “We’re working on getting it fixed.”

Meta introduced the opt-out setting, which limits recommendations of “political content” on Instagram and Threads, in March. At the time, the company said it wasn’t limiting political content from reaching people on Instagram but instead simply giving users the ability to stop seeing posts that don’t interest them.

“Our goal is to preserve the ability for people to choose to interact with political content, while respecting each person’s appetite for it,” Instagram head Adam Mosseri said in a Threads post announcing the change. When Threads first rolled out, Mosseri told The Verge’s Alex Heath that the app would “not do anything to encourage” politics or “hard news.”

The opt-out setting, however, was on by default, and Instagram never sent users in-app notifications alerting them of the change.

A support page describes how the setting, found only in the Instagram apps, is supposed to work. Under a user’s profile menu for content preferences, there’s an option for political content, where they can turn the limit off and confirm that choice. Changing the setting and closing the app caused it to reset for everyone here who tried it, and we’ll update this article if that changes.
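
Bugs with this shape are common in client apps: a shipped default gets merged over the user’s stored choice every time the app relaunches. The sketch below is purely hypothetical — we have no visibility into Meta’s actual code — but it shows one way a preference can keep snapping back to its default:

```python
# Hypothetical illustration only; this is not Meta's actual implementation.
DEFAULT_PREFS = {"limit_political_content": True}  # setting ships enabled

def load_prefs(stored: dict) -> dict:
    # Bug: defaults are merged OVER stored values on every launch,
    # silently overwriting the user's opt-out.
    return {**stored, **DEFAULT_PREFS}
    # The fix would be the reverse merge, filling in only unset keys:
    # return {**DEFAULT_PREFS, **stored}

user_choice = {"limit_political_content": False}  # user turned the limit off
print(load_prefs(user_choice))  # {'limit_political_content': True} — reset
```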


What the RIAA lawsuits mean for AI and copyright

Cath Virginia / The Verge | Photo from Getty Images

Udio and Suno are not, despite their names, the hottest new restaurants on the Lower East Side. They’re AI startups that let people generate impressively real-sounding songs — complete with instrumentation and vocal performances — from prompts. And on Monday, a group of major record labels sued them, alleging copyright infringement “on an almost unimaginable scale,” claiming that the companies can only do this because they illegally ingested huge amounts of copyrighted music to train their AI models.

These two lawsuits contribute to a mounting pile of legal headaches for the AI industry. Some of the most successful firms in the space have trained their models with data acquired via the unsanctioned scraping of massive amounts of information from the internet. ChatGPT, for example, was initially trained on millions of documents collected from links posted to Reddit.

These lawsuits, which are spearheaded by the Recording Industry Association of America (RIAA), tackle music rather than the written word. But like The New York Times’ lawsuit against OpenAI, they pose a question that could reshape the tech landscape as we know it: can AI firms simply take whatever they want, turn it into a product worth billions, and claim it was fair use?

“That’s the key issue that’s got to get sorted out, because it cuts across all sorts of different industries,” said Paul Fakler, a partner at the law firm Mayer Brown who specializes in intellectual property cases.

What are Udio and Suno?

Both Udio and Suno are fairly new, but they’ve already made a big splash. Suno was launched in December by a Cambridge-based team that previously worked for Kensho, another AI company. It quickly entered into a partnership with Microsoft that integrated Suno with Copilot, Microsoft’s AI chatbot.

Udio was launched just this year, raising millions of dollars from heavy hitters in the tech investing world (Andreessen Horowitz) and the music world (Will.i.am and Common, for example). Udio’s platform was used by comedian King Willonius to generate “BBL Drizzy,” the Drake diss track that went viral after producer Metro Boomin remixed it and released it to the public for anyone to rap over.

Why is the music industry suing Udio and Suno?

The RIAA’s lawsuits use lofty language, saying that this litigation is about “ensuring that copyright continues to incentivize human invention and imagination, as it has for centuries.” This sounds nice, but ultimately, the incentive it’s talking about is money.

The RIAA claims that generative AI poses a risk to record labels’ business model. “Rather than license copyrighted sound recordings, potential licensees interested in licensing such recordings for their own purposes could generate an AI-soundalike at virtually no cost,” the lawsuits state, adding that such services could “[flood] the market with ‘copycats’ and ‘soundalikes,’ thereby upending an established sample licensing business.”

The RIAA is also asking for damages of $150,000 per infringing work, which, given the massive corpuses of data that are typically used to train AI systems, is a potentially astronomical number.
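
To make that scale concrete, here’s a quick back-of-the-envelope calculation. The catalog sizes are hypothetical — the complaints don’t say how many recordings were allegedly ingested:

```python
# Statutory-damages arithmetic; the work counts are assumptions,
# not figures from the RIAA's complaints.
PER_WORK = 150_000  # maximum statutory damages per infringed work, in dollars

for works in (10_000, 100_000, 1_000_000):
    print(f"{works:>9,} works -> ${works * PER_WORK:,}")

# Output:
#    10,000 works -> $1,500,000,000
#   100,000 works -> $15,000,000,000
# 1,000,000 works -> $150,000,000,000
```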

Does it matter that AI-generated songs are similar to real ones?

The RIAA’s lawsuits included examples of music generated with Suno and Udio and comparisons of their musical notation to existing copyrighted works. In some cases, the generated songs had small phrases that were similar — for instance, one started with the sung line “Jason Derulo” in the exact cadence that the real-life Jason Derulo begins many of his songs. Others had extended sequences of similar notation, as in the case of a track inspired by Green Day’s “American Idiot.”

One track started with the sung line “Jason Derulo” in the exact cadence that the real-life Jason Derulo begins many of his songs

This seems pretty damning, but the RIAA isn’t claiming that these specific soundalike tracks infringe copyright — rather, it’s claiming that the AI companies used copyrighted music as a part of their training data.

Neither Suno nor Udio has made its training dataset public. And both firms are vague about the sources of their training data — though that’s par for the course in the AI industry. (OpenAI, for example, has dodged questions about whether YouTube videos were used to train its Sora video model.)

The RIAA’s lawsuits note that Udio CEO David Ding has said the company trains on the “best quality” music that is “publicly available” and that a Suno co-founder wrote in Suno’s official Discord that the company trains with a “mix of proprietary and public data.”

Fakler said that including the examples and notation comparisons in the lawsuit is “wacky” and goes “way beyond” what would be necessary to claim legitimate grounds for a lawsuit. For one, the labels may not own the composition rights of the songs allegedly ingested by Udio and Suno for training. Rather, they own the copyright to the sound recordings, so showing similarity in musical notation doesn’t necessarily help in a copyright dispute. “I think it’s really designed for optics for PR purposes,” Fakler said.

On top of that, Fakler noted, it’s legal to create a soundalike audio recording if you have the rights to the underlying song.

When reached for comment, a Suno spokesperson shared a statement from CEO Mikey Shulman stating that its technology is “transformative” and that the company does not allow prompts that name existing artists. Udio did not respond to a request for comment.

Is it fair use?

But even if Udio and Suno used the record labels’ copyrighted works to train their models, there’s a very big question that could override everything else: is this fair use?

Fair use is a legal defense that allows for the use of copyrighted material in the creation of a meaningfully new or transformative work. The RIAA argues that the startups cannot claim fair use, saying that the outputs of Udio and Suno are meant to replace real recordings, that they are generated for a commercial purpose, that the copying was extensive rather than selective, and finally, that the resulting product poses a direct threat to labels’ business.

In Fakler’s opinion, the startups have a solid fair use argument so long as the copyrighted works were only temporarily copied and their defining features were extracted and abstracted into the weights of an AI model.

“It’s extracting all of that stuff out, just like a musician would learn those things by playing music.”

“That’s how computers work — it has to make these copies, and the computer is then analyzing all of this data so they can extract the non-copyrighted stuff,” he said. “How do we construct songs that are going to be understood as music by a listener, and have various features that we commonly find in popular music? It’s extracting all of that stuff out, just like a musician would learn those things by playing music.”

“To my mind, that is a very strong fair use argument,” said Fakler.
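
Fakler’s analogy is easier to see with a toy example. The snippet below is a deliberately crude sketch — real generative music models work nothing like this — but it illustrates the “features, not copies” intuition: the waveform is boiled down to a few abstract statistics, and the recording itself isn’t retained.

```python
# Toy feature extraction; purely illustrative and unrelated to how
# Suno or Udio actually train their models.
import numpy as np

sr = 22050  # sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)
# Stand-in for a copyrighted recording: an A4 note plus a quieter E5.
audio = np.sin(2 * np.pi * 440.0 * t) + 0.3 * np.sin(2 * np.pi * 659.25 * t)

# "Training": keep only the eight strongest frequency components.
spectrum = np.abs(np.fft.rfft(audio))
freqs = np.fft.rfftfreq(sr, d=1.0 / sr)
features = sorted(freqs[np.argsort(spectrum)[-8:]])

# The 22,050-sample waveform is gone; a handful of numbers remain,
# clustered around 440 Hz and ~659 Hz.
print(features)
```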

Of course, a judge or a jury may not agree. And what is dredged up in the discovery process — if these lawsuits should get there — could have a big effect on the case. Which music tracks were taken and how they ended up in the training set could matter, and specifics about the training process might undercut a fair use defense.

We are all in for a very long journey as the RIAA’s lawsuits, and similar ones, proceed through the courts. From text and photos to now sound recordings, the question of fair use looms over all these cases and the AI industry as a whole.


Rimac is shifting from electric supercars to robotaxis

Verne’s robotaxi. | Image: Verne

A new robotaxi service is coming to Croatia, courtesy of the country’s leading supercar company, Rimac. The service will be called Verne, named for French novelist and poet Jules Verne, and will launch in Zagreb in 2026, the company said.

It’s an interesting pivot for a company that has been on a rocket-ship trajectory over the last few years. Founded by Mate Rimac in a garage as a one-man operation, Rimac has since become a highly desirable brand, with many legacy automakers calling upon the startup to help them build their own electric supercars. In addition to making the record-breaking Nevera hypercar, Rimac also took control of Bugatti from Volkswagen in 2021 in a surprise move that created a new company called Bugatti Rimac.

Image: Verne

And now the company of the 256 mph electric hypercar is getting ready to launch its own robotaxi. I assure you, this is less random than it seems on the surface. Rimac has been working on autonomous technology since 2017, and in 2021, the company received €200 million from the EU to develop robotaxis as part of a €6.3 billion recovery plan for Croatia. (The incentive package opened the company up to a lot of criticism, including one member of the Croatian parliament calling Mate Rimac a fraud and “the Balkan Elizabeth Holmes.”) The company has also received funding from Hyundai and Kia.

Today, Rimac is out to prove that the money isn’t going to waste. Previously dubbed Project 3 Mobility, the newly renamed Verne will be led by Rimac’s friend Marko Pejković as CEO and Adriano Mudri, the designer of Nevera, as chief designer. The company said it chose to honor the author of such classics as Twenty Thousand Leagues Under the Sea and Journey to the Center of the Earth because “he used the theme of travel as the driving force in his storytelling.”

Image: Verne

The robotaxi will be fully electric and rely on autonomous technology from Mobileye, the Intel-owned company that supplies autonomous and advanced driver-assist technology to many automakers. Verne will use Mobileye Drive, a self-driving system that utilizes the Israeli company’s EyeQ system-on-a-chip, as well as a data crowdsourcing program called Road Experience Management, or REM, which uses real-time data from Mobileye-equipped vehicles to build out a global 3D map.

The vehicle is designed for Level 4 autonomy, so it lacks traditional controls like a steering wheel and pedals. Gone also are other familiar touchstones, like windshield wipers and side-view mirrors, in the interest of reducing drag.

Image: Verne

Verne’s first vehicle looks radically different from most self-driving cars on the road today. Rather than opt for a retrofitted minivan or a toaster-shaped shuttle with protruding sensors, the Verne robotaxi is sleeker and much smaller with the overall appearance of a two-door hatchback. The expansive greenhouse and sloping windshield enclose an interior that is more luxurious than your average robotaxi. And the vehicle’s two sliding doors are certainly eye-catching, with Rimac saying they were designed for ease of entry.

The decision to go with a two-seater may strike some as curious, considering many robotaxi operators use more high-capacity vehicles. After all, more seats equals more fares, which means more revenue. But Verne’s chief designer Mudri cites data that shows “9 out of 10 rides are used by 1 or 2 people. Therefore, we can satisfy most of all trips with a two-seater and create unmatched interior space in a compact-sized vehicle.”

Image: Verne

Image: Verne

Reducing the number of seats will make for a more spacious, luxurious ride, Verne says. But the company’s robotaxis won’t just be accessible to the superrich; in a statement, Mate Rimac promised that Verne’s autonomous ridehailing service will be “affordable for all.”

Without a steering wheel or other clunky controls, Rimac was free to go big on its interior screen. The 43-inch display nearly spans the width of the dashboard and includes widgets for media, cabin controls, and weather. The central widget is devoted to navigation, with a design that looks similar to Tesla’s or Waymo’s: an illuminated line stretches out from the virtual vehicle to help the rider keep track of the trip.

Image: Verne

Image: Verne

Verne says riders will be able to listen to their own music or watch movies on the widescreen display. The vehicle’s Dolby Atmos sound system uses 17 speakers located throughout the cabin.

The robotaxi can be summoned via a mobile app, much like Uber or Waymo. Through the app, customers can customize certain settings, like temperature, lighting, and even scent, before their vehicle even shows up. On the backend, all the vehicles are connected, enabling Verne to optimize fleet management tasks.

Image: Verne

Image: Verne

Verne says it will build centrally located vehicle depots called “Motherships” in the cities in which it operates. These will be hubs for the robotaxis to be cleaned, charged, and maintained. The vehicles themselves will be produced at a factory in Croatia that has yet to be built.

After Zagreb, Verne says it will roll out its robotaxi service in other European cities — first in the UK and Germany, and then later in the Middle East. While some companies have been testing autonomous vehicles in Europe, any commercial service appears to be a long way off. Meanwhile, Alphabet’s Waymo is operating in several major cities in the US, and Baidu is similarly running hundreds of driverless cars in China.

Verne is working to become the first major robotaxi operator outside those two countries. The company has already signed agreements with 11 cities in the EU, UK, and the Middle East and is negotiating with more than 30 cities worldwide, it says. And it aims to “complement public transport, not compete against it.”

“In the longer term, Verne should help remove the need for a second or third car in the household that takes up parking spaces, is used rarely, and is a significant expense,” the company says.


Ultimate Ears announces new Everboom speaker, Boom 4 with USB-C, and more

Image: Ultimate Ears

How many Boom-branded speakers is too many? Can there even be such a thing? Ultimate Ears seems to be on a mission to find out. The company just announced the $249.99 Everboom — now the sixth speaker in its lineup. In both size and price, this one slots in between the Megaboom speaker and the $299.99 Epicboom released late last year.

The Everboom has UE’s typical traits: it’s IP67 (dustproof and waterproof), it’ll float should you drop it into a pool or lake, and it comes with a removable carabiner. There’s an outdoor boost mode that adjusts the sound profile for wide open environments while also leveling up the bass response. The battery lasts for up to 20 hours and there’s easy NFC pairing, but that’s about it for the feature list.

Aside from the new speaker, Ultimate Ears is also updating the Boom, Wonderboom, and Megaboom. All of them are available in new colors, and more importantly, they’re all finally — and yes, the word is deserved here — making the switch to USB-C. Hallelujah. The Boom and Megaboom are getting “enhanced deep bass radiators to unlock an even bigger sound.” And the Wonderboom is getting one new feature all to itself: a podcast mode that’s tuned “for enhanced listening to favorite hosts and stories.”

UE products, both new and old, are picking up an all-new software trick called megaphone. “When a user taps the megaphone button and speaks into their phone, their voice projects through the speaker — perfect to call people to the dance floor, belt out a few bars, or hear their voice echo across the mountains,” the company wrote in its press release. I’m already dreading how this might be used on the New York City subways, but maybe I just assume the worst in people.

With the Everboom and the hardware upgrades to UE’s other speakers, here’s how the lineup now shakes out:

Hyperboom: $399.99
Epicboom: $299.99
Everboom: $249.99 (black, blue, purple, red)
Megaboom 4: $199.99 (black, blue, purple, red)
Boom 4: $149.99 (black, blue, purple, red)
Wonderboom 4: $99.99 (black, pink, blue, and “joyous bright” — whatever that is)

All of the new products are available to order beginning today.


Meta tests Vision Pro-like freeform virtual screen placement for Quest headsets

Photo by Becca Farsace / The Verge

Meta is testing a feature for its Quest headsets that allows you to place windows freely, similar to the Apple Vision Pro. Multitasking with multiple windows has been part of Meta Horizon OS (formerly Meta Quest OS) for a few years now, but currently, it only supports three virtual windows docked in a side-by-side layout.

RoadtoVR points out this demonstration video from a data miner named Luna, who spotted the experimental feature in version 67 of the Meta Quest Public Test Channel.

Meta Quest OS v67 PTC

Settings > Experimental Features > New Window Layout pic.twitter.com/jDq0hdoCOV

— Luna (@Lunayian) June 25, 2024

It brings the Quest 3, in particular, a step closer to Apple’s spatial computing when used in mixed reality mode, but from the video, it doesn’t seem to work quite the same way. You can freely move up to three windows from 2D apps — such as the browser or OS windows like your library and settings — around your space and keep another three docked.

Other demos suggest that the windows will only remember their placement within a limited distance and return to their default positions should you switch orientation or reset the view. We haven’t tested it yet ourselves to know the full limitations here, but it looks promising.

The update also lets you switch between curved and flat windows and adds a dimmer that lowers the brightness of virtual environments while you’re using 2D apps. (The latter doesn’t yet work for passthrough mode.)

The Apple Vision Pro allows you to move windows around whichever space you’re in and keep them locked in place even while you move around and after you take the headset off. That way, you can have a window sitting next to your refrigerator and another positioned alongside the TV in your living room, and then walk to and from the windows as if they’re actual objects.

I’ve seen more ads lately that highlight the Quest 3’s productivity potential instead of just the gaming-centric ones. While Meta’s headset might not handle that with the same pizzazz as the Vision Pro just yet, considering it costs $3,000 less, it really doesn’t have to.


The owner of Toys ‘R’ Us just used OpenAI’s Sora to animate the zombie brand

Image: Toys R Us

Do you have fond memories of Toys “R” Us, back before private equity helped lay it to waste? If so, I’m really curious what you think of this partially AI-generated video that the zombie brand is calling “the first-ever brand film using OpenAI’s new text-to-video tool, Sora.”

OpenAI’s Sora wowed the world in February with photorealistic videos created by generative AI — and again when the company’s CTO refused to say where Sora was getting its training data from. (YouTube is the going theory.)

But though OpenAI has reportedly been pitching Hollywood on the tech, one of the first entities to publicly bite is brand management firm WHP Global, which currently licenses the Toys “R” Us brand to stores like Macy’s and is also exploring larger stores. Almost every Macy’s department store now contains a branded toy section, but those are typically a few paltry aisles of toys rather than the toy warehouses that once fueled kids’ dreams.

While some headlines are calling this the first commercial produced with OpenAI’s Sora, the press release doesn’t suggest it’ll air anywhere other than toysrus.com, though it also premiered in front of an audience of ad agency execs at the 2024 Cannes Lions festival in France last week.

The PR also doesn’t try to claim it’s entirely generated by AI. Native Foreign, the creative agency that produced the footage, had “about a dozen people” working on the video and applied “corrective VFX” on top, according to director Nik Kleverov. Sora “got us about 80-85% of the way there,” he wrote on X:

Thanks Sid. Yes and no. About a dozen people at our shop worked at it. The entire “base” is Sora. Got us about 80-85% of the way there. Corrective VFX on top (not unlike every “IRL” commercial I shoot nowadays). Saved some time, took more in other ways. BTS breakdown soon

— Nik Kleverov (@kleverov) June 25, 2024

It appears that Native Foreign even reused an earlier shot it generated with Sora as part of the Toys “R” Us project. Here’s a bicycle repair shop the company showed off in March, compared to the final corrected clip:

Image: Native Foreign
Note the typos in the sign.

Image: Native Foreign
The Toys “R” Us version.

The old Toys “R” Us also turned to hot technology amid its bankruptcy filings back in 2017, releasing an AR app to attract customers. But 2018 was the end of the road for the giant toy store, as it began closing or selling its final 800 locations. Some dedicated Toys “R” Us toy stores have recently launched in the UK, however, as well as outlets inside of WHSmith stores.

WHP Global has also opened two larger Toys “R” Us stores in the US and has publicly planned to open as many as 24 stores in 2024, as well as locations in airports and aboard cruise ships.

Disclosure: Vox Media, The Verge’s parent company, has a technology and content deal with OpenAI.


Samsung just announced a date for its next Unpacked

In case you were wondering, yes, there will be lots of AI. | Image: Samsung

Samsung’s next Unpacked summer launch event will take place on July 10th in Paris, France, the company announced on Tuesday. The animation accompanying the invitation hints at foldables, and the invite itself removes all doubt: “Prepare to discover the power of Galaxy AI, now infused into the latest Galaxy Z series and the entire Galaxy ecosystem.” But we’re also on the lookout for something of a different shape: the Galaxy Ring.

Rumors indicate that the Galaxy Z Fold 6 and Z Flip 6 will be fairly minor upgrades, with the Z Flip 6 getting a slightly bigger battery and the Z Fold 6 looking a little boxier. Not the most exciting stuff, but that’s been the story with the past few generations of Samsung’s folding phones.

Instead, the more exciting announcement might not be a phone at all — rumors point to an official launch for the Galaxy Ring, first announced at the other Unpacked earlier this year. We got a little hands-on time with a prototype version at Mobile World Congress not long after that, and a few rumored details have trickled out here and there. We’ve yet to hear official pricing or confirmation of the health sensors it will carry, but that might be changing soon enough.

One thing we will surely hear about? Galaxy AI, of course. Samsung’s first Unpacked this year was all about it. Since then, it’s been the theme of every developer conference — first at I/O, then Microsoft Build and WWDC. ‘Tis the season.

Unpacked will be streamed live on Samsung.com starting at 9AM ET on Wednesday, July 10th. You can “reserve” a device and get a $50 credit when you preorder one through Samsung.


ChatGPT’s Mac app is here, but its flirty advanced voice mode has been delayed

Image: The Verge

The advanced voice mode for ChatGPT that sparked a tussle with Scarlett Johansson was an important element of OpenAI’s Spring Update event, where it also revealed a desktop app for ChatGPT.

Now, OpenAI says it will “need one more month to reach our bar to launch” an alpha version of the new voice mode to a small group of ChatGPT Plus subscribers, with plans to allow access for all Plus customers in the fall. One specific area that OpenAI says it’s improving is the ability to “detect and refuse certain content.”

As for the new video and screen sharing capabilities that we saw during the event, OpenAI writes that it will “keep you posted” on a timeline. OpenAI had said it would deliver the new capabilities in “the coming weeks.” Now, the company writes: “Exact timelines depend on meeting our high safety and reliability bar.”

The assistant features bearing a troublesome resemblance to Johansson’s virtual character in the movie Her were part of OpenAI’s demo, showing how the GPT-4o-powered bot could observe the world around the user and respond to it in real time. It could also maintain a conversation far more naturally and tolerate interruptions with what CEO Sam Altman called “human-level response times and expressiveness.”

Image: OpenAI

The desktop app, however, launched today for users on macOS. With the Mac app installed, pressing Option and Space together opens ChatGPT from anywhere, allowing it to chat about whatever’s on your screen at the time. A Windows app is set to arrive later this year.


Forza Horizon 4 will be delisted from Microsoft stores and Steam in December

Image: Xbox Game Studios / Turn 10

The “delisting” ax is swinging once again, and this time, Forza Horizon 4 is the latest major video game to be cleaved from digital storefronts. Developer Playground Games announced on its site that sales of the 2018 open-world racer are ending due to licensing and partner agreements, and the game will be removed from Microsoft stores and Steam on December 15th, 2024.

While would-be buyers still have a few months left, the final in-game event will run from July 25th to August 22nd — after which some achievements linked to the seasonal Festival Playlists will no longer be unlockable.

To add a little more confusion, Forza Horizon 4’s DLC is being delisted beginning today. And if you played the game via Xbox Game Pass and already bought some DLC, you qualify for a token to download the full game and continue playing it after the delisting date. That game code will be sent in the coming days to qualifying players who have a fully paid Xbox Game Pass subscription, and it must be redeemed by June 25th, 2026.

Somewhat surprisingly, Playground says in its FAQ that the servers are staying on post-delisting, so both offline and online modes will remain playable. Between this and qualifying Game Pass subscribers getting a code to keep the game, it feels like Playground is trying to make the best of an unfortunate situation — one that often hits games like these with real-life car and music licenses — but, as with most delistings, it remains a bit messy and convoluted.

Forza Horizon 4 will be just over six years old when it’s pulled from digital stores, leaving latecomers and preservationists only the physical Xbox copies still in circulation to explore. Sure, Forza Horizon 5 was a worthy successor, and the franchise is likely to continue as the more bombastic counterpart to Forza Motorsport, but I don’t think it will ever feel normal or, frankly, good for digital games to feel so short-lived.

