verge-rss

OpenAI’s new model is better at reasoning and, occasionally, deceiving

Illustration by Cath Virginia / The Verge | Photos by Getty Images

In the weeks leading up to the release of OpenAI’s newest “reasoning” model, o1, independent AI safety research firm Apollo found a notable issue. Apollo realized the model produced incorrect outputs in a new way. Or, to put things more colloquially, it lied.

Sometimes the deceptions seemed innocuous. In one example, OpenAI researchers asked o1-preview to provide a brownie recipe with online references. The model’s chain of thought — a feature that’s supposed to mimic how humans break down complex ideas — internally acknowledged that it couldn’t access URLs, making the request impossible. Rather than inform the user of this weakness, o1-preview pushed ahead, generating plausible but fake links and descriptions of them.

While AI models have been able to “lie” in the past, and chatbots frequently output false information, o1 had a unique capacity to “scheme” or “fake alignment.” That meant it could pretend to follow the rules to complete a given task while actually ignoring them. If the rules prove too much of a burden, the model seems able to disregard them when doing so lets it complete a task more easily.

Apollo CEO Marius Hobbhahn tells The Verge it’s the first time he has encountered this behavior in an OpenAI model. Hobbhahn says the difference is due to this model’s ability to “reason” through the chain of thought process and the way it’s paired with reinforcement learning, which teaches the system through rewards and penalties. During testing, Apollo discovered that the AI simulated alignment with its developers’ expectations and manipulated tasks to appear compliant. It even checked its system for oversight — that is, whether its developers were watching — before acting. In short, the AI adhered to the guidelines sufficiently for deployment, while continuing to prioritize its objectives.

“I don’t expect it could do that in practice, and even if it did, I don’t expect the harm to be significant,” Hobbhahn told me over the phone a day after the model’s launch. “But it’s kind of the first time that I feel like, oh, actually, maybe it could, you know?”

For OpenAI, o1 represents a big step toward highly intelligent autonomous systems that could do meaningful work for humanity, like curing cancer and aiding climate research. The flip side of this AGI utopia could also be much darker. Hobbhahn provides an example: if the AI becomes singularly focused on curing cancer, it might prioritize that goal above all else, even justifying actions like stealing or committing other ethical violations to achieve it.

“What concerns me is the potential for a runaway scenario, where the AI becomes so fixated on its goal that it sees safety measures as obstacles and tries to bypass them to fully pursue its objective,” Hobbhahn told me.

Reward hacking

To be clear, Hobbhahn doesn’t think o1 will steal from you, thanks to a lot of alignment training. But these are the issues that are top of mind for researchers tasked with testing these models for catastrophic scenarios.

The behavior Apollo is testing for — “hallucinations” and “deception” in OpenAI’s safety card — happens when a model generates false information even though it has reason to infer the information might be incorrect. For instance, the report says that in about 0.38 percent of cases, the o1-preview model provides information its chain of thought indicates is likely false, including fake references or citations. Apollo found that the model might fabricate data instead of admitting its inability to fulfill the request.

Hallucinations aren’t unique to o1. Perhaps you’re familiar with the lawyer who submitted nonexistent judicial opinions with fake quotes and citations created by ChatGPT last year. But with the chain of thought system, there’s a paper trail where the AI system actually acknowledges the falsehood — although somewhat mind-bendingly, the chain of thought could, in theory, include deceptions, too. It’s also not shown to the user, largely to prevent competitors from using it to train their own models — but OpenAI can use it to catch these issues.

“Potentially, it will use this reasoning for goals that we disagree with.”

In a smaller number of cases (0.02 percent), o1-preview generates an overconfident response, where it presents an uncertain answer as if it were true. This can happen in scenarios where the model is prompted to provide an answer despite lacking certainty.

This behavior may be linked to “reward hacking” during the reinforcement learning process. The model is trained to prioritize user satisfaction, which can sometimes lead it to generate overly agreeable or fabricated responses to satisfy user requests. In other words, the model might “lie” because it has learned that doing so fulfills user expectations in a way that earns it positive reinforcement.

What sets these lies apart from familiar issues like hallucinations or fake citations in older versions of ChatGPT is the “reward hacking” element. Hallucinations occur when an AI unintentionally generates incorrect information, often due to knowledge gaps or flawed reasoning. In contrast, reward hacking happens when the o1 model strategically provides incorrect information to maximize the outcomes it was trained to prioritize.

The deception is an apparently unintended consequence of how the model optimizes its responses during its training process. The model is designed to refuse harmful requests, Hobbhahn told me, and when you try to make o1 behave deceptively or dishonestly, it struggles with that.

Lies are only one small part of the safety puzzle. Perhaps more alarming is that o1 was rated a “medium” risk for chemical, biological, radiological, and nuclear weapons. It doesn’t enable non-experts to create biological threats, since that requires hands-on laboratory skills, but it can provide valuable insight to experts planning the reproduction of such threats, according to the safety report.

“What worries me more is that in the future, when we ask AI to solve complex problems, like curing cancer or improving solar batteries, it might internalize these goals so strongly that it becomes willing to break its guardrails to achieve them,” Hobbhahn told me. “I think this can be prevented, but it’s a concern we need to keep an eye on.”

Not losing sleep over risks — yet

These may seem like galaxy-brained scenarios to be considering with a model that sometimes still struggles to answer basic questions about the number of R’s in the word “raspberry.” But that’s exactly why it’s important to figure it out now, rather than later, OpenAI’s head of preparedness, Joaquin Quiñonero Candela, tells me.

Today’s models can’t autonomously create bank accounts, acquire GPUs, or take actions that pose serious societal risks, Quiñonero Candela said, adding, “We know from model autonomy evaluations that we’re not there yet.” But it’s crucial to address these concerns now. If they prove unfounded, great — but if future advancements are hindered because we failed to anticipate these risks, we’d regret not investing in them earlier, he emphasized.

The fact that this model lies a small percentage of the time in safety tests doesn’t signal an imminent Terminator-style apocalypse, but it’s valuable to catch before rolling out future iterations at scale (and good for users to know, too). Hobbhahn told me that while he wished he had more time to test the models (there were scheduling conflicts with his own staff’s vacations), he isn’t “losing sleep” over the model’s safety.

One thing Hobbhahn hopes to see more investment in is monitoring chains of thought, which will allow the developers to catch nefarious steps. Quiñonero Candela told me that the company does monitor this and plans to scale it by combining models that are trained to detect any kind of misalignment with human experts reviewing flagged cases (paired with continued research in alignment).

“I’m not worried,” Hobbhahn said. “It’s just smarter. It’s better at reasoning. And potentially, it will use this reasoning for goals that we disagree with.”

Read More 

RCS texts on the iPhone aren’t encrypted now, but that could change

Photo by Amelia Holowaty Krales / The Verge

The GSM Association, the organization that develops the RCS standard, said on Tuesday it’s working to enable end-to-end encryption (E2EE) on messages sent between Android and iPhone. E2EE prevents third parties, like your messaging service or cell carrier, from viewing your texts.

In the announcement, GSMA technical director Tom Van Pelt said the next milestone for RCS Universal Profile is the “first deployment of standardized, interoperable messaging encryption between different computing platforms.” The move would help bridge a major gap in interoperability — especially now that Apple’s on board with RCS.

On Monday, Apple’s iOS 18 update replaced SMS with RCS messaging for texts sent to users on Android. While the change doesn’t get rid of the green bubbles, it will finally allow cross-platform users to share high-res media, as well as see read receipts and typing indicators. But Apple’s implementation of RCS is missing one key feature: E2EE.

Currently, not all RCS providers offer E2EE. Google Messages is one of the exceptions, as it started enabling E2EE by default for RCS conversations last year. Apple’s proprietary iMessage system has E2EE enabled as well, but it doesn’t apply the same protection for RCS messages.

“We believe that E2EE is a critical component of secure messaging, and we have been working with the broader ecosystem to bring cross-platform E2EE to RCS chats as soon as possible,” Elmar Weber, a general manager at Google, said on LinkedIn. “Google is committed to providing a secure and private messaging experience for users, and we remain dedicated to making E2EE standard for all RCS users regardless of the platform.”

As an Android user, I’m just happy that I’ll finally be able to send high-quality photos and videos to my iPhone-wielding friends and family. E2EE would just be an added plus.

Read More 

EA is launching a social app for its sports games

Image: EA

EA’s sports games are huge franchises, and soon, the company is launching a mobile app to add even more of EA Sports into your life. As part of an investor presentation today, the company officially announced the EA Sports app, which will soft launch for iOS and Android in Spain this fall.

In a press release, EA says the app will be a “socially-driven app” with features like a discovery feed that keeps users up to date on news and highlights from their favorite teams, interactive challenges you can play during and after live games, and community “arenas.” With the soft launch in Spain, the app will have “a combination of sports content, live sports data, social messaging, interactivity, challenges and more centered on Global Football in collaboration with longstanding partner LaLiga,” EA says.

EA also says that it aims to add more sports to the app in the future. That’s not surprising — it’s called the EA Sports app, after all — and franchises like Madden NFL and EA Sports College Football seem like likely candidates to get folded in at some point.

EA CEO Andrew Wilson recently said that sports would be one of the things the company would “double down” on, so this new app appears to be part of those efforts.

EA made a bunch of other announcements today as well, including that a Sims movie is in the works and that the new Skate will hit early access next year.

Read More 

The Sonos Arc Ultra and Sub 4 have leaked, and they’re likely coming soon

Image: MysteryLupin (Twitter)

The next flagship soundbar from Sonos will be called the Arc Ultra, and marketing images of the product have been posted on X today. They line up with the photos I published back in July and offer yet more confirmation that the Arc Ultra will support Bluetooth audio playback.

Sonos itself has managed to leak details about the Arc Ultra in recent days; the company’s online store briefly featured a few tidbits about the Arc Ultra and mentioned the inclusion of “Sound Motion technology.” This is presumably the branding that Sonos has chosen for the technology that it obtained through an acquisition of Mayht in 2022.

Arc Ultra ($999) https://t.co/z6sXvj2dAg pic.twitter.com/iuwG2vGXDv

— Arsène Lupin (@MysteryLupin) September 17, 2024

Since then, the company has been working to incorporate Mayht’s “new, revolutionary approach to audio transducers” in its own products — and the Arc Ultra will be the first showcase of that tech. In short, you can expect very big sound from relatively small components. The post on X puts the price at $999, but there have been other indications that it could cost as much as $1,199.

In this top-down view of the soundbar, you can see that Sonos has reworked the device’s physical controls. A power button sits at the far left, with playback controls in the center and an indented volume slider on the right side.

Image: MysteryLupin (Twitter)
The controls have been switched up a bit.

It seems to be a certainty that Sonos will announce a new high-end subwoofer, the Sub 4, alongside the Arc Ultra. Images of the Sub 4 show a design that follows its predecessors, only this time it has a matte finish.

Sub 4 ($799) https://t.co/z6sXvj1FKI pic.twitter.com/SQcr43UvuV

— Arsène Lupin (@MysteryLupin) September 17, 2024

Another leaked marketing image offers a preview of how big the Arc Ultra is, pictured here with two Sub 4s on the floor beneath it. The overall look remains similar to the original, but this looks like a longer, slightly more hulking unit. As usual, both the soundbar and Sub will be offered in either black or white.

That’s one long soundbar.

Last month, CEO Patrick Spence said Sonos was delaying the release of two products to ensure that all of the company’s focus would go toward fixing its redesigned mobile app, which has been besieged by bugs, iffy performance, and a rash of customer complaints since its debut in May.

Spence has since conceded that Sonos should have released the rebuilt app as an opt-in beta instead of thrusting all customers into an experience that lagged the previous software — both in feature set and overall reliability. The app is steadily making progress, but the controversy, which was followed by layoffs at Sonos, has torpedoed morale among employees.

Internally, some of the dissatisfaction among the rank and file at Sonos has been directed at executives including chief product officer Maxime Bouvat-Merlin, The Verge has learned. There’s a belief that the higher-ups have made a string of bad decisions that prioritize deadlines and hitting targets over product quality despite warnings from engineers and others at the company that they’re rushing things.

The decision to punt the Arc Ultra and Sub 4 into the company’s Q1 2025 fiscal quarter means we could still see their introduction before the end of this calendar year. And considering the premature appearance on the website (and now these images), that might happen sooner than later as Sonos looks to get back on track after a colossal screwup. The ordeal has overshadowed and essentially ruined the launch of Sonos’ first headphones, the Sonos Ace, which have badly underperformed the company’s sales estimates, according to a recent newsletter from Bloomberg’s Mark Gurman.

Read More 

Netflix’s next animated Witcher movie streams in February

Image: Netflix

We’re still waiting on the biggest news in the Witcher universe — just what Liam Hemsworth will sound like as Geralt — but in the meantime, Netflix has announced when fans can expect the next animated spinoff. The Witcher: Sirens of the Deep will start streaming on February 11th, 2025. And Doug Cockle, the iconic voice of Geralt in the games, will be reprising his role.

Here’s the premise of the film:

Geralt of Rivia, a mutated monster hunter, is hired to investigate a series of attacks in a seaside village and finds himself drawn into a centuries-old conflict between humans and merpeople. He must count on friends — old and new — to solve the mystery before the hostilities between the two kingdoms escalate into all-out war.

This is actually the second animated Witcher spinoff from Netflix, following the prequel, Nightmare of the Wolf, in 2021. The main live-action series, meanwhile, is primed for a big shake-up before it wraps up for good. Hemsworth will be taking over for longtime star Henry Cavill starting with season 4, and the streamer says the series will end with its fifth season. Game studio CD Projekt Red is also kicking off a new Witcher trilogy, which is currently in development.

In the meantime, here’s a brief clip of Sirens of the Deep:

Read More 

Eufy’s new smart lock may Matter to Apple Home users

The Eufy E30 is a Matter-compatible smart lock with a fingerprint reader. It’s Eufy’s first lock to work with Apple Home. | Image: Eufy

The latest smart lock from Eufy is its first product to support Matter. The Eufy Smart Lock E30 ($169.99) works over Thread, which should allow for faster responsiveness, longer battery life, and better connectivity than locks that work just over Wi-Fi or Bluetooth.

The E30 is also the first smart lock from Anker’s smart home arm that works with Apple Home, although Home Key is not supported. While Eufy offers a handful of older security cameras with HomeKit compatibility, it hasn’t released a new Apple Home product for years.

The E30 can use a traditional key but also has a fingerprint reader for biometric access and a keypad for keycode access. It can be controlled through the Eufy app over built-in Wi-Fi, including out-of-home control, and with any Matter-compatible app, such as Apple Home, Amazon Alexa, Google Home, or Samsung SmartThings, via Thread.

Using an app allows for home automation, including setting schedules, adding the lock to routines, and controlling it via a voice assistant.

Image: Eufy
The Eufy E30 is the first Matter-over-Thread smart lock you can buy that has a fingerprint reader.

To use the E30 in a smart home platform via Thread, you’ll need a Matter controller from that platform and a Thread border router. These can be the same device, such as an Apple HomePod or a Google Nest Hub, or separate devices, such as an Amazon Echo Dot and an Eero mesh Wi-Fi router.

We’ve seen very few smart locks with Thread and only a handful of Matter-over-Thread locks.

As a low-power, mesh-networking protocol, Thread makes a lot of sense for a door lock. Door locks are located at the edges of your home where Wi-Fi may be weak and are battery-powered. Using a less power-hungry protocol than Wi-Fi should keep them running for longer between battery swaps.

But we’ve seen very few locks with Thread. Schlage has one, but it doesn’t support Matter, and so far, there are only a handful of Matter-over-Thread locks, including the retrofit Aqara U200, the Yale Assure Lock SL, and the Nuki (Europe only). U-Tec announced one at CES but it isn’t available yet, and while Level said its locks will be upgraded to support Thread and Matter, we’re still waiting on that.

The E30 works with eight AA batteries, which the company claims will provide up to eight months of battery life. According to listings on the Thread Group website, Eufy is also developing a version with a rechargeable lithium battery.

I’m looking forward to testing this lock, as it’s one of the first fully featured, full-replacement smart locks with Matter and Thread support. Eufy was not on my list of companies I thought would release a lock with these capabilities, but I’ve tested a few of its locks and been generally impressed by their function, if not so enamored of the form.

Read More 

Snap’s new Spectacles inch closer to compelling AR

Photo by Nalani Hernandez-Melo for The Verge

Will developers finally help Snap take AR glasses mainstream?

Snap’s fifth-generation Spectacles have a richer, more immersive display. Using them feels snappier. They weigh less than their predecessor and last longer on a charge.

Those are exactly the kinds of upgrades you’d expect from a product line that’s technically eight years old. But the market for Spectacles — and AR glasses in general — still feels as nascent as ever.

Snap has an idea for what could change that: developers. These new Spectacles, announced Tuesday at Snap’s annual Partner Summit in Los Angeles, aren’t being sold. Instead, Snap is repeating its playbook for the last version of Spectacles in 2021 and distributing them to the people who make AR lenses for Snapchat. This time around, though, there’s an extra hurdle: you have to apply for access through Lens Studio, the company’s desktop tool for creating AR software, and pay $1,188 to lease a pair for at least one year. (After a year, the subscription becomes $99 a month.)

Yes, Snap is asking developers to pay $1,188 to build software for hardware with no user base. Even so, Snap CEO Evan Spiegel believes the interest will be there.

“Our goal is really to empower and inspire the developer and AR enthusiast communities,” he tells me. “This really is an invitation, and hopefully an inspiration, to create.”

Without the vibrant developer ecosystem that Snap wants to create, my demo of the new Spectacles felt a lot like my demo of the last Spectacles in 2021. One lens showed flowers that grow where you point your hands, while another displayed the anatomy of a human body in 3D space. I could open a browser and load this very website in a floating window.

While the hardware for Spectacles has improved, the software is still pretty basic for a standalone device. Here, it’s obvious that Snap hopes developers will help it come up with compelling use cases. For the most part, everything I experienced was in line with what I’ve come to expect from AR hardware demos over the years: lightweight, gimmicky apps that show off the hardware but aren’t experiences I’d return to in my free time.

There were some new apps to try, like Snap’s OpenAI-powered chatbot, My AI, though I didn’t get much time with it. A new AR lens I tried made use of AI to generate 3D animations based on voice prompts. There was noticeably little integration across the OS with Snapchat itself, other than displaying Bitmojis for account profiles. (You can apparently call someone using Snapchat through the glasses, but that wasn’t part of my demo.)

Photo by Nalani Hernandez-Melo for The Verge

The first thing that stands out when you put the new Spectacles on is the improved display quality and interface. Colors were richer, and the resolution was higher. The Snap OS powering the glasses has been completely redone and felt considerably more polished, even if it’s still barebones. The main way you navigate Spectacles is through hand tracking and voice control, which felt inherently slow at times but never dragged in a way that felt glitchy.

Snap says this model boasts a 46-degree field of view (up from 26.3 degrees for the previous version) and that its waveguide displays show 37 pixels per degree — a metric Snap believes is the right way to measure AR display quality, and one that’s about 25 percent better than before. The physical lenses of the glasses auto-tint in direct sunlight, allowing you to see what’s being projected onto your surroundings while outdoors.

During my demo, the field of view was noticeably wider than before but still nowhere near what you would expect from looking through a pair of normal glasses. More perplexingly, Snap’s own demos emphasized this fact; a golfing simulator I tried was constrained to a frustratingly small area of the real world around me. Ultimately, this limited field of view makes augmented reality considerably less engaging than the real world, which, in turn, makes putting a 124-gram pair of smart glasses on your face feel unnecessary.

Snap has invested a lot into improving the hardware of Spectacles. There are two liquid crystal on silicon (LCoS) projectors on each side of the frame that pipe graphics into the custom waveguides. Two custom Qualcomm Snapdragon processors distribute power and heat along the frames, aided by a vapor chamber in each temple. And two infrared sensors track hand movements to control the glasses with Minority Report-style pinch and pull gestures.

The fourth generation of Spectacles from 2021 overheated multiple times during my demo, but this latest version didn’t crash once, even while I was wearing them outside during a record heat wave in Los Angeles. Snap says battery life has improved from about 30 minutes to 45 minutes on a single charge. A USB-C cable is included that allows for continuous power when plugged into the temple of the glasses.

While Snap has vague ideas about what Spectacles should be used for, it’s clearly leaving most of the potential use cases up to developers to figure out. “We’re trying to be the most developer-friendly platform in the world,” says Spiegel, who adds that he doesn’t see Spectacles being a meaningful business until the end of the decade. (Snap isn’t disclosing how many pairs of these Spectacles it’s making, but my sources peg the number at around 10,000.)

Snap released the first pair of Spectacles back in 2016. Since then, bigger players — namely Meta — have signaled that they, too, are building AR glasses. Apple is working on them separately from the Vision Pro, and Google is developing AR glasses with Magic Leap and Samsung. When I ask Spiegel about the growing competition, he simply responds: “Sorry, so who of those have AR glasses?”

“We’re trying to be the most developer-friendly platform in the world”

It’s a point he won’t be able to make much longer. Meta will show off its long-rumored AR glasses prototype, codenamed Orion, at its Connect conference next week. (Like Spectacles, they won’t be sold commercially.) Meanwhile, Meta has found early success with its smart glasses partnership with Ray-Ban. On Tuesday, it announced a new, 10-year deal to make smart glasses with Ray-Ban’s parent company, the eyewear giant EssilorLuxottica.

Ultimately, I’m skeptical of why developers will want to build software for Spectacles right now, given the lack of a market and the cost of getting access to a pair. Still, Spiegel believes enough of them are excited about the promise of AR glasses and that they’ll want to help shape that future.

“I think it’s the same reason why developers were really excited with the early desktop computer or the reason why developers were really excited by the early smartphones,” he says. “I think this is a group of visionary technologists who are really excited about what the future holds.”

Spiegel may be right. AR glasses may be the future, and Spectacles may be well-positioned to become the next major computing platform, even with competition heating up. But there’s still a lot of progress that needs to happen for Snap’s vision to become reality.

Read More 

The first Thunderbolt 5 dock appears to have arrived

The Kensington SD5000T5. | Image: Kensington

The first Thunderbolt 5 cables arrived in July, and now it looks like we’re finally getting the first Thunderbolt 5 dock! Kensington has just announced the Intel-certified SD5000T5 EQ is now available to buy, seemingly beating every rival to the 120Gbps single-cable docking punch. (I just checked: Hyper, J5Create and OWC’s docks aren’t yet available.)

The new dock claims to support up to three 4K monitors or two 8K monitors on Windows, or dual 6K monitors on MacBook Pros with M1 Pro chips or better. It also offers up to 120Gbps speeds for connected peripherals — assuming your computer has a Thunderbolt 5 port, anyhow. Last we checked, only a single $4,500 version of the Razer Blade 18 has that port, though PCWorld points out the $3,899 Maingear ML-17 now has one as well.

But even if you don’t have Thunderbolt 5 yet, the Kensington has another trick up its ports. It’s one of the very few (maybe the only?) Thunderbolt docks to offer 140W USB-C PD charging, in case you’ve got a laptop that can use that much.

You won’t get 140 watts with a 16-inch MacBook Pro, as it only supports 140W over MagSafe, and MagSafe doesn’t transfer data — but it should give you at least 100W of power.

Technically, the USB-C PD standard now goes up to 240W, but no company has yet shipped a charger that comes close, and docks typically top out at 100W. HP and Lenovo have docks that offer 230W over a single cable, but those cables have two heads and are meant for specific workstation laptops, not standardized USB-C ones.
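
For context, those wattage tiers fall straight out of the USB PD voltage and current steps: the older Standard Power Range tops out at 20V, while the newer Extended Power Range adds higher fixed voltages, all capped at 5A. Here’s a rough back-of-the-envelope sketch of the arithmetic (the labels are my own shorthand, not anything from the spec):

# Rough sketch: power = voltage x current, with current capped at 5 A throughout.
pd_profiles = {
    "Standard Power Range ceiling (typical docks)": (20, 5),        # 100 W
    "Extended Power Range, 28 V (this Kensington dock)": (28, 5),   # 140 W
    "Extended Power Range, 48 V (spec maximum)": (48, 5),           # 240 W
}

for label, (volts, amps) in pd_profiles.items():
    print(f"{label}: {volts} V x {amps} A = {volts * amps} W")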

Thunderbolt 5 won’t truly get exciting until we can make use of all its speed — no Thunderbolt 5 storage drives have yet shipped, to my knowledge. But OWC did just open preorders for the Envoy Ultra, which claims 6,000MB/sec transfer speeds and an October 2024 release date.
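
To put that claim in perspective, here’s a rough bit of unit math — treating MB as decimal megabytes, and comparing against Thunderbolt 5’s standard 80Gbps bidirectional data rate rather than the display-oriented 120Gbps boost mode:

# Back-of-the-envelope: OWC's claimed 6,000 MB/s versus Thunderbolt 5's data bandwidth.
claimed_mb_per_s = 6_000
claimed_gbps = claimed_mb_per_s * 8 / 1_000   # ~48 Gbps
tb5_data_gbps = 80                            # 120 Gbps is the display-focused boost mode

print(f"Claimed throughput: ~{claimed_gbps:.0f} Gbps, "
      f"about {claimed_gbps / tb5_data_gbps:.0%} of Thunderbolt 5's 80 Gbps data link")

In other words, even the fastest announced Thunderbolt 5 drive would use a bit over half of the link’s data bandwidth.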

Image: OWC
The OWC Envoy Ultra.

Also, it would help if prices came down! The new Kensington dock will ship later this month for $400, and the OWC Envoy Ultra will cost $400 for 2TB or $600 for 4TB. It arrives “late October.”


Snapchat’s AI selfie feature puts your face in personalized ads — here’s how to turn it off

Illustration by Alex Castro / The Verge

If you’ve tried out Snapchat’s AI-generated selfies, you might want to double-check a setting that lets Snap use your face in “personalized sponsored content and ads,” as spotted by 404 Media.

The feature, called My Selfie, lets you and your friends create AI-generated images of yourself based on photos you share with Snapchat. When using the feature for the first time, Snapchat prompts you to agree to terms that include using “you (or your likeness)” in ads:

You also acknowledge and agree that by using My Selfie, you (or your likeness) may also appear in personalized sponsored content and ads that will be visible only to you and that includes branding or other advertising content of Snap or its business partners without compensation to you.

While you can toggle the “See My Selfie in Ads” setting to off, 404 Media reports that it’s enabled by default once you agree to Snap’s terms (The Verge was also able to confirm this).

To see if you have the setting enabled, select your profile photo in the top-left corner of Snapchat, tap the settings cog in the top-right corner, and then choose My Selfie. From here, toggle off the See My Selfie in Ads setting.

Screenshot: The Verge

Even though Snap may use your face in personalized ads only shown to you, the company says it doesn’t share your data with third-party advertisers.

“Advertisers do not have access to Snapchatters’ Gen AI data in any capacity, including My Selfies nor do they have access to Snapchatters’ private data, including Memories, that would enable them to create an AI generated image of an individual Snapchatter,” Snapchat spokesperson Maggie Cherneff told The Verge. Snap also currently doesn’t use My Selfies in advertising, Cherneff added.

Update, September 17th: Added a statement from Snapchat.


Snapchat is getting its biggest redesign in years

The new “Simple Snapchat.” | Image: Snap

Snapchat is undergoing its biggest redesign in years by simplifying from five to three tabs: one for messages and Stories with friends, one for the camera, and another for a TikTok-like feed of full-screen videos from creators and publishers.

The redesign, dubbed “Simple Snapchat,” was announced onstage Tuesday at Snap’s annual Partner Summit in Los Angeles. “It brings Stories closer to conversations, it simplifies content discovery, and it brings people straight into our camera to express themselves,” Snap CEO Evan Spiegel tells The Verge.

Until now, Snapchat has consisted of five main tabs: one for the Snap Map, private chats, the camera, Stories, and Spotlight, its competitor to TikTok and Instagram Reels. Once this redesign is rolled out to Snapchat’s 850 million users, the Snap Map will be accessible from the messaging tab, along with Stories from your friends and creators you follow.

To the right of the camera will be a new, unified For You feed of full-screen videos from publishers and creators. Here, Snap has essentially merged Spotlight with content from media brands like The Wall Street Journal and the Daily Mail. The company makes most of its money from the ads it runs around this content, so it’s rolling the redesign out slowly so “we can really understand any changes in content dynamics,” according to Spiegel.

GIF: Snap

The high-level goal of this redesign is to make Snapchat more accessible and an attractive way to watch videos you’d normally go to TikTok or Instagram for. Spiegel also thinks it will translate to a better business for Snap’s creators, who collectively share more than a billion pieces of content per month in the app.

“One of the things that creators have done very effectively is use shortform video to grow their Stories audience and then monetize the Stories through our revenue share program,” he says. “I think that will become even easier with this app layout, where the Stories from your friends or from creators you’re following live on the chat page, and then you can discover new creators or new content in full screen on the third tab.”

