The EPA is cracking down on cybersecurity threats

The East Bay Municipal Utility District Wastewater Treatment Plant on March 20th, 2024, in Oakland, California.  | Photo by Justin Sullivan / Getty Images

The Environmental Protection Agency is ramping up its inspections of critical water infrastructure after warning of “alarming vulnerabilities” to cyberattacks.
The agency issued an enforcement alert yesterday warning utilities to take quick action to mitigate threats to the nation’s drinking water. The EPA plans to increase inspections and says it will take civil and criminal enforcement actions as needed.
“Cyberattacks against [community water systems] are increasing in frequency and severity across the country,” the alert says. “Possible impacts include disrupting the treatment, distribution, and storage of water for the community, damaging pumps and valves, and altering the levels of chemicals to hazardous amounts.”
More than 70 percent of water systems inspected since September 2023 failed to comply with mandates under the Safe Drinking Water Act (SDWA) that are meant to reduce the risk of physical and cyberattacks, the EPA said. That includes failing to take basic steps like changing default passwords or cutting off former employees’ access to facilities. Since 2020, the EPA has taken more than 100 enforcement actions for violations of that section of the SDWA.
“Foreign governments have disrupted some water systems with cyberattacks and may have embedded the capability to disable them in the future,” the enforcement alert says. One example it cites is Volt Typhoon, a People’s Republic of China state-sponsored cyber group that has “compromised the IT environments of multiple critical infrastructure organizations,” according to a Department of Homeland Security advisory issued in February.
Hacktivists in Russia likely linked to the Sandworm group that attacked Ukraine’s power grid caused an overflow at a water facility in Texas in January, CyberScoop reports, although the incident didn’t disrupt service to customers. Last year, a Pennsylvania water facility was forced to rely on manual operations after an attack by hackers linked to the Iranian Islamic Revolutionary Guard Corps.
The EPA’s enforcement alert asks utilities to follow recommendations for maintaining cyber hygiene, including conducting awareness training for employees, backing up OT / IT systems, and reducing exposure to the public-facing internet.
It follows a letter EPA administrator Michael Regan and national security advisor Jake Sullivan sent to state governors earlier this year warning them of cyber risks to the nation’s drinking and wastewater systems. It led to a March convening where the National Security Council asked each state to come up with an action plan to address those vulnerabilities by late June.


Hellblade II: headphones on, heart rate up

Image: Ninja Theory

If you can, play Hellblade II with headphones to fully experience the depth of the game’s unique storytelling. In Senua’s Saga: Hellblade II, the first and only instruction the game provides is that it’s best experienced with headphones. I always ignore that advice. I’m not a “headphones on, lights off” kind of player. I don’t need ambiance. But for Hellblade II, I decided, “Why not?” What followed was an aural experience that thrilled, frightened, and unsettled the Hel(lblade) outta me.
Note: Of course, if, for whatever reason, you can’t play with headphones, the game’s subtitles and closed captioning do good work conveying the game’s unique audio design.
In Hellblade II, the follow-up to 2017’s Hellblade: Senua’s Sacrifice, Senua, a Pict warrior, embarks on another harrowing journey. Instead of venturing to the gates of Helheim, she travels north to confront the Viking raiders who have been stealing and enslaving her people. On her journey, Senua is accompanied by a Homeric chorus of voices that reflect her struggle with psychosis. The team at Ninja Theory made it a point to explain that they consulted with mental health experts — including a professor of psychiatry at Cambridge University and people who live with the disorder — in order to portray psychosis respectfully and accurately. That manifests as hearing voices clip in and out of each side of my headphones. At the beginning of the game, I would whip my head left or right as the voices jumped around before I finally grew used to them.

There’s a richness to the voice performances that is lost if they’re diffused through open air instead of beamed directly into your ears. They speak in short, staccato sentences, contextualizing Senua’s feelings about a particular encounter. When she meets someone, her voices wonder if they can be trusted. In fights, they shout encouragements and admonitions. “Get up. Get up!” or “They’re so strong!” I like the voices. They remind me of my own rapid-fire internal monologue quipping about the myriad things that flit around in my head.
Senua’s voices also help with navigation and puzzle solving, but not so much that it becomes obnoxious. In Hellblade II, there’s none of that “the dialogue tells you the solution” stuff that happens in other games. For the first puzzle section involving a path blocked by a large symbol, the voices yell “Focus!” prompting you to press the right trigger to engage Senua’s focus ability. When I got lost during a particularly bewildering section in a dark forest, the voices remarked only once that I was lost then shut up, leaving me to figure out the solution in blessed silence.

Senua’s Saga: Hellblade II starter pack pic.twitter.com/tOAEtdELKe — Ninja Theory (@NinjaTheory) May 20, 2024

In delving into a drained lake, I could hear water dripping all around me as the caverns echoed with the sound of my breathing. The further I traveled, the darker it got, and the more sinister the sounds became. My breathing slowly transformed into the guttural sounds of disquieted spirits. That should have been a problem. I hate the sounds typically associated with horror — that crawling, squelching sound used whenever a game or movie wants to convey that something’s gross and wet. But the scary sounds in Hellblade II never crossed the threshold into repulsive or triggering for me (misophonia sufferers, rejoice). Instead, the sounds were soft and quiet but no less sinister, sounding as though they were just over my shoulder in the real world.

Image: Ninja Theory

When audio enhances visual, Hellblade II becomes a harrowing experience.

My aural journey with Hellblade II wasn’t limited to sound effects and voices. The music also played an integral role in crafting a visceral full-body experience with the game. Early on, there’s an encounter where it all combines — music, sound effects, and voices — to create a goosebump-inducing moment that I won’t spoil. The tempo of the music combined with the on-screen action created a beat that I could physically feel reverberating in my chest as I played. It’s a game-defining moment that showcases the skill and creativity of Hellblade II’s sound team.
I’m an aural person, someone who places great emphasis on sound, and Hellblade II felt like a game made for me. In Hellblade II, there is no in-game UI. There are no tutorial pop-ups that pause the action to tell you what buttons do what or how to interact with the environment. A UI adds a layer of artificiality, reminding you that this is make-believe. Without it, the game created a level of reality that I’ve never really experienced before, forcing me to fully inhabit Senua as a character. And with headphones in, I heard so much more of the world and got a greater feeling for Senua’s unique experience in it.
Senua’s Saga: Hellblade II is out now on Xbox and Game Pass.


Uber and Lyft to stay in Minneapolis after state lowers driver pay requirements

Illustration by Alex Castro / The Verge

Uber and Lyft will continue operating in Minneapolis after the state legislature approved a lower minimum pay rate for drivers over the weekend, according to reports from the Minnesota Reformer and the Star Tribune. The new rates will go into effect on January 1st, 2025, if the bill becomes law, and will guarantee drivers at least $1.28 per mile and 31 cents per minute.
In March, Uber and Lyft threatened to leave Minneapolis after city officials passed an ordinance to increase rates to $1.40 per mile and 51 cents per minute while carrying a rider. The ridehailing companies argued that the ordinance was “deeply flawed,” as city officials determined the rate before the state released a study on how much drivers should be paid to earn Minneapolis’ minimum wage.
The new rates devised by Minnesota lawmakers preempt the ordinance passed by Minneapolis officials. In addition to establishing a minimum rate across the state, the bill will let Uber and Lyft drivers appeal account deactivations while also mandating vehicle insurance and compensation for injuries while on the job.

“Through direct engagement with all stakeholders, we have found enough common ground to balance a new pay increase for drivers with what riders can afford to pay and preserve the service,” Lyft spokesperson CJ Macklin said in an emailed statement to The Verge. Uber spokesperson Josh Gold tells The Verge that the company will continue providing services in the state even though “the coming price increases may hurt riders and drivers alike.”
All that’s left is for Minnesota Governor Tim Walz to sign the bill. During a press conference on Saturday, Walz said it will allow people in Minnesota “to continue to use these services if they see fit.” Last year, Governor Walz vetoed legislation that he claimed would’ve made Minnesota one of the most expensive states for ridesharing.
In 2020, Uber and Lyft threatened to stop offering their services in California over a new law that would classify their drivers as employees. This isn’t the end of Uber and Lyft’s costly driver classification problem, either. The companies are facing a lawsuit in Massachusetts that accuses the ridesharing services of misclassifying their drivers as independent contractors, while New Jersey is suing Lyft over similar arguments.


Hellblade II’s stunning world is alive with horror

Image: Xbox

Traversing the nightmare realms of the human mind has always been central to horror. The stories we tell ourselves about what’s moving in the darkness, the creation of monsters out of unknown sounds — from this emerges our fascination with terror. In Senua’s Saga: Hellblade II, players navigate a cold, dark ninth-century Iceland in the often wet shoes of the Pictish warrior Senua on her second adventure in the third-person action-horror franchise. I was very fond of the first Hellblade, which saw Senua journey into Hel to save the soul of her lover. But the sequel pulled me in even further, with some of the most unforgettable and unsettling psychological horror I’ve ever experienced.
Senua, who lives with psychosis that causes her to hear constant voices, is a fierce warrior who has “killed” gods before in her personal quests for redemption. Cast out by her people in the Orkney Islands for being different, or a “witch,” she has taken her battles across the sea to rescue them from slavers.
She washes up alone on a cold Viking shore and must find her way through a mysterious land. Her aim, vaguely, has to do with getting her people back, if not stopping the slavers themselves. She soon discovers this is a land of literal giants, who remind me slightly of Attack on Titan: their designs are horrific and genuinely terrifying. The giants have all but destroyed the land, but Senua, being who she is, finds a way to stop them.

Let me be clear: this is the best-looking game I’ve ever played. Senua’s face is so detailed, I struggled to discern the real-time gameplay from the cutscenes; the environmental details are so intricate, it was like looking at photos. With fantastical lighting, inversions of the world as Senua is sucked into various states of mind, and stunning cinematic direction, there was no moment when I was not ogling my TV.
Perhaps more important, however, is the sound design and direction. The voices in Senua’s head portray themselves as furies, gods, and spirits, alternating between berating and encouraging. In terms of game design, they help the player navigate the world — telling you to look in this direction, highlighting when an enemy is about to hit, and so on. Add to this the beautiful and horrific soundscapes, with chanting voices or drone-like sounds puncturing the environment.
The world is alive with horror, as it was difficult to discern what was “real” and what was Senua’s psychosis. This is illustrated in combat: Senua is an incredible warrior but the game bookmarks combat. What I mean is that, when a battle happens, you are locked on to a single enemy where you must parry, block, and strike. Senua has an ability called “focus” that builds up with successful blocks and strikes, which is effectively bullet time with swords. Combat is exhausting and the one weak spot of the game, as I always felt she was not responsive to my blocks and parries, resulting in constant strikes from enemies. Senua has no health bar and it is rather hard to die, but the combat proved frustrating even if it was incredible to witness: it’s brutal and dynamic, with fights happening around Senua, enemies crashing into her organically, and kills blending smoothly into cutscenes.

Image: Xbox

Unlike the first game, Senua has companions whom I genuinely grew to like: a spiritual leader, a reformed slaver, and the daughter of a betrayed chief. Villages are filled with people, with one sequence showing off a small battalion fighting alongside Senua. However, Senua remains inherently alone, and the journey is a lonely one — despite the voices constantly in your head. Senua solves environmental puzzles to navigate the world, which, like the first game, did get a little repetitive. These involve her having to match shapes that unlock doors. Still, there was at least more variety to the puzzles this time to keep things interesting beyond the tedious shape-matching.
The game is short, and I finished it in about 10 hours. But it packs a number of gorgeous scenes and scenarios into that brief span, from battling a monster in a volcano pit to the rhythm of metal music to navigating a Silent Hill-esque forest. This was an exhausting, beautiful experience that shattered me often, built me back up, and constantly challenged what it meant to manage one’s darkness. Senua is a broken person, but because of that, she fits into this broken world.
Senua’s Saga: Hellblade II is available now on Xbox and PC.


Microsoft is in its AI PC era

Image: Alex Parkin / The Verge

We’ve heard the story so many times: this is the time Windows on Arm will work, and we’ll get a revolution in powerful, portable, long-lasting PCs. For a decade, the story has been fiction. This time, though… I don’t know. It sort of seems real.
On this episode of The Vergecast, Tom Warren joins from Seattle to tell us all about Microsoft’s latest event, what a Copilot Plus PC is, how the new Surface Pro and Surface Laptop feel, why Recall could be the AI killer app Microsoft needs, and more. If Microsoft is right about how good the Qualcomm X chips are, this could be one of the biggest weeks in Windows history. Don’t forget to subscribe to Notepad!

After that, Kylie Robison comes on the show to talk about all things OpenAI. That lively, fun, flirty GPT-4o demo is still on our minds — is this the AI future we’re looking for? And what’s going to happen to Sky, the voice everyone heard and immediately said sounded like Scarlett Johansson, now that ScarJo herself has taken issue with it? The kind of company OpenAI wants to be, and the kinds of products it wants to build, seem hugely important in the next phase of the tech industry.
Finally, we take a question from the Vergecast Hotline about the difference between a touchscreen MacBook — which might happen — and a Mac-like iPad — which probably won’t. It’s about hardware and software, but mostly, it’s about business models. But hey, Apple, can we at least get better browsers?
If you want to know more about everything we discuss in this episode, here are some links to get you started, beginning with Microsoft:

Microsoft’s 2024 Surface AI event: news, rumors, and lots of Qualcomm laptops
Microsoft Surface event: the 6 biggest announcements
Microsoft announces Copilot Plus PCs with built-in AI hardware
The new, faster Surface Pro is Microsoft’s all-purpose AI PC
Microsoft Surface Pro hands-on
Microsoft announces an Arm-powered Surface Laptop
Hands-on with the Surface Laptop on Arm
Recall is Microsoft’s key to unlocking the future of PCs
All the Copilot Plus PC laptops announced at Microsoft Surface 2024
Inside Microsoft’s mission to take down the MacBook Air

And on OpenAI:

OpenAI releases GPT-4o, a faster model that’s free for all ChatGPT users
ChatGPT will be able to talk to you like Scarlett Johansson in Her
OpenAI chief scientist Ilya Sutskever is officially leaving
OpenAI researcher resigns, claiming safety has taken ‘a backseat to shiny products’
OpenAI pulls its Scarlett Johansson-like voice for ChatGPT
Scarlett Johansson told OpenAI not to use her voice — and she’s not happy they might have anyway
OpenAI is ‘in conversations’ with Scarlett Johansson over the ChatGPT voice that sounds just like her

Image: Alex Parkin / The Verge

We’ve heard the story so many times: this is the time Windows on Arm will work, and we’ll get a revolution in powerful, portable, long-lasting PCs. For a decade, the story has been fiction. This time, though… I don’t know. It sort of seems real.

Read More 

Adobe Lightroom gets a magic eraser, and it’s impressive

Adobe is adding more mobile app-like AI editing tools to Lightroom. | Image: Adobe

Adobe is adding some new generative AI tools to Lightroom that aim to make the photo editing platform easier to use for professional creatives and inexperienced users alike, even from a phone. These include an in-development object removal feature that’s rolling out in beta and new AI lens-blurring effects that are now generally available to all Lightroom users.
“Generative Remove” — powered by the company’s Firefly AI model — is now available to try in early access across Lightroom’s mobile, web, and desktop apps. Described as Lightroom’s “most powerful remove tool yet,” the feature allows users to “paint” over unwanted objects or people in images and then delete them with a click of a button.
It’s pretty similar to the “Magic Eraser” tools provided by Canva and Google’s Pixel devices or the one-click delete capabilities Adobe demonstrated last October for Project Stardust — a developmental “object-aware” photo editing engine that’s also powered by the company’s Firefly AI. Unlike Photoshop’s popular Content-Aware Fill tool (which tries to fill blank spaces by matching nearby pixels), the Generative Remove tool instead generates three different variations that replace the removed object, allowing users to select the option they feel looks most natural.

Image: Adobe
It does a decent job of matching complex or detailed backgrounds.

The live demonstration Adobe gave me over a video call was one of the most impressive I’ve seen from Adobe’s Firefly-powered products. The tool removed every example object in its entirety without leaving any strange artifacts behind, and the backgrounds generated to replace them — while not an accurate depiction of what’s behind the object — looked natural enough to be convincing. Removing objects from photographs used to require professionals to follow some fairly laborious masking and editing workflows, so not only does this make a tedious task easier, but it’s also less daunting for new users.

The Generative Remove feature is free to use while in beta, after which it’ll likely adopt the “Generative Credit” system used by other Firefly-powered tools, with credit packs currently starting at $4.99. When the feature is generally available, it’ll also support Content Credentials, which applies a metadata label to images edited using Adobe’s generative AI tools.
A new AI-powered Lens Blur tool is also generally available today for all Lightroom users. This feature can apply a variety of different blurring effects to any part of an image with a single click and automatically estimates field-of-view depth to make background blur appear more natural. Lens Blur is operated like a filter — users can apply an automated preset or adjust specific parameters until they get their desired effect.

Image: Adobe
The Lens Blur feature provides several filter-like presets to choose from if users can’t be bothered to make manual adjustments.

Adobe has been cramming Firefly-powered tools into several of its creative software applications since launching the generative AI model last year, so these Lightroom additions aren’t exactly surprising. Still, making these features as easy to operate as the Facetune app and Google’s Magic Eraser could help Adobe tempt new users to the platform if they no longer feel so intimidated by its complicated, professional-focused interface.

Read More 

Sonos’ Roam 2 speaker is easier to use and available today for $179

Image: Sonos

Sonos is refreshing its small portable speaker, the Roam, with a second-generation model that mostly adds some minor quality-of-life improvements. It’s reasonable to think of the new Roam 2, available today for $179, as more of a Roam 1.5: this is really what Sonos should’ve released in the first place.
As I reported earlier this month, the Roam 2 can be used right out of the box like any other Bluetooth speaker. That differs from the original, which awkwardly required an initial setup process with the Sonos app — on your home Wi-Fi network, no less — before you could use it on the go. Seems silly for a portable audio product, right? Thankfully Sonos now agrees.
The Roam 2 also gains a dedicated Bluetooth pairing button on the back, whereas the previous model piled that and other functionality into the power button. You’d have to hold it down for varying lengths of time for different results. Now it’s a much more foolproof approach.

Photo by Chris Welch / The Verge
The original Sonos Roam has faced complaints over deteriorating battery performance.

But one thing owners of the original are likely wondering about is longevity. Unlike the Move and Move 2, the Roam (and now Roam 2) lacks a user-replaceable battery. And there have been complaints aplenty about the tiny speaker gradually holding less of a charge over time. The compact speaker’s max continuous listening time of 10 hours is unchanged, but Sonos CEO Patrick Spence told me there are under-the-hood tweaks.
“We’ve optimized the battery; there’s a little bit better battery life,” he told me at a media briefing in New York last week without getting too specific. He said Sonos’ main learnings from the original Roam were that customers wanted more tactile buttons and the freedom to use it immediately after purchase. (The new Ace headphones can similarly be paired to a device without delay.)

Image: Sonos
The design hasn’t changed, nor has the sound quality.

The Roam continues to support features like Sound Swap, making it easy to switch music between it and the company’s other speakers. You can also pair any Bluetooth-compatible source to the speaker and then share that audio across your entire Sonos system. The USB-C port remains strictly for power, however; it doesn’t have the line-in capabilities of the Move 2.
“You’re right that it’s an incremental change,” Spence said of the Roam 2 overall. Internally, the speaker has the same driver layout as before — one tweeter and one midwoofer — and Sonos isn’t hyping any sound quality improvements. So if you’ve heard the first-gen speaker, you’ve already got a good idea of what this one is capable of.
The Roam 2 comes in black, green, orange, or white; you can tell you’re looking at the new model because the Sonos logo now matches the body color instead of the white text from last time. It’s on sale at retailers beginning today, although some Best Buy locations seem to have put it on display even earlier.

Read More 

The Sonos Ace headphones are here, and they’re damn impressive

The company’s app redesign fumble threatens to steal the thunder from what otherwise looks (and feels) like a strong debut in a new category.

There’s so much riding on the new $450 Sonos Ace headphones. With demand cooling for the company’s speakers and soundbars ever since a pandemic boom, Sonos could use a hit product — or at least a strong debut in a massive product category. The Ace could certainly end up being that, but these headphones arrive under the shadow of Sonos’ recent app redesign, which has angered many customers who were left without many features after updating.
Sonos has vowed to restore those software functions in the weeks ahead, but the whole situation — and the unshakeable feeling that the app overhaul was rushed out the door — has rocked the trust between the audio brand and some of its most loyal customers. This is not where Sonos wanted to find itself in the lead-up to what CEO Patrick Spence has described as its most-requested new device ever. But it’s where we are now as the Ace headphones go up for preorder ahead of a June 5th release.
Last week, the company hosted media in New York City for a first look at the Sonos Ace. I got to test out the noise-canceling headphones — not for long enough to form any serious judgments on sound quality — and experience their headline feature, which is the ability to instantly transfer TV audio from a Sonos soundbar to the headphones with the push of a button. The Ace headphones support spatial audio and head tracking, providing a cinematic private listening experience for those times when you might otherwise need quiet in the TV room. (Spatial audio can also be used during regular music listening.)

The content key (metal slider) is how you adjust volume, play/pause, and beam TV audio from a Sonos soundbar to the Ace.

During the briefing, I sat down with Spence to discuss the headphones, which he said have been requested by “tens of thousands” of customers. Rumors of Sonos entering this space have swirled for many years. There were many prototypes along the way, but the Ace hardware you see here underwent a development period of around two years. And they certainly borrow some ideas from their contemporaries.
These look like what you’d get if you put Sony’s WH-1000XM5 and Apple’s AirPods Max into a blender. The pleather ear cushions are magnetic and easily removable, though Sonos tosses in some thoughtful touches of its own; the insides are color-coded so you can easily tell which goes on what side. There’s a fingerprint-resistant coating on the exterior of the headphones to reduce smudges — particularly helpful for the black pair. And the memory foam headband has varying levels of padding to avoid putting too much pressure on any one section of your head.
Mercifully, the Ace are far lighter than the AirPods Max. There’s not quite as much metal throughout, but they still feel very well put together. And on my ears, they felt wonderfully comfortable. “We’ve done more work on this product than anyone in the industry to make sure it fits a variety of different heads, ears — both men and women — and I think this is going to be the most comfortable premium headphone yet,” Spence told media.
Try as I might, I couldn’t find any obvious first-generation hardware flaws in my brief time with them. Maybe they’ll reveal themselves as I review the Ace, but on first impression, it’s clear that Sonos sweated the small details. (One more example: inside the fabric carrying case is a pouch for the USB-C and headphone cables that also attaches magnetically.) The controls are done right too, with physical buttons for everything and no tap or swipe gestures to memorize.

The Ace’s vegan leather ear pads are magnetic and easily replaceable. And the insides are different colors so you know which goes where.

But if you were expecting the Sonos Ace to inherit all the same functionality as the company’s home audio speakers, you’re in for some disappointment. These don’t play music over Wi-Fi. The best you’ll get is aptX Adaptive on modern Android devices for higher-bitrate Bluetooth streaming from compatible music services. You can’t group the Ace with Sonos speakers or set the headphones as their own “zone” in the app — yes, you’ll need the divisive new app to change settings or adjust EQ — and while I’ve long dreamt of some intelligent automatic handoff between headphones and speakers whenever you arrive home, that’s not present, either.

Image: Sonos
You can privately listen to TV audio sent to the headphones from the company’s soundbars.

Right now, the one big Sonos-y trick of the Ace is their ability to receive audio from the company’s soundbars for private listening. (Only the flagship Arc will support this feature at launch, with the Beam Gen 2, Beam, and Ray coming later.) You hold down the “content key” — that’s the metal slider that also controls volume and play/pause — and within a couple seconds, the soundbar beams over Dolby Atmos audio to the headphones, complete with spatial audio head tracking.
This works for any input device running through the soundbar. Streaming boxes? Sure. Gaming consoles? Check. You can walk around the house and keep listening to a sports game in the background as you clean up or focus on other things. TV Audio Swap will be exclusively available to people with iOS devices at launch, with Android support for this major feature coming “soon.” So Android users can take advantage of better Bluetooth audio (thanks to aptX), while the iOS side gets to enjoy the headlining home theater trick.

The headphones contain a sensor that can detect when you’re moving around, at which point they’ll turn off head tracking.

Stereo content is upmixed by default in home theater mode, but you can always disable spatial audio if you prefer to hear proper stereo without any wizardry applied. Sonos’ sound guru Giles Martin told me the company is being “careful” about how aggressively it virtualizes stereo. The head tracking effect is fairly subtle because, as Martin noted, if it’s too obvious or gimmicky, people will likely just turn it off. The headphones can detect when you’ve stood up to go grab something from the fridge, and in those situations, head tracking temporarily gets disabled until you’re stationary again.
All the intensive audio processing and binaural encoding gets done on the soundbar side, but here’s an interesting thing: Sonos is using Wi-Fi to beam audio over to the headphones in this home theater mode. It’s not lossless, however. One of the company’s engineers told me it’s 345kbps and also confirmed that this Wi-Fi streaming does eat into battery life, which is normally rated at 30 hours (with ANC on). But Sonos isn’t sharing battery estimates for home theater playback — partially because the headphones support fast charging if you ever do run them down.

The Ace come in black or white — and Sonos really obsessed over that white shade.

The memory foam ear cushions are covered in vegan leather.

Private listening between TVs (or streaming devices) and headphones is by no means a new concept; you can listen to the Apple TV with Apple’s AirPods. Roku has included a headphone jack on many of its remotes for years. And you can pair Bluetooth earbuds with any number of Google TVs.
But Sonos believes the Ace can dial up the immersion to a level far beyond its competitors, and this is partly due to a new feature the company calls TrueCinema. Your soundbar will perform a calibration of the room’s acoustic qualities — sort of like TruePlay — while the mics on the headphones will help pinpoint your seating position and adapt the spatial audio to your unique space. Theoretically, this data will make the 3D spatial audio surround sound feel all the more convincing and like you’re not wearing headphones at all. I’ll need more hands-on time to determine if TrueCinema is really a difference-maker. As is, the feature won’t roll out until later this year.

The fabric carrying case has a pouch for accessories.

Can Sonos really go toe to toe with Bose and Sony in active noise cancellation? Will the Ace’s aware / transparency mode prove as natural-sounding as the AirPods Max, which remain undefeated in that department? And how will the sound quality stack up after some extended listening time?
Stay tuned for our full review of the Sonos Ace in the coming days, and if you’re curious about anything in particular, don’t hesitate to drop a comment.
Photography by Chris Welch / The Verge


Aqara’s new smart outlet can lock the door when your phone starts charging

The Aqara Wall Outlet H2 can trigger scenes based upon real-time power usage. | Image: Aqara

Aqara just announced the €39.99 Wall Outlet H2, which offers a one-for-one smart replacement for standard 16A wall sockets used in Europe. Its party trick is the ability to monitor the real-time power usage of connected devices and appliances to initiate automations around the home.
For example, if you charge your phone when going to sleep, you could automatically activate a nighttime scene — arm motion detectors, lock the door, close blinds, and turn off lights — as soon as the H2 begins pulling more than 10W of power. Or it can detect when the laundry cycle ends, shut the machine off, and send you a notification — that way, you don’t have to listen to it beep incessantly until you finally walk upstairs with heavy regret for having bought the cheap entry-level Bosch you’ve grown to hate (yes: this is me).
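The phone-charging automation described above is, at its core, a simple threshold rule: fire the scene the first time the outlet’s power draw crosses a set wattage. Here’s a minimal sketch of that logic in Python — all names are hypothetical and illustrative, not part of Aqara’s actual API:

```python
# Hypothetical sketch of a power-threshold automation like the H2's.
# None of these names come from Aqara; they're illustrative only.

CHARGING_THRESHOLD_W = 10.0  # the article's example: trigger above 10W


def should_run_nighttime_scene(power_draw_w: float, scene_already_active: bool) -> bool:
    """Fire the scene the first time draw exceeds the threshold."""
    return power_draw_w > CHARGING_THRESHOLD_W and not scene_already_active


def nighttime_scene() -> list[str]:
    """The actions the article describes for a bedtime scene."""
    return ["arm_motion_detectors", "lock_door", "close_blinds", "turn_off_lights"]


# Phone plugged in at bedtime: draw jumps from ~0W to ~18W.
if should_run_nighttime_scene(18.0, scene_already_active=False):
    actions = nighttime_scene()
```

The `scene_already_active` flag is what keeps the scene from re-firing on every subsequent power reading while the phone keeps charging.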
The Wall Outlet H2 is built on Zigbee 3.0, not Thread like Eve’s smart outlet, but it can be integrated into Matter-compliant homes if you have a compatible Aqara hub, like the new M3, installed. Even without Matter, Aqara’s hubs ensure compatibility with Apple Home, Alexa, Google Home, Samsung SmartThings, Home Assistant, and other smart home solutions.

Image: Aqara
The Aqara H2 fits standard 55mm wall plates.

The H2 can automatically cut power at defined thresholds of up to 3840W. It can also switch off the outlet once power draw stays below a user-defined threshold of between 0W and 2W for over 30 minutes — something that can improve battery charging safety and the lifespan of some battery cells. It’s designed to fit standard 55mm wall plates in single or multi-outlet configurations. An LED light on the unit itself can be configured to show the current state or be shut off entirely.
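The low-power auto-off rule amounts to checking that every reading in a trailing 30-minute window sits below the user’s threshold. A small sketch of that check, with hypothetical names (this is not Aqara firmware):

```python
# Hypothetical sketch of a low-power auto-off rule like the H2's:
# cut power once draw has stayed below a user-set threshold (0-2W)
# for more than 30 minutes. Names are illustrative, not Aqara's.

LOW_POWER_WINDOW_S = 30 * 60  # 30 minutes, in seconds


def should_auto_off(samples: list[tuple[float, float]], threshold_w: float) -> bool:
    """samples: (timestamp_seconds, power_watts) pairs, oldest first.

    Returns True only if we have at least 30 minutes of history and
    every reading inside the trailing 30-minute window is below the
    threshold.
    """
    if not samples:
        return False
    latest_t = samples[-1][0]
    cutoff = latest_t - LOW_POWER_WINDOW_S
    if samples[0][0] > cutoff:
        return False  # history doesn't reach back a full 30 minutes yet
    recent = [w for t, w in samples if t >= cutoff]
    return all(w < threshold_w for w in recent)
```

A single spike inside the window — say, the laundry machine kicking back on — resets the clock, since one reading at or above the threshold fails the `all()` check.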
Aqara’s approach to energy management is similar to Ikea’s recently announced Energy Insights feature and Inspelning energy-monitoring smart plug, in that it’s currently taking a proprietary approach. Hopefully they’ll both work with Matter’s new energy monitoring capabilities somewhere down the road. Intelligent control over home energy consumption holds the potential to not only make homes smarter, but to reduce energy use and save money.
The €39.99 (roughly $43) Aqara Wall Outlet H2 will be available starting May 30th in select European countries.



Volvo teams up with Aurora to reveal an autonomous semi truck

Image: Aurora

Volvo revealed its first “production-ready” self-driving truck that it’s making with Aurora, the autonomous driving technology company founded by former executives from Google, Uber, and Tesla.
The truck is based on Volvo’s new VNL, which is a Class 8 semi truck built for long-haul transportation. The autonomous version of the truck features an array of sensors and cameras to power Aurora’s Level 4 autonomous driving system, which enables the truck to operate without a human behind the wheel. The companies say the truck is “purpose-designed and purpose-built” for Aurora’s self-driving hardware and software stack.
“This truck is the first of our standardized global autonomous technology platform, which will enable us to introduce additional models in the future, bringing autonomy to all Volvo Group truck brands, and to other geographies and use cases,” Nils Jaeger, president of Volvo Autonomous Solutions, said in a statement.
The truck is based on Volvo’s new VNL, which is a Class 8 semi truck built for long-haul transportation
The idea of the vehicle being purpose-built is important for the mass production of self-driving trucks, which is crucial if the companies are ever to achieve a return on the massive amounts of money that they’ve invested in AV development. The trucks will be built at Volvo’s New River Valley plant in Dublin, Virginia, which is the company’s largest in the world.
Volvo, which makes about 10 percent of the world’s Class 8 trucks, began working with Aurora on self-driving truck technology in 2018. The companies have tested their technology on public roads, with Aurora having driven 1.5 million miles on commercial routes.

Aurora has said it plans to deploy 20 fully autonomous trucks this year, with an eye on expanding to about 100 trucks in 2025 and eventually selling to other companies. The company is also working with German auto supplier Continental to deploy driverless trucks at scale in 2027.
Autonomous trucks were once thought to precede robotaxis and personally owned autonomous vehicles in mass adoption but have run into similar obstacles as those other vehicle types along the way. Some companies have gone out of business, while others have cut plans to deploy driverless trucks as timelines have stretched into the future and funding has dried up. Other automakers are still bullish, designing their own autonomous trucks with hard deadlines for deployment.
Moreover, public opinion on autonomous vehicles has trended downward, thanks in part to missteps by companies like Tesla and Cruise, the latter of which was forced to pause operations nationwide after a pedestrian was hurt by one of its vehicles.
Aurora hasn’t had any public mishaps, nor has it attracted the kind of negative attention from the government that some of its peers have. The company reported a net loss of $165 million in the first quarter of 2024, a 16 percent improvement over the same period the year before.


