
All the news from Microsoft Build 2024

Image: The Verge

Expect a bunch of new developer tools to go along with Microsoft’s fresh Copilot Plus PCs announcement.

Microsoft is kicking off its three-day Build developer conference on Tuesday, May 21st, with a livestream starting at 11:30AM ET / 8:30AM PT. It’ll lead into an in-person keynote led by CEO Satya Nadella, which commences at 12PM ET / 9AM PT, followed by developer sessions that will be available to watch online.

Build is Microsoft’s developer conference, where the company provides in-depth sessions for developers and professionals alike to get familiar with tools supporting new Windows 11 and Microsoft 365 features. This time, we’re expecting plenty of new AI announcements and sessions related to just-announced AI features like Recall.

Microsoft made a huge splash in the PC world on Monday by announcing new Arm-powered “Copilot Plus PCs,” including a brand-new Surface Laptop and tablet. The new devices come with an emulation layer called Prism that promises seamless compatibility with x86 apps on Windows — taking a page out of Apple’s successful transition to its own M-series chips.

However, as Tom Warren wrote in his Notepad newsletter after the Surface event, “Apple’s success with the M1 was thanks to developers quickly porting apps to be fully native. Windows needs that same level of support from its developer community.” We’ll see this week if Microsoft has all the tools it needs to make that happen.

Read on for all the latest Build news.


Microsoft Teams is adding a Slack-favorite emoji feature

Microsoft is adding custom emoji to Teams. | Image: The Verge

Microsoft is adding a new feature to its Teams communications platform that enables users to upload their own custom emoji to use in reactions and messages. Announced during its Build developer conference on Tuesday, Microsoft says the new custom emoji will be available to try next month via the Teams public preview, with the goal of helping Teams users collaborate and express themselves “more creatively and authentically.”

IT admins for businesses that use Teams will have the ability to limit which users can upload or delete custom emoji, or they can turn the feature off entirely. Once custom emoji are uploaded into Teams, they’ll only be visible within the same organization domain. Microsoft says that general availability for custom emoji is expected sometime this July.
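The admin controls described above boil down to three rules: a per-user allowlist for uploads and deletes, an org-wide kill switch, and visibility scoped to the uploader’s organization. Here’s a minimal sketch of that policy model — all names and structure are hypothetical illustrations, not Microsoft’s actual API:

```python
# Hypothetical sketch of Teams custom emoji admin controls.
# Illustrative only; not Microsoft's actual data model or API.

from dataclasses import dataclass, field

@dataclass
class EmojiPolicy:
    enabled: bool = True                         # admins can turn the feature off entirely
    uploaders: set = field(default_factory=set)  # users allowed to upload (empty = everyone)
    deleters: set = field(default_factory=set)   # users allowed to delete (empty = everyone)

    def can_upload(self, user: str) -> bool:
        return self.enabled and (not self.uploaders or user in self.uploaders)

    def can_delete(self, user: str) -> bool:
        return self.enabled and (not self.deleters or user in self.deleters)

@dataclass
class CustomEmoji:
    name: str
    org: str  # uploaded emoji are only visible within the same organization

    def visible_to(self, viewer_org: str) -> bool:
        return viewer_org == self.org
```

With a policy restricted to specific uploaders, only those users pass the check, and an emoji uploaded under one org domain stays invisible to viewers from any other.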

Image: Microsoft
Here’s an example of how the custom emoji options will be presented to Teams users.

The announcement comes three months after Microsoft employees discovered an early version of the feature on an internal Teams build, finding an animated emoji of the widely memed Pepe the Frog character in some reactions and messages. Currently, Microsoft Teams only supports official Unicode emoji — the standardized array of public domain emoji supported by most smartphones and social media platforms.

In contrast to other communications platforms, Microsoft is very late to the custom emoji party. It’s been a prominent feature on Slack and Discord for years already, while Google Chat rolled out a similar feature for Workspace users back in 2022.


Microsoft Edge will translate and dub YouTube videos as you’re watching them

Image: The Verge

Microsoft Edge will soon offer real-time video translation on sites like YouTube, LinkedIn, Coursera, and more. As part of this year’s Build event, Microsoft announced that the new AI-powered feature will be able to translate spoken content through both dubbing and subtitles live as you’re watching it.

So far, the feature supports the translation of Spanish into English as well as the translation of English to German, Hindi, Italian, Russian, and Spanish. In addition to offering a neat way to translate videos into a user’s native tongue, Edge’s new AI feature should also make videos more accessible to those who are deaf or hard of hearing.

Edge will also support real-time translation for videos on news sites such as Reuters, CNBC, and Bloomberg. Microsoft plans on adding more languages and supported websites in the future.

This adds to the array of AI features Microsoft has added to Edge through an integration with Copilot. Edge already offers the ability to summarize YouTube videos, but it can’t generate text summaries of every video, as it relies on the video’s transcript to create the summary.
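Note that the launch language pairs are asymmetric: Spanish translates into English, while English translates into five languages. A tiny lookup makes the supported directions concrete (illustrative only; not Edge’s internal representation):

```python
# Launch language pairs for Edge's real-time video translation, as announced.
# Keys are source languages, values are the supported target languages.
# Illustrative sketch; Edge's actual internals are not public.

SUPPORTED_PAIRS = {
    "es": {"en"},                                # Spanish -> English
    "en": {"de", "hi", "it", "ru", "es"},        # English -> five targets
}

def can_translate(source: str, target: str) -> bool:
    """Return True if the source-to-target direction is currently supported."""
    return target in SUPPORTED_PAIRS.get(source, set())
```

So `can_translate("es", "en")` holds, but the reverse of a pair — say German into English — isn’t supported at launch.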


Microsoft brings out a small language model that can look at pictures

Illustration: The Verge

Microsoft announced a new version of its small language model, Phi-3, which can look at images and tell you what’s in them.

Phi-3-vision is a multimodal model — that is, it can process both text and images — and is best suited to mobile devices. Microsoft says Phi-3-vision, now available in preview, is a 4.2 billion parameter model (parameter count is a rough measure of a model’s size and capacity) that can handle general visual reasoning tasks, like answering questions about charts or images.

But Phi-3-vision is far smaller than image-focused AI models like OpenAI’s DALL-E or Stability AI’s Stable Diffusion. Unlike those models, Phi-3-vision doesn’t generate images, but it can understand what’s in an image and analyze it for a user.

Microsoft announced Phi-3 in April with the release of Phi-3-mini, the smallest Phi-3 model at 3.8 billion parameters. The Phi-3 family has two other members: Phi-3-small (7 billion parameters) and Phi-3-medium (14 billion parameters).

AI model developers have been putting out small, lightweight models like Phi-3 as demand grows for more cost-effective and less compute-intensive AI services. Small models can power AI features on devices like phones and laptops without taking up too much memory. Microsoft has already released other small models in addition to Phi-3 and its predecessor, Phi-2. Its math problem solving model, Orca-Math, reportedly answers math questions better than bigger counterparts like Google’s Gemini Pro.

Phi-3-vision is now available in preview. The other members of the Phi-3 family — Phi-3-mini, Phi-3-small, and Phi-3-medium — are available through Azure’s model library.
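A quick back-of-envelope calculation shows why those parameter counts matter for on-device use: the weights of a 4.2 billion parameter model at 16-bit precision occupy roughly 8.4 GB, dropping to about 2.1 GB with 4-bit quantization. These are weights-only lower bounds (real deployments add activations, KV cache, and runtime overhead), so treat the numbers as a rough sketch:

```python
# Back-of-envelope weights-only memory estimates for the Phi-3 family sizes
# mentioned above. Actual footprints vary with quantization and runtime.

def weight_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate weights-only memory in gigabytes (1 GB = 1e9 bytes)."""
    return params_billions * bytes_per_param

phi3_vision_fp16 = weight_memory_gb(4.2, 2.0)   # 16-bit weights: ~8.4 GB
phi3_vision_int4 = weight_memory_gb(4.2, 0.5)   # 4-bit quantized: ~2.1 GB
phi3_medium_fp16 = weight_memory_gb(14.0, 2.0)  # 16-bit weights: ~28 GB
```

The quantized figure is what makes a model of this size plausible on a laptop or phone, while a 14 billion parameter sibling at full precision would already exceed the RAM of most consumer devices.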


Microsoft’s new Copilot AI agents act like virtual employees to automate tasks

Image: The Verge

Microsoft will soon allow businesses and developers to build AI-powered Copilots that can work like virtual employees and perform tasks automatically. Instead of sitting idle waiting for queries, Copilot will be able to do things like monitor email inboxes and automate a series of tasks or data entry that employees normally have to do manually.

It’s a big change in Copilot’s behavior, part of what the industry commonly calls AI agents: chatbots that can intelligently perform complex tasks autonomously.

“We very quickly realized that constraining Copilot to just being conversational was extremely limiting in what Copilot can do today,” explains Charles Lamanna, corporate vice president of business apps and platforms at Microsoft, in an interview with The Verge. “Instead of having a Copilot that waits there until someone chats with it, what if you could make your Copilot more proactive and for it to be able to work in the background on automated tasks.”

Image: Microsoft
Microsoft’s new Copilot Studio homepage.

Microsoft is previewing this new capability today to a very small group of early access testers ahead of a public preview inside Copilot Studio later this year. Businesses will be able to create a Copilot agent that could handle IT help desk service tasks, employee onboarding, and much more. “Copilots are evolving from copilots that work with you, to copilots that work for you,” says Microsoft in a blog post.

These Copilot agents will be triggered by certain events and work with a business’s own data. Here’s how Microsoft describes a potential Copilot for employee onboarding:

Imagine you’re a new hire. A proactive copilot greets you, reasoning over HR data and answers your questions, introduces you to your buddy, gives you the training and deadlines, helps you with the forms and sets up your first week of meetings. Now, HR and the employees can work on their regular tasks, without the hassle of administration.

This type of automation will naturally lead to questions about job losses and fears about where AI heads next. Lamanna argues that Copilot agents can remove the repetitive and mundane tasks of jobs, like data entry, instead of replacing jobs entirely.

“What makes a job, what makes a role? It’s a bunch of different tasks and generally it’s a very large number of very diverse and heterogeneous tasks. If someone did one thing over and over again, it probably would have already been automated by current technology,” says Lamanna. “We think with Copilot and Copilot Studio, some tasks will be automated completely… but the good news is most of the things that are automated are things that nobody really wants to do.”

Microsoft’s argument that it only wants to reduce the boring bits of your job sounds idealistic for now, but with the constant fight for AI dominance between tech companies, it feels like we’re increasingly on the verge of more than basic automation. Lamanna believes human judgment and collaboration are still important parts of getting work done and that not everything will be suitable for automation.

There are also still a lot of problems with generative AI right now, especially around hallucinations where it just confidently makes stuff up. Microsoft says it has built a number of controls into Copilot Studio for this AI agent push so that Copilot doesn’t simply go rogue and automate tasks freely. That’s a big concern that we’ve seen play out already with Meta’s own AI ad tools misfiring and blowing through cash.

Image: Microsoft
Agents inside Copilot Studio.

You can build Microsoft’s Copilot agents with the ability to flag certain scenarios for humans to review, which will be useful for more complex queries and data. This all means Copilot should operate within the confines of what has been defined and the instructions and actions that are associated with these automated tasks.
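That human-in-the-loop pattern — act only on events covered by explicit instructions, escalate everything else — can be sketched in a few lines. This is purely a hypothetical illustration of the control flow described above, not Copilot Studio’s actual programming model:

```python
# Hypothetical sketch of an event-triggered agent with human review.
# Illustrative only; Copilot Studio's real model is configured, not coded.

def handle_event(event: dict, playbook: dict) -> dict:
    """Route an event to a defined automated action, or escalate to a human.

    The agent acts only on event types that appear in its playbook; anything
    unknown, or anything explicitly marked for review, is flagged instead of
    being acted on automatically.
    """
    action = playbook.get(event["type"])
    if action is None or event.get("needs_review"):
        return {"status": "escalated", "reason": "outside defined instructions"}
    return {"status": "automated", "action": action}

# Example playbook: the only tasks this agent is instructed to automate.
PLAYBOOK = {
    "password_reset": "run_reset_workflow",
    "new_hire": "start_onboarding_checklist",
}
```

A routine password reset gets handled automatically, while an event type the playbook never defined — or one flagged as sensitive — is handed to a person, which is the confinement the article describes.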
Microsoft is also making it easier for businesses to bring their own data into their custom Copilot, with data connections to public websites, SharePoint, OneDrive, and more. This is one part of a broader effort inside Microsoft to make Copilot more than just a chatbot that generates things.

“Copilot in 2023 — and Microsoft — was very focused on searching over your data, summarizing your content, and generating new content. We think Copilot in 2024 is going to be very focused on customization,” says Lamanna. New Copilot extensions will enable part of this customization, allowing developers to build connectors that extend Copilot across line of business systems.

Microsoft also wants Copilot to work with groups of people more, instead of these one-to-one experiences that have existed over the past year. A new Team Copilot feature will allow the assistant to manage meeting agendas and notes, moderate lengthy team chats, or help assign tasks and track deadlines in Microsoft Planner. Microsoft plans to preview Team Copilot later this year.

At Google I/O last week, the search giant also showed some early concepts for its own AI agents that automate tasks for you, demonstrating how Gmail users would be able to use an AI agent to automatically complete a return form for some shoes and have someone collect them.

The big question that remains is how all of these AI agents will work in reality. We constantly see AI fail on basic text prompts, provide incorrect answers to queries, or add extra fingers to images, so do businesses and consumers really trust it enough to automate tasks in the background? I guess we’re about to find out.


Elden Ring: Shadow of the Erdtree’s new trailer is a dramatic call to arms

Image: Bandai Namco

With only a month until release day, there’s a new story trailer for Shadow of the Erdtree, and the Elden Ring lore hounds have their work cut out for them.

FromSoftware has been slowly teasing Shadow of the Erdtree and its story beats ever since the gameplay trailer was released back in February. All we knew then was the DLC would take place in the Land of Shadow and feature a bunch of new horrors straight from the slightly warped imagination of game director Hidetaka Miyazaki.

This new Land of Shadow was connected somehow to Miquella — the dude in the egg with the withered arm. The trailer mentions him prominently, giving the impression we’ll learn more about him, his connection to the Land of Shadow, and what events led him to abandon the place to its fiery fate.

Speaking of fiery fate, the trailer also spends its short runtime on Messmer, the red-haired hottie with the snakes who seems like he’ll be the DLC’s final boss. (Although you never can tell with FromSoftware games. Once you think you’ve beaten the final boss, some cosmic entity arrives and blows up your last flask of healing.) Messmer is apparently responsible for the great purge of the Land of Shadow, and I’m curious to know why.

The story trailer was short and light and cryptic, as FromSoftware stories tend to be. I’m looking forward to the hours of YouTube deep dives I’ll digest as creators (shout out to Iron Pineapple) extrapolate the deepest of lore from the smallest of moments. Shadow of the Erdtree launches on June 21st on Xbox, PlayStation, and PC.


Comcast bundles Netflix, Peacock, and Apple TV Plus for $15 / month

Image: Comcast

Get ready for yet another streaming bundle: Comcast is launching a new $15 per month subscription that combines Netflix, Apple TV Plus, and Peacock. The “StreamSaver” bundle will be available to Xfinity Internet and TV customers on May 29th — and it offers monthly savings of around $8.

If you do subscribe, expect to sit through some commercials, as the bundle includes Netflix’s ad-supported plan ($6.99 / month), Peacock’s ad-supported Premium plan ($5.99 / month), and Apple TV Plus, which is ad-free by default ($9.99 / month). Variety first reported on rumors of the bundle last week.

Comcast is also bundling its bundle with its $20 per month Now TV service, which offers over 40 live TV channels and comes with a Peacock Premium subscription. You can subscribe to the streaming bundle and Now TV for a combined price of $30 per month when the bundle launches on the 29th.

Similarly, last December, Verizon introduced a streaming bundle with Netflix and Max for $10 per month with ads. Disney Plus, Hulu, and Max have even teamed up to cut out the middleman — like Comcast or Verizon — and launch a not-yet-priced streaming bundle of their own this summer.

As streaming prices continue to trek upward, bundles will likely become crucial in keeping people subscribed.

Disclosure: Comcast is an investor in Vox Media, The Verge’s parent company.


The EPA is cracking down on cybersecurity threats

The East Bay Municipal Utility District Wastewater Treatment Plant on March 20th, 2024, in Oakland, California.  | Photo by Justin Sullivan / Getty Images

The Environmental Protection Agency is ramping up its inspections of critical water infrastructure after warning of “alarming vulnerabilities” to cyberattacks.
The agency issued an enforcement alert yesterday warning utilities to take quick action to mitigate threats to the nation’s drinking water. The EPA plans to increase inspections and says it will take civil and criminal enforcement actions as needed.
“Cyberattacks against [community water systems] are increasing in frequency and severity across the country,” the alert says. “Possible impacts include disrupting the treatment, distribution, and storage of water for the community, damaging pumps and valves, and altering the levels of chemicals to hazardous amounts.”
“Cyberattacks against [community water systems] are increasing in frequency and severity across the country.”
More than 70 percent of water systems inspected since September 2023 failed to comply with mandates under the Safe Drinking Water Act (SDWA) that are meant to reduce the risk of physical and cyberattacks, the EPA said. That includes failing to take basic steps like changing default passwords or cutting off former employees’ access to facilities. Since 2020, the EPA has taken more than 100 enforcement actions for violations of that section of the SDWA.
“Foreign governments have disrupted some water systems with cyberattacks and may have embedded the capability to disable them in the future,” the enforcement alert says. One example it cites is Volt Typhoon, a People’s Republic of China state-sponsored cyber group that has “compromised the IT environments of multiple critical infrastructure organizations,” according to a Department of Homeland Security advisory issued in February.
Hacktivists in Russia likely linked to the Sandworm group that attacked Ukraine’s power grid caused an overflow at a water facility in Texas in January, CyberScoop reports, although the incident didn’t disrupt service to customers. Last year, a Pennsylvania water facility was forced to rely on manual operations after an attack by hackers linked to the Iranian Islamic Revolutionary Guard Corps.
The EPA’s enforcement alert asks utilities to follow recommendations for maintaining cyber hygiene, including conducting awareness training for employees, backing up OT / IT systems, and reducing the exposure of operational systems to the public-facing internet.
It follows a letter EPA administrator Michael Regan and national security advisor Jake Sullivan sent to state governors earlier this year warning them of cyber risks to the nation’s drinking and wastewater systems. It led to a March convening where the National Security Council asked each state to come up with an action plan to address those vulnerabilities by late June.

Hellblade II: headphones on, heart rate up

Image: Ninja Theory

If you can, play Hellblade II with headphones to fully experience the depth of the game’s unique storytelling. In Senua’s Saga: Hellblade II, the first and only instruction the game provides is that it’s best experienced with headphones. I typically ignore that advice. I’m not a “headphones on, lights off” kind of player. I don’t need ambiance. But for Hellblade II, I decided, “Why not?” What followed was an aural experience that thrilled, frightened, and unsettled the Hel(lblade) outta me.
Note: Of course, if, for whatever reason, you can’t play with headphones, the game’s subtitles and closed captioning do good work conveying the game’s unique audio design.
In Hellblade II, the follow-up to 2017’s Hellblade: Senua’s Sacrifice, Senua, a Pict warrior, embarks on another harrowing journey. Instead of venturing to the gates of Helheim, she travels north to confront the Viking raiders who have been stealing and enslaving her people. On her journey, Senua is accompanied by a Homeric chorus of voices that reflect her struggle with psychosis. The team at Ninja Theory made it a point to explain that they consulted with mental health experts — including a professor of psychiatry at Cambridge University and people who live with the disorder — in order to portray psychosis respectfully and accurately. That manifests as hearing voices clip in and out of each side of my headphones. At the beginning of the game, I would whip my head left or right as the voices jumped around before I finally grew used to them.

There’s a richness to the voice performances that is lost if they’re diffused through open air instead of beamed directly into your ears. They speak in short, staccato sentences, contextualizing Senua’s feelings about a particular encounter. When she meets someone, her voices wonder if they can be trusted. In fights, they shout encouragements and admonitions: “Get up. Get up!” or “They’re so strong!” I like the voices. They remind me of my own rapid-fire internal monologue quipping about the myriad things that flit around in my head.
Senua’s voices also help with navigation and puzzle solving, but not so much that it becomes obnoxious. In Hellblade II, there’s none of that “the dialogue tells you the solution” stuff that happens in other games. For the first puzzle section, involving a path blocked by a large symbol, the voices yell “Focus!” prompting you to press the right trigger to engage Senua’s focus ability. When I got lost during a particularly bewildering section in a dark forest, the voices remarked only once that I was lost, then shut up, leaving me to figure out the solution in blessed silence.

Senua’s Saga: Hellblade II starter pack pic.twitter.com/tOAEtdELKe — Ninja Theory (@NinjaTheory) May 20, 2024

While delving into a drained lake, I could hear water dripping all around me as the caverns echoed with the sound of my breathing. The further I traveled, the darker it got, and the more sinister the sounds became. My breathing slowly transformed into the guttural sounds of disquieted spirits. That should have been a problem. I hate the sounds typically associated with horror — that crawling, squelching sound used whenever a game or movie wants to convey that something’s gross and wet. But the scary sounds in Hellblade II never crossed the threshold into repulsive or triggering for me (misophonia sufferers, rejoice). Instead, the sounds were soft and quiet but no less sinister, sounding as though they were just over my shoulder in the real world.

Image: Ninja Theory

When audio enhances visual, Hellblade II becomes a harrowing experience.

My aural journey with Hellblade II wasn’t limited to sound effects and voices. The music also played an integral role in crafting a visceral full-body experience with the game. Early on, there’s an encounter where it all combines — music, sound effects, and voices — to create a goose bump-inducing moment that I won’t spoil. The tempo of the music combined with the on-screen action created a beat that I could physically feel reverberating in my chest as I played. It’s a game-defining moment that showcases the skill and creativity of Hellblade II’s sound team.
I’m an aural person, someone who places great emphasis on sound, and Hellblade II felt like it was a game made for me. In Hellblade II, there is no in-game UI. There are no tutorial pop-ups that pause the action to tell you what buttons do what or how to interact with the environment. A UI adds a layer of artificiality, reminding you that this is make-believe. Without it, the game created a level of reality that I’ve never really experienced before, forcing me to fully inhabit Senua as a character. And with headphones in, I heard so much more of the world and got a greater feeling for Senua’s unique experience in it.
Senua’s Saga: Hellblade II is out now on Xbox and Game Pass.

Uber and Lyft to stay in Minneapolis after state lowers driver pay requirements

Illustration by Alex Castro / The Verge

Uber and Lyft will continue operating in Minneapolis after the state legislature approved a lower minimum pay rate for drivers over the weekend, according to reports from the Minnesota Reformer and StarTribune. The new rates will go into effect on January 1st, 2025, if the bill becomes law, and will guarantee drivers at least $1.28 per mile and 31 cents per minute.
In March, Uber and Lyft threatened to leave Minneapolis after city officials passed an ordinance to increase rates to $1.40 per mile and 51 cents per minute while carrying a rider. The ridehailing companies argued that the ordinance was “deeply flawed,” as city officials determined the rate before the state released a study on how much drivers should be paid to earn Minneapolis’ minimum wage.
The new rates devised by Minnesota lawmakers preempt the ordinance passed by Minneapolis officials. In addition to establishing a minimum rate across the state, the bill will let Uber and Lyft drivers appeal account deactivations while also mandating vehicle insurance and compensation for injuries while on the job.

“Through direct engagement with all stakeholders, we have found enough common ground to balance a new pay increase for drivers with what riders can afford to pay and preserve the service,” Lyft spokesperson CJ Macklin said in an emailed statement to The Verge. Uber spokesperson Josh Gold tells The Verge that the company will continue providing services in the state even though “the coming price increases may hurt riders and drivers alike.”
All that’s left is for Minnesota Governor Tim Walz to sign the bill. During a press conference on Saturday, Walz said the bill will allow people in Minnesota “to continue to use these services if they see fit.” Last year, Walz vetoed legislation that he claimed would’ve made Minnesota one of the most expensive states for ridesharing.
In 2020, Uber and Lyft threatened to stop offering their services in California over a new law that would classify their drivers as employees. This isn’t the end of Uber and Lyft’s costly driver classification problem, either. The companies are facing a lawsuit in Massachusetts that accuses the ridesharing services of misclassifying their drivers as independent contractors, while New Jersey is suing Lyft over similar arguments.
