verge-rss

Raspberry Pi is going public to expand its range of tiny computers

Raspberry Pi prepares for a $630 million IPO. | Photo by Emma Roth / The Verge

British minicomputer manufacturer Raspberry Pi has announced plans to file for a UK stock market listing, saying the initial public offering would allow the company to expand its current talent pool and product offerings.

Raspberry Pi is best known for making highly affordable microcontrollers and credit card-size single-board computers — the cheapest of which costs just $15. They can be used to make all sorts of things, like smart home control hubs, DIY cameras, home media servers, and more.
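
For a sense of how approachable these boards are to program, here’s a minimal sketch using the gpiozero Python library (which ships with Raspberry Pi OS) that blinks an LED wired to GPIO pin 17; the pin number and wiring are just an example, not anything specific to a particular project.

```python
# Minimal Raspberry Pi example: blink an LED wired to GPIO pin 17.
# gpiozero ships with Raspberry Pi OS; the pin choice here is arbitrary.
from gpiozero import LED
from signal import pause

led = LED(17)                     # BCM pin numbering
led.blink(on_time=1, off_time=1)  # toggle once per second in the background
pause()                           # keep the script alive until interrupted
```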

A fundraising round in 2023 led by UK chip designer Arm Holdings previously valued Raspberry Pi at around £444 million (about $561 million), though that figure may have since risen to roughly £500 million (about $630 million), according to The Times.

Raspberry Pi is hoping to use that cash to pay more engineers, bring aspects of its semiconductor design process in-house, and introduce new, more expensive product variants that “better serve Raspberry Pi’s customers’ needs.” The Raspberry Pi Foundation, the computer science education charity that serves as Raspberry Pi’s parent company, will remain the company’s majority shareholder.

“When we released our first product in 2012, our goal was to provide a computer that was affordable enough for young people to own and explore with confidence,” Raspberry Pi CEO and founder Eben Upton said in the company’s filing, adding that the company now has the “technology roadmap to play an increasingly significant role” in the computing industry.

Read More 

The first Assassin’s Creed Shadows trailer shows off dual samurai / ninja action

Image: Ubisoft

The next entry in the Assassin’s Creed franchise is almost here: Ubisoft debuted the first trailer for Assassin’s Creed Shadows, which launches later this year.

The trailer is decently meaty for a world premiere, taking its time to introduce the game’s two protagonists. One is the samurai Yasuke, an African man who became a retainer in service to Japanese warlord Oda Nobunaga. The second protagonist is Naoe, a shinobi character whose opposition to Nobunaga’s war initially pits her against Yasuke. It was rumored that Shadows would feature two main characters, similar to Assassin’s Creed Syndicate, and before the official trailer reveal, key art from the game leaked showing off the samurai / shinobi protagonists.

Since 2022, Assassin’s Creed Shadows had been known by its codename, Assassin’s Creed Red. Earlier this week, Ubisoft announced the formal name change along with the forthcoming trailer. Red / Shadows was initially revealed as part of Ubisoft’s Assassin’s Creed roadmap, which also featured other in-development titles in the franchise, including Assassin’s Creed Jade and Codename Hexe.

While Codename Hexe remains mostly a mystery, Assassin’s Creed Jade was formally announced at Gamescom last year. It’s a mobile game co-developed by Tencent Games subsidiary Level Infinite and completed its first closed beta last August. However, it may be a while before Jade releases: it’s been reportedly delayed due to publisher Tencent shifting development focus from licensed Western IPs to its own games.

Ubisoft didn’t show off any of Yasuke’s or Naoe’s gameplay, but we’ll be able to see for ourselves soon when Assassin’s Creed Shadows launches on Xbox, PlayStation, PC, and Mac on November 15th.

Read More 

Xbox Cloud Gaming now has mouse and keyboard support in 26 games

Image: The Verge

Microsoft started to preview mouse and keyboard support for Xbox Cloud Gaming in March, and it’s now rolling out to everyone as a beta today. You’ll be able to start streaming and playing Xbox games with a mouse or keyboard in Edge or Chrome over at xbox.com/play.

In total, 26 games will be supported at the launch of this beta, including Fortnite, Sea of Thieves, and Halo Infinite. Microsoft is also planning to let PC owners use the Xbox app later this month to select a game that has a mouse and keyboard badge in the cloud gaming section.

I tried the mouse and keyboard support earlier this year on Xbox Cloud Gaming, and it’s exactly what you’d expect. You can easily switch between a mouse and keyboard and a controller in the middle of a game, just like you would on a PC or an Xbox that supports mouse and keyboard input. It’s a great addition for gaming on the go, especially as you no longer have to bring an Xbox controller with you to access some games.

Here’s the full list of games that work in mouse and keyboard mode on Xbox Cloud Gaming:

ARK: Survival Evolved
Atomic Heart
Cities: Skylines – Mayor’s Edition
Cities: Skylines – Remastered
Deep Rock Galactic
Doom 64
Fortnite (browser only)
Gears Tactics
Grounded
Halo Infinite
High on Life
House Flipper
Inkulinati (game preview)
Mount & Blade II: Bannerlord
Norco
Pentiment
Quake
Quake 2
Sea of Thieves
Slime Rancher 2
Sniper Elite 5
State of Decay 2
Terraria
The Sims 4
Valheim (game preview)
Zombie Army 4: Dead War

Read More 

iPhone owners say the latest iOS update is resurfacing deleted nudes

Photo: Wes Davis / The Verge

Apple appears to have a bug that’s dredging up data that iPhone owners thought was gone. Some iPhone owners are reporting that, after updating their phones to iOS 17.5, their deleted photos — some quite old — are popping up again, according to a Reddit thread that MacRumors spotted. iOS beta testers had the same complaints about the bug last week.

People reporting the apparent bug say that they’re seeing old photos appear in their Recents album after Monday’s update. iOS does give users the option to restore deleted photos, but after 30 days, they’re supposed to be permanently removed. The person who started the thread claimed that NSFW photos they had deleted “years ago” were back on their phone. Another Reddit user said that they saw photos from 2016 show up as new images but that they didn’t think they’d ever deleted them.

This could be more innocent than it sounds. When data is “deleted,” the operating system typically just removes its references to it; the underlying bits stick around until they’re overwritten with new data. One user also said they saw a photo return even though they don’t sync their phone or use iCloud, suggesting the photos could be coming from on-device storage. Apple didn’t immediately reply to The Verge’s request for comment.
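
To make the general idea concrete, here’s a deliberately simplified toy model (not how iOS or APFS actually manages photo storage): deleting removes the entry that points at the data, while the bytes themselves survive until something overwrites them.

```python
# Toy model of why "deleted" data can resurface: deletion drops the
# reference to the bytes, not the bytes themselves. Conceptual only;
# this is not how iOS or APFS actually stores photos.

disk = bytearray(64)   # pretend block storage
index = {}             # filename -> (offset, length)

def write(name, data, offset):
    disk[offset:offset + len(data)] = data
    index[name] = (offset, len(data))

def delete(name):
    del index[name]    # only the reference disappears

write("IMG_0001.jpg", b"OLD-PHOTO-DATA", 0)
delete("IMG_0001.jpg")
print(b"OLD-PHOTO-DATA" in disk)   # True: the bytes are still on "disk"

# Only overwriting the same region actually destroys the old data.
write("IMG_0002.jpg", b"NEW-PHOTO-DATA", 0)
print(b"OLD-PHOTO-DATA" in disk)   # False
```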

After upgrading to iOS 17.5 on my Xr, voicemails that I had already listened to or deleted reappeared. Before the update, I had only one unheard voicemail, but now I have 26. pic.twitter.com/eqx1buJIx0

— Stacey (@ssmithdev) May 15, 2024

There’s a chance it’s not specific to photos, either, as one person posted on X that they saw old voicemails come back after the update. Several beta testers said the same thing about earlier iOS 17 betas. Whether the issue implies Apple is secretly holding onto old deleted data or it’s just a quirk of how iOS 17.5 handles that data, it’s not an ideal situation. Nobody wants to see their deleted nudes come back.

Read More 

Senate unveils $32 billion roadmap for regulating AI

Photo by Andrew Harnik / Getty Images

Four top senators unveiled a proposed roadmap for artificial intelligence regulation on Wednesday, calling for at least $32 billion to be spent each year on non-defense AI innovation.

The members of the AI Working Group — Senate Majority Leader Chuck Schumer (D-NY), Mike Rounds (R-SD), Martin Heinrich (D-NM), and Todd Young (R-IN) — released the long-awaited proposal after months of hosting AI Insight Forums to inform their colleagues about the technology. The events brought in AI experts, including executives like OpenAI CEO Sam Altman and Google CEO Sundar Pichai, as well as academics, labor, and civil rights leaders.

Here’s what the roadmap is not: specific legislation that could pass expeditiously. In the 20-page report, the working group lays out key areas where relevant Senate committees should focus their efforts with regard to AI.

Those include: AI workforce training; addressing AI-generated content in specific areas, including child sexual abuse material (CSAM) and election content; safeguarding private information and copyrighted content from AI systems; and mitigating energy costs of AI. The working group says the report is not an exhaustive list of options.

Schumer said the roadmap is meant to guide Senate committees as they take the lead in crafting regulation, and it was not intended to create a big sweeping law encompassing all of AI.

Some lawmakers didn’t wait for the roadmap to introduce their own AI-related proposals.

The Senate Rules Committee, for example, advanced a series of election-related AI bills on Wednesday. But with so many different areas touched by AI and many different views on the appropriate level and kinds of regulation, it’s not yet clear how quickly such proposals will advance into law — especially in an election year.

The working group is encouraging other lawmakers to work with the Senate Appropriations Committee to bring AI funding to the levels proposed by the National Security Commission on Artificial Intelligence (NSCAI). They say the money should be used to fund AI and semiconductor research and development across the government and the National Institute of Standards and Technology (NIST) testing infrastructure.

The roadmap does not specifically call for all future AI systems to undergo safety evaluations before they’re sold to the public; instead, it asks lawmakers to develop a framework for determining when an evaluation is required.

This is a departure from some proposed bills that would immediately require safety evaluations for all current and future AI models. The senators also did not immediately call for an overhaul of existing copyright rules, a battle AI companies and copyright holders are currently fighting in court. Instead, the roadmap asks policymakers to consider whether new legislation around transparency, content provenance, likeness protection, and copyright is needed.

Adobe general counsel and chief trust officer Dana Rao, who attended the AI Insight Forums, said in a statement that the policy roadmap is an encouraging start as it will be “important for governments to provide protections across the wider creative ecosystem, including for visual artists and their concerns about style.”

However, other groups are more critical of Schumer’s roadmap, with many expressing concerns about the proposed costs of regulating the technology.

Amba Kak, co-executive director of AI Now, a policy research group supported by groups like Open Society Foundations, Omidyar Network, and Mozilla, released a statement following the report saying its “long list of proposals are no substitute for enforceable law.” Kak also took issue with the big taxpayer price tag on the proposal, saying it “risks further consolidating power back in AI infrastructure providers and replicating industry incentives — we’ll be looking for assurances to prevent this from taking place.”

Rashad Robinson, president of civil rights group Color of Change, said in a statement that the report “shows very clearly that Schumer is not taking AI seriously, which is disappointing given his previous capacity for honesty, problem-solving and leadership on the issue.” He added the report “is setting a dangerous precedent for the future of technological advancement. It’s imperative that the legislature not only establishes stronger guardrails for AI in order to ensure it isn’t used to manipulate, harm, and disenfranchise Black communities, but that they recognize and quickly respond to the risky, unchecked proliferation of bias AI poses.”

Divyansh Kaushik, vice president at national security advisory firm Beacon Global Strategies, said in a statement that “critical for the success of any legislative efforts” will be ensuring that the big price tag can actually be doled out to the agencies and initiatives that need to use those funds. “[T]his can’t be another CHIPS [and Science Act] where we authorize a lot of money without appropriations,” Kaushik said.

Read More 

Eve’s Android app is finally almost here, thanks to Google’s new Home APIs

Eve, makers of smart home devices such as the Eve Energy smart plug, will soon launch an Android app. | Photo by Amelia Holowaty Krales / The Verge

After announcing way back in 2022 that it would bring an app to Android, Eve is finally close to launching one, possibly by this fall. Android users will be able to control Eve’s smart home products natively — including smart plugs, smart lights, and smart shades — and access features such as energy management that are not yet available in the Matter platforms the devices work with (including Google Home and Samsung SmartThings). Prior to Matter, Eve devices only worked with Apple HomeKit and could only be controlled from iOS devices.

“The highly anticipated app will allow Matter devices to be added, controlled and automated directly and without any proprietary connection mechanism or fragile cloud-to-cloud integrations,” the company said in a press release. “For the growing range of Matter-enabled Eve devices, Eve for Android will provide advanced functionality, such as measurement of energy consumption and generation for Eve Energy solutions, autonomous heating schedules for the smart thermostat Eve Thermo or Adaptive Shading for roller blinds in the Eve Blinds Collection.”

The move follows Google’s announcement this week that it’s opening up Google Home’s Home APIs. Eve has been working as an early access partner with Google and is leveraging the APIs to continue developing the app, which has been much delayed due in large part to “external dependencies,” according to a post on Reddit from a person claiming to be an Eve engineer.
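
Google hasn’t fully documented the developer-facing SDK yet, so the snippet below is purely a hypothetical sketch of the general call pattern a local smart home platform API tends to expose (enumerate devices, read a capability such as energy use). The names HomeClient, Device, get_devices, and EnergyTrait are invented for illustration and are not the real Google Home APIs, which Google ships as native Kotlin and Swift SDKs.

```python
# Hypothetical sketch only: every name below is invented to show the
# general shape of a local smart home platform API. These are NOT the
# real Google Home APIs.

class EnergyTrait:
    """Stand-in for a device capability ("trait") exposing power draw."""
    def __init__(self, watts):
        self.watts = watts

class Device:
    def __init__(self, name, traits):
        self.name = name
        self.traits = traits

    def trait(self, trait_type):
        return next((t for t in self.traits if isinstance(t, trait_type)), None)

class HomeClient:
    """Pretend local client; a real SDK would talk to the platform or hub."""
    def get_devices(self):
        return [Device("Eve Energy", [EnergyTrait(watts=7.4)])]

# A vendor app could layer extra features (here, energy monitoring) on
# top of the basic control the platform already provides.
for device in HomeClient().get_devices():
    energy = device.trait(EnergyTrait)
    if energy is not None:
        print(f"{device.name}: {energy.watts} W")
```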

“The Google Home APIs turbo-charge the development of Eve for Android — they complement the set of tools that our teams need,” says Jerome Gackel, CEO of Eve. “What has always set Eve apart is the seamless integration into the platform, without a cloud or proprietary technologies. Thanks to the Google Home APIs, we can deliver the same on Android, and we are proud to be moving forward hand in hand with Google.”

While Eve hasn’t provided a launch date, the company says the app will arrive once the Google Home APIs are publicly released. According to Google, the first apps built on the Home APIs will come to the Play Store and App Store starting this fall.

Read More 

ADT’s new security system has facial recognition powered by Google Nest

ADT’s new smart security system includes the option to use facial recognition to allow a “trusted neighbor” temporary access to your home when you’re away. | Screenshot: Jennifer Pattison Tuohy / The Verge

ADT has confirmed to The Verge that it’s rolling out a big upgrade to its ADT Plus home security system. The all-new hardware and software platform for ADT Plus features new ADT hardware, deeper integration with Google Nest hardware, and the ability to automatically disarm using facial recognition to let trusted neighbors into your home when you’re away.

I first reported on the new system last October, but until now, ADT had declined to comment despite publishing multiple support pages about it on its site. This week, ADT spokesperson Ben Tamblyn confirmed to The Verge the new ADT Plus system has started rolling out to some states and will be available nationwide in the coming months. The system can be self-installed or professionally installed and can work with ADT’s professional monitoring service.

Also coming soon is an update to the ADT Plus app, which enables a new feature called Trusted Neighbor. It leverages the capabilities of devices like Google Nest cameras and doorbells — including facial recognition and package detection — along with smart door locks to grant automated, secure, and temporary access to “trusted” people when you’re not home but need some help.

“For security emergencies, ADT monitors and calls the professionals, but for day-to-day, non-security needs, ADT enables your trusted neighbors to be there.”

Tamblyn said Trusted Neighbor uses both time- and event-based access, tapping into the new Google Home APIs Google announced at I/O this week to allow the security system to react to nonemergency events in your home. For example, if the Google Nest doorbell detects a package, Trusted Neighbor can execute an automation that disarms the ADT security system when the Google Nest doorbell recognizes your neighbor approaching, then rearms it once they’ve left.

Tamblyn said Trusted Neighbor can also be set to respond to sensors, so if a water leak is detected, it can automatically let a neighbor in. The ADT app will notify you about these events, and if you’re not comfortable with automatic access, you can choose to initiate the automation manually, according to Tamblyn.
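
In other words, the feature boils down to event-driven automation: a trigger (a package is detected or a leak sensor fires), a condition (a recognized trusted face), and actions (disarm, then rearm). Here is a hypothetical sketch of that flow; the event names and classes are invented for illustration and are not ADT’s or Google’s actual APIs.

```python
# Hypothetical illustration of the trigger/condition/action flow described
# above. None of these names correspond to real ADT or Google Home APIs.

TRUSTED_FACES = {"neighbor_anna"}

class SecuritySystem:
    def __init__(self):
        self.armed = True
    def disarm(self):
        self.armed = False
        print("system disarmed")
    def rearm(self):
        self.armed = True
        print("system rearmed")

class TrustedNeighborAutomation:
    def __init__(self, system):
        self.system = system
        self.package_waiting = False

    def handle(self, event):
        if event["type"] == "package_detected":            # trigger
            self.package_waiting = True
        elif (event["type"] == "face_recognized"           # condition
              and event["face"] in TRUSTED_FACES
              and self.package_waiting):
            self.system.disarm()                           # action
        elif event["type"] == "person_left" and not self.system.armed:
            self.system.rearm()                            # follow-up action
            self.package_waiting = False

automation = TrustedNeighborAutomation(SecuritySystem())
for event in [{"type": "package_detected"},
              {"type": "face_recognized", "face": "neighbor_anna"},
              {"type": "person_left"}]:
    automation.handle(event)
```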

There is also time-based automation for regular events, such as letting the dog walker in if they are recognized by the camera between 10 and 10:30AM on weekdays. The automations are all managed by the user and can be disabled at any time.

“Trusted Neighbor is a service that builds upon that universal feeling of giving your trusted neighbor a key to your house to help when you’re out of town,” said Tamblyn. “It will allow users to easily grant and automate secure and temporary access to their homes for neighbors, friends, and helpers. For security emergencies, ADT monitors and calls the professionals, but for day-to-day, non-security needs, ADT enables your trusted neighbors to be there.”

Images: Google / ADT
The ADT Plus system has new hardware that shares some design features with Google’s now-discontinued Nest Secure. The Nest keypad is on the left, and the new ADT Plus keypad is on the right.

The Trusted Neighbor feature requires the new ADT Plus system, which takes some design cues from Nest Secure, the home security system Google shut down after investing $450 million in ADT. (Nest Secure users were offered a free ADT system, but that offer expired last month and doesn’t apply to this new system. However, Tamblyn said some existing ADT hardware will be upgradable to work with Trusted Neighbor.)

The new hardware includes upgraded door and window sensors with the same bypass button found on Nest Detect sensors and a backlit circular base station that looks a lot like the Nest Secure version. ADT hasn’t released any new information about ADT Plus’ hardware, but it did confirm that my reporting from last year, which details the new hardware and its functions, was accurate.

Tamblyn said Trusted Neighbor should be available this summer “to ADT subscribers whose service tier includes Nest Aware and who have the ADT + platform and hardware, and the other required hardware such as Nest cameras.”

It will also work with hardware from other manufacturers, including smart locks and sensors. Tamblyn said smart locks will allow inputting a code to both unlock the door and disarm the system as an alternative to facial recognition. But ADT hasn’t announced which locks will work with the system.

Google Nest’s familiar faces feature regularly confuses me with my UPS driver

Of course, if you have a smart home, you can already do much of what Trusted Neighbor offers yourself. If you’re on a beach in Bali and get an alert that there’s a water leak in your laundry room or a package on your porch, you could text your neighbor, unlock your smart lock in the app, and disable the security system to let them in (or give them the codes so they can do it themselves). The idea behind Trusted Neighbor is to make this process easier and more automated.

While this could certainly be useful, there are some obvious downsides. First, you need to get your neighbor to download and use the ADT Plus app for them to be an authorized user. Second, if you want to use the facial recognition aspect, you’ll need to store their face in your familiar faces database, which you will want to ask their permission for.

Third, and most worryingly, that might not always work. I personally use Google Nest’s familiar faces feature, and it regularly confuses me with my UPS driver. I would be hesitant to trust the security of my home to those smarts. Tamblyn said there are safeguards in place to prevent accidental triggering but wasn’t able to share details.

Still, this is an intriguing new feature for a security system. With Google opening up its Home APIs, I’m excited to see what other innovations smart home companies come up with to leverage the platform’s devices and intelligence. I could certainly see a future where your entire home can respond to your face. Walk up to the front door, and the doorbell recognizes you, turns off the alarm, unlocks the door, sets the temperature to your preference, and starts playing your favorite playlist. Is it a little creepy? Yes. Is it smart? Very.

Read More 

Apple’s new accessibility features let you control an iPhone or iPad with your eyes

Image: Apple

Apple just announced a slew of new accessibility features coming to its software platforms in the months ahead, including eye tracking, which the company says uses artificial intelligence to let people with physical disabilities more easily navigate through iOS and iPadOS.

A new “music haptics” option will use the iPhone’s Taptic Engine vibration system to “play taps, textures, and refined vibrations to the audio of the music” for supported Apple Music tracks. Apple is also adding features to reduce motion sickness for those susceptible to it when using an iPhone in a moving vehicle.

All of these new accessibility options are likely to debut in iOS and iPadOS 18, though Apple is only saying “later this year” ahead of its WWDC event next month. The eye tracking feature “uses the front-facing camera to set up and calibrate in seconds, and with on-device machine learning, all data used to set up and control this feature is kept securely on device, and isn’t shared with Apple.” The company says it’s been designed to work across iOS and iPadOS apps without requiring any extra hardware or accessories.

Music haptics will let those who are deaf or hard of hearing “experience music on iPhone” by producing a range of vibrations, taps, and other effects in rhythm with millions of tracks on Apple Music. Apple says developers will also be able to add the feature to their own apps through a new API.

GIF: Apple
These animated dots could help some people avoid sensory conflict and thus reduce motion sickness.

Other upcoming accessibility features include vocal shortcuts, which will let anyone “assign custom utterances that Siri can understand to launch shortcuts and complete complex tasks.” A new “Listen for Atypical Speech” feature uses machine learning to recognize someone’s unique speech patterns; this one is “designed for users with acquired or progressive conditions that affect speech, such as cerebral palsy, amyotrophic lateral sclerosis (ALS), or stroke.”

If you’re someone who often encounters motion sickness when using your tech in a moving vehicle, Apple’s got a new method for helping to reduce those unpleasant feelings:

With vehicle motion cues, animated dots on the edges of the screen represent changes in vehicle motion to help reduce sensory conflict without interfering with the main content. Using sensors built into iPhone and iPad, vehicle motion cues recognizes when a user is in a moving vehicle and responds accordingly. The feature can be set to show automatically on iPhone, or can be turned on and off in control center.
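
Conceptually, that amounts to mapping readings from the device’s motion sensors onto the position of the on-screen dots so that what you see agrees with the motion you feel. A toy sketch of that mapping, purely a conceptual illustration and not Apple’s implementation, might look like this:

```python
# Toy sketch of the vehicle motion cues idea: nudge on-screen dots in
# response to measured acceleration so visual motion matches felt motion.
# Conceptual illustration only, not Apple's implementation.

def dot_offset(accel_x, accel_y, gain=4.0, max_px=20.0):
    """Map lateral/longitudinal acceleration (m/s^2) to pixel offsets."""
    def clamp(value):
        return max(-max_px, min(max_px, value))
    return clamp(accel_x * gain), clamp(accel_y * gain)

# Simulated accelerometer samples: hard braking, a right turn, then cruising.
for ax, ay in [(0.0, -2.5), (1.8, 0.0), (0.0, 0.0)]:
    dx, dy = dot_offset(ax, ay)
    print(f"accel=({ax:+.1f}, {ay:+.1f}) m/s^2 -> shift dots ({dx:+.1f}, {dy:+.1f}) px")
```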

The company’s full press release contains a longer list of other accessibility capabilities that are coming to Apple’s platforms in a few months. AI and machine learning appear throughout the text, offering yet more confirmation that iOS 18, iPadOS 18, and the company’s other software platforms will go heavy on AI-powered features. Apple is reportedly in discussions with both OpenAI and Google about collaborating on some generative AI functionality.

But even outside all that, these are great steps for making Apple’s products more accessible to as many people as possible. The company announced them one day before Global Accessibility Awareness Day, which is on May 16th.

Read More 

Netflix snagged global streaming rights for NFL Christmas Day games

Illustration by Nick Barclay / The Verge

Just 11 months ago, Netflix co-CEO Ted Sarandos said the streamer was “super excited about the success of our sports-adjacent programming,” but it’s going from adjacent to all in now, announcing a new three-year deal with the NFL to stream Christmas Day games. The NFL will announce its 2024 schedule tonight at 8PM ET, which is when we’ll learn which two games will be on Netflix this year.

As we’ve seen with previous NFL games shifted to streaming on Peacock and Amazon, the arrangement will include airing them on broadcast TV in the competing teams’ home markets, but elsewhere in the US, you’ll either need to have Netflix or watch on a mobile device with NFL Plus.

After 2024, the announcement says the deal includes “at least one holiday game each year” in 2025 and 2026. Netflix reportedly had competition from Amazon in bidding for the Christmas games, and The Wall Street Journal cites sources saying Netflix is paying about $75 million per game this year. That’s in addition to stuff like the Jake Paul vs. Mike Tyson fight and a $5 billion, 10-year arrangement with WWE that will start in January 2025, so don’t be surprised if the price for streaming increases and the push for bundles continues.

Read More 

Uber is adding scheduled carpool rides and shuttle service for airports and concerts

Illustration by Alex Castro / The Verge

Uber announced several new mobility products as part of its annual Go-Get conference, including shuttle service for airports and concert events and scheduled carpool trips.

Uber Shuttle, which is coming soon to select cities, allows customers to use the Uber app to book a seat in a shuttle bus when traveling to an airport or concert destination. The company says the fare will be “a fraction” of the price of a normal Uber ride.

The ridehailing and delivery company has been selected as the official partner of Live Nation, the parent company of Ticketmaster. The partnership will give Uber access to “amphitheaters across the country” as it aims to position Uber Shuttle as a crucial link between venues and mass transit hubs and parking lots. Uber is also teaming up with Miami’s Hard Rock Stadium for Dolphins games as well as Formula One and Grand Prix races.

And more importantly, Uber plans to offer shuttle service for a number of airports, with upcoming cities to be announced at a future date. Airports are a core aspect of Uber’s mobility business, and shuttles could potentially represent a new revenue source for the company.

Uber Shuttle won’t work like the company’s normal ridehailing service. Uber said it will contract with local shuttle bus providers for vehicles and drivers. Shuttle buses with between 14 and 55 seats will be eligible for the new service.

Uber Shuttle rides can be booked up to seven days in advance, and customers can reserve up to five seats in a bus. The price will also not be impacted by surge pricing, which typically spikes during major events like sports games and concerts.

This isn’t the first time Uber has flirted with high-capacity vehicles. The company launched a party bus business nearly a decade ago for transporting wedding parties, corporate event-goers, and more.

Uber is also announcing a new scheduling feature for its UberX Share carpool service. Customers can now schedule a shared ride anywhere from 10 minutes to 30 days in advance. The rides are discounted by up to 25 percent compared to a standard UberX trip, the company says.

This isn’t the first time Uber has flirted with high-capacity vehicles

Uber expects UberX Share reservations to be especially useful during morning and evening commutes. The company’s carpool service recently crossed $1 billion in annualized bookings, and in its most recent earnings report, Uber said that shared rides grew by a factor of six year over year — but the company doesn’t break out shared rides revenue in its quarterly reports.

Uber discontinued its first carpool service, UberPool, during the covid-19 pandemic but later relaunched it under a new name. It was one of the cheapest options on the company’s platform, but many drivers hated it, complaining about low customer ratings, inefficient algorithms, and roundabout directions that annoyed riders despite the low fare.

Uber announced several other new products during today’s event, including Uber One memberships for college students, shareable restaurant lists in Uber Eats, new mobility accounts for caregivers of older individuals or ill relatives, and new Uber Eats perks for Costco members.

Read More 
