PPSSPP brings PSP emulation to the iPhone

Screenshot: PPSSPP

It’s the PlayStation Portable’s turn to get an emulator on the iOS App Store thanks to PPSSPP, which just went live today. This emulator, from developer Henrik Rydgård, has been in development for more than a decade, and it’s free to download for the iPhone and iPad.

Rydgård says in a blog post that the version approved this morning has some limitations versus previous builds of the app that were available through various exploits and workarounds. The biggest is that Apple doesn’t allow just-in-time (JIT) recompilers, which translate emulated code into native code on the fly and can lead to smoother performance. (It’s why we might never see a GameCube / Wii emulator.) “Fortunately,” he writes, “iOS devices are generally fast enough” for almost all PSP games.

Besides that, Rydgård said iPad Magic Keyboard support had to be removed because “the old method was using an undocumented API” but that the feature will return. The emulator’s RetroAchievements feature is disabled, too. Rydgård didn’t explain why but says it’ll be back later “with a better login UI.” The same goes for Vulkan graphics API support, which he says will also come back, possibly with “a native Metal backend.”

I never owned Sony’s first handheld, the PSP, but I polled Verge staffers and was told Valkyria Chronicles II, Patapon, Final Fantasy Tactics: The War of the Lions, and Crisis Core: Final Fantasy VII are all excellent reasons to check out PPSSPP. Lumines deserves a mention, too. Most of those games were released on other consoles later, of course, but they do represent the kinds of experiences that were available on the system.

PPSSPP works on iOS 12 and up and iPadOS 12 and up, and it can also be played on the Vision Pro. Though the app is free, Rydgård’s paid version is coming soon for $4.99, he told The Verge via email. Assuming it’s like the $5 Android version, it may not be functionally different from the free app — it’s there to support Rydgård’s efforts.

RetroArch brings its free multisystem emulation to the iOS App Store

Image: Cath Virginia / The Verge

Another emulator with tons of history and development, RetroArch, is now freely available on the iOS App Store. This is great news for retro gaming fans with iPhones because it can emulate a truly eye-crossing number of retro consoles once installed.

In fact, there are too many to list here. But some notable emulation cores — the actual separately developed emulation software — included in this version of RetroArch are the NEC PC Engine, Nintendo DS, Game Boy Advance, Virtual Boy, Neo Geo Pocket, and even the PSP (using the same core that drives the PPSSPP app that went up today). You can see the list in full by clicking “more” on RetroArch’s description in its App Store listing.

Strictly speaking, RetroArch isn’t an emulator but, rather, a front end that you can drop emulator cores into. It has support for various features, including online gameplay, remapping buttons or keys per emulation core or per game, fast-forward and rewind, and gyro control.

You’ll need to be on iOS or iPadOS 14.2 or later, or own a Vision Pro, to run the new version of RetroArch. It’s not available on the Mac version of the App Store, but that’s not a problem since there’s a macOS version (and versions for other operating systems as well) that you can easily download from the RetroArch website.

Senate committee passes three bills to safeguard elections from AI, months before Election Day

Photo by Kevin Dietsch / Getty Images

The Senate Rules Committee passed three bills that aim to safeguard elections from deception by artificial intelligence, with just months to go before Election Day. The bills would still need to advance in the House and pass the full Senate to become law, creating a time crunch for rules around election-related deepfakes to take effect before polls open across the country in November.

The vote happened on the same day that Senate Majority Leader Chuck Schumer (D-NY) and three bipartisan colleagues released a roadmap for how Congress should consider regulating AI. The document lays out priorities and principles for lawmakers to consider but leaves the crafting of specific bills to the committees.

The three election bills passed by the Senate Rules Committee on Wednesday mark an early step at the federal level to take action on AI in elections. Chair Amy Klobuchar (D-MN), who sponsors the bills, noted that states have already moved forward on this issue for state-level elections. For example, 14 states have enacted a form of labeling of AI content, according to Klobuchar.

The measure with the most support in the committee, the Preparing Election Administrators for AI Act, which passed 11–0, would direct the Election Assistance Commission (EAC) to work with the National Institute of Standards and Technology (NIST) to create a report for election offices about relevant risks of AI to disinformation, cybersecurity, and election administration. It also included an amendment requiring a report on how AI ends up impacting the 2024 elections.

The two other bills, the Protect Elections from Deceptive AI Act and the AI Transparency in Elections Act, passed 9–2 out of the committee. The first would prohibit AI deepfakes of federal candidates in certain circumstances when used to fundraise or influence an election and is co-sponsored by Sens. Josh Hawley (R-MO), Chris Coons (D-DE), and Susan Collins (R-ME). The second, co-sponsored by Sen. Lisa Murkowski (R-AK), would require a disclaimer on political ads that have been substantially created or altered by AI (it would not apply to things like color editing or resizing, for example). While the Protect Elections from Deceptive AI Act could not regulate satire, Klobuchar noted that the AI Transparency in Elections Act would at least let voters know when satire ads are AI-generated.
“I’m, in many ways, afraid in 2024 we may be less protected than we were in 2020”

Ranking Member Deb Fischer (R-NE), who opposed the latter two bills, said they were “over-inclusive, and they sweep in previously unregulated speech that goes beyond deepfakes.” Fischer said the Protect Elections from Deceptive AI Act would restrict unpaid political speech, adding that “there is no precedent for this restriction in the 50-year history of our federal campaign finance laws.” Fischer also said that state legislatures are a more appropriate venue for these kinds of election regulations than the federal government.

But key Democrats on the committee urged action. Senate Intelligence Committee Chair Mark Warner (D-VA) said he’s, “in many ways, afraid in 2024 we may be less protected than we were in 2020.” He said that’s because “our adversaries realize that interference in our elections is cheap and relatively easy,” and Americans “are more willing to believe certain outrageous theories these days.” Compounding that is the fact that “AI changes the whole nature and game of how a bad actor … can interfere using these tools.”

“If deepfakes are everywhere and no one believes the results of the elections, woe is our democracy”

“If deepfakes are everywhere and no one believes the results of the elections, woe is our democracy,” Schumer said during the markup. “I hope my colleagues will think about the consequences of doing nothing.”

At a press conference on the AI roadmap after the markup, Schumer noted the committee passage and said they’d “like to get that done in time for the election.”

For self-driving cars, the free ride is over

Illustration by Cath Virginia / The Verge | Photo from Getty Images

For years, autonomous vehicles have operated in relative obscurity. With few vehicles on the road and a laissez-faire attitude among government regulators, automakers and big tech firms have been free to test — and even commercially deploy — with little oversight.

Well, those days are done. In rapid succession, the National Highway Traffic Safety Administration (NHTSA) has opened investigations into almost all the major companies testing autonomous vehicles as well as those that offer advanced driver-assist systems (ADAS) in their production cars. Tesla, Ford, Waymo, Cruise, and Zoox are all being probed for alleged safety lapses, with the agency examining hundreds of crashes, some of which have been fatal.

In rapid succession, NHTSA opened investigations into almost all the major companies testing autonomous vehicles

The new investigations signal a new — and perhaps more antagonistic — phase in the relationship between safety regulators and the private sector. The government is requiring more data from companies, especially around crashes, in order to determine whether the industry’s safety claims live up to their hype. And the companies are finding that the proliferation of smartphones with cameras is working against them, as more videos of their vehicles behaving unpredictably go viral.

In 2021, NHTSA issued a standing general order requiring car companies to report crashes involving autonomous vehicles (AVs) as well as Level 2 driver-assist systems found in hundreds of thousands of vehicles on the road today. Companies are now required to document collisions that occur when ADAS and automated technologies were in use within 30 seconds of impact.

NHTSA is seeing these crash reports in real time, enabling the agency’s Office of Defects Investigation to make connections between various incidents and determine whether more scrutiny is warranted. Meanwhile, videos of driverless vehicles operating erratically are also providing investigators with data they wouldn’t have had access to under the standing general order because they don’t involve collisions.

[Embedded TikTok video from @kilowattsapp: “Hello officer, sry i got a little confused” #waymo #autonomous]

NHTSA specifically cited “other incidents… identified based on publicly available reports” in its investigation into Waymo’s driverless car system. The agency is looking into 22 incidents, some of which involve Waymo vehicles crashing into gates, chains, and parked cars. But NHTSA also cited viral videos of the company’s robotaxis operating on the wrong side of the road.

NHTSA is seeing these crash reports in real time

“I think NHTSA is responding to the Standing General Order data, as well as a never-ending stream of videos from the public, like the Waymo AVs choosing to go down the wrong way on a street,” said Mary “Missy” Cummings, a robotics expert and former senior safety advisor at NHTSA. “Indeed, the public is now filling a vital role in providing NHTSA information about near-misses, whereas the [standing general order] provides details about crashes.”

When NHTSA first announced the requirement that AV operators and automakers report crashes involving their vehicles, experts predicted the agency would be hamstrung because of the lack of standardization and context.

That’s because the data was missing a lot of key details, like the number of vehicle miles driven or the prominence of advanced driver-assist technology in each manufacturer’s fleet. NHTSA receives the data through varying sources, including customer complaints and different telematics systems. And companies are allowed to withhold certain details they consider to be “confidential business information.”

But with more robotaxis on the road, as well as the proliferation of Level 2 driver-assist features, investigators have shown a knack for overcoming the limitations of the crash data by requesting further information from the companies. And that’s going to inevitably lead to more tension.
“Indeed, the public is now filling a vital role in providing NHTSA information about near-misses”
“I definitely think the wider rollouts — and the corresponding videos of bad behavior — have made a difference,” said Sam Anthony, former chief technology officer at Perceptive Automata and author of a newsletter about automated driving.

“For years the companies making these vehicles have been able to trade on people’s assuming that they drive pretty much like regular cars,” he added. “There isn’t a broad understanding of how really different their perception is, and how that can make them fail in really unpredictable ways. As soon as people start to experience them on the road, they start to viscerally understand that.”

In particular, NHTSA’s willingness to reopen investigations into Tesla’s Autopilot shows that the agency has discovered a new fervor for oversight, Anthony said. Tesla issued a voluntary recall of Autopilot in the form of an over-the-air software update last year, in response to NHTSA’s investigation into dozens of crashes in which drivers were found to be misusing the system.

But Tesla’s recall wasn’t up to snuff, with NHTSA citing at least 20 crashes involving Tesla vehicles that had received the updated software. The agency is now reexamining the company and could conclude that Tesla’s driver-assist technology can’t be operated safely. That could be detrimental to Tesla’s stock price, as well as its future, with Elon Musk betting the company’s longevity on robotaxis.

“I’m optimistic that NHTSA’s newfound regulatory fervor is a sign of changes at the agency,” Anthony said. “It has been leaderless and, from my outside perspective at least, really demoralized for a long time. I’m hopeful that what’s happening now is an indicator that the people there are finally being given the tools and motivation to do their jobs.”

Airbus debuts its half-plane, half-helicopter to the public

Airbus taking its “Racer” aircraft for a test flight. | Image: Airbus

Airbus has a new aircraft that’s half helicopter and half airplane, designed to speed up flight times for emergency responders. The company calls it the Racer: it has wings like a plane, forward-facing rotors, plus top helicopter blades that seriously make it look like the baby of two aircraft.

Today, Airbus is showing its one-off working demonstration model of the Racer in France’s southern port city of Marseille for the first time, Reuters reports. It follows new flight images and video posted by Airbus earlier this week that show how it can take off like a helicopter and make a smooth landing without a long runway. The Racer had its first flight in April.

In an email to The Verge, Airbus Helicopters head of external communications Laurence Petiard writes that a ceremony was held today for the company’s partners in the Clean Sky 2 project, involving 40 partners from 13 different European countries. “They were able to see Racer in flight and then on static display,” Petiard said.

The European Union’s Clean Sky 2 program encourages development of lower-emission air transport and is meant to keep Europe’s aeronautical industry globally competitive.

The project’s head, Julien Guitton, said people are interested in high speeds, but not at the price of negative environmental impact. Simulations show the Racer meets Clean Sky 2 requirements of reducing fuel consumption and CO2 emissions by 20 percent compared to conventional aircraft of the same weight, Guitton said.

Airbus has developed similar hybrid test aircraft before, including the X3 concept demonstrator from 2010. Meanwhile, Bell Boeing built the V-22 Osprey, which uses tilt rotors to achieve higher flight speeds, though it is bulky and designed for military missions. There are also vertical takeoff and landing (VTOL) aircraft that have been in development for many years — but still haven’t brought about the flying taxi future we’ve been promised.

Google adds Max, Peacock, and Angry Birds to cars with native Android software

Image: Ford

Google is adding several new apps to its in-car infotainment platforms, while also making it easier for developers to get their apps approved faster and with fewer complications.

Google said that two new streaming services, Max and Peacock, are coming to cars with Google built-in. These include models from companies like Nissan, Ford, Acura, Renault, Honda, Polestar, and Volvo.

In addition, cars with Google built-in are getting the Angry Birds game, so you can enjoy all the brick-smashing and pig-disrupting fun while parked in your vehicle. (Video streaming and gaming apps are only available while the vehicle is parked, in order to reduce distractions.)

Google said that two new streaming services, Max and Peacock, are coming to cars with Google built-in

On the Android Auto side, customers who like mirroring their phone on their vehicle’s display will soon be able to add the Uber app. This will be especially useful for Uber drivers, who will be able to accept trips and deliveries and get turn-by-turn directions, all from the homescreen of their car.

Lastly, Google is launching a new Google Cast feature with Rivian, which famously does not allow phone mirroring but does use Google Maps for its native navigation. Google Cast will allow customers to cast video content from their phone or tablet directly to the car while parked. “If you don’t already offer casting in your app, this is a simple way for your content to reach new audiences in the car,” Google says.

Google has slowly been adding more video streaming apps to its native Android platform, having already announced the inclusion of YouTube and Prime Video last year. Meanwhile, Tesla owners have been chilling out to Netflix, Hulu, and YouTube since 2019. But, you know, better late than never.



Car screens are getting bigger — and weirder — and Google wants to help

Image: Daniel Golson

Google is lowering the barriers for new apps to be added to Android Auto and cars with Google software built-in, making it easier for developers of gaming and streaming apps to get them added to those platforms. It is also releasing new guidelines for developing apps for various screen sizes and shapes.
Google is launching a new program for car-ready apps, essentially expediting the process for developers to get their apps approved for in-car platforms. As part of this program, Google says it will “proactively” review mobile apps that are already compatible with the increasingly large screens found in modern vehicles.
“If the app qualifies, we will automatically opt it in for distribution on cars with Google built-in and make it available in Android Auto, without the need for new development or a new release to be created,” Vivek Radhakrishnan, technical program manager, and Seung Nam, product manager, write in an article. “This program will start with parked app categories like video, gaming and browsers with plans to expand to other app categories in the future.”
Google says it will roll out the new program in the coming months, but app developers who already have gaming or streaming apps that are compatible with large screens can request an earlier review.
Google insists that its top priorities are not just compatibility but also safety. Indeed, safety experts have been sounding a warning for years that the expanding screen size and proliferation of new touchscreen features have a direct correlation to the rise in distracted driving.
A recent study found that drivers selecting music with Apple CarPlay or Android Auto had slower reaction times than drivers who were high from smoking pot. And the Centers for Disease Control and Prevention classifies “anything that takes your attention away from driving,” including sending text messages or checking navigation, as a potentially dangerous distraction. Google has been trying to work its way through this problem for several years now, but it has yet to arrive at a definitive solution.
Google is competing with Apple, which has promised that its new, more immersive version of CarPlay is coming soon, starting with luxury automakers Porsche and Aston Martin. (Mercedes-Benz recently said it would not be adopting the multiscreen version of CarPlay.) Some automakers, like GM, are blocking CarPlay and Android Auto in certain models, preferring to build their own software experiences on top of Google’s native in-car software, Android Automotive.
As it adapts to new car designs, Google is releasing a new tiered system for app developers so they can get access to the company’s platforms. Google says the goal is to “minimize” requirements and “streamline the process” for developers.
Here’s how Google describes each tier:

Tier 1: Car differentiated
This tier represents the best of what’s possible in cars. Apps in this tier are specifically built to work across the variety of hardware in cars and can adapt their experience across driving and parked modes. They provide the best user experience designed for the different screens in the car like the center console, instrument cluster and additional screens – like panoramic displays that we see in many premium vehicles.
Tier 2: Car optimized
Most apps available in cars today fall into this tier and provide a great experience on the car’s center stack display. These apps will have some car-specific engineering to include capabilities that can be used across driving or parked modes, depending on the app’s category.
Tier 3: Car ready
Apps in this tier are large screen compatible and are enabled while the car is parked, with potentially no additional work. While these apps may not have car-specific features, users can experience the app just as they would on any large screen Android device.

Google is also releasing new tools to help developers account for new screen sizes and shapes in cars. Depending on the brand, cars have been introducing new-sized screens, including portrait displays and panoramic ones that span the length of the interior.
Google is launching a new emulator “for distant and panoramic displays so developers can visualize and test for the growing sizes and number of screens in the car and make sure apps can adapt to the variety of displays for the best experience.”
The company is also helping developers test user interfaces against new surfaces, like curved displays, insets, and angles. Developers will be able to change the emulator screen to match screen designs in various car models, without needing to bring in real cars for testing.



Android will be able to detect if your phone has been snatched

More security features on more Android phones. | Illustration by Alex Castro / The Verge

Google is announcing an array of new security features as it releases its second Android 15 beta, including a feature that can detect the moment your phone is swiped from your hands. Some of these updates will be included with Android 15 when it arrives this fall, but theft detection and a number of other features will be available to phones with much older OS versions, too — bringing them to many more people.
Theft Detection Lock works by recognizing the unusual motions that would indicate someone has yanked your phone out of your hand or off a table in front of you. To prevent a thief from accessing information on your device, the screen automatically locks. The system looks for other signals of foul play, too, and can lock the screen if someone tries to take the device off the network to block remote access.
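As described, the lock keys off an abrupt motion signature. The following is a toy illustration of that idea, not Google’s actual detector: it flags a sharp acceleration spike followed by sustained above-baseline motion, as if someone grabbed the phone and ran.

```python
def looks_like_snatch(accel_mags, spike_g=3.0, flee_g=1.5, flee_samples=5):
    """Toy heuristic over accelerometer magnitudes (in g, sampled over time):
    a sudden spike well above normal handling, followed by several consecutive
    readings of fast movement, suggests the phone was grabbed and carried off."""
    for i, mag in enumerate(accel_mags):
        if mag >= spike_g:
            tail = accel_mags[i + 1 : i + 1 + flee_samples]
            if len(tail) == flee_samples and all(t >= flee_g for t in tail):
                return True
    return False

calm   = [1.0, 1.1, 0.9, 1.0, 1.0, 1.1, 1.0, 0.9]   # phone sitting in hand
snatch = [1.0, 1.0, 4.2, 2.0, 1.8, 1.9, 2.1, 1.7]   # grab, then running
print(looks_like_snatch(calm))    # False
print(looks_like_snatch(snatch))  # True
```

A production system would fuse many more signals (gyroscope, screen state, network changes) and be tuned hard against false positives, but the shape of the problem is the same.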
Google is also introducing a new way to lock your phone screen remotely if it ends up in the wrong hands. By visiting android.com/lock, you can enter your phone number and respond to a security challenge to lock your device — a potentially helpful tool if all you have access to is a friend’s phone in the moment. All of these features will arrive later this year via a Google Play services update for phones running Android 10 or later.

Image: Google
A new feature aims to detect when a thief has taken your phone offline and locks the device.

Android 15 also introduces new security features, including “private spaces,” which let you put apps and information in a separate hidden area on your phone that can be locked with a unique PIN. Google is also adding protections for when a phone is forced to reset, requiring the owner’s credentials the next time it’s set up.
Android’s Play Protect also gets an update designed to protect users from bad actors — it will look at how apps use sensitive permissions on your phone to keep an eye out for signs of phishing and fraud. Potentially malicious apps are sent to Google for further review.
As it did last year, Google’s newest OS version played an ever smaller role in the company’s day one I/O keynote. We’ll hear more about Android 15’s new features over the next few months while it’s in beta, but in the meantime, we love to see so many new features coming to lots of Android phones — not just the ones capable of running the very latest OS version.



Someone finally made a heat pump that looks good inside your home

Quilt uses ductless heat pump-powered mini-split units combined with touchscreen remote controls to heat and cool your home. | Image: Quilt (Cayce Clifford)

A new company founded by three former Googlers is looking to disrupt the somewhat staid mini-split industry. Quilt is a new “home climate system” — a ductless HVAC system that uses heat pump-powered mini-split units to heat and cool your home.
Two things make Quilt stand out from the start: it’s smart and pretty. The system has built-in millimeter-wave radar sensors for precision occupancy detection, so you can choose to heat and cool only occupied rooms. They work with a smartphone app and a stylish touchscreen remote called the Dial, and the system uses predictive algorithms to heat and cool your home as efficiently as possible.
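Occupancy-driven climate control of this sort ultimately reduces to picking a per-room setpoint from presence data. A hypothetical sketch, since Quilt hasn’t published its algorithm and these names and temperatures are invented for illustration:

```python
def room_setpoint(occupied: bool, comfort_c: float = 21.0, setback_c: float = 17.0) -> float:
    """Hold occupied rooms at the comfort temperature and let empty rooms
    drift to an energy-saving setback temperature."""
    return comfort_c if occupied else setback_c

# Presence as reported by each room's occupancy sensor (illustrative data):
rooms = {"living room": True, "bedroom": False, "kitchen": True}
setpoints = {name: room_setpoint(occupied) for name, occupied in rooms.items()}
print(setpoints)  # {'living room': 21.0, 'bedroom': 17.0, 'kitchen': 21.0}
```

The predictive part Quilt describes would sit on top of this, pre-heating or pre-cooling rooms it expects to be occupied soon rather than reacting only to current presence.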

Each Quilt unit is a ductless mini-split, but rather than large white boxes on your wall, Quilt has designed smaller, sleeker units with the option of a white oak or white front panel that can be customized to fit your home decor.
At only 38 inches wide, less than eight inches tall, and just over eight inches deep, the Quilt indoor unit is much smaller than traditional mini-splits and can fit above a window to be less obtrusive in a room. The units also have built-in accent lighting, with color-changing lighting and adjustable brightness.

The outdoor unit, which can power up to two indoor units, has been designed to be smaller and more modern-looking, with a matte black finish. CEO and co-founder Paul Lambert claims the electric units meet or exceed industry standards for energy efficiency, including Energy Star Most Efficient 2024, SEER2 25, and CEE Tier 2, though those figures are preliminary, as the system is still pending certification (you can see more details on Quilt’s website). Lambert and co-founders Bill Kee and Matthew Knoll worked on Gmail, Google Analytics, and robotics at Google, respectively. Lambert says Nest co-founder Matt Rogers was an early investor in Quilt.
The Quilt system uses Wi-Fi (2.4 and 5 GHz) and Bluetooth to communicate between units and the remotes, and each Dial can also control the whole system, just like a thermostat does for a central HVAC system. The Dial has temperature, motion, and proximity detection built-in. You can also use the Quilt app (iOS or Android), where you can set schedules and program the system to your preferences — such as what temperatures you want empty rooms to be kept at.
Lambert says the Dials have Thread radios on board and are Matter upgradable, although the company is not enabling either at launch. There is also no integration with smart home platforms such as Apple Home, Google Home, or Amazon Alexa yet. “We want to see where the demand is,” says Lambert. “But we expect to have a public API and support Matter.”

Image: Quilt (Cayce Clifford)
The Dial is very small (2 and 1/4 inches), can control a single or multiple Quilts, and has definite Nest Thermostat vibes (the nicer version).

Unlike traditional ducted HVAC systems, ductless systems need one unit for every room (or two, depending on size). While this is more efficient — ducts can sap up to 30 percent of a system’s efficiency as heated or cooled air travels around your home — it can be an installation challenge and is a lot more expensive.
Quilt costs $6,499 per room, which includes installation. While there are several rebates and incentives for upgrading your home’s HVAC system to an efficient electrical solution, that’s going to add up very quickly. For a three-bedroom house, say you were looking at installing it in six rooms (each bedroom, a kitchen, living room, and dining room — Quilt doesn’t recommend them in bathrooms) — it could cost close to $40,000. Quilt says it will offer all available rebates and help with incentives at checkout.
Somewhat uniquely, Quilt plans to own the whole process from the point of sale to installation. The system will only be available in the Bay Area at launch, with the first installations slated for late summer 2024. Los Angeles will follow later this year, and Lambert says the company plans to expand based on reservation volume. So, if you’re interested, you should sign up on Quilt.com — a $100 deposit is required.



Wear OS 5 triples down on battery life

Wear OS 5 is on the way. | Photo by Vjeran Pavic / The Verge

Wear OS 5 is on its way, and with it, Google says Android smartwatch users ought to see even better battery life. Running a marathon, for example, will purportedly consume 20 percent less battery than on Wear OS 4.
This emphasis on battery life is similar to last year’s Wear OS 4 announcement — and for good reason. Wear OS 4 helped the Pixel Watch 2 last an entire day, something the original struggled to do. That improved battery life has seemingly bought some goodwill. Google says that, in the last year, the Wear OS user base grew by an impressive 40 percent across 160 countries and regions.
Of course, we’ll likely have to wait until Samsung’s next-gen Galaxy Watch and the Pixel Watch 3 this fall to see how this improved battery life translates into real-life usage. Wear OS updates typically come to those new watches from Samsung and Google before they come to third-party or older hardware.

Image: Google
Weather is getting more detailed.

On top of better battery life, Google also mentioned some improvements coming to watchfaces. The big one is that Google is adding more useful complications, including ones that let you view goal progress, weather, and “weighted elements.” The weather complications cover things like current conditions, temperature, UV index, and chance of precipitation. Meanwhile, weighted elements appear to be a pie chart-like complication for multiple datasets — something you might expect from a health app where you track multiple goals in a day. Other updates were mostly aimed at making things easier for developers, such as new guidelines on how to build UI for larger, round displays. Google also mentioned easier ways to test fitness features, like auto-pausing and resuming exercises.
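A “weighted elements” complication of that kind boils down to normalizing several dataset values into fractions of a whole, which can then be drawn as proportional segments. A hypothetical sketch (the function and sample data are invented for illustration, not a Wear OS API):

```python
def weighted_segments(values):
    """Normalize a dict of dataset values into fractions summing to 1.0,
    suitable for rendering as segments of a pie chart-like complication."""
    total = sum(values.values())
    if total == 0:
        return {key: 0.0 for key in values}
    return {key: value / total for key, value in values.items()}

# Minutes spent in each activity-intensity zone today (illustrative data):
zones = {"light": 30, "moderate": 20, "vigorous": 10}
segments = weighted_segments(zones)
print(segments["light"])  # 0.5
```

The renderer would then sweep each segment’s fraction of the dial, which is why the values must be normalized rather than drawn raw.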
On the health front, Google’s Health Connect platform will get a few new tricks, too. For instance, you’ll be able to pull data while a third-party app is running in the background. You’ll also be able to pull historical data beyond 30 days.
As for fitness features, Wear OS 5 is adding support for more advanced running metrics like ground contact time, stride length, vertical oscillation, and vertical ratio. These are pretty common metrics on multisport watches from Garmin and Polar, and Apple added them in watchOS 9.
Much of this is stuff that other smartwatch makers nailed down a while back. But while Google still has to play a bit of catchup, it is encouraging to see the gap steadily closing. Less encouraging is that the fragmentation of Wear OS continues. Last year, Google launched Wear OS 4 before Wear OS 3 had fully rolled out to anyone other than Samsung and Google. It looks like we’re about to see the same thing happen with Wear OS 5. And while it would be nice if every Wear OS watch got these updates in a timely manner, the announcement is at least a decent hint at what we might see on the Galaxy Watch 7 series and the Pixel Watch 3.


