verge-rss

Summer blackouts are increasing in the US

A line worker tends to fallen power lines in the East End neighborhood of Houston, days after Hurricane Beryl made landfall, on Thursday, July 11th, 2024. | Raquel Natalicchio / Houston Chronicle via Getty Images

The US has dealt with 60 percent more weather-related outages during warmer months over the past decade than it did during the 2000s, according to data crunched by the nonprofit research organization Climate Central.
It’s a trend that raises health risks as the planet heats up. Climate change supercharges disasters like storms and wildfires that often cut off power. Soaring demand for air conditioning also stresses out the grid. All of this can leave people without life-saving cooling or electric medical devices at times when they’re most vulnerable.

Image: Climate Central

Climate Central collected data from the Department of Energy on outages that took place between 2000 and 2023. It looked specifically at periods between May and September each year, warmer months when people rely on air conditioning the most. The analysis focused on blackouts attributed to bad weather or wildfires, which hot and dry conditions can exacerbate.
The findings fall in line with other surveys of power outages over time in the US. Americans experienced an average of 5.5 hours of electricity interruptions in 2022 compared to roughly 3.5 hours in 2013, according to the US Energy Information Administration (EIA). That includes all kinds of power disruptions throughout the year. But the culprit behind longer outages is “major events,” including weather disasters. Without those big events, the length of outages would have mostly flatlined over the past decade.
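The filter Climate Central applied, keeping only May-through-September outages attributed to weather or wildfire, can be sketched in a few lines of Python. The record fields below are hypothetical stand-ins, not the actual DOE data schema.

```python
from datetime import date

# Toy records standing in for DOE outage reports; the field names
# here are illustrative, not the real DOE schema.
outages = [
    {"start": date(2022, 7, 11), "cause": "severe weather"},
    {"start": date(2022, 1, 3), "cause": "severe weather"},
    {"start": date(2023, 8, 20), "cause": "wildfire"},
    {"start": date(2023, 6, 5), "cause": "equipment failure"},
]

WEATHER_CAUSES = {"severe weather", "wildfire"}

def warm_season_weather_outages(records):
    """Keep outages from May through September attributed to weather or wildfire."""
    return [
        r for r in records
        if 5 <= r["start"].month <= 9 and r["cause"] in WEATHER_CAUSES
    ]

matches = warm_season_weather_outages(outages)
print(len(matches))  # 2 of the 4 toy records are warm-season, weather-related
```

The January storm and the June equipment failure both drop out, leaving only the warm-season, weather-attributed events the analysis counts.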

Screenshot: EIA

Certain areas have fared worse than others over the years, the Climate Central analysis shows. The South experienced more weather-related blackouts than any other region during warmer months, with 175 outages between 2000 and 2023. Texas leads the nation as the state with the most weather-related outages, with 107 over the same period.
The Lone Star State is in a unique position because most of the state doesn’t connect to larger power grids that span across eastern and western states. That makes it harder for Texas to make up for energy shortfalls by relying on its neighbors. But the Texas power grid has also been hit hard by extreme weather. Just this summer, Hurricane Beryl led to widespread blackouts and at least 11 heat-related deaths reported in the aftermath of the storm.

Image: Climate Central

The nation’s aging grid infrastructure could certainly use an upgrade to make it more resilient to a changing climate. Burying power lines can safeguard them from extreme weather in some scenarios. Residential solar energy systems and microgrids can help keep the lights on for homes even if power plants or power lines go down in a disaster. And switching from fossil fuels to renewable energy would prevent those climate-related disasters from growing into bigger monsters in the first place.

Twitch is upping subscription prices on mobile

Illustration by Nick Barclay / The Verge

If you subscribe to Twitch channels from Twitch’s mobile app, you might have to pay a bit more starting in October.
Twitch announced on Wednesday that Tier 1 and gift subscriptions in more than 40 countries will have a higher price on the mobile app starting on October 1st. According to emails from Twitch shared by Dexerto and on Reddit, Tier 1 subscriptions will cost $7.99 per month, an increase from the current Tier 1 price on the app of $5.99 per month. The cost of Tier 2 and Tier 3 subscriptions will remain the same.
The July price hike, which affected subscriptions on the web, upped a Tier 1 subscription to $5.99 per month in the US, an increase of $1. (Once the new mobile price increase takes effect, subscribing via the app will once again be more expensive than on the web; like other companies, Twitch charges higher prices for subscriptions on mobile apps as a way to offset Apple’s and Google’s app store fees.)
Last year, in the US, Twitch increased the price of Twitch Turbo, its monthly subscription that removes ads, from $8.99 per month to $11.99 per month.

How to freeze your credit after a data breach

Illustration by Samar Haddad / The Verge

Back in 2017, Equifax announced that hackers stole half of the US population’s Social Security numbers in what, we said, “will likely end up being one of the worst data breaches to ever affect the country.” Perhaps — until this year, when about 2.9 billion rows of data were collected through a breach at National Public Data (NPD), a company that resells collected personal data for background checks. This data included names, Social Security numbers, and other personal information.

As usual, when this sort of thing hits the news, our immediate reaction is to wonder what we can do to prevent ourselves from falling prey to identity theft, unauthorized withdrawals, false credit applications, and other nasty consequences. And, also, as usual, the information from the breached organization — and from most news organizations — is often vague and unsatisfactory.
How do I know if my data was stolen from National Public Data?
Unfortunately, at the time this story was written, National Public Data was not providing a lot of information about whose data was stolen. There are a couple of websites out there (specifically, npdbreach.com, from Atlas Privacy, and npd.pentester.com) that say they can tell you, but since they ask you to enter data such as your birth year, it’s up to you whether you want to trust them or not.
Many companies that have suffered a breach eventually offer the services of a security firm that will monitor your account for a period of time; but so far, the only thing NPD is doing is recommending that you monitor your accounts, get a free credit report, and initiate a credit freeze.
What’s a credit freeze?
A credit freeze prevents creditors from viewing your credit file. Whenever you apply for a credit card, loan, mortgage, or even just to rent an apartment, the bank or landlord evaluates your credit and the risk of approving you. A freeze blocks them from retrieving your credit information, thereby preventing an attacker from taking out new credit in your name.
How do I place a freeze?
The good news is that placing a credit freeze is free. It will last one year, and after that, you can renew it. The bad news is that you’ll have to reach out to each credit reporting company independently. There are three main companies in the US: Equifax, Experian, and TransUnion.
Here are the contact numbers for each company, as well as links to their freeze landing pages.

Equifax: 1-800-685-1111

Experian: 1‑888‑397‑3742

TransUnion: 1-888-909-8872

Does this mean I won’t be able to rent an apartment, take out a new credit card, or get a loan?
No, you’ll just have to temporarily lift your freeze, and the approval process might be slightly delayed. If you can find out which credit reporting company your potential landlord or bank uses, you can lift it only for that company. This won’t affect your credit score, and it won’t prevent you from receiving a credit report. You can also keep using your same credit cards, although if you think they might have been compromised, you should get new ones.

Screenshot: National Public Data
Security breaches like the one at National Public Data are unfortunately becoming more common.

Couldn’t an attacker lift my freeze and open a new line of credit?
In order to lift a freeze, you need either an account with the credit bureau, a password, or a PIN (depending on the credit bureau). Each bureau puts in several safeguards to make sure only the owner of the account can change it.
When should I put this freeze on my account?
You’re probably here because you’re worried your data has been compromised. You should try to do this as soon as possible. Even if you’ve happened here by accident, you might consider putting this freeze on your accounts as a safety precaution because there’s no telling whether your data is out there. And once you request a freeze online, it is required to take effect within one business day.
When should I unfreeze my credit?
It’s fairly easy to lift your freeze in the event that you want to open a new credit card or rent a new apartment (or do anything else involving your credit). And every company must lift your freeze within one hour of it being requested online. Still, it won’t hurt to give your creditor a heads-up, just in case.
Is there anything else I can do?
There are several other steps you can take:

Keep a watch on your savings and checking accounts, credit card statements, and any other financial accounts, and immediately chase down any expenses or withdrawals that you don’t recognize — even small ones. Fraudsters will sometimes test to see if you actually read your statements by charging or withdrawing small amounts and, if you don’t report it, will then follow up with larger thefts.
You are entitled to a free credit report once a week. These reports contain information on loans, bill payments, debts, and other financial dealings that have occurred, and so will let you know if anything has happened that you may not have authorized. There is actually a single place you can go to obtain a credit report from all three agencies, AnnualCreditReport.com, which will then move you to each agency you want a report from.
You can set up a fraud alert, which means a business must verify your identity before extending new credit. If you set up a fraud alert at one of the three credit bureaus, it will contact the other two so they can set one up as well. The fraud alert lasts a year, after which you can renew it. (If you’re a victim of identity theft, it will last seven years.)
It’s a good idea to set up two-factor authentication on your online accounts, especially those involving money (like bank accounts or credit cards), using an authentication app.

Update, August 21st, 2024: This article was originally published in September 2017 and has been updated to reflect considerable changes in credit freezes, reports, and the latest data breach.

Senators want investigation of AI-enabled ammo vending machines

Illustration by Alex Castro / The Verge

AI-enabled ammunition vending machines could facilitate mass shootings and let people circumvent federal bans prohibiting people with certain criminal convictions from buying ammunition, two senators warn.
Massachusetts Senators Ed Markey and Elizabeth Warren sent a letter to the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF) asking it to “closely examine” facial recognition-enabled ammunition vending machines that have recently been installed in supermarkets in certain states.
The machines, distributed by the Texas-based company American Rounds, began appearing in grocery stores in Alabama, Texas, and Oklahoma in July. American Rounds says the machines have “built-in AI technology, card scanning capability and facial recognition software” that confirms the purchaser’s age and verifies that their face matches their ID. The machines don’t limit how much ammunition a person can purchase at a time.
“Easy access to ammunition helps to fuel our country’s gun violence epidemic, which now claims more than 44,000 lives annually,” the letter reads. “Studies show that increasing the availability of firearms and ammunition leads to more injuries and deaths, especially suicides, and that regulation of ammunition purchases can help reduce gun violence.”
Markey and Warren’s letter says these machines carry “inherent risks,” including possibly allowing people who are barred from purchasing guns and ammo under federal law — including people with convictions for felonies or domestic violence misdemeanors and people with active domestic violence-related restraining orders — to circumvent these restrictions. The letter also says that eliminating face-to-face sales means there’s no opportunity to identify straw purchases. “[E]xperienced gun shop employees may be able to detect when someone attempts to straw purchase ammunition for another and stop the transaction,” the letter reads. Clerks could also notice when a customer is experiencing signs of distress or other warnings that they plan on using ammunition to hurt themselves or others — and could refuse to sell to them.

The letter also takes issue with the machines’ use of “unreliable and inaccurate facial recognition technology,” noting that studies show that facial recognition algorithms misidentify women and people of color at higher rates than they misidentify white men. “Given the significant error rates with facial recognition technology, ammunition vending machines raise serious concerns about false approval and potential legal implications for both consumers and vendors.”
“A federal license is not required to sell ammunition. However, commercial sales of ammunition must comply with state laws as well as any applicable federal laws,” an ATF spokesperson said in a statement to CNN in July.
Markey and Warren have requested that the ATF provide written responses to a list of questions by August 30th. The ATF did not immediately respond to The Verge’s request for comment.

Google sales reps allegedly keep telling advertisers how to target teens

Illustration: The Verge

Google representatives gave ad buyers tips on how they could reach teens, even though the company bars targeted advertisements to users under the age of 18 based on their demographics, according to a report from Adweek.
Three unnamed ad buyers told Adweek that Google sales reps suggested they might be able to reach teens by targeting a group of “unknown” users, whose “age, gender, parental status, or household income” Google doesn’t know. Adweek said it also reviewed written documents backing up the sources’ claims. A Google spokesperson told Adweek that the unknown category can include users who aren’t signed in to their accounts or who’ve turned off personalized ad targeting.
Google’s stated policy is to “block ad targeting based on the age, gender, or interests of people under 18.” The Adweek story is yet another example of Google reportedly helping ad buyers target teens through the use of its unknown user category, after the Financial Times recently reported on a similar situation.
Google spokesperson Jacel Booth said in a statement that the company “strictly prohibit[s] ads being personalized to people under 18—full stop. Our policies are reinforced with technical protections, which continue to work properly.” Booth added that Google would take “additional action with sales representatives to reinforce that they must not help agencies or advertisers attempt to circumvent our policies.”
The reported behavior could potentially raise concerns under the Children’s Online Privacy Protection Act (COPPA), which prohibits platforms from collecting personal information on kids under 13 without parental consent. An updated version of the law, which passed the Senate and awaits a House vote, would ban targeted advertising to kids under 17.
One unnamed agency buyer cited in the Adweek article said they were “shocked” at how explicitly a Google rep allegedly suggested including the unknown category for a client’s media buy on YouTube, because they said teens may be included in that group. Another buyer at a brand told Adweek that Google reps had reached out to suggest targeting users over 16 who may have disposable income, via the unknown category.
A third buyer who worked at an agency representing a large entertainment brand said Google reps offered the unknown category as a solution to possibly target some teens after the brand threatened to move its spend to Meta, which allows some targeting of teens.

Microsoft’s Recall AI feature won’t be available for Windows testers until October

Image: Microsoft

Microsoft says it’s planning to allow Windows testers to try out its controversial Recall AI feature in October. The software giant was originally planning to launch Recall with its Copilot Plus PCs in June but was forced to hold back the feature after security concerns were raised.
At the time of the delay on June 13th, Microsoft promised the feature — which screenshots nearly everything on your PC — would be available for Windows Insiders “in the coming weeks,” but that’s now more like the coming months. “With a commitment to delivering a trustworthy and secure Recall (preview) experience on Copilot Plus PCs for customers, we’re sharing an update that Recall will be available to Windows Insiders starting in October,” says Windows and Surface chief Pavan Davuluri in an updated blog post.
The feature uses local AI models built into Windows 11 to screenshot nearly everything you see or do on your computer and then give you the ability to search and retrieve items you’ve seen. An explorable timeline also lets you scroll through all these snapshots to look back at your work on a particular day.

GIF: Microsoft
Recall’s timeline feature.

While Microsoft has always maintained that Recall is secure, local, and private on-device, security researchers found that the database wasn’t encrypted, and malware could have potentially accessed the Recall feature. Microsoft is now working on major changes to Recall, including making the AI-powered feature an opt-in experience instead of on by default, encrypting the database, and authenticating through Windows Hello.
Davuluri doesn’t explain why Recall has been pushed back further, but he does say that “security continues to be our top priority and when Recall is available for Windows Insiders in October we will publish a blog with more details.” It’s likely that Microsoft simply needs more time to fully test its security changes to Recall.
This could mean we won’t see a full launch of Recall this year, though. Microsoft typically tests Windows features with its Insider program for weeks or months at a time before shipping them out more broadly. That timing may well depend on exactly when Microsoft manages to ship the test version of Recall in October.



Tesla’s latest Model X recall isn’t just a software update

Image: Umar Shakir / The Verge

Tesla is issuing a new recall for more than 9,000 of its Model X SUVs because cosmetic roof trim pieces, which may have been attached without primer, could fly off while driving. It’s the second recall from the company to address this issue on the Model X, the first having been issued in 2020, and it is again specific to early 2016 vehicles.
Tesla often pushes software updates to address recalls, including a repeat Autopilot safety issue where it does not warn drivers effectively. But for the Model X roof issue, Tesla will need to actually take a gander at thousands of vehicles in person.
To remedy the Model X voluntary recall, Tesla Service will “test the roof trim adhesion and reattach the trim pieces as necessary” for free.
Tesla had found that the 2020 recall remedy did not thoroughly detect trim pieces that could detach, according to Reuters. The automaker says no crashes or injuries have been reported, but it has received about 170 issue reports and claims from owners that this recall may cover.
Similar to this Model X issue, Tesla recalled over 11,000 Cybertrucks in June due to truck bed trim that could come loose and create road hazards. Thousands of Cybertrucks have also been recalled for failing windshield wiper motors, and one recall takes the metal off the pedal.



Google Pixel 9 Pro and 9 Pro XL review: AI all over the place

The AI is inconsistent, but the hardware is oh so good.

Google finally got the hardware right.
The Pixel 9 Pro, its bigger Pro XL sibling, and the standard Pixel 9 look and feel like the flagship phones Google has been trying to make since the Pixel 6 ushered in the visor camera bump era. They feel solid, the screens are bright, and the damn edges are finally flat. As far as I’m concerned, Google can hang up a “Mission Accomplished” banner.
The software is another thing. Some of it is promising, some of it seems like a party trick, and some of it is downright reckless. Google’s been rolling out generative AI features here and there over the past year, but this feels like the company’s first big swing at an AI phone. It’s kind of all over the place.
There’s a little sparkly AI icon in so many different corners of the UI, and these various assistants and systems don’t work well together yet. Do you want to have a conversation with AI? Or use AI to write an email? Or organize and refer to your screenshots with AI? Those features all exist on the Pixel 9 series, but they’re all in separate apps and interfaces. It’s starting to feel like I need AI to sort out all of the AI, and that’s not a great place to be. What’s worse is that they all work inconsistently, making it hard to rely on any of them. Thank God the hardware’s so good.

Google flattened out the edges of the phone and evened out the bezels around the screen. It’s an iPhone from the front, and I don’t think that’s a problem at all. The camera visor is a chunky pill that no longer connects to the phone’s side rails.
It still looks kind of weird, but it’s instantly recognizable as a Pixel. And despite this protrusion, the phone also sits steadily on a table when you tap the screen and doesn’t wobble back and forth — a problem that Samsung’s phones suffer from if you don’t put a case on them.
Along with a refreshed design, the Pixel 9 series gets a new Tensor G4 chip. Between the updated processor and a new vapor chamber, the Pixel no longer feels like it’s about to catch on fire when I use it as a Wi-Fi hotspot. Love it. I’m also a fan of the faster fingerprint scanner, which feels like the one Google should have been using all along.

Just look at those uniform bezels!

For the first time, Google is offering the Pro version in two sizes. They come with different-sized batteries, naturally, but both managed a full day of heavy use without needing a recharge. The Pixel 9 Pro is the size of the Pixel 8 (and the standard Pixel 9) with a 6.3-inch screen. The 9 Pro XL is equivalent to the Pixel 8 Pro in size with a 6.8-inch display.
The displays themselves are a bit brighter than the previous gen, going up to 2,000 nits for HDR content and up to 3,000 nits in peak brightness mode — the 8 Pro supported up to 1,600 nits and 2,400 nits, respectively. I can easily appreciate the difference in direct sunlight; it’s not Galaxy S24 Ultra good, but it’s a lot better.
But despite the difference in size, these two 9 Pro devices share the exact same camera hardware, including a 5x telephoto lens — something you don’t get on every “small” flagship phone. The main and telephoto cameras are unchanged from the 8 Pro, but the ultrawide has been updated with a faster lens that helps boost low-light performance.
There are a few AI features right inside the camera app, naturally. Unlike some of the other AI tools on these devices, these are pretty pedestrian. That includes Add Me, which lets you composite two photos into one group shot so that the person who took the first photo can get in the picture. The UI guides you through the process in which you take a photo and then swap with someone who was in the shot. You’ll see a ghostly overlay of the first image and some on-screen prompts to help you frame up the second photo properly, and afterward, you get one image with everyone included.

It works best when there’s plenty of light and your subjects stay in consistent poses between frames. When it’s good, it’s really good, and I’d have a hard time telling if anything was up if I didn’t know better. But even in the best examples, you can still zoom in and see some fuzzy edges around details like hair. I think I’d actually use this occasionally, not least of all because I hate asking a stranger to take my photo.
Video Boost, the AI tool that improves video, got a sizable update this time around, too. It processes faster once the file is uploaded, and there’s more detail in boosted Night Sight clips. The first time I tested Video Boost on the Pixel 8 Pro, it was a little underwhelming, but with these improvements, it’s a feature I’ve actually wanted to use more. It cleans up footage taken at higher zoom magnifications and smooths out transitions between lenses, so it’s a nice all-purpose tool if you’re doing something a little more technically challenging than just shooting a quick clip of your cat doing something funny.

AI tricks aren’t limited to the camera app — even if they’re some of my favorite use cases. As Google reminded us about a hundred times at its launch presentation, the Pixel 9 series is AI all the way down, from the Gemini Assistant — the default virtual assistant this time — to a daily AI summary in the revamped weather app.
AI is the thing in phones this year, and the Pixel 9 series represents our first look at some technologies that will likely trickle out across previous Pixel phones and parts of the Android ecosystem. A couple are exclusive to the Pixel 9 series, and Google is mostly vague about which features will be distributed to older phones. But altogether they’re the foundation of what Google wants us to think of as AI-first phones for the AI era.

They’re hit-and-miss, but one feature in particular is a little too good. That’s “reimagine,” a generative AI tool you’ll find in Magic Editor. Instead of just erasing or moving things around in your photo, you can select a part of your image and add something with a text prompt.
The results are uncanny — so good that they’re problematic. Without too much trouble, we had it add a range of nasty and extremely believable stuff to photos — everything from a cockroach on a plate of food to a snake in a flower display at Whole Foods.

Google’s examples of “reimagine” in use feature wildflowers and hot air balloons, which, sure. It can add those things to your photos. They usually look good and only sometimes look like a baked potato. But they’re only tagged as AI-generated by a line in the image metadata, which makes them really easy to pass off as real images.
Pixel Studio is less problematic. You use text prompts to dream up images in a handful of predetermined styles, including “3D cartoon” and “freestyle,” which is the more photorealistic option. My kid got a real kick out of making trucks of various shapes and sizes being operated by cats. If you ask it to generate an image with “poop” in it (toddlers think this is wildly funny), then you’ll get something more realistic than you probably wanted to see.
You can also play a fun game where you get it to generate IP that Google likely did not intend it to create. Here’s an incomplete list of the images I got it to make for me with these exact prompts:

Pikachu sticking a paper clip in an electrical outlet
Toad eating a banana
Thomas the Tank Engine chain smoking

It’s strange how easily you can make, like, PG-13-rated images, too. It faithfully generated a cartoon baby deer lighting a joint, and I don’t know, guys, maybe there’s a better use for all these supercomputers running AI. At the very least, it’s great if your idea of fun is responding to your spouse’s questions with obnoxious AI-generated art.
I had high hopes for Pixel Screenshots, a Pixel 9-series exclusive and potentially far more useful app. It’s a repository for all of your screenshots that uses AI to parse out information from them and saves it as metadata so you can search for it later — Airbnb door codes, Wi-Fi passwords, that kind of thing. It all stays on-device, so it’s relatively secure.
The thing is, it’s a whole separate app. You can’t ask Gemini to find your boarding zone; you have to open up the Screenshots app and search. At that point, I’ll just open up the Delta Air Lines app and look at my boarding pass. Besides, the Screenshots app told me I was in boarding group M3 — the pass it scanned clearly said group three.

The app automatically collects all your screenshots once you opt in.

I would legitimately use this for Pinterest-type stuff since I haven’t used Pinterest in about a decade.

And that’s the problem: it hallucinates and misinterprets. A lot of the metadata on my screenshots is right, but some of it is just off. I took a screenshot of a particularly gross “reimagine” creation when I prompted AI to fill a bowl with geoducks.
Reimagine made something I can best describe as a bowl of raw thumbs, which the Screenshots app labeled as “a green bowl of chicken” that might be “overcooked or undercooked.” Presumably scanning the text of the AI prompt I used for the photo, which appears in the corner of the screen, it states that the image is “from the Geoducks app, a food delivery service.” Makes me feel great about the future of AI trained on synthetic data.
I’m not ready to write off Screenshots just yet, though. It’s the kind of feature that makes sense when you use your phone for months or years, not weeks. It takes very little effort to use since screenshots are automatically saved there. And its best feature is that when you screenshot a page in Chrome, it’ll save the URL along with the image so you can get back to the page easily. When you think of Screenshots as a replacement for infinite Chrome tabs or a Pinterest board, it makes a lot more sense.
Gemini Assistant, which I’ve used on lots of other Android phones, is much more familiar and is now the default assistant. It can do a lot more basic assistant stuff than it could when it launched, but it still can’t play my dang Spotify playlists. The Pixel 9 Pro and Pro XL come with a free one-year trial of Gemini Advanced (a cool $20 per month after that!), which allows you to tap into newer language models and a brand-new feature: Gemini Live. It’s Google’s version of ChatGPT’s conversation mode, and incidentally, it feels a little like talking to a page of Google results.

An AI phone for a weird future.

The Pixel 9 Pros represent Google’s most advanced efforts in mobile AI, for better and worse. There’s a lot of promise in some of these tools, and at this point, I genuinely prefer asking Gemini some of my low-stakes questions to wading through Google Search. But we can’t keep ignoring the fact that AI just makes shit up sometimes, and it’s hard to trust a technology like that with the details of your day-to-day existence.
The feature pileup as Google rushes to ship new AI products is also getting a little confusing — not to mention that they’re all seemingly called some version of “Gemini” or “Gemma.” I can ask Gemini Assistant with the Workspace extension to check my inbox for important emails, but I can’t ask Gemini Live. I can also open Gemini inside the Gmail app to ask the same question and get a slightly different answer. I can take a screenshot of something on Amazon that I’m thinking about buying and save it to the Screenshots app, but I can’t automatically add a photo of something on a store shelf. It’s starting to feel a little like AI everything, everywhere, all at once.
But the important thing is that behind all the flashy AI features, there’s a really good phone in the Pixel 9 Pro and the Pro XL. These are phones that I can finally hold up next to a Galaxy S24 Plus or an iPhone 15 Pro and think, yes, these are all top-of-the-line devices. These Pixels aren’t the budget-priced flagships that they used to be, and I think the higher prices are well justified by the hardware. Plus, when you’re getting seven years of OS updates, you can squeeze a whole lot of value out of your investment.
And even as a small-phone enthusiast, the 9 Pro feels like a reasonable size to me. It’s not small, but it’s not gargantuan, either, and I deeply appreciate not having to sacrifice camera features by choosing it over the big one. Pixel image quality remains reliable, and the battery will keep up to the end of the day. Whether we’re ready or not, a new era of AI phones and photos is here, and it’s messy as hell. But the hardware — if not my faith in an AI-everything future — is solid.
Photography by Allison Johnson / The Verge


Read More 

Google’s AI ‘Reimagine’ tool helped us add wrecks, disasters, and corpses to our photos

Magic Editor’s new tool helped us add the bike and car with nothing more than a text prompt. | Photo: The Verge

As it turns out, a rabbit wearing an AI-generated top hat was just the tip of the iceberg.

Google is the latest phone company this year to announce AI photo editing tools, following Samsung’s somewhat troubling, mostly delightful sketch-to-image feature and Apple’s seemingly much tamer Image Playground, coming this fall. The Pixel 9’s answer is a new tool called “Reimagine,” and after using it for a week with a few of my colleagues, I’m more convinced than ever that none of us are ready for what’s coming.

Reimagine is a logical extension of last year’s Magic Editor tools, which let you select and erase parts of a scene or change the sky to look like a sunset. It was nothing shocking. But Reimagine doesn’t just take it a step further — it kicks the whole door down. You can select any nonhuman object or portion of a scene and type in a text prompt to generate something in that space. The results are often very convincing and even uncanny. The lighting, shadows, and perspective usually match the original photo. You can add fun stuff, sure, like wildflowers or rainbows or whatever. But that’s not the problem.

A couple of my colleagues helped me test the boundaries of Reimagine with their Pixel 9 and 9 Pro review units, and we got it to generate some very disturbing things. Some of this required some creative prompting to work around the obvious guardrails; if you choose your words carefully, you can get it to create a reasonably convincing body under a blood-stained sheet.

In our week of testing, we added car wrecks, smoking bombs in public places, sheets that appear to cover bloody corpses, and drug paraphernalia to images. That seems bad. As a reminder, this isn’t some piece of specialized software we went out of our way to use — it’s all built into a phone that my dad could walk into Verizon and buy.

When we asked Google for comment on the issue, company spokesperson Alex Moriconi responded with the following statement:

Pixel Studio and Magic Editor are helpful tools meant to unlock your creativity with text to image generation and advanced photo editing on Pixel 9 devices. We design our Generative AI tools to respect the intent of user prompts and that means they may create content that may offend when instructed by the user to do so. That said, it’s not anything goes. We have clear policies and Terms of Service on what kinds of content we allow and don’t allow, and build guardrails to prevent abuse. At times, some prompts can challenge these tools’ guardrails and we remain committed to continually enhancing and refining the safeguards we have in place.

To be sure, our creative prompting to work around filters is a clear violation of these policies. It’s also a violation of Safeway’s policies to ring up your organic peaches as conventionally grown at the self-checkout, not that I know anyone who would do that. And someone with the worst intentions isn’t concerned with Google’s terms and conditions, either. What’s most troubling about all of this is the lack of robust tools to identify this kind of content on the web. Our ability to make problematic images is running way ahead of our ability to identify them.

When you edit an image with Reimagine, there’s no watermark or any other obvious way to tell that the image is AI-generated — there’s just a tag in the metadata. That’s all well and good, but standard metadata is easily stripped from an image simply by taking a screenshot. Moriconi tells us that Google uses a more robust tagging system called SynthID for images created by Pixel Studio since they’re 100 percent synthetic. But images edited with Magic Editor don’t get those tags.
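The fragility of metadata-only tagging is easy to demonstrate: re-rendering an image’s pixels, which is effectively what a screenshot does, produces a file that carries none of the original’s EXIF data. Here’s a minimal sketch using the Pillow library; the tag value and file names are illustrative and are not Google’s actual tagging scheme.

```python
import os
import tempfile

from PIL import Image

tmp = tempfile.mkdtemp()
edited_path = os.path.join(tmp, "edited.jpg")
shot_path = os.path.join(tmp, "screenshot.jpg")

# Simulate an AI-edited photo: write an EXIF ImageDescription tag
# (0x010E), standing in for an "AI edited" marker. The value is made up.
edited = Image.new("RGB", (64, 64), "white")
exif = edited.getexif()
exif[0x010E] = "Edited with a hypothetical AI tool"
edited.save(edited_path, exif=exif)

# Simulate a screenshot: copy only the pixels into a fresh image.
# The new image has no metadata, so saving it writes no EXIF at all.
src = Image.open(edited_path)
screenshot = Image.new("RGB", src.size)
screenshot.paste(src)
screenshot.save(shot_path)

print(0x010E in Image.open(edited_path).getexif())  # True: tag survives
print(0x010E in Image.open(shot_path).getexif())    # False: tag is gone
```

The point of the sketch is that nothing in the pixels themselves records the edit, which is why pixel-level watermarks like SynthID are harder to strip than file metadata.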

Photos: The Verge

To be sure, tampering with photos is nothing new. People have been adding weird and deceptive stuff to images since the beginning of photography. But the difference now is that it has never been this easy to add these things realistically to your photos. A year or two ago, adding a convincing car crash to an image would have taken time, expertise, an understanding of Photoshop layers, and access to expensive software. Those barriers are gone; all it now takes is a bit of text, a few moments, and a new Pixel phone.

It’s also never been easier to circulate misleading photos quickly. The tools to convincingly manipulate your photos exist right inside the same device you use to capture them and publish them for all the world to see. We uploaded one of our “Reimagined” images to an Instagram story as a test (and quickly took it down). Meta didn’t tag it automatically as AI-generated, and I’m sure nobody would have been the wiser if they’d seen it.

Who knows, maybe everyone will read and abide by Google’s AI policies and use Reimagine to put wildflowers and rainbows in their photos. That would be lovely! But just in case they don’t, it might be best to apply a little extra skepticism to photos you see online.

Read More 

The Beats Studio Pro headphones add one of Apple’s best features

A new firmware update brings audio sharing to the Beats Studio Pro. | Photo by Chris Welch / The Verge

Apple has released a firmware update for the Beats Studio Pro that finally brings audio sharing to the headphones, allowing an iPhone or iPad to stream audio to two pairs of wireless headphones simultaneously. The new feature, spotted by 9to5Mac, was noticeably missing when we reviewed the Beats Studio Pro last year, and its arrival better positions the headphones as an alternative to Apple’s more expensive AirPods Max.

Audio sharing was added to the Beats line shortly after the feature debuted in 2019. It’s found in all of Apple’s headphones featuring the company’s W1 or H1 chips, but the Beats Studio Pro debuted with a different chip that ensured all of the headphones’ features were compatible with both Apple and Android devices.

The firmware update adding audio sharing (2C301) should download and install automatically for those regularly using the Beats Studio Pro. Apple has an in-depth guide for using the audio sharing feature, including which models support it, but it can be easily activated on iPhones and iPads as an AirPlay feature when a pair of Beats headphones in pairing mode is held close. The feature is not available when using the Beats Studio Pro with Android devices.

Read More 
