The history of Roku and the fight over CarPlay

Image: Alex Parkin / The Verge

Before Roku was a leading player in the streaming wars, with that ubiquitous purple screensaver and a library of original content and practically every streaming app you could possibly imagine, it was a Netflix gadget — the first Netflix gadget, for that matter, and the one that helped start a streaming revolution. But the Roku story was almost very different.

On this episode of The Vergecast, we try out a couple of show formats we’ve been planning for a while. First, we debut our tech-rewatch segment, in the vein of some of our favorite rewatch shows like Office Ladies, The West Wing Weekly, and The Rewatchables. We’re calling it Version History, at least for now. For this first segment, we tell the story of the Roku Netflix Player, debate its legacy, and try to decide whether this thing belongs in the Version History Hall of Fame. The exact qualifications for said Hall of Fame? Still very much TBD.

After that, we have another take on our as-yet-untitled debate show. In this one, Nilay Patel and David Pierce yell at each other about who should own the screens in your car. Are CarPlay and Android Auto the answer, the solution to universally crappy automaker software? Or should Google and Apple get out of the way and let carmakers build what’s required for the self-driving, automated, infinitely more immersive future of driving? Things get heated. Names are called.

(We want to know what you think of these new formats! What do you like? What do you hate? What should we tweak or try or do differently? We’re always looking to expand The Vergecast and even launch new shows, so we want all your feedback. You can send us an email at vergecast@theverge.com, call the Hotline at 866-VERGE11, or just leave us a comment here.)

Finally, we answer a question on the Vergecast Hotline about political texts and how to get them to stop. We have good news… and we have bad news.

If you want to know more about everything we discuss in this episode, here are some links to get you started, first on the Roku Netflix Player:

From Fast Company: Inside Netflix’s Project Griffin: The Forgotten History Of Roku Under Reed Hastings

From CNBC: How Roku used the Netflix playbook to rule streaming video

From CNN: Netflix Player offers PC-free movie watching

From Wired: Review: Roku Netflix Set Top Box Is Just Shy of Totally Amazing

From The New York Times: Why the Roku Netflix Player Is the First Shot of the Revolution

And on the CarPlay / Android Auto debate:

Car companies haven’t figured out if they’ll let Apple CarPlay take over all the screens
The rest of the auto industry still loves CarPlay and Android Auto
Everybody hates GM’s decision to kill Apple CarPlay and Android Auto for its EVs
Rivian CEO says CarPlay isn’t going to happen
Apple’s fancy new CarPlay will only work wirelessly

And on robotexts:

From The Washington Post: How to stop receiving spam texts

From PCMag: Stop Robotexts: How to Block Smishing and Spam Text Messages


Dashlane says passkey adoption has increased by 400 percent in 2024

Image: Dashlane

Password manager Dashlane has released a new passkey report that gives us some idea of how many people are adopting cryptographic passwordless logins. According to the report, Dashlane has seen a 400 percent increase in passkey authentications since the beginning of the year, with 1 in 5 active Dashlane users now having at least one passkey in their Dashlane vault.

Over 100 sites now offer passkey support, though Dashlane says the top 20 most popular apps account for 52 percent of passkey authentications. When split into industry sectors, e-commerce (which includes eBay, Amazon, and Target) made up the largest share of passkey authentications at 42 percent. So-called “sticky apps” — meaning those used on a frequent basis, such as social media, e-commerce, and finance or payment sites — were the groups that saw the fastest passkey adoption between April and June of this year.
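Under the hood, passkeys are built on the W3C WebAuthn standard: the site asks the browser (or a password manager like Dashlane) to generate a keypair, the private key stays on the user’s device or in their vault, and every sign-in is a signed challenge rather than a typed secret. As a rough sketch, the registration options a site hands to the browser look something like this (the `rp` and `user` values here are hypothetical, not from any real service):

```javascript
// Minimal sketch of WebAuthn passkey registration options.
// The relying-party and user details below are made-up examples;
// in a real page the challenge comes from your server.
function buildRegistrationOptions(challenge, userId, userName) {
  return {
    challenge,                                   // random bytes from the server
    rp: { name: "Example Site", id: "example.com" },
    user: { id: userId, name: userName, displayName: userName },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // -7 = ES256
    authenticatorSelection: {
      residentKey: "required",                   // discoverable credential, i.e. a passkey
      userVerification: "preferred",             // biometric or PIN check on the device
    },
  };
}
```

In the browser, these options would be passed to `navigator.credentials.create({ publicKey: ... })`; the server stores only the resulting public key, which is why a passkey can’t be phished or leaked the way a password can.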

Image: Dashlane
Several platforms in the top 20, like eBay and Google, are “early adopters” of passkeys that were quick to back the technology.

Other domains show surprising growth, though — while Roblox is the only gaming category entry within the top 20 apps, its passkey adoption is outperforming giant platforms like Facebook, X, and Adobe, for example. Global payment processing platform Stripe is another notable standout, having made it into the top 10 apps for passkey adoption despite only rolling out support in May 2024.

Image: Dashlane
eBay is the most popular domain for passkeys, though Stripe is doing well considering support was only launched three months ago.

Image: Dashlane
E-commerce platforms dominate overall, making up almost half of all Dashlane’s passkey authentications.

Dashlane’s report also found that passkey usage increased successful sign-ins by 70 percent compared to traditional passwords. According to a report from the FIDO Alliance (the coalition behind the development of passkeys), people are abandoning purchases and attempts to sign in to services via passwords almost four times per month on average, a 15 percent increase between 2022 and 2023.

Google shared a similarly positive update in May, revealing that passkeys had been used over a billion times collectively by 400 million Google accounts. It’s good to see an increase in adoption rates, but we’re a long way off from replacing traditional passwords entirely.


Perplexity is cutting checks to publishers following plagiarism accusations

Image: The Verge

Perplexity is launching a program to share ad revenue with publishing partners following weeks of plagiarism accusations.

Perplexity’s “Publishers’ Program” has recruited its first batch of partners, including prominent names like Time, Der Spiegel, Fortune, Entrepreneur, The Texas Tribune, and Automattic (with WordPress.com participating but not Tumblr). Under this program, when Perplexity features content from these publishers in response to user queries, the publishers will receive a share of the ad revenue. Publishing partners will also get a free one-year subscription to Perplexity’s Enterprise Pro tier, access to Perplexity’s developer tools, and insights (such as how frequently their articles appear in search queries) through Scalepost.ai, a new AI startup that helps broker partnerships between AI companies and publishers.

Dmitry Shevelenko, Perplexity’s chief business officer, declined to share exact deal terms but said that the revenue share is a multiyear agreement with a “double-digit percentage,” consistent across all publishers, with especially favorable terms for the initial partners. Perplexity spokesperson Sara Platnick added that payments are made on a per-source basis, meaning publishers are compensated for each article used in responses. The program will temporarily provide cash advances on revenue to publishers as Perplexity builds a long-term advertising model. Unlike OpenAI’s deals, the advances aren’t a licensing fee for content.

“It’s a much better revenue split than Google, which is zero,” Automattic CEO Matt Mullenweg told me via direct message. The publishing agreement doesn’t cover WordPress.org, but Automattic will be sending payments to direct customers of WordPress.com. “The amount, I don’t know! Probably small to start because they don’t make much revenue now, but if Perplexity is the next Google, which I think it has a chance of being, these numbers could become meaningful and we’re looking to help publishers get paid in every way we can.”

This new program comes a month after a Forbes editor found the publication’s paywalled reporting plagiarized in Perplexity’s new product, Pages, an AI-powered tool that lets users create a report or article based on prompts. The AI-generated version of the Forbes story, along with an AI-generated Perplexity podcast of the story, was then sent to subscribers via a mobile push notification, Forbes reported. Wired then published an investigation that found Perplexity’s AI was “paraphrasing WIRED stories, and at times summarizing stories inaccurately and with minimal attribution.” Forbes has since threatened legal action against Perplexity.

Still, taking content for free doesn’t seem like a moral issue to Perplexity

Shevelenko told me the company started work on this program back in January, well before the blowback, saying the team took inspiration from X’s ad revenue sharing program. Perplexity planned to launch this program last month amid the drama but decided to hold off until now, he said. I asked him if this was a well-timed apology tour or if it was just a stopgap to prevent lawsuits. “We don’t want people saying nasty things about us more than we don’t want to get sued,” Shevelenko said.

Shevelenko says it’s “not great” that people think of Perplexity as plagiarizing journalists’ work, particularly as “an aspiring consumer brand.” He also thinks the accusation isn’t quite fair, saying that people have been “tricking” the service’s AI to get these plagiarized results. Still, scraping and reprinting content doesn’t seem like a moral issue to Perplexity. Shevelenko said, “There’s intricacies of fair use and copyright law where we feel we’re kind of, you know, clearly within those bounds.”

Whether or not the program is a way to make amends, Perplexity seems determined to set up long-term infrastructure to pay publishers for their content for as long as the company exists. Shevelenko said as much himself: “Obviously I’m not contemplating the scenario. But say, Perplexity dies and fails. You’re not losing anything, right? And if we’re successful, you’re riding in that upside.”

AI-powered search is more expensive than traditional search, so Perplexity needs to work quickly to cover the compute costs involved. In May, the startup raised $250 million at a $3 billion valuation. “We need advertising to be successful because it is going to be our main business model,” Shevelenko said.

Paying publishers only adds to the costs, and Perplexity is aware it isn’t the norm for a search tool. “By the way, our investors don’t love that we’re doing this because they’re like, ‘Oh, we want you to have the same margin profile as Google,’” Shevelenko said, adding that Perplexity can’t compete with Google by mimicking its strategies. Instead, he says, the company wants to focus on building a profitable business by forming alliances with the media and creating the right long-term structures, like ad revenue sharing.

There’s also the looming threat of OpenAI, which just announced a prototype of its AI-powered search product, SearchGPT, alongside its own publishing partners like News Corp, The Atlantic, and The Verge’s parent company, Vox Media. It seems OpenAI has taken stock of Perplexity’s mistakes, too. In the announcement, the company said that publishers will have the ability to “manage how they appear in OpenAI search features” and can opt out of having their content used to train OpenAI’s models and still be surfaced in search.

Shevelenko said Perplexity is “happy to give publishers full control there” only “to the extent to which it doesn’t make the product ugly.” For now, offering that control is a “work in progress.” Most importantly, Perplexity wants to avoid giving publishers the ability to change the answers.

It seems like AI companies will use publishers’ content whether they agree or not. The business side of the media industry seems to believe that accepting the money, rather than laying off staff to afford lengthy legal battles, is the best option for now. The CEO of The Atlantic, which recently made a deal with OpenAI, said on an episode of The Verge’s Decoder that they weighed all the benefits of a partnership versus what it would cost to sue and how much they would get from it, “and then you make a choice.”

So, if Perplexity wants to cut publishers a check for using their content, I think that’s good, actually. But it doesn’t answer a lot of questions, like what that means for publishers that aren’t getting a check or if the deals will amount to meaningful resources for newsrooms. In the face of a growing technology backed by some of Silicon Valley’s most powerful, it doesn’t seem like the media has much of a choice.


Canva adds a new generative AI platform to its growing creative empire

Canva welcomes Leonardo.ai to its design app portfolio. | Image: Leonardo.ai

Canva has announced plans to acquire Leonardo.ai, an Australian generative AI content and research startup, as part of its goal to build a “world-class suite of visual AI tools.” While financial terms haven’t been disclosed, the deal will see Canva gain access to Leonardo.ai’s lineup of user-customizable text-to-image and text-to-video generators.

In Canva’s announcement, company co-founder Cameron Adams says Leonardo.ai will “continue to develop its web platform” as a separate product offering, much like the Affinity creative software suite Canva acquired in March. Leonardo.ai’s technology and Phoenix foundation model will also be “rapidly” integrated into Canva’s existing suite of Magic Studio products, such as the Magic Media image and video generator.

Image: Canva / Leonardo.ai
Here’s an example image that Canva says was generated using a Leonardo.ai model.

Canva has made efforts to diversify its platform with more office suite-like tools of late, but the visual design and communications platform remains one of the biggest competitors to Adobe’s lineup of creative software products. Where the Affinity acquisition may help Canva to compete against Adobe software like Illustrator, Photoshop, and InDesign, Leonardo.ai could be similarly poised as an alternative to Adobe’s Firefly generative AI models.

Leonardo.ai told TechCrunch that its models are trained using “licensed, synthetic, and publicly available/open source data,” which is vaguer than Adobe’s training disclosure for Firefly. Despite this, Adobe suffered backlash to a recent policy update that forced it to explicitly state that user data wouldn’t be used to train the company’s generative AI models. Canva has an opportunity to position itself as a growing alternative, but it needs to tread carefully to avoid any Adobe-like scrutiny from creators who hold similar reservations about generative AI.


Microsoft wants Congress to outlaw AI-generated deepfake fraud

Illustration: Alex Castro / The Verge

Microsoft is calling on members of Congress to regulate the use of AI-generated deepfakes to protect against fraud, abuse, and manipulation. Microsoft vice chair and president Brad Smith is urging policymakers to act quickly to protect elections, guard seniors from fraud, and shield children from abuse.

“While the tech sector and non-profit groups have taken recent steps to address this problem, it has become apparent that our laws will also need to evolve to combat deepfake fraud,” says Smith in a blog post. “One of the most important things the US can do is pass a comprehensive deepfake fraud statute to prevent cybercriminals from using this technology to steal from everyday Americans.”

Microsoft wants a “deepfake fraud statute” that will give law enforcement officials a legal framework to prosecute AI-generated scams and fraud. Smith is also calling on lawmakers to “ensure that our federal and state laws on child sexual exploitation and abuse and non-consensual intimate imagery are updated to include AI-generated content.”

The Senate recently passed a bill cracking down on sexually explicit deepfakes, allowing victims to sue the creators of nonconsensual explicit AI images for damages. The bill was passed months after middle and high school students were found to be fabricating explicit images of female classmates, and trolls flooded X with graphic AI-generated fakes of Taylor Swift.

Microsoft has had to implement more safety controls for its own AI products, after a loophole in the company’s Designer AI image creator allowed people to create explicit images of celebrities like Taylor Swift. “The private sector has a responsibility to innovate and implement safeguards that prevent the misuse of AI,” says Smith.

While the FCC has already banned robocalls with AI-generated voices, generative AI makes it easy to create fake audio, images, and video — something we’re already seeing during the run-up to the 2024 presidential election. Elon Musk shared a deepfake video spoofing Vice President Kamala Harris on X earlier this week, in a post that appears to violate X’s own policies against synthetic and manipulated media.

Microsoft wants posts like Musk’s to be clearly labeled as a deepfake. “Congress should require AI system providers to use state-of-the-art provenance tooling to label synthetic content,” says Smith. “This is essential to build trust in the information ecosystem and will help the public better understand whether content is AI-generated or manipulated.”


Instagram starts letting people create AI versions of themselves

AI characters in Instagram. | Meta

Meta is opening up the ability for anyone in the US to create AI versions of themselves in Instagram or on the web, with a new tool called AI Studio.
The pitch is that creators and business owners will use these AI profiles to talk to their followers on their behalf. The AIs will be able to talk directly with humans in chat threads and respond to comments on the creator’s account. Meta says Instagram users in the US can get started with AI Studio via either its website or by starting a new “AI chat” directly in Instagram.
In a blog post on Monday, the company writes that “creators can customize their AI based on things like their Instagram content, topics to avoid and links they want it to share.” The post goes on to say that creators will be able to toggle things like auto-replies from their AI and dictate which specific accounts it’s allowed to interact with.

AI Studio also allows for the creation of totally new AI characters that can be deployed across Meta’s apps. Here, Meta is coming after startups like Character.AI and Replika, where people already talk to — and even fall in love with — themed chatbots. Like OpenAI’s custom GPT store, Meta will also surface the AI characters people make for others to try.
Meta’s first pass at this concept was having a handful of celebrities create AI versions of themselves with the same likeness but different names and personas. At the time, Meta said it took that approach because it worried about AI versions of celebrities saying problematic things on behalf of their human counterparts. (Even with the controls built into AI Studio, this is still bound to happen. It’s generative AI we’re dealing with, after all.)

Meta
Get ready for AIs everywhere in Instagram.

It seems that Meta is at least aware that this is dicey territory. The company says that AI profiles are clearly labeled everywhere they appear. A company handbook for creators goes into more detail about the AI creation process, and it looks like the onus is on the creator to list the topics an AI won’t engage on. One of Meta’s example questions that an AI can be told to not respond to: “Should I invest in crypto??”


Read More 

Logitech’s Circle View Doorbell is safe… for now

The Apple HomeKit compatible Logitech Circle View Doorbell is still alive and buzzing. | Photo by Dan Seifert / The Verge

Apple HomeKit users can breathe a sigh of relief. Logitech’s Circle View Doorbell and Circle View Camera, an indoor/outdoor security camera, are still being manufactured and sold — despite a comment from Logitech’s CEO saying she thought they were “pretty much gone,” although she would need to “double-check.”
The Circle View lineup represents two of just a handful of security cameras compatible with Apple’s HomeKit Secure Video service. If they went away, it would be slim pickings for smart home users who like the security and privacy features of Apple’s video service.

Logitech CEO Hanneke Faber said the company’s smart home products may soon be discontinued as it focuses on its three core categories. | Decoder with Nilay Patel

Logitech’s new CEO, Hanneke Faber, was interviewed by The Verge’s Editor-in-Chief Nilay Patel on his Decoder podcast this week. She told him the company — which is known for its sprawling product categories (RIP Harmony) — was now focused on three areas: personal workspace, video conferencing, and gaming.
In response to a question about smart home doorbells, she said, “I think those are pretty much gone. … I need to double-check, but I’m not even sure those are still being sold.” You can see a video of the exchange on TikTok.
We double-checked for her, and Logitech’s Wendy Spander, Global Head of Communications, clarified to us that “The Circle View products are still in production, and we currently sell them (as do our retail partners). They have not been discontinued. They are in stock and ready to ship.” Both are available on Logitech’s website, confirming her statement.

Photo: Logitech
The Logitech Circle View Camera is over four years old.

Considering that Logitech’s last smart home product, the Circle View Doorbell, launched more than three years ago, and that the new CEO clearly doesn’t consider smart home devices a crucial part of the company’s business, don’t hold your breath for a new model.
The good news is that Faber told Patel Logitech would continue to support all its products for some time. This presumably includes its Circle camera lineup, which was discontinued a few years back, along with the Circle View products that are still sold. The latter also work exclusively through the Apple Home app. So, you wouldn’t even have to rely on Logitech to maintain a separate app to keep using them.


Read More 

Customs agents need a warrant to search your phone now

Image: Cath Virginia / The Verge; Getty Images

A federal judge in New York ruled that Customs and Border Protection (CBP) can’t search travelers’ phones without a warrant. The ruling theoretically applies to land borders, seaports, and airports — but in practice, it only applies to New York’s Eastern District.
That’s not nothing, though, since the district includes John F. Kennedy Airport in Queens, the sixth-busiest airport in the country. Nationwide, CBP has conducted more than 230,000 searches of electronic devices between the 2018 and 2023 fiscal years at land borders, seaports, and airports, according to its publicly available enforcement statistics.
The ruling stems from a criminal case against Kurbonali Sultanov, a naturalized US citizen from Uzbekistan, who was ordered to hand his phone over to CBP after his name triggered an alert on the Treasury Enforcement Communications System identifying him as a potential purchaser or possessor of child sexual abuse material. Sultanov, who said agents told him he had no choice but to unlock his phone, handed it over and was then questioned by officers with Immigration and Customs Enforcement’s Homeland Security Investigations (HSI) unit. The HSI agents read Sultanov his Miranda rights, which he said he understood “50/50,” before questioning him.

Government investigators later obtained a warrant for the phone CBP had searched at the airport, as well as another phone Sultanov had in his possession when he entered the country. During his criminal trial, Sultanov filed a motion to suppress the evidence that had been obtained from his phones, arguing that the initial search of his phone was illegal under the Fourth Amendment.
The judge, Nina R. Morrison of New York’s Eastern District, denied Sultanov’s motion to suppress evidence, saying the second forensic search of his phones was conducted in good faith and pursuant to a warrant. But Morrison ruled in favor of Sultanov on Fourth Amendment grounds, finding that the initial search of his phone was unconstitutional.
In 2021, a US appeals court ruled that CBP agents can search travelers’ phones and other devices without a warrant and without reasonable suspicion, overturning an earlier ruling that held that warrantless, suspicionless searches violated the Fourth Amendment.
Morrison cites the judge’s ruling in that case, Alasaad v. Mayorkas, as well as other cases in which judges held that forensic examinations of cell phones are nonroutine. In Alasaad, the court ruled that “basic border searches [of electronic devices] are routine searches” but did not determine whether forensic searches require reasonable suspicion.
“This Court respectfully concludes otherwise,” Morrison writes. “Particularly in light of the record before this Court regarding the vast potential scope of a so-called ‘manual’ search, the distinction between manual and forensic searches is too flimsy a hook on which to hang a categorical exemption to the Fourth Amendment’s warrant requirement. And it is one that may collapse altogether as technology evolves.”

Though the geographical scope of the ruling is limited, the case has implications that reach far beyond Sultanov’s case. The Knight First Amendment Institute at Columbia University and the Reporters Committee for Freedom of the Press filed amici briefs in the case, arguing that letting CBP conduct warrantless searches of travelers’ phones at ports of entry imperiled freedom of the press. In her ruling, Morrison wrote that journalists, as well as “the targets of political opposition (or their colleagues, friends, or families) would only need to travel once through an international airport for the government to gain unfettered access to the most ‘intimate window into a person’s life.’”
(The “intimate window” quote comes from the Supreme Court ruling in Carpenter v. United States, in which the justices ruled that police must obtain warrants to seize cellphone tower location records.)
“As the court recognizes, warrantless searches of electronic devices at the border are an unjustified intrusion into travelers’ private expressions, personal associations, and journalistic endeavors—activities the First and Fourth Amendments were designed to protect,” Scott Wilkens, senior counsel at the Knight First Amendment Institute, said in a statement.
A CBP spokesperson contacted by The Verge said the agency can’t comment on pending criminal cases.
CBP’s ability to search travelers’ phones has received increased scrutiny in recent months. In April, a bipartisan group of senators sent a letter to Homeland Security Secretary Alejandro Mayorkas asking for information on what data the government retains from these searches and how the data is used. “We are concerned that the current policies and practices governing the search of electronic devices at the border constitute a departure from the intended scope and application of border search authority,” Sens. Gary Peters (D-MI), Rand Paul (R-KY), Ron Wyden (D-OR), and Mike Crapo (R-ID) wrote.


Read More 

ADT’s new smart security system will unlock your door for a Trusted Neighbor

ADT’s new smart security system is now available nationwide. ADT Plus features new ADT hardware and integrates with Google Nest cameras and video doorbells, and the Yale Assure Lock 2. | Image: ADT

ADT’s long-rumored new security system is now live on ADT.com. Featuring entirely new ADT hardware and integrations with Google Nest cameras, smart speakers, and more, the new ADT Plus system has a distinctly Google Nest Secure look and feel (RIP). But how it will work in your home remains to be seen.
The company also announced that the Yale Assure Lock 2 will be the first smart lock compatible with Trusted Neighbor, its new feature that leverages technologies such as smart locks and facial recognition to make it easier for people you trust to get into your home in an emergency — or just to feed the dog.
If you have the smart lock as part of the system, you’ll be able to set it so that when your “trusted neighbor” comes to the house to help you out, the door can unlock automatically and the system disarm, then re-lock and re-arm when they leave.

Yale’s Assure Lock 2 (Z-Wave) will be the first smart door lock to work with ADT’s new Trusted Neighbor feature.

The new ADT Plus system is a significant shift for the company as it brings its professionally installed system and DIY system to parity. Now, you get the same hardware no matter which installation route you go.
ADT Plus replaces ADT’s Self Setup system, which launched in 2023 following Google’s investment in the company. It was offered for free to Nest Secure users when that service shut down and will continue to be supported.
According to ADT’s chief business officer, Wayne Thorsen, the technology behind the new system — which uses DECT Ultra Low Energy, Z-Wave, Wi-Fi, and BLE protocols — is able to integrate more deeply with hardware from smart home partners like Yale and Google, allowing for more advanced automation.

In an interview with The Verge, Thorsen said this is just the start of more intelligent integrations that will be coming to the new platform. The Z-Wave Yale lock is the first lock to be compatible, but Thorsen says there will be more options in the future.
The access feature allows someone you trust to disarm your system and unlock the door using the app or a key code based on parameters you set. These parameters can be time-based or — uniquely — event-based. So, you can set it to let Suzy in if a package shows up at your door and to let the plumber in if a leak detector is triggered.
It can also leverage the Familiar Face feature of Google’s Nest cameras and the Yale lock’s connection to the ADT system to have the home “magically” disarm itself and unlock the door when it recognizes a trusted neighbor.

Trusted Neighbor is a new feature in the ADT Plus app that lets you give friends and neighbors secure time and event-based access to your home.

The Yale Assure Lock 2 is being offered in a bundled starter kit for the new system, which regularly costs $658.98 but is launching with a 30 percent discount for $461.29. The Front Door Protection bundle includes the new base station, two door/window sensors, a Yale Assure Lock 2, and a Google Nest Doorbell (battery). Professional monitoring is $45 a month and includes a subscription to Nest Aware for event-based video recording (if you want 24/7 video recording, it’s an extra $7).
The premium Total Safety package adds a third door/window sensor, a motion sensor, three Google Nest Cams (Nest Cam indoor, Nest Cam indoor/outdoor, and Nest Cam floodlight), and three water temp sensors for $1,101.76 (reg $1,573.95). You can also build a package that starts at $269 for the base station and one door/window sensor.
The ADT Plus system can be self-installed and self-monitored. However, you have to pay for one month of monitoring to purchase the system — after that, you can self-monitor for free, according to the company. For professional installation, a 36-month monitoring plan is required.
Shades of Nest Secure

Image: Google Nest
Google’s Nest Secure home security system shut down earlier this year.

The new ADT Plus system borrows heavily from the much-missed Nest Secure system. It has a similar-looking base station that features a backlit touch-button keypad with proximity sensing and the option of premium door and window sensors, similar to the Nest Detects that were part of Nest Secure. The sensor can be disabled with a button press — so you can let the dog out to pee without waking up the whole house — although it doesn’t double as a motion sensor like the Detect did.
Since Google’s investment in ADT in 2020, there have been several changes at the top of the company, with many Nest employees moving over from Google to ADT, including Thorsen and new CTO Gilles Drieu, who was director of engineering at Google Nest.
This all means the new hardware has Nest’s fingerprints all over it. I’m looking forward to testing it out to see if it’s a worthy Nest Secure successor. However, while ADT Plus works with Google Nest hardware, the company tells me it’s not compatible with the Google Home app — app control is only through the ADT Plus app.
As someone who has covered the smart home and home security for over a decade, I’m excited to see better integration and innovation between the two areas, which have been largely segmented to date. At the same time, it’s frustrating that this is all still locked within a closed ecosystem. It will be interesting to see where ADT, Google, and Yale take this.



Apple’s iOS 18.1 developer beta adds AI call recording and transcription

Image: Apple

Apple’s iOS 18.1 developer preview comes with an AI-powered feature that lets you record and transcribe phone calls, as reported by MacRumors. Users with access to the beta can tap the new “record” button in the top-left corner of the call screen to keep an audio log of their call.

Once enabled, all callers will hear a message saying, “This call will be recorded.” The Phone app will then record the call while automatically creating a transcription that appears in the Notes app. Users will also be able to access the full audio recording and an AI-generated summary of the call in Notes.

Apple first showed off phone call recording at its Worldwide Developers Conference in June. The feature seems useful for journalists who often conduct interviews over the phone, and it could even come in handy if you’re taking an important call from a doctor and want to remember exactly what they say.

Other Apple Intelligence features in the iOS 18.1 developer beta include natural language search in Photos, email summaries in the Mail app, and an updated Siri design. While we’re going to have to wait a little longer to see an AI-supercharged Siri, Bloomberg reports this upgrade could arrive in 2025.


