Month: July 2024

Google is trying to delete and bury explicit deepfakes from search results

Google has new policies aimed at eliminating explicit deepfakes from appearing in search results.

Google is dramatically upping its efforts to combat the appearance of explicit images and videos created with AI in search results. The company wants to make it clear that AI-produced non-consensual deepfakes are not welcome in its search engine.

The actual images may be prurient or otherwise offensive, but regardless of the details, Google has a new approach to removing this type of material and burying it far from page-one results when erasure isn’t possible. Notably, Google has experimented with using its own AI to generate images for search results, but those pictures don’t include real people, let alone anything racy. Google partnered with experts on the issue, and with people who have been targets of non-consensual deepfakes, to make its response system more robust.

Google has allowed individuals to request the removal of explicit deepfakes for a while, but the proliferation and improvement of generative AI image creators means there’s a need to do more. The request for removal system has been streamlined to make it easier to submit requests and speed up the response. When a request is received and confirmed as valid, Google’s algorithms will also work to filter out any similar explicit results related to the individual. 

The victim won’t have to manually comb through every variation of a search request that might pull up the content, either. Google’s systems will automatically scan for and remove any duplicates of that image. And it won’t be limited to one specific image file. Google will proactively put a lid on related content. This is particularly important given the nature of the internet, where content can be duplicated and spread across multiple platforms and websites. This is something Google already does when it comes to real but non-consensual imagery, but the system will now cover deepfakes, too.
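Google hasn’t said how its duplicate-scanning system is built. Purely as an illustration, near-duplicate image detection is commonly based on perceptual hashing, where visually similar images produce hashes that differ in only a few bits. The sketch below uses a simple “average hash” over a downscaled grayscale image; every name in it is hypothetical:

```python
def average_hash(pixels):
    """Compute a simple perceptual 'average hash' of a grayscale image.

    pixels: 2D list of grayscale values (0-255), e.g. an 8x8 downscale.
    Returns a bit string; visually similar images yield similar strings.
    """
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if p > avg else "0" for p in flat)

def hamming_distance(h1, h2):
    """Count the bit positions where two equal-length hashes differ."""
    return sum(a != b for a, b in zip(h1, h2))

def is_near_duplicate(img_a, img_b, threshold=5):
    """Flag two images as near-duplicates if their hashes differ in
    fewer than `threshold` bit positions."""
    return hamming_distance(average_hash(img_a), average_hash(img_b)) < threshold
```

A production system would hash at larger sizes and index the hashes for fast lookup; the point is only that exact file matches aren’t required to catch copies.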

The method also shares some similarities with recent efforts by Google to combat unauthorized deepfakes, explicit or otherwise, on YouTube. Previously, YouTube would just label such content as created by AI or potentially misleading, but now, the person depicted or their lawyer can submit a privacy complaint, and YouTube will give the video’s owner a couple of days to remove it themselves before YouTube reviews the complaint for merit.

Deepfakes Buried Deep

Content removal isn’t 100% effective, as Google well knows. That’s why the effort against explicit deepfakes in search results also includes an updated ranking system. The new ranking pushes back against search terms likely to pull up explicit deepfakes. Google Search will now try to lower the visibility of explicit fake content, and of the websites associated with spreading it, especially when the search includes someone’s name.

For instance, say you were searching for a news article about how a specific celebrity’s deepfakes went viral and how they testified to lawmakers about the need for regulation. Google Search will attempt to make sure you see those news stories and related articles about the issue, not the deepfakes under discussion.

Google’s not alone

Given the complex and evolving nature of generative AI technology and its potential for abuse, addressing the spread of harmful content requires a multifaceted approach. And Google is hardly unique in facing the issue or working on solutions. Explicit deepfakes have appeared on Facebook, Instagram, and other Meta platforms, and the company has updated its policies as a result, with Meta’s Oversight Board recently recommending that the company change its guidelines to directly cover AI-generated explicit content and improve its appeals process.

Lawmakers are responding to the issue as well, with New York State’s legislature passing a bill targeting AI-generated non-consensual pornography as part of its “revenge porn” laws. At the national level this week, the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024 (NO FAKES Act) was introduced in the U.S. Senate to deal with both explicit content and non-consensual use of deepfake visuals and voices. Similarly, Australia’s legislature is working on a bill to criminalize the creation and distribution of non-consensual explicit deepfakes.

Still, Google can already point to some success in combating explicit deepfakes. The company claims its early tests of these changes have reduced the appearance of explicit deepfake images by more than 70%. Google hasn’t declared victory over explicit deepfakes quite yet, however.

“These changes are major updates to our protections on Search, but there’s more work to do to address this issue, and we’ll keep developing new solutions to help people affected by this content,” Google product manager Emma Higham explained in a blog post. “And given that this challenge goes beyond search engines, we’ll continue investing in industry-wide partnerships and expert engagement to tackle it as a society.”

You might also like

YouTube will now take down AI deepfakes of you if you ask
Google’s Circle to Search could soon protect you from fake AI-generated images
Deepfake threats are on the rise – new research shows worrying rise in dangerous new scams

US Copyright Office calls for better legal protections against AI-generated deepfakes

The US Copyright Office has published a report recommending new and improved protections against digital replicas. “We have concluded that a new law is needed,” the department’s report states. “The speed, precision, and scale of AI-created digital replicas calls for prompt federal action. Without a robust nationwide remedy, their unauthorized publication and distribution threaten substantial harm not only in the entertainment and political arenas, but also for private individuals.”
The Copyright Office’s assessment reveals several areas where current laws fall short of addressing digital replicas. It describes the state level as “a patchwork of protections, with the availability of a remedy dependent on where the affected individual lives or where the unauthorized use occurred.” Likewise, “existing federal laws are too narrowly drawn to fully address the harm from today’s sophisticated digital replicas.”
Among the report’s recommendations are safe harbor provisions to encourage online service providers to quickly remove unauthorized digital replicas. It also notes that “everyone has a legitimate interest in controlling the use of their likenesses, and harms such as blackmail, bullying, defamation, and use in pornography are not suffered only by celebrities,” meaning laws should cover all individuals and not just the famous ones.
The timing of this publication is fitting, considering that the Senate has been making notable moves this month to enact new legal structures around the use of digital replications and AI-generated copycats. Last week, the legislators passed the DEFIANCE Act to offer recourse for victims of sexual deepfakes. Today saw the introduction of the NO FAKES Act to more broadly allow any individual to sue for damages for unauthorized use of their voice or likeness.
Today’s analysis is the first of several parts of the Copyright Office’s investigation into AI. With plenty more questions to explore around the use of AI in art and communication, the agency’s ongoing findings should prove insightful. Hopefully legislators and courts alike will continue to take them seriously.

This article originally appeared on Engadget at https://www.engadget.com/us-copyright-office-calls-for-better-legal-protections-against-ai-generated-deepfakes-215259727.html?src=rss


Microsoft strips ads from Skype in a move toward “user-centric design”

Update also improves AI image features, adds OneAuth support on iOS.

If you’ve used Microsoft’s Skype in recent years, you’ve probably noticed that the user experience is less than ideal because of the pervasiveness of ads in the software. Fortunately, that’s going to change in a new update coming to all platforms in the near future.

In the latest release notes for Skype Insider build 8.125, product manager Irene Namuganyi writes, “We’re excited to announce that Skype is now ad-free! Our latest update removes all ads from Skype channels and the entire Skype platform, ensuring a smoother, decluttered, and more enjoyable user experience.”

The “today” section of the application, which previously carried ads, will now show just the relevant newsfeed content. There won’t be any ads in conversation views, either.


Are You Waiting on Student Loan Forgiveness? Check Your Email Tomorrow

The Department of Education is sending out updates on student loan forgiveness eligibility.


Apple Releases Safari Technology Preview 200 With Bug Fixes and Performance Improvements

Apple today released a new update for Safari Technology Preview, the experimental browser that was first introduced in March 2016. Apple designed Safari Technology Preview to allow users to test features that are planned for future release versions of the Safari browser.

Safari Technology Preview 200 includes fixes and updates for CSS, JavaScript, Rendering, and Web Extensions.

The current Safari Technology Preview release is compatible with machines running macOS Sonoma and the macOS Sequoia beta. Set to launch this fall, macOS Sequoia is the newest version of macOS that Apple is working on. The Safari Technology Preview browser numbering is sequential, so Apple has released 200 updates to its test browser in the last eight years.

The Safari Technology Preview update is available through the Software Update mechanism in System Preferences or System Settings to anyone who has downloaded the browser from Apple’s website. Complete release notes for the update are available on the Safari Technology Preview website.

Apple’s aim with Safari Technology Preview is to gather feedback from developers and users on its browser development process. Safari Technology Preview can run side by side with the existing Safari browser, and while it is designed for developers, it does not require a developer account to download and use.

This article, “Apple Releases Safari Technology Preview 200 With Bug Fixes and Performance Improvements” first appeared on MacRumors.com.


Bumble and Hinge Allowed Stalkers To Pinpoint Users’ Locations Down To 2 Meters, Researchers Say

An anonymous reader quotes a report from TechCrunch: A group of researchers said they found that vulnerabilities in the design of some dating apps, including the popular Bumble and Hinge, allowed malicious users or stalkers to pinpoint the location of their victims down to two meters. In a new academic paper, researchers from the Belgian university KU Leuven detailed their findings (PDF) when they analyzed 15 popular dating apps. Of those, Badoo, Bumble, Grindr, happn, Hinge and Hily all had the same vulnerability that could have helped a malicious user to identify the near-exact location of another user, according to the researchers. While none of those apps shares exact locations when displaying the distance between users on their profiles, they did use exact locations for the “filters” feature of the apps. Generally speaking, by using filters, users can tailor their search for a partner based on criteria like age, height, what type of relationship they are looking for and, crucially, distance.

To pinpoint the exact location of a target user, the researchers used a novel technique they call “oracle trilateration.” In general, trilateration, which for example is used in GPS, works by using three points and measuring their distance relative to the target. This creates three circles, which intersect at the point where the target is located. Oracle trilateration works slightly differently. The researchers wrote in their paper that the first step for the person who wants to identify their target’s location “roughly estimates the victim’s location,” for example, based on the location displayed in the target’s profile. Then, the attacker moves in increments “until the oracle indicates that the victim is no longer within proximity, and this for three different directions. The attacker now has three positions with a known exact distance, i.e., the preselected proximity distance, and can trilaterate the victim,” the researchers wrote.
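The researchers’ technique operates on geographic coordinates; purely as a simplified illustration on a flat 2D plane, the final step (recovering the position from three points known to lie at the same proximity distance from the victim) reduces to a small linear system. The function name below is hypothetical:

```python
def trilaterate(p1, p2, p3):
    """Recover a target's 2D position from three points known to lie at
    the same exact distance from it (the oracle's proximity radius).

    Subtracting the circle equations (x - xi)^2 + (y - yi)^2 = r^2
    pairwise cancels the quadratic terms and the shared radius r,
    leaving a 2x2 linear system in (x, y).
    """
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Linear system A @ [x, y] = b from subtracting circle 1 from 2 and 3.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = x2**2 - x1**2 + y2**2 - y1**2
    b2 = x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-12:
        raise ValueError("boundary points are collinear; probe another direction")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

For example, if the oracle’s boundary is found at (8, 4), (3, 9), and (-2, 4), the victim sits at (3, 4). Real coordinates would need a projection step first, which the sketch omits.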

“It was somewhat surprising that known issues were still present in these popular apps,” Karel Dhondt, one of the researchers, told TechCrunch. While this technique doesn’t reveal the exact GPS coordinates of the victim, “I’d say 2 meters is close enough to pinpoint the user,” Dhondt said. The good news is that all the apps that had these issues, and that the researchers reached out to, have now changed how distance filters work and are not vulnerable to the oracle trilateration technique. The fix, according to the researchers, was to round the exact coordinates to three decimal places, making them less precise.
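That fix lines up with basic geometry: one degree of latitude spans roughly 111 km, so the third decimal place corresponds to about 110 m of precision. A server-side fix in this spirit (hypothetical name, not the apps’ actual code) is nearly a one-liner:

```python
def coarsen(lat, lon, decimals=3):
    """Round coordinates before feeding them to a distance filter.

    At three decimal places, one step of latitude is roughly 111 m, so an
    oracle built on the filter can no longer resolve a user's position
    down to a couple of meters.
    """
    return round(lat, decimals), round(lon, decimals)
```

The filter then compares distances against the coarsened point, so repeated probing only ever reveals the rounded location.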

Read more of this story at Slashdot.


TikTok is one of Microsoft’s biggest AI cloud computing customers

A source told The Information that TikTok was paying Microsoft almost $20 million per month to access OpenAI’s models as of March, making up nearly a quarter of the revenue generated by its increasingly lucrative cloud division.
Microsoft’s cloud AI business was on track to earn $1 billion in annual revenue, according to The Information, but the report notes that TikTok may not need these services as heavily if it develops its own large language model (LLM).
Last year, my colleague Alex Heath reported that TikTok’s parent company ByteDance was “secretly using” OpenAI’s technology to create an LLM of its own:
This practice is generally considered a faux pas in the AI world. It’s also in direct violation of OpenAI’s terms of service, which state that its model output can’t be used “to develop any artificial intelligence models that compete with our products and services.” Microsoft, which ByteDance is buying its OpenAI access through, has the same policy.
Following that report, OpenAI suspended ByteDance’s account to investigate a potential violation of its developer license. At the time, ByteDance told CNN it was using the technology “to a very limited extent” to help create its own models.
Microsoft also has a multibillion-dollar investment deal making it OpenAI’s exclusive cloud provider, and has spent “several hundreds of millions of dollars” building a supercomputer to power ChatGPT. In its Q4 2024 earnings report released Tuesday, Microsoft revealed Azure revenue growth of 29 percent, just missing the 30 to 31 percent its last earnings release projected. CFO Amy Hood said that Microsoft anticipates Azure revenue growth of around 28 to 29 percent in Q1 2025.

