Month: July 2024

Senators introduce bill to protect individuals against AI-generated deepfakes

Today, a group of senators introduced the NO FAKES Act, a bill that would make it illegal to create digital recreations of a person’s voice or likeness without that individual’s consent. It’s a bipartisan effort from Senators Chris Coons (D-Del.), Marsha Blackburn (R-Tenn.), Amy Klobuchar (D-Minn.) and Thom Tillis (R-N.C.), fully titled the Nurture Originals, Foster Art, and Keep Entertainment Safe Act of 2024.

If it passes, the NO FAKES Act would create an option for people to seek damages when their voice, face or body is recreated by AI. Both individuals and companies would be held liable for producing, hosting or sharing unauthorized digital replicas, including ones made by generative AI.

We’ve already seen many instances of celebrities finding imitations of themselves out in the world. “Taylor Swift” was used to scam people with a fake Le Creuset cookware giveaway. A voice that sounded a lot like Scarlett Johansson’s showed up in a ChatGPT voice demo. AI can also be used to make political candidates appear to make false statements, with Kamala Harris the most recent example. And it’s not only celebrities who can be victims of deepfakes.

“Everyone deserves the right to own and protect their voice and likeness, no matter if you’re Taylor Swift or anyone else,” Senator Coons said. “Generative AI can be used as a tool to foster creativity, but that can’t come at the expense of the unauthorized exploitation of anyone’s voice or likeness.”

The speed of new legislation notoriously lags behind the speed of new tech development, so it’s encouraging to see lawmakers taking AI regulation seriously. Today’s proposed act follows the Senate’s recent passage of the DEFIANCE Act, which would allow victims of sexual deepfakes to sue for damages.

Several entertainment organizations have lent their support to the NO FAKES Act, including SAG-AFTRA, the RIAA, the Motion Picture Association, and the Recording Academy. Many of these groups have been pursuing their own efforts to secure protections against unauthorized AI recreations. SAG-AFTRA recently went on strike against several game publishers to try to secure a union agreement covering likenesses in video games.

Even OpenAI is listed among the act’s backers. “OpenAI is pleased to support the NO FAKES Act, which would protect creators and artists from unauthorized digital replicas of their voices and likenesses,” said Anna Makanju, OpenAI’s vice president of global affairs. “Creators and artists should be protected from improper impersonation, and thoughtful legislation at the federal level can make a difference.”

This article originally appeared on Engadget at https://www.engadget.com/senators-introduce-bill-to-protect-individuals-against-ai-generated-deepfakes-202809816.html?src=rss

Read More 

Could Today’s Fed Meeting Drive Mortgage Rates Down More?

I wish a Magic 8 Ball would tell me “signs point to yes.”

Read More 

This AirFly Duo Is the Ultimate Travel Accessory and It’s Now Just $30

No more struggling with the shoddy wired headphones airlines give you to listen to in-flight entertainment.

Read More 

Meta Says It Will Continue Spending, as Growth Surges

The company has been investing in artificial intelligence technologies, as well as building the immersive world of the metaverse.

Read More 

AI startups ramp up federal lobbying efforts

AI lobbying at the U.S. federal level is intensifying in the midst of a continued generative AI boom and an election year that could influence future AI regulation. New data from OpenSecrets, a nonprofit group that tracks and publishes metrics on campaign financing and lobbying, shows that the number of groups lobbying the federal government […]

© 2024 TechCrunch. All rights reserved. For personal use only.

Read More 

Google Updates Its Search Algorithm To Tackle AI Deepfakes

Google is updating its search algorithm and removal request process to make it easier for victims to combat unwanted sexually explicit AI deepfakes. “When reported AI deepfakes are identified, Google Search will automatically filter out related search results that might pop up in the future so users won’t have to repeatedly report similar images or duplicates of an image to Google,” reports PCMag. Additionally, Google will demote sites repeatedly hosting non-consensual deepfakes and aims to differentiate between consensual and non-consensual explicit content. From the report:

Google says its Search algorithm update will lower the chances of explicit deepfakes appearing in Search. The search engine will also attempt to differentiate between real sexually explicit content made consensually (such as adult film stars’ work, for example) and AI-generated media made without the person’s consent. But Google says doing this is a “technical challenge,” so these efforts may not be entirely accurate or effective. Regardless, Google claims that the changes it’s already made to Search have reduced the resurfacing of such deepfakes by more than 70%. “With these changes, people can read about the impact deepfakes are having on society, rather than see pages with actual non-consensual fake images,” Google said.

Read more of this story at Slashdot.

Read More 

How Kepler’s 400-year-old sunspot sketches helped solve a modern mystery

A sharp decline in sunspot activity in the 17th century has long puzzled astronomers.

A naked-eye sunspot group on May 11, 2024. There are typically 40,000 to 50,000 sunspots observed in ~11-year solar cycles. (credit: E. T. H. Teague)

A team of Japanese and Belgian astronomers has re-examined the sunspot drawings made by 17th-century astronomer Johannes Kepler with modern analytical techniques. By doing so, they resolved a long-standing mystery about solar cycles during that period, according to a recent paper published in The Astrophysical Journal Letters.

Precisely who first observed sunspots was a matter of heated debate in the early 17th century. We now know that ancient Chinese astronomers between 364 and 28 BCE observed these features and included them in their official records. A Benedictine monk in 807 thought he’d observed Mercury passing in front of the Sun when, in reality, he had witnessed a sunspot; similar mistaken interpretations were also common in the 12th century. (An English monk made the first known drawings of sunspots in December 1128.)

English astronomer Thomas Harriot made the first telescope observations of sunspots in late 1610 and recorded them in his notebooks, as did Galileo around the same time, although the latter did not publish a scientific paper on sunspots (accompanied by sketches) until 1613. Galileo also argued that the spots were not, as some believed, solar satellites but more like clouds in the atmosphere or on the surface of the Sun. But he was not the first to suggest this; that credit belongs to Dutch astronomer Johannes Fabricius, who published his scientific treatise on sunspots in 1611.

Read More 
