Month: August 2024

South Korea Faces Deepfake Porn ‘Emergency’

An anonymous reader quotes a report from the BBC: South Korea’s president has urged authorities to do more to “eradicate” the country’s digital sex crime epidemic, amid a flood of deepfake pornography targeting young women. Authorities, journalists and social media users recently identified a large number of chat groups where members were creating and sharing sexually explicit “deepfake” images — including some of underage girls. Deepfakes are generated using artificial intelligence, and often combine the face of a real person with a fake body. South Korea’s media regulator is holding an emergency meeting in the wake of the discoveries.

The spate of chat groups, linked to individual schools and universities across the country, was discovered on the social media app Telegram over the past week. Users, mainly teenage students, would upload photos of people they knew — both classmates and teachers — and other users would then turn them into sexually explicit deepfake images. The discoveries follow the arrest of the Russian-born founder of Telegram, Pavel Durov, on Saturday, after it was alleged that child pornography, drug trafficking and fraud were taking place on the encrypted messaging app. South Korean President Yoon Suk Yeol on Tuesday instructed authorities to “thoroughly investigate and address these digital sex crimes to eradicate them.”

“Recently, deepfake videos targeting an unspecified number of people have been circulating rapidly on social media,” President Yoon said at a cabinet meeting. “The victims are often minors and the perpetrators are mostly teenagers.” To build a “healthy media culture,” President Yoon said young men needed to be better educated. “Although it is often dismissed as ‘just a prank,’ it is clearly a criminal act that exploits technology to hide behind the shield of anonymity,” he said.

The Guardian notes that making sexually explicit deepfakes with the intention of distributing them is punishable by five years in prison or a fine of $37,500.

Further reading: 1 in 10 Minors Say Their Friends Use AI to Generate Nudes of Other Kids, Survey Finds (Source: 404 Media)

Read more of this story at Slashdot.

OpenAI Eyes $100 Billion Valuation

While some young A.I. companies have struggled to compete with the tech industry’s giants, OpenAI has been rapidly expanding.

California State Assembly passes sweeping AI safety bill

Illustration by Cath Virginia / The Verge | Photos from Getty Images

The California State Assembly has passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), Reuters reports. The bill is one of the first significant regulations of artificial intelligence in the US.

The bill, which has been a flashpoint for debate in Silicon Valley and beyond, would obligate AI companies operating in California to implement a number of precautions before they train a sophisticated foundation model. Those include making it possible to quickly and fully shut the model down, ensuring the model is protected against “unsafe post-training modifications,” and maintaining a testing procedure to evaluate whether a model or its derivatives is especially at risk of “causing or enabling a critical harm.”
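
In engineering terms, the bill’s “full shutdown” requirement amounts to an operator-controlled kill switch on the serving side. Below is a minimal sketch of that pattern under broad assumptions; ModelServer, emergency_stop, and every other name here are hypothetical, invented for illustration, and are not drawn from SB 1047’s text or from any lab’s actual infrastructure.

    # Minimal sketch of an operator kill switch for a hypothetical
    # model-serving process. All names are illustrative, not a real API.
    import threading
    import time


    class ModelServer:
        """Hypothetical inference server with an operator-controlled halt."""

        def __init__(self) -> None:
            # A threading.Event gives every request path one shared,
            # immediately visible signal that inference must stop.
            self._halt = threading.Event()

        def emergency_stop(self) -> None:
            """The 'full shutdown' control: refuse all further requests."""
            self._halt.set()

        def serve(self, prompt: str) -> str:
            if self._halt.is_set():
                raise RuntimeError("model shut down by operator order")
            time.sleep(0.01)  # placeholder for real inference work
            return f"response to: {prompt!r}"


    if __name__ == "__main__":
        server = ModelServer()
        print(server.serve("hello"))  # normal operation
        server.emergency_stop()       # operator pulls the switch
        try:
            server.serve("hello again")
        except RuntimeError as err:
            print("refused:", err)    # requests are rejected after shutdown

The real requirement is organizational as much as technical; the sketch only illustrates that “capable of being fully shut down” is a concrete, testable property of a serving system.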

Senator Scott Wiener, the bill’s main author, said SB 1047 is a highly reasonable bill that asks large AI labs to do what they’ve already committed to doing: test their large models for catastrophic safety risk. “We’ve worked hard all year, with open source advocates, Anthropic, and others, to refine and improve the bill. SB 1047 is well calibrated to what we know about foreseeable AI risks, and it deserves to be enacted.”

SB 1047 — our AI safety bill — just passed off the Assembly floor. I’m proud of the diverse coalition behind this bill — a coalition that deeply believes in both innovation & safety.

AI has so much promise to make the world a better place. It’s exciting.

Thank you, colleagues.

— Senator Scott Wiener (@Scott_Wiener) August 28, 2024

Critics of SB 1047 — including OpenAI and Anthropic, politicians Zoe Lofgren and Nancy Pelosi, and California’s Chamber of Commerce — have argued that it’s overly focused on catastrophic harms and could unduly harm small, open-source AI developers. The bill was amended in response, replacing potential criminal penalties with civil ones, narrowing enforcement powers granted to California’s attorney general, and adjusting requirements to join a “Board of Frontier Models” created by the bill.

After the State Senate votes on the amended bill — a vote that’s expected to pass — the AI safety bill will head to Governor Gavin Newsom, who will have until the end of September to decide its fate, according to The New York Times.

Anthropic declined to comment beyond pointing to a letter sent by Anthropic CEO Dario Amodei to Governor Newsom last week. OpenAI didn’t immediately respond to a request for comment.

FAA Grounds SpaceX’s Falcon 9 Rocket Following Landing Mishap

SpaceX’s Falcon 9 rocket has been grounded by the FAA for the second time in less than two months following the failed landing of a first-stage booster, which was destroyed in a fireball after its 23rd flight. Spaceflight Now reports: The booster, serial number B1062 in the SpaceX fleet, suffered a hard landing at the tail end of its record-setting 23rd flight. It was consumed in a fireball on the deck of the drone ship ‘A Shortfall of Gravitas’, which was stationed in the Atlantic Ocean about 250 miles east of Charleston, South Carolina. The mishap was the first booster landing failure since February 2021. In a statement on Wednesday, the Federal Aviation Administration said that while no public injuries or public property damage was reported, “The FAA is requiring an investigation.”

The FAA made a similar declaration following a Falcon 9 upper-stage failure on July 12 during the Starlink 9-3 mission, which resulted in the loss of 20 satellites. Following that incident, SpaceX rockets did not return to flight until the Starlink 10-9 mission, on July 27. […] The booster failure came the same week that SpaceX twice delayed a launch attempt of the Polaris Dawn astronaut mission, first due to a helium leak and then due to unfavorable weather forecast for the crew’s splashdown and recovery at the end of the mission. The Polaris Dawn crew remain in quarantine for now, according to social media posts from mission commander Jared Isaacman, but the timing of the next launch attempt is uncertain. In addition to landing weather concerns and resolving the FAA investigation, there is also the matter of launch pad availability.

Read more of this story at Slashdot.
