Month: September 2024
California Gov. Newsom vetoes bill SB 1047 that aims to prevent AI disasters
California Gov. Gavin Newsom has vetoed bill SB 1047, which aims to prevent bad actors from using AI to cause “critical harm” to humans. The California state assembly passed the legislation by a margin of 41-9 on August 28, but several organizations including the Chamber of Commerce had urged Newsom to veto the bill. In his veto message on Sept. 29, Newsom said the bill is “well-intentioned” but “does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions – so long as a large system deploys it.”
SB 1047 would have made the developers of AI models responsible for implementing safety protocols to prevent catastrophic uses of their technology. Those included preventive measures such as testing and outside risk assessment, as well as an “emergency stop” capability that could completely shut down the AI model. Penalties would have started at a minimum of $10 million for a first violation and risen to $30 million for subsequent infractions. However, the bill was revised to eliminate the state attorney general’s ability to sue AI companies over negligent practices before a catastrophic event occurs. Under the revised bill, companies would only be subject to injunctive relief and could be sued only if their model caused critical harm.
The law would have applied to AI models that cost at least $100 million and use 10^26 FLOPS (floating-point operations, a measure of computation) during training. It also would have covered derivative projects in instances where a third party invested $10 million or more in developing or modifying the original model. Any company doing business in California would have been subject to the rules if it met the other thresholds. Addressing the bill’s focus on large-scale systems, Newsom said, “I do not believe this is the best approach to protecting the public from real threats posed by the technology.” The veto message adds:
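For a sense of scale, the 10^26 FLOPS threshold can be compared against the widely used 6ND rule of thumb for estimating dense-transformer training compute (roughly 6 FLOPs per parameter per training token). The model size and token count below are illustrative assumptions, not figures from the bill or from any specific model:

```python
def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer
    using the common 6 * N * D rule of thumb (N = parameters,
    D = training tokens)."""
    return 6 * params * tokens

SB1047_THRESHOLD = 1e26  # compute threshold named in the bill

# Hypothetical model: 1 trillion parameters, 15 trillion training tokens
flops = training_flops(1e12, 15e12)
print(f"{flops:.2e} FLOPs")        # 9.00e+25
print(flops >= SB1047_THRESHOLD)   # False: just under the threshold
```

By this back-of-the-envelope estimate, even a trillion-parameter training run can land just under the bill’s compute trigger, which is part of why critics argued the threshold was a blunt proxy for risk.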
By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 – at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good.
The earlier version of SB 1047 would have created a new department called the Frontier Model Division to oversee and enforce the rules. Instead, the bill was altered ahead of a committee vote to place governance in the hands of a Board of Frontier Models within the Government Operations Agency. Its nine members would have been appointed by the state’s governor and legislature.
The bill faced a complicated path to the final vote. SB 1047 was authored by California State Sen. Scott Wiener, who told TechCrunch: “We have a history with technology of waiting for harms to happen, and then wringing our hands. Let’s not wait for something bad to happen. Let’s just get out ahead of it.” Notable AI researchers Geoffrey Hinton and Yoshua Bengio backed the legislation, as did the Center for AI Safety, which has been raising the alarm about AI’s risks over the past year.
“Let me be clear – I agree with the author – we cannot afford to wait for a major catastrophe to occur before taking action to protect the public,” Newsom said in the veto message. The statement continues:
California will not abandon its responsibility. Safety protocols must be adopted. Proactive guardrails should be implemented, and severe consequences for bad actors must be clear and enforceable. I do not agree, however, that to keep the public safe, we must settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself.
SB 1047 drew heavy-hitting opposition from across the tech space. AI researcher Fei-Fei Li and Meta Chief AI Scientist Yann LeCun both critiqued the bill for limiting the potential to explore new uses of AI. The trade group representing tech giants such as Amazon, Apple and Google said SB 1047 would limit new developments in the state’s tech sector. Venture capital firm Andreessen Horowitz and several startups also questioned whether the bill placed unnecessary financial burdens on AI innovators. Anthropic and other opponents of the original bill pushed for amendments that were adopted in the version of SB 1047 that passed California’s Appropriations Committee on August 15. This article originally appeared on Engadget at https://www.engadget.com/ai/california-gov-newsom-vetoes-bill-sb-1047-that-aims-to-prevent-ai-disasters-220826827.html?src=rss
Method
My thanks to Method Financial for sponsoring last week at Daring Fireball. Method Financial’s authentication technology allows instant access to a consumer’s full liability portfolio using just personal information and consent, eliminating the need for usernames and passwords.
With just a few lines of code, Method’s APIs enable real-time, read-write, and frictionless access to all consumer liability data with integrated payment rails. Method leverages integrations with over 15,000 financial institutions to stream up-to-date, high-fidelity data from users’ accounts and to facilitate payment to them.
Method has helped 3 million consumers connect over 24 million liability accounts at companies like Aven, SoFi, Figure, and Happy Money, saving borrowers millions in interest and providing access to billions of dollars in personalized loans.
★
California’s Governor Just Vetoed Its Controversial AI Bill
“California Governor Gavin Newsom has vetoed SB 1047, a high-profile bill that would have regulated the development of AI,” reports TechCrunch.
The bill “would have made companies that develop AI models liable for implementing safety protocols to prevent ‘critical harms’.”
The rules would only have applied to models that cost at least $100 million and use 10^26 FLOPS (floating point operations, a measure of computation) during training.
SB 1047 was opposed by many in Silicon Valley, including companies like OpenAI, high-profile technologists like Meta’s chief AI scientist Yann LeCun, and even Democratic politicians such as U.S. Congressman Ro Khanna. That said, the bill had also been amended based on suggestions by AI company Anthropic and other opponents.
In a statement about today’s veto, Newsom said, “While well-intentioned, SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it. I do not believe this is the best approach to protecting the public from real threats posed by the technology.”
“Over the past 30 days, Governor Newsom signed 17 bills covering the deployment and regulation of GenAI technology…” according to a statement from the governor’s office, “cracking down on deepfakes, requiring AI watermarking, protecting children and workers, and combating AI-generated misinformation… The Newsom Administration will also immediately engage academia to convene labor stakeholders and the private sector to explore approaches to use GenAI technology in the workplace.”
In a separate statement the governor pointed out California “is home to 32 of the world’s 50 leading AI companies,” and warned that the bill “could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good…”
Read more of this story at Slashdot.
Apple’s rumored smart display may arrive in 2025 running new homeOS
Apple is planning to debut a new operating system called homeOS with its long-rumored smart displays, the first of which is expected to arrive as soon as 2025, according to Bloomberg’s Mark Gurman. Reports of a HomePod-like device with a display have been swirling for over a year, and Gurman said just this summer that Apple is working on a tabletop smart display equipped with a robotic arm that can tilt and rotate the screen for better viewing. In his latest report, Gurman says there are two versions in the works: a low-end display that will offer the basics, like FaceTime and smart home controls, and the high-end robotic variant that’ll cost upwards of $1,000.
We’ll reportedly see the cheaper version first — possibly next year — followed by the high-end display. Gurman previously said the robotic smart display could be released in 2026 at the earliest. You won’t have to wait for the premium model to get a taste of Apple’s vision for home AI, though. According to Gurman, Apple Intelligence will be a key part of the experience for both devices. The new homeOS will be based on Apple TV’s tvOS, he notes.
This article originally appeared on Engadget at https://www.engadget.com/home/smart-home/apples-rumored-smart-display-may-arrive-in-2025-running-new-homeos-212401853.html?src=rss