UK Will Legislate Against AI Risks in Next Year, Pledges Kyle
The UK will bring in legislation to safeguard against the risks of AI in the next year, technology secretary Peter Kyle has said, as he pledged to invest in the infrastructure that will underpin the sector’s growth. From a report: Kyle told the Financial Times’ Future of AI summit on Wednesday that Britain’s voluntary agreement on AI testing was “working, it’s a good code” but that the long-awaited AI bill would be focused on making such accords with leading developers legally binding. The legislation, which Kyle said would be presented to MPs in the current parliament, will also turn the UK’s AI Safety Institute into an arm’s-length government body, giving it “the independence to act fully in the interests of British citizens.”
At present, the body is a directorate of the Department for Science, Innovation and Technology. At the UK-organised AI safety summit last November, companies including OpenAI, Google DeepMind and Anthropic signed a “landmark” but non-binding agreement allowing partner governments to test their forthcoming large language models for risks and vulnerabilities before they were released to consumers. Kyle said that while he was “not fatalistic” about advancements in AI, “citizens need to know that we are mitigating the potential risks.”
Read more of this story at Slashdot.