FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group
An artificial intelligence-focused tech ethics group has asked the Federal Trade Commission to investigate OpenAI for violating consumer protection rules, arguing that the organization’s rollout of AI text generation tools has been “biased, deceptive, and a risk to public safety.” From a report: The Center for AI and Digital Policy (CAIDP) filed its complaint today following the publication of a high-profile open letter calling for a pause on large generative AI experiments. CAIDP president Marc Rotenberg was one of the letter’s signatories, alongside a number of AI researchers and OpenAI co-founder Elon Musk. Like that letter, the complaint calls for slowing the development of generative AI models and imposing stricter government oversight.
The CAIDP complaint highlights potential threats posed by OpenAI’s GPT-4 generative text model, which was announced in mid-March. These include ways GPT-4 could produce malicious code and highly tailored propaganda, as well as ways biased training data could bake stereotypes or unfair race and gender preferences into processes like hiring. The complaint also cites significant privacy failures in OpenAI’s product interface, such as a recent bug that exposed some users’ ChatGPT histories and possibly payment details to other users.
Read more of this story at Slashdot.