AI think tank calls GPT-4 a risk to public safety

The Center for Artificial Intelligence and Digital Policy (CAIDP), an AI policy think tank, has filed a complaint with the Federal Trade Commission (FTC) seeking to halt OpenAI’s further commercial deployment of GPT-4. CAIDP alleges that OpenAI has engaged in deceptive and unfair practices in violation of Section 5 of the FTC Act. Marc Rotenberg, Founder and President of CAIDP, emphasized the FTC’s responsibility to investigate and prohibit such practices and called for scrutiny of OpenAI and of GPT-4’s compliance with federal guidance.

CAIDP argues that GPT-4 poses risks to privacy and public safety and perpetuates biases. The think tank pointed to statements in OpenAI’s GPT-4 System Card acknowledging the model’s potential to reinforce harmful stereotypes and biases, and claims that GPT-4 was released without an independent assessment of its risks. The FTC recently urged companies advertising AI products to put durable safeguards in place rather than relying solely on warnings or disclosures.

In its filing, CAIDP urges the FTC to investigate OpenAI and other operators of powerful AI systems, halt further commercial releases of GPT-4, and establish the safeguards needed to protect consumers, businesses, and the commercial marketplace. Merve Hickok, Chair and Research Director of CAIDP, highlighted the importance of addressing bias and deception in AI products to protect businesses, consumers, and the public, and said the FTC is uniquely positioned to tackle this challenge.

The complaint coincides with a petition signed by prominent figures in the field of AI, including Elon Musk and Steve Wozniak, calling for a “pause” in the development of AI systems more powerful than GPT-4.
