Microsoft AI Engineer Warns the FTC about Copilot Designer Safety Concerns

A Microsoft AI engineer has warned the FTC about safety concerns with Copilot Designer. The engineer reportedly found that the tool generated disturbing images, and says his warnings have gone unaddressed by Microsoft, CNBC reports.

Copilot Designer Safety Concerns

According to a report from CNBC, a Microsoft engineer named Shane Jones has taken safety concerns regarding the company’s AI image generator to the Federal Trade Commission. Jones, who has been with Microsoft for six years, penned a letter to the FTC, alleging that Microsoft has “refused” to remove Copilot Designer despite numerous warnings about its potential to generate harmful images.

In his testing of Copilot Designer for safety issues, Jones discovered that the tool produced images depicting “demons and monsters,” as well as content related to abortion rights, teenagers with assault rifles, sexualized portrayals of women in violent scenarios, and underage substance use, as reported by CNBC.

Copilot Designer Reportedly Generated Images Featuring Disney Characters

Furthermore, Copilot Designer reportedly generated images featuring Disney characters such as Elsa from Frozen in settings like the Gaza Strip, as well as images of the character wearing an Israel Defense Forces uniform and holding a shield bearing the Israeli flag. The Verge reportedly produced similar images using the tool.

Jones had been attempting to alert Microsoft about DALL-E 3, the underlying model used by Copilot Designer, since December, according to CNBC. He raised the issues publicly in an open letter on LinkedIn, but Microsoft’s legal team allegedly contacted him to take the post down, a request he complied with.

Content of the Letter Obtained by CNBC

In the letter obtained by CNBC, Jones expressed frustration with Microsoft’s response, stating, “Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place. Again, they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device.’”

Microsoft’s Response to These Concerns

Responding to these concerns, Microsoft spokesperson Frank Shaw informed The Verge that the company is dedicated to addressing employee concerns in accordance with its policies. Shaw highlighted Microsoft’s established feedback mechanisms and internal reporting channels to investigate and address safety-related issues. He also mentioned that Microsoft has arranged meetings with product leadership and the Office of Responsible AI to evaluate these reports.

Jones Reportedly Wrote to a Group of US Senators in January

In January, Jones wrote to a group of US senators about his apprehensions after Copilot Designer generated explicit images of Taylor Swift. Microsoft CEO Satya Nadella condemned the images as “alarming and terrible” and pledged to enhance safety measures. Last month, Google temporarily disabled its own AI image generator following reports of racially insensitive and historically inaccurate images being generated.
