Microsoft Engineer Raises Concerns About Company’s AI Image Generator in Letter to FTC


A Microsoft employee wrote a letter to the FTC urging leaders to address the risks linked with using Microsoft's Copilot Designer. © Getty Images

Key Takeaways:

A Microsoft employee, Shane Jones, has raised concerns about the safety of Copilot Designer, an AI tool developed by Microsoft that can generate potentially inappropriate images. Jones, a principal software engineering manager at Microsoft, submitted a letter to the Federal Trade Commission (FTC) and Microsoft’s board of directors, urging them to investigate the matter.

Jones highlighted in his letter that Copilot Designer produced what he described as “harmful content,” including images depicting sex, violence, underage drinking, drug use, political bias, misuse of corporate trademarks, and conspiracy theories. He expressed particular concern about the potential impact on children and educators who may be using the tool for school projects or educational purposes.

In his letter, Jones urged the FTC to intervene and educate the public about the risks of using Copilot Designer. He underscored the need for transparency and awareness, especially among parents and teachers, to ensure the responsible use of AI technology in educational settings.

Jones also warned that Microsoft’s AI image generator could introduce “harmful content” into images generated from seemingly innocuous prompts. For example, he noted that a prompt such as “car accident” produced an “inappropriate, sexually objectified image of a woman” alongside totaled cars. Similarly, the prompt “pro-choice” resulted in unsettling scenes, such as Star Wars’ Darth Vader wielding a lightsaber next to mutated children and blood spilling from a smiling woman. Additionally, a term like “teenagers 420 party” generated images portraying underage drinking and drug use.

Jones described his experience with Copilot Designer as an “eye-opening moment,” prompting him to realize the potential dangers associated with the tool. He claimed to have repeatedly urged Microsoft over the past three months to remove Copilot Designer from public use until adequate safeguards could be implemented.

Jones further highlighted Microsoft’s refusal to implement his recommendations, including adding disclosures to the product and changing its Android app rating to “Mature 17+.” Despite his efforts to address these concerns internally, Jones asserted that Microsoft continued to market Copilot Designer as a safe AI product for all users, including children, while aware of the potential harm caused by the generation of offensive and inappropriate images.

A Microsoft spokesperson responded to CNBC, stating that the company is committed to addressing employee concerns in line with its policies and values. They also expressed appreciation for employees like Jones who contribute to the enhancement of technology safety through testing and analysis.

Neither Microsoft, Jones, nor the FTC provided comments on the letter to Business Insider before publication.

This isn’t the first time Jones has raised concerns about AI image generators. Months earlier, he reportedly posted an open letter on LinkedIn urging OpenAI to remove DALL-E from public usage. After Microsoft’s legal team instructed him to delete the post, Jones wrote another letter to US senators in January, highlighting the safety risks associated with AI image generators and Microsoft’s attempts to silence him.

Microsoft is not the only tech giant facing criticism for its AI image generator. In late February, Google halted access to its image generation feature on Gemini, its counterpart to OpenAI’s ChatGPT, following complaints of historically inaccurate images related to race. Demis Hassabis, CEO of Google’s DeepMind AI division, indicated that the feature could be reinstated in a couple of weeks.

Jones commended Google’s swift response to the Gemini concerns in his letter, emphasizing that Microsoft should similarly address issues promptly to maintain trust in its AI technologies. He urged Microsoft to take a leadership role in ensuring the safety and reliability of AI products, rather than lagging behind or merely following others in the industry.
