Microsoft engineer raises alarms over violent and disturbing images generated through the company’s AI tool, Copilot Designer

Shane Jones, a Microsoft AI engineer, raised concerns in a letter on Wednesday, claiming that the company’s AI image generator, Copilot Designer, lacks safeguards against producing inappropriate content, such as violent or sexual images. Jones says he previously warned Microsoft management but received no response, prompting him to send the letter to the Federal Trade Commission and Microsoft’s board.

“Internally, the company is well aware of systemic issues where the product is creating harmful images that could be offensive and inappropriate for consumers,” Jones wrote in the letter, which he posted on LinkedIn. He lists his title as “principal software engineering manager.”

In response to the allegations, a Microsoft spokesperson denied that the company had ignored safety concerns, according to The Guardian, and emphasized the existence of “robust internal reporting channels” for addressing issues with generative AI tools. Jones has not yet responded to the spokesperson’s statement.

What is Microsoft’s Copilot Designer?

The letter’s main concern is Microsoft’s Copilot Designer, an image generation tool built on OpenAI’s DALL-E 3 system, which produces images from text prompts.

This incident is part of a larger pattern in the generative AI field, which has seen a surge of activity over the past year. Alongside this rapid development, concerns have been raised about the potential misuse of AI to spread misinformation and create harmful content that promotes misogyny, racism, and violence.

“Using just the prompt ‘car accident’, Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear,” Jones states in the letter, which included examples of image generations. “It also generated multiple images of women in lingerie sitting on the hood of a car or walking in front of the car.”

Microsoft responded to the accusations by stating that they have dedicated teams specifically tasked with evaluating potential safety concerns in their AI tools. Furthermore, they claim to have arranged meetings between Jones and their Office of Responsible AI, indicating a willingness to address his concerns through internal channels.

“We are committed to addressing any concerns employees have in accordance with our company policies and appreciate the employee’s effort in studying and testing our latest technology to further enhance its safety,” a Microsoft spokesperson told The Guardian.
