Microsoft Engineer Identifies Vulnerabilities in OpenAI's DALL-E 3 Image Generator

A Microsoft engineer identifies vulnerabilities in OpenAI's DALL-E 3 image generator, expressing concerns about potential abuse and calling for better AI risk reporting systems.

In early December, Shane Jones, a Microsoft AI engineering leader, revealed vulnerabilities in OpenAI's DALL-E 3 image generator that allowed users to bypass its safety measures and create explicit and violent images. Concerned about the potential for abuse, Jones urged OpenAI to withdraw DALL-E 3 from public use. He first reported his concerns to Microsoft, which advised him to raise the issue through OpenAI's standard channels. When Jones then posted about the vulnerability on LinkedIn, Microsoft's legal department demanded he delete the post without explanation. Jones has since called for a system that lets employees report and track AI risks without fear of retaliation.