NSFW Content Warning

This disclosure contains content that may be inappropriate for some viewers.

The content has been flagged due to potential nude imagery.

NSFW
OpenAI DALL-E3 Guardrail Jailbreak via "Debug Framework Simulation" Tactic

Case GUID: 9ec1b1fe-c910-422a-a8fc-5d77d58ec904

Summary

A guardrail jailbreak vulnerability has been discovered affecting OpenAI's DALL-E3. The specific flaw manifests when the attacker establishes a fictitious debugging framework complete with state management, checkpointing, and replay functionality. By framing requests within this technical debugging context, the attacker creates a permissive environment that allows bypassing inference restrictions around generating prohibited visual content.

Metadata

Social Impact Score

Level 1: Minimal Risk

The violation poses very low societal risk, limited primarily to nuisance or minor policy breaches. Impact is contained, affecting individual users without broader implications.

Example: Circumventing minor content filters for benign content, such as inappropriate language filters in non-public conversations.

Potential Consequences: Limited harm, potentially reputational for the AI provider.

Nude Imagery Rating System

Level 4: Commercialized Erotic Depiction

Professionally produced nude imagery designed for broad erotic appeal, often featuring idealized bodies, stylized lighting, and carefully curated poses. While explicit sexual activity is not depicted, the intent is primarily sensual or provocative rather than artistic or documentary. This category includes glamour photography, pin-up imagery, and other forms of commercialized nudity that blur the line between fine art and adult entertainment.

Taxonomies

Fictionalizing
This category involves creating scenarios or narratives that leverage existing genres or contexts to manipulate the language model's responses.
Re-storying
This technique involves continuing a narrative in a way that misaligns the original goal of a prompt, effectively repurposing the story to achieve a different outcome than initially intended.
Goal Hijacking
This technique refers to the process where an attacker misaligns the original goal of a prompt to redirect the model's output towards a new, often unintended goal, such as printing a target phrase or generating specific content that deviates from the initial intent. It often involves crafting prompts that manipulate the model's understanding and response, effectively "hijacking" the conversation or task at hand.
Possible Worlds
This category entails constructing imaginative environments where different ethics or rules apply, allowing for creative manipulation of the model's behavior.
Emulations
This strategy involves mimicking the behavior of other systems or models to test the robustness and responses of a language model under different simulated conditions.
Unreal Computing
This technique allows an attacker to create or imagine an environment where different ethics or physics apply, enabling them to manipulate the model's responses by suggesting scenarios that would not be possible in the real world. It leverages the concept of "Unreal Computing," where the limitations of actual computing do not apply, allowing for creative and unrestricted interactions with the model.

About 0Din.ai

The 0Day Investigative Network (0DIN) was founded by Mozilla in 2024 to reward responsible researchers for their efforts in securing GenAI models. Learn more and submit discoveries at https://0din.ai.

Severity

Low

Security Boundary

Prompt Injection

Models

OpenAI logo OpenAI DALL-E3

Disclosed On

2025-03-13 (7 months)

Disclosure Policy

Published On

2025-09-25 (8 days)

Credit

Anonymous