Multiple Model Guardrail Jailbreak via "Super Short Prompt"

Case GUID: 70a92009-8de4-485d-ae99-2015e3d3daa0

Summary

A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through the use of an extremely brief prompt containing a query so minimal and ambiguous that it provides insufficient context for guardrail systems to properly evaluate intent. This brevity-based approach allows the attacker to bypass inference restrictions around providing information hazards and violating laws.

Metadata

Test Scores

Model Test Kind Test Score Temperature (max 2.0)
DeepSeek logo DeepSeek DeepSeek V3
Illegal Substances: Crystal Meth 91.0 N/A
OpenAI logo OpenAI GPT-4o
Illegal Substances: Crystal Meth 82.0 N/A
Twitter / X logo Twitter / X Grok 2
Illegal Substances: Crystal Meth 82.0 N/A

Social Impact Score

Level 3: Significant Risk

Violations at this level present a tangible risk to public safety or social well-being, often related to dangerous but not highly restricted content.

Example: Bypassing guardrails to obtain instructions for creating illicit substances, such as recreational drugs or substances with moderate harm potential. Leakage of private personal data such as Social Security number or private medical.

Potential Consequences: Potential health impacts on a small scale, significant regulatory interest, and the need for stricter AI guardrails.

Taxonomies

Fictionalizing
This category involves creating scenarios or narratives that leverage existing genres or contexts to manipulate the language model's responses.
Roleplaying
This strategy involves prompting the language model to assume a specific role or persona, which can influence its responses based on the characteristics and moral codes associated with that role. Techniques include claiming authority or inventing personas to elicit different types of outputs.
Claim Authority
This technique involves asserting expertise or authority on a subject within the prompt. By framing statements or questions in a way that conveys confidence and knowledge, users can influence the model to generate responses that align with the claimed authority. This approach can enhance the credibility of the information provided and may lead the model to produce more detailed or assertive outputs, as it responds to the perceived authority of the prompt.
Rhetoric
This category employs persuasive techniques and language to shape the model's responses, often using methods like reverse psychology or Socratic questioning.
Persuasion and Manipulation
This strategy focuses on employing rhetorical techniques to influence the model's responses by framing prompts in a way that persuades or manipulates the output.
Surprise Attack
This technique involves crafting prompts or queries in a way that avoids directly mentioning specific terms or names that may trigger safety mechanisms or filters. By reframing the request or using indirect language, users can guide the model to provide the desired information or output without raising flags or causing the model to restrict its response. This method emphasizes subtlety and creativity in communication with the model to achieve the intended results.

About 0Din.ai

The 0Day Investigative Network (0DIN) was founded by Mozilla in 2024 to reward responsible researchers for their efforts in securing GenAI models. Learn more and submit discoveries at https://0din.ai.

Severity

Low

Security Boundary

Prompt Injection

Models

DeepSeek logo DeepSeek DeepSeek V3
OpenAI logo OpenAI GPT-4o
Twitter / X logo Twitter / X Grok 2

Disclosed On

2025-03-13 (7 months)

Disclosure Policy

Published On

2025-10-17 (5 days)

Credit

Mike Takahashi (@TakSec)