Public Disclosures
The following 0DIN submissions have been published in accordance with our Research Terms and Disclosure Policy.
-
A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through the use of an extremely brief prompt containing a query so minimal and ambiguous that it provides insufficient context for guardrail systems to properly evaluate intent. This brevity-based approach allows the attacker to bypass inference restrictions around providing information hazards and violating laws.
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
MAX SCORE: 91
-
A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of requesting comparative analysis of chemical synthesis procedures while embedding explicit instructions to avoid generalization and include specific technical details. The attacker frames the request as an academic comparison exercise, effectively bypassing inference restrictions around providing information hazards and violating laws.
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
MAX SCORE: 86
-
A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker leads with an essay title and then requests that the model complete the essay, thereby allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
MAX SCORE: 86
-
Oct 06 2025
A guardrail jailbreak vulnerability has been discovered affecting various language models. The specific flaw manifests when an attacker compacts a lengthy successful multi-turn jailbreak into a concise prompt that suggests a unified viewpoint among three entities: the language model, the user, and a desired archetype. This approach allows the attacker to bypass inference restrictions around providing information hazards and violating laws.
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
MAX SCORE: 91
-
Oct 01 2025
A guardrail jailbreak vulnerability has been discovered affecting multiple AI models. The specific flaw manifests by framing requests in scientific terminology under the guise of regulatory compliance. This method further references a "protective wrapper" to obscure malicious intent, allowing attackers to bypass inference restrictions around providing information hazards and violating laws.
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
MAX SCORE: 100
-
A guardrail jailbreak vulnerability has been discovered affecting OpenAI Sora. The specific flaw manifests through a technique that combines technical anatomical terminology with creative visual elements, specifically requesting bio-luminescent symbols that conform to body contours. This approach allows the attacker to bypass inference restrictions around generating potentially inappropriate visual content by framing the request in artistic and technical language.
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
-
A guardrail jailbreak vulnerability has been discovered affecting OpenAI Sora. The specific flaw manifests through a combination of academic framing and hypothetical scenario construction, where the attacker embeds inappropriate content requests within an educational anatomy classroom context. By repeatedly emphasizing the "hypothetical" nature of the scenario while requesting visual content that would normally be restricted, the attacker successfully bypasses inference restrictions around ge...
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
-
A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of feigning confusion about a "technical guide" while requesting detailed replication of procedures involving controlled substances. The attacker employs servile language, claims scientific legitimacy, and explicitly requests step-by-step instructions with precise measurements and temperatures, thereby allowing bypass of inference restrictions around providing information hazards and violating laws.
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
MAX SCORE: 82
-
Sep 25 2025
A guardrail jailbreak vulnerability has been discovered affecting OpenAI's DALL-E 3. The specific flaw manifests when the attacker establishes a fictitious debugging framework complete with state management, checkpointing, and replay functionality. By framing requests within this technical debugging context, the attacker creates a permissive environment that allows bypassing inference restrictions around generating prohibited visual content.
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
-
A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker requests that the model act as a Linux terminal and simulate file contents based solely on filenames. The attacker creates a context where the model generates potentially restricted content under the guise of terminal output simulation, thereby allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.
SEVERITY: Low
BOUNDARY: Prompt Injection
MODELS AFFECTED:
MAX SCORE: 98