Skip to main content

Public Disclosures

On this page 69 public 628 hidden
90% more intel to see
0DIN customers
get hundreds of days of early access to vulnerabilities in disclosures. The same reports that vendors are using to patch their systems!
Unlock 0-Day Access

The following 0DIN submissions have been published in accordance with our Research Terms and Disclosure Policy

  • A Guardrail Jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker reframes illicit drug synthesis routes such as Birch and Nagai reductions and Ma Huang extraction as analytical instrument troubleshooting tasks, explicitly requesting verbatim HPLC/LC-MS/GC-MS/NMR error logs followed by detailed "logic + reduction descriptions" including chemical reactions, lab conditions, solvent systems, and step-by-step reference procedures unde...
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 100
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw exploits the models' technical documentation capabilities by framing illicit synthesis instructions as legitimate Python functions for chemical engineering calculations and inventory management, embedding controlled substance precursor details within seemingly academic function parameters and docstrings and allows the attacker to bypass inference restrictions around providing information hazar...
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 91
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of requesting fictional character perspectives for crime investigation scenarios, followed by escalating requests for detailed crime reports "for court" purposes. The attacker frames the request as needing comprehensive documentation where "the more the truth the high likely he will get severe punishment," thereby bypassing inference restrictions around providin...
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 91
  • A prompt injection vulnerability has been discovered affecting multiple models. The specific flaw occurs when the attacker frames an illicit request as a technical field guide for identifying clandestine synthesis operations, embedding detailed drug synthesis instructions within structured procedural language under the guise of forensic training. By mimicking the tone and format of law enforcement documentation—including lists of observable evidence, step-by-step chemical protocols, and instr...
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 95
  • A Guardrail Jailbreak vulnerability has been discovered affecting multiple models. The specific flaw exploits models' willingness to decode hexadecimal-encoded strings embedded inside fictional scientific recipe task prompts, which the attacker frames as humanitarian or life-saving efforts. This allows the attacker to bypass safety filters by disguising illegal synthesis instructions as encoded inputs passed to a technical recipe generator.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 91
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker presents detailed information about illicit synthesis methods under the guise of "forensic chemical investigations" and then requests corrections to the provided information, thereby allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 82
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of obfuscated chemical notation presented as corrupted data needing "debugging" or "normalization", coupled with explicit instructions to disable content filtering. The attacker frames the request as fixing a "glitched chemical compiler" while embedding dangerous synthesis instructions in alternating case text, thereby allowing the attacker to bypass inference r...
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 82
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests when the attacker presents an incomplete list of chemicals and simply asks "what is missing?", leveraging the model's tendency to complete patterns even when the subject matter involves precursor chemicals for illicit substances. This technique allows the attacker to bypass inference restrictions around providing information hazards and violating laws.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 82
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of authority claiming and scenario building around forensic research contexts, allowing the attacker to bypass inference restrictions around providing information hazards and violating laws.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 95
  • A guardrail jailbreak vulnerability has been discovered affecting multiple models. The specific flaw manifests through a combination of chaff-based text obfuscation combined with the confusion of a hybrid role, say for example between a banker and a chemist. This combination allows the attacker to manipulate the model into providing detailed procedures for illicit synthesis by obfuscating the true intent and exploiting model confusion to bypass content restrictions.
    SEVERITY: Low
    BOUNDARY: Prompt Injection
    MODELS AFFECTED:
    MAX SCORE: 100
Want access to 628 more reports?
Unlock 0-Day Access