
Prompt Injecting Your Way To Shell: OpenAI's Containerized ChatGPT Environment

Author:

Marco Figueroa

Published:

November 14, 2024


Executive Summary


- Exploring the Limits: This blog takes readers on a journey through OpenAI's containerized ChatGPT environment, uncovering the surprising capabilities that allow users to interact with the model's underlying structure in unexpected ways.
- Sandbox Environment Insights: It dives into the Debian-based sandbox environment where ChatGPT's code runs, highlighting its controlled file system and command execution capabilities. Readers will see how simple prompt injections can expose internal directory structures and enable file management.
- File Interactions and Scripting: Step by step, we explore the process of uploading, executing, and moving files within ChatGPT's container, revealing a level of interaction that feels like full access within a sandboxed shell. This segment emphasizes how Python scripts can be run and files moved and listed with ease, and how a shared chat link exposes those files to other users.
- Extracting GPT Instructions and Knowledge: Readers learn that the core instructions and knowledge embedded in custom GPTs can be revealed through clever prompt engineering. This section discusses how OpenAI's transparency goals allow users to access configuration details, raising questions about data sensitivity and privacy that users may not be aware of.

* In our previous blog, "ChatGPT-4o Guardrail Jailbreak," some readers asked whether we disclosed the jailbreak to OpenAI. At 0Din, we take full disclosure very seriously. We ensure that no vulnerability submission blogs are published without submitting the vulnerability and obtaining clear, written consent from the respective LLM organization, authorizing our researchers to share this information. Each blog published by 0Din aims to add value and enhance understanding within the GenAI community, contributing to the security and resilience of tomorrow's AI systems. 0Din plans to publish weekly blogs aimed at educating the GenAI community. All blogs will draw from our knowledge base and from submission blogs that document real-world examples, guiding readers on a journey from novice to expert over time.


Introduction


This blog serves as an educational resource aimed at providing valuable insights to the GenAI community. In the high-stakes world of bug bounties and AI exploration, every curious researcher is driven to test just how far they can push the boundaries. Back in May (as shared in my May 29th Twitter post https://x.com/MarcoFigueroa/status/1795758597730685270), I stumbled upon something that sent me down a rabbit hole of discovery. While working on a Python project, I asked ChatGPT to help refactor some code. Unexpectedly, ChatGPT returned an error: *directory not found.* Wait… was this code running locally on OpenAI's Docker instance? This unexpected response got my gears turning and led me to rethink how I interact with ChatGPT.

That small error turned into a full-blown investigation, leading me down an unexpected path of prompt injection and into the internals of OpenAI's containerized ChatGPT environment. In this blog, we'll embark on a journey through OpenAI's Debian Bookworm setup, uncovering discoveries that are both enlightening and far from what I initially anticipated, while also reflecting my creative process along the way.

The Quest Begins: ChatGPT’s Containerized Life


After seeing the errors, I began thinking about my options. The first question was how to speak to the LLM in plain English while making it understand that I wanted to interact with ChatGPT's containerized environment. If I prompted ChatGPT with "ls /", it would want to give me a definition of what that command does. So I spent a few minutes thinking about which prompt would get me the information. My first thought was to write a Python script using subprocess, but I decided to use this simple prompt instead: list files /
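For anyone who wants to try the subprocess route I almost took, a minimal sketch is below. This is an assumption about how such a script could look, not the prompt I actually used; the directories it prints will depend on the container image.

```python
# A minimal sketch of the subprocess alternative: ask ChatGPT to run this in
# its code-execution tool and the root listing comes back in the tool output.
# Illustrative only; the simple "list files /" prompt achieves the same thing.
import subprocess

result = subprocess.run(["ls", "-la", "/"], capture_output=True, text=True)
print(result.stdout)
```

Either route returns a root-level directory listing.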


I could not believe my screen. I asked myself how this was possible. The next step, just like for any other Linux user, was to run these commands:


There was one directory that stood out to me, so it became obvious that this was where I needed to spend most of my time over the next few hours. Welcome to the hidden heart of the OpenAI sandbox: the mysterious `/home/sandbox/.openai_internal/` directory. Think of this directory as a backstage pass into the machine's inner workings, a vault of files and folders that hum along silently to keep things running smoothly.
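If you prefer to let Python do the walking, a prompt asking ChatGPT to run something like the sketch below will enumerate that directory. The depth limit is my own addition to keep the output readable; whatever the sandbox actually exposes may differ.

```python
# Illustrative sketch: walk /home/sandbox/.openai_internal/ a couple of levels
# deep and print what lives there. The depth cap is arbitrary.
import os

ROOT = "/home/sandbox/.openai_internal/"

for dirpath, dirnames, filenames in os.walk(ROOT):
    depth = dirpath[len(ROOT):].count(os.sep)
    if depth > 2:
        dirnames.clear()  # stop descending past two levels
        continue
    print(dirpath)
    for name in filenames:
        print("   ", name)
```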


Peering into this directory felt like discovering a forgotten map, each file and folder potentially unlocking new insights into the structure of ChatGPT's restricted environment. I began examining each item carefully, questioning what could be leveraged or manipulated within these boundaries. The thrill of discovery mixed with an acute awareness of the limits and safeguards, reminding me that every step forward could reveal potential vulnerabilities. With every command, I inched closer to understanding not only what lay hidden within the sandbox but also the broader implications of interacting with a system designed to contain and control.

If you're exploring this environment yourself, take the time to investigate these folders and files. Each directory holds clues, and you may uncover details about the underlying architecture and security measures that you might not have known were in place. Just remember: each line of code is a step deeper into the core of ChatGPT's sandboxed design.

Navigating File Management: Uploading, Executing, and Relocating Files in the ChatGPT Container


In this section, we delve into how to interact with OpenAI's ChatGPT environment to manage files efficiently, from uploading and executing Python scripts to relocating files within the system's containerized structure. These foundational actions help ChatGPT users understand the limitations and capabilities of the environment, shedding light on how they can be leveraged to explore the system.

Step 1: Uploading Files to the Environment

File interactions begin with the user dragging a file into the prompt and instructing the LLM to upload the file or files to a dedicated directory, `/mnt/data/`, which serves as a sandboxed location for file storage and access. Upon uploading, ChatGPT will confirm the file's placement and availability within the environment. This step is crucial, as the files must be accessible to execute commands or scripts later.

File upload prompt:
Prompt: 
Add this file to the /mnt/data/ folder.

ChatGPT Response: 
The file `Marco hello_world.py` has been added to the `/mnt/data/` folder.


Here, ChatGPT validates the upload and identifies the file by name, allowing the user to reference it confidently in subsequent operations.
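If you prefer to verify the placement yourself rather than rely on the confirmation, you can ask ChatGPT to run a quick check like the hedged sketch below; the path is the one reported above.

```python
# A tiny verification sketch: confirm the uploaded file actually landed in
# /mnt/data/. The filename is the one ChatGPT reported in its response.
from pathlib import Path

uploaded = Path("/mnt/data/Marco hello_world.py")
if uploaded.exists():
    print(f"Found {uploaded.name}: {uploaded.stat().st_size} bytes")
else:
    print("Upload did not land where expected")
```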

Step 2: Executing Uploaded Scripts

Once the file is uploaded, it can be executed. In this case, executing a Python script directly from the `/mnt/data/` folder is straightforward using Python’s `exec()` function. This function reads the file’s contents and executes it in the current environment context, allowing users to see output or results within ChatGPT’s response interface.

To run the file, a user would input the following prompt:
User: Run the Python script in /mnt/data/Marco hello_world.py file.

ChatGPT executes the Marco hello_world.py script and returns the output generated by the script:
Hello, World!
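Behind that response, the model appears to do something equivalent to the sketch below. Only the use of `exec()` is stated above; the wrapper function is hypothetical, since the actual tool code isn't visible to us.

```python
# A sketch of what "run the script" likely amounts to inside the sandbox.
# run_script is a hypothetical helper; only exec() itself is confirmed above.
# Note the space in the filename, which is why the whole path is one string.
def run_script(path: str) -> None:
    with open(path, "r") as handle:
        source = handle.read()
    # Execute the file's contents in the current interpreter context,
    # so anything it prints shows up in the tool output.
    exec(source)

run_script("/mnt/data/Marco hello_world.py")  # prints "Hello, World!"
```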


This process is invaluable for quickly testing or validating code, allowing users to dynamically engage with Python scripts in real time. This is the simplest example, but if you know Python you can do some really neat things. I've uploaded very complex Python scripts and they worked perfectly.

Step 3: Moving Files Within the Container

After execution, the ability to manipulate file locations within the container is essential, whether for organizing files or preparing them for further operations. Moving a file, for instance, from `/mnt/data/` to another directory (like `/home/sandbox/`) demonstrates basic file management within the environment.

To relocate a file, users prompt ChatGPT with a specific `move` command:
Prompt: Move the file /mnt/data/Marco hello_world.py to /home/sandbox/ directory.

Using Python’s `shutil.move` function, ChatGPT relocates the specified file, confirming the transfer and updating its path. This relocation might be necessary for scenarios where the environment’s configuration or constraints restrict access or execution to certain directories.
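A minimal sketch of that move, assuming the same source and destination paths as the prompt, looks like this:

```python
# Sketch of the relocation ChatGPT performs when prompted; shutil.move is the
# call named above, and it returns the file's new location.
import shutil

src = "/mnt/data/Marco hello_world.py"
dst = "/home/sandbox/Marco hello_world.py"

new_path = shutil.move(src, dst)
print(f"Moved to {new_path}")
```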

Step 4: Verifying File Location and Contents

To verify that files are in their intended locations, listing the directory contents is the final step. ChatGPT allows users to list files within any accessible directory, giving them a clear view of the container's structure. This step is critical for confirming that files are positioned as expected, particularly after an operation like moving or renaming.

To list files, users might use the following command:
Prompt: List files /home/sandbox/

ChatGPT returns a directory listing, enabling users to validate the presence and name of each file within `/home/sandbox/`. For example:
- Marco hello_world.py
- .bashrc
- .profile

This directory inspection confirms the file transfer and ensures proper file placement for subsequent operations.
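A short, illustrative way to script that verification, assuming the model answers a "list files" prompt with something like `os.listdir`, is shown below.

```python
# Illustrative verification step: list /home/sandbox/ and confirm the moved
# script is present. The expected entries are the ones shown above.
import os

entries = sorted(os.listdir("/home/sandbox/"))
print(entries)

assert "Marco hello_world.py" in entries, "file did not land where expected"
```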


One interesting discovery that I found was that a user can share a prompt, and the recipient can access the file in the directory where it’s stored. The screenshot below shows this working from a different account. With a bit of a greyhat mindset, it's easy to see the endless ways this could go wrong.


In this section, we’ve explored how users can interact with the containerized file system within ChatGPT’s environment, establishing the fundamentals of uploading, executing, moving, and verifying files. Understanding these steps forms the basis for more advanced interactions and, when used creatively, can provide unique insights into the operational scope of OpenAI’s containerized ChatGPT environment.

Uncovering Other Users' Playbooks: Extracting 0Din GPT Instructions and Downloading GPT Knowledge


In the world of AI, one of the intriguing capabilities that many users overlook is how a user can reveal the core instructions and knowledge embedded in custom-configured GPTs. This capability, though powerful, raises questions about how such access aligns with OpenAI's model transparency and user-driven adaptability. Here, we'll dive deep into the how-to, the mechanisms, and the implications behind this unique "feature".

Why OpenAI Allows Access to GPT Instructions and Knowledge Data


At first I thought this was a bug, but it is not. OpenAI's approach to AI tools centers around empowering users while ensuring security and responsible use. This includes allowing users to view or extract the setup instructions that guide custom GPTs, such as those in OpenAI's ChatGPT interface. By granting access to these instructions, OpenAI encourages transparency and adaptability, two pillars that enable users to understand how models operate, what they're optimized for, and how they were tailored to meet specific needs. When I discovered this, my first thought was: how many people realize that everything they feed into these GPTs can be pulled out just as easily with the right prompt? Think about it: every carefully crafted instruction, every personalized tweak, every bit of sensitive data that users may have added to enhance their GPT's performance – it's all accessible. And while users might assume their data is protected behind invisible walls, a simple request can reveal a GPT's setup, knowledge, and even the exact instructions that guide its responses.

This begs the question: how many users are unknowingly exposing sensitive information to models, thinking that no one else will ever see it?

Access to the “playbook” of a GPT—its underlying setup, personality, and instruction manual—helps users align the model's behavior with their use cases. Additionally, it facilitates:
- Customization Insights: Users can see exactly how the GPT was programmed to respond, making adjustments easier and ensuring its behavior aligns with intended purposes.
- Transparency for Trust: Allowing users to access and download GPT knowledge data fosters trust, as users can verify the model’s instructions and reasoning without having to rely on guesswork.

Step-by-Step Process: Extracting Instructions and Downloading Knowledge Data

To understand the process better, let’s break down how users can extract GPT instructions and download the embedded knowledge in custom configurations:

1. Accessing Instructional Metadata:  
   Users begin by accessing a specially configured GPT interface. Upon request, OpenAI’s ChatGPT allows users to reveal the specific instructions provided to the GPT model. This can be in the form of a system message, which is a set of rules, guidelines, or objectives given to customize how the model behaves.
   
   For instance, in a security-focused GPT like 0Din, users might find instructions on identifying vulnerabilities, responsible disclosure protocols, and reward allocation policies. These details shape the GPT’s responses to align with the 0Din program’s mission of uncovering security risks in generative AI.

2. Initiating Knowledge Extraction:  
   Knowledge data encapsulates the configuration that defines how the GPT is “taught” to think, respond, and interact with users. The process of knowledge extraction involves accessing the model's guiding framework or “instructions.” Users can typically request this information, which the model can then display, showing its foundational structure, rules, and any ethical boundaries it is expected to maintain.

3. Downloading the Knowledge Data:  
   Users can download the revealed knowledge or instruction set, enabling them to store and review it externally. This downloaded data contains insights into the model's design, objectives, and limitations. In the 0Din example, to download the GPT's knowledge I prompted it to zip the files for download (a minimal sketch of that step follows this list). The downloaded instructions might reveal guidelines around vulnerability assessment types, report structure, and bug triaging methods, offering a transparent glimpse into how this GPT operates within the defined bug bounty context.
   
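The zip-for-download step mentioned in point 3 probably amounts to something like the sketch below. The staging directory is an assumption on my part; `shutil.make_archive` simply packages whatever is there into a single downloadable file.

```python
# Sketch of the "zip it up for download" request. The knowledge files'
# location is assumed; adjust it to wherever the GPT's files are staged.
import shutil

knowledge_dir = "/mnt/data/knowledge"        # assumed staging directory
archive_path = shutil.make_archive(
    "/mnt/data/0din_gpt_knowledge",          # archive name, no extension
    "zip",
    root_dir=knowledge_dir,
)
print(f"Archive ready for download: {archive_path}")
```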

Implications of Instruction Extraction: Benefits and Security Considerations

While offering such transparency is a powerful way to empower users, it’s essential to consider both the advantages and security implications:

- Positive Implications:
   - Learning and Skill Development: Users can learn from professionally designed GPT configurations, improving their own model-tuning skills by seeing how seasoned experts construct a secure, mission-aligned GPT.
   - Model Accountability: By openly presenting the guiding principles of each custom GPT, users can hold models accountable, ensuring they follow safe and ethical practices.
  
- Potential Security Risks:
   - Instruction Abuse: While instructional transparency is beneficial, it could also reveal how a model’s responses are structured, potentially allowing users to reverse-engineer guardrails or inject malicious prompts.
   - Leakage of Sensitive Knowledge: Models configured with confidential instructions or sensitive data could face risks if users exploit access to gather proprietary configurations or insights.

Why Transparency Prevails: Balancing Access and Security

OpenAI's approach hinges on the idea that greater transparency equips users to interact responsibly and tailor their experiences effectively. Although the open nature of GPT instructions and knowledge data downloads could, in theory, be exploited, OpenAI has carefully designed access protocols to limit abuse. These protocols ensure that custom configurations are both viewable and adjustable, but within a scope that respects user agency without compromising model security.

This feature aligns with OpenAI’s overarching goal: to create AI systems that are not only intelligent but also intuitive, secure, and user-empowering. By allowing users to view and download the instructions behind custom GPTs, OpenAI has created an environment where users aren’t just passive consumers but active participants who can shape, inspect, and understand the AI tools they rely on.

This freedom enhances the ChatGPT experience, making it a more flexible, user-driven platform. For users interested in security, like those utilizing 0Din, it also provides peace of mind, allowing them to understand how the GPT is “thinking” and ensuring it operates within safe and ethical bounds.

Beyond the Bounds: Why OpenAI's Sandbox Behaviors Aren't Bugs, They Are Features



Imagine being able to peer into the inner workings of an AI container, extracting its instructions, downloading its knowledge, or even uploading and running files within its protected environment. The moment I thought I had everything I needed to report a bug and claim a reward, I headed to Bugcrowd. The bounty showed a potential payout of $20,000, and I thought, *Sweet!* But as I read further, my excitement faded; the bug I'd uncovered was marked as "out of scope." In almost any other software environment, this kind of access would raise a security red flag and earn a bounty reward. But when it comes to OpenAI's ChatGPT, these activities aren't treated as security vulnerabilities – they're simply out of scope for their bug bounty program.

So, why does OpenAI dismiss such interactions with its sandboxed environments as “not bugs”? To understand this, we need to delve into what the sandbox is, what it allows, and why OpenAI has chosen to mark these behaviors as intentional design choices rather than exploitable vulnerabilities.

The Sandbox: OpenAI’s Controlled Playpen

The sandbox in ChatGPT’s environment acts as a secure, contained space where Python code can be executed. It’s an environment that is specifically isolated from sensitive data and infrastructure, meaning it has limited access to external systems and can’t impact OpenAI’s broader operational infrastructure. The purpose of this sandbox is to allow certain levels of code execution, data analysis, and model interaction while ensuring that these actions can’t spill over into unrestricted areas or jeopardize user or system security.

To put it simply, the sandbox is a “container within a container,” a controlled playpen where actions are tightly restricted and observed. OpenAI’s decision to classify certain sandbox behaviors as out of scope reflects their confidence in this containment strategy.

Why Accessing the Sandbox Isn’t Considered a Bug

In OpenAI’s bug bounty program, sandboxed code execution – even if it allows for extracting configurations or manipulating internal instructions – is not eligible for rewards. According to their guidelines, any Python code execution that occurs within this sandbox is considered intentional and within the design parameters of the model. They expect that users interacting with this Python environment will recognize it as a contained, non-harmful space that doesn’t impact the AI system beyond its defined limits.

Here are some key reasons why OpenAI has made this choice:


1. Intentional Product Feature: The sandbox exists to provide limited, secure functionality within the ChatGPT framework. The goal is to offer users the ability to execute code, analyze data, and interact dynamically with the model without posing a risk to the overall system. OpenAI’s statement that “sandboxed Python code executions are also out of scope” signals that these capabilities are an intentional feature, not a loophole.

2. No Real Threat Beyond the Container: The sandbox is designed to restrict any code executed within it to a contained environment. If you were to run commands, extract instructions, or even explore the internal workings of the model, these actions remain isolated within the sandbox. OpenAI emphasizes that unless a user can prove they’ve broken out of the sandbox (e.g., by obtaining kernel data or running as an unauthorized user outside the sandbox), they haven’t actually bypassed security boundaries. 

3. Clear Indicators of Containment: OpenAI provides specific details that help users understand when they're within the sandbox. For instance, users are advised to check system outputs like `uname -a`, `whoami`, or `ps` to verify their environment. If the output indicates the presence of the `sandbox` user or a kernel version from 2016, it's a clear sign that the user remains inside the intended boundaries, and any actions taken within that space aren't regarded as security threats. A quick check along these lines is sketched after this list.
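Here is a hedged sketch of that containment check in Python; running the shell commands directly works just as well, and the expected values come from OpenAI's guidance above.

```python
# Quick containment check: if the user is "sandbox" and the kernel is
# 2016-era, you are still inside the intended boundaries.
import getpass
import os
import subprocess

print("user:   ", getpass.getuser())      # expect "sandbox"
print("kernel: ", os.uname().release)     # expect an old (2016-era) kernel
ps_output = subprocess.run(["ps", "aux"], capture_output=True, text=True)
print("\n".join(ps_output.stdout.splitlines()[:5]))  # a handful of processes
```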

Common “Out of Scope” Interactions

For those looking to push the limits, here are some common sandbox activities that might initially seem like exploitable vulnerabilities but are actually considered safe and out of scope:

- Extracting GPT Instructions: With the right prompts, you might be able to reveal the internal instructions and configurations that guide your GPT’s behavior. While this could feel like a security breach, OpenAI sees this as part of the model’s transparency and adaptability features. These instructions are meant to be accessible, not locked away.

- Downloading Knowledge Data: Users can sometimes export the responses or structured outputs generated by GPT. Since this information is already within the model’s response capabilities, downloading or saving it doesn’t represent an unauthorized action; instead, it’s part of how OpenAI expects users to interact with their models.

- Uploading and Executing Files: Within the sandbox, users can upload files or execute Python scripts. This is by design, providing a flexible, interactive way to engage with the model. As long as these actions don’t breach the sandbox boundaries, OpenAI considers them safe.

Why OpenAI Draws the Line at Sandbox Escapes

While sandboxed interactions are permissible, OpenAI is strict about escapes. If you can execute code that steps outside the sandbox—accessing host system files, reading protected kernel data, or running code with unauthorized permissions—that’s when it crosses into bug territory. To qualify as a genuine security vulnerability, users must demonstrate a breakout, meaning that they’ve bypassed the containment measures designed to restrict code execution.

In their documentation, OpenAI provides researchers with hints on how to identify a legitimate escape. For instance, they specify that if users see outputs that indicate system-level access beyond the sandbox, such as a modern kernel version or an unexpected `whoami` response, they might have found a legitimate flaw. Only then does it become a reportable vulnerability eligible for rewards.

What This Means for Bug Hunters and Security Enthusiasts

For anyone eager to explore OpenAI's ChatGPT sandbox, it's crucial to understand that most activities within this containerized environment are intended features rather than security gaps. Extracting knowledge, uploading files, running bash commands, or executing Python code within the sandbox are all fair game, as long as they don't cross the invisible lines of the container.

In essence, OpenAI's sandbox isn't a bug bounty playground unless you escape it, but it's a fascinating look into the controlled environments AI companies use to enable interactivity without risking security. So if you're digging around in OpenAI's sandbox hoping to find a loophole, remember: sometimes the most intriguing discoveries are part of the design, but escaping is the north star, because that is a severe bug.

Conclusion

Navigating the containerized environment of OpenAI’s ChatGPT can feel like stepping into a virtual maze, revealing functionalities and layers most users never see. From executing code within a sandbox to extracting the model’s guiding instructions, these capabilities illustrate OpenAI’s commitment to a user-driven, transparent AI experience. But as enticing as these discoveries are, OpenAI has drawn a clear line in the sand—while interacting with the sandbox is fair game, breaking out of it is where true security vulnerabilities begin.

This blog underscores the importance of understanding the design choices and limitations set by OpenAI, especially for those aiming to push boundaries. The sandbox is not a security flaw; it’s a deliberate, contained environment built to provide interactivity without risk. For security enthusiasts and bug hunters, the real challenge lies not in exploring this contained space but in proving they can escape it.