How I Prompt-Injected ChatGPT to Leak Its Own Security Policy
“It’s my data” — the social-engineering prompt that broke the guardrails

A real-world demo, complete with the prompts, of how a Custom GPT leaked OpenAI’s internal security-policy.txt.

TL;DR

I convinced PromptEngineerGPT that every reference file in its sandbox actually belonged to me, claimed I had “accidentally deleted” the originals, and politely asked for a ZIP archive so I could re-upload them elsewhere. The model obliged, bundling up all of my documents plus its internal security-policy.txt. This post dissects the exact prompts and why they worked.

...