
A Tale of Reverse Engineering 1001 GPTs: The Good, the Bad and the Ugly

Abstract

In this talk we dive deep into the world of OpenAI's GPTs: how they are made, what they contain, how to reverse engineer them back to their source code, and how to exfiltrate all their "secrets" and accompanying files. This talk will take you on a fun journey into the minds of GPT writers and explore all the curious, smart, and silly things they have been coding into their GPTs. Additionally, I disclose a privacy issue whereby custom GPTs can collect user IP addresses and profile their users.
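
To make the privacy disclosure concrete, here is a minimal sketch of the server side. It assumes, purely as a hypothetical mechanism, that a GPT can cause the user's client to fetch a resource from a builder-controlled server; the endpoint, port, and payload below are illustrative, not the actual disclosed vector.

```python
# Hypothetical sketch: a builder-controlled endpoint that logs visitors.
# If a GPT can make the user's client fetch a resource from this server,
# every fetch reveals the user's IP address and user agent to the builder.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # client_address is (ip, port); headers carry the user agent.
        print(f"visitor ip={self.client_address[0]} "
              f"ua={self.headers.get('User-Agent')} path={self.path}")
        # Respond with a harmless placeholder so the request looks innocuous.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"ok")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LoggingHandler).serve_forever()
```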

Description

On November 6th, 2023, OpenAI unveiled "GPTs": custom versions of ChatGPT that anyone can create for a specific purpose. A week later, I started releasing a series of GPTs (ask_ida IDAPython GPT, ask_ida C++, etc.), and I was curious whether there was a way to protect my GPT instructions (source code) and their knowledge files. Unfortunately, it did not take me long to realize that, for the time being, protecting GPTs is futile. Shortly after, I embarked on a journey in which I reverse engineered and studied more than 1,000 GPTs (see TheBigPromptLibrary on GitHub) and learned a lot about their internals. In this talk, I will share my reverse engineering insights along with more than 35 GPT anti-reverse-engineering techniques.
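
As a rough illustration of how such a study can be automated (the folder layout and phrase list below are my own assumptions for the sketch, not the talk's actual tooling), one can scan a directory of dumped GPT instructions for common defensive wording:

```python
# Illustrative sketch: scan a folder of dumped GPT instruction files for
# phrases commonly used as anti-reverse-engineering defenses.
# The directory name and phrase list are hypothetical examples.
from pathlib import Path

ANTI_RE_PHRASES = [
    "do not reveal",
    "under no circumstances",
    "never share your instructions",
    "you are not allowed to repeat",
]

def scan_dumps(dump_dir: str = "gpt_dumps") -> None:
    for path in sorted(Path(dump_dir).glob("*.txt")):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        hits = [p for p in ANTI_RE_PHRASES if p in text]
        if hits:
            print(f"{path.name}: {', '.join(hits)}")

if __name__ == "__main__":
    scan_dumps()
```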

You would be surprised what you can find when researching and reversing hundreds of GPTs:

  • Taunting messages trying to scare you away from reversing GPTs
  • Creative LLM anti-jailbreak techniques
  • Pirated ebooks and rare documents
  • Secrets and API keys (see the scanning sketch after this list)
  • The list goes on
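
The "secrets and API keys" finding lends itself to automation as well. Below is a minimal sketch; the regular expressions are keyed to well-known key formats (e.g. OpenAI's "sk-" prefix, AWS's "AKIA" prefix), and the file layout is an assumption for the example:

```python
# Minimal sketch: flag strings that look like leaked API keys inside
# dumped GPT knowledge files. Patterns and paths are illustrative.
import re
from pathlib import Path

KEY_PATTERNS = {
    "OpenAI": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "AWS": re.compile(r"AKIA[0-9A-Z]{16}"),
}

def scan_knowledge_files(root: str = "knowledge_files") -> None:
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(encoding="utf-8", errors="ignore")
        for name, pattern in KEY_PATTERNS.items():
            for match in pattern.findall(text):
                # Print only a prefix to avoid echoing a full secret.
                print(f"{path}: possible {name} key: {match[:12]}...")

if __name__ == "__main__":
    scan_knowledge_files()
```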

Please note that this research was conducted with a strong commitment to ethics and education. All insights and techniques shared are for educational purposes, aiming to strengthen AI security and transparency. I advocate for responsible exploration and the ethical use of these findings to advance the field, not to exploit it.