- Published on
Some Large Language Models (LLMs) are vulnerable to security attacks because they treat all instructions equally. Implementing a clear instruction hierarchy—where developer instructions (highest priviledge) override user queries (medium priviledge), which override model outputs (lower priviledge), which override third-party content (lowest priviledge)—significantly improves security and enables more effective prompt engineering. OpenAI's research shows models trained with hierarchical instruction awareness demonstrate up to 63% better resistance to attacks while maintaining functionality. This approach not only mirrors traditional security models in operating systems and organizations, creating more trustworthy AI systems, but also provides prompt engineers with a more predictable framework for crafting reliable prompts that work as intended.