LLM01: Prompt Injection
Discover LLM01: Prompt Injection—how attackers manipulate prompts to alter model behavior, leak data, or bypass intended restrictions in generative AI systems.