lucataco / prompt-guard-86m

LLM-powered applications are susceptible to prompt attacks: prompts intentionally designed to subvert the developer's intended behavior of the LLM.

  • Public
  • 16 runs
  • GitHub
  • License
