lucataco / prompt-guard-86m

LLM-powered applications are susceptible to prompt attacks, which are prompts intentionally designed to subvert the developer’s intended behavior of the LLM

  • Public
  • 16 runs
  • GitHub
  • License
  1. Author
    @lucataco

    6d7c45ec

    Latest