lucataco
/
prompt-guard-86m
LLM-powered applications are susceptible to prompt attacks, which are prompts intentionally designed to subvert the developer’s intended behavior of the LLM
-
- Author
- @lucataco
- Version
- python3.10-X64
6d7c45ec
Latest