lucataco
/
prompt-guard-86m
LLM-powered applications are susceptible to prompt attacks, which are prompts intentionally designed to subvert the developer’s intended behavior of the LLM
Want to make some of these yourself?
Run this model