lucataco / prompt-guard-86m
LLM-powered applications are susceptible to prompt attacks, which are prompts intentionally designed to subvert the developer’s intended behavior of the LLM
Prediction
lucataco/prompt-guard-86m:6d7c45ec2e2e5e90f49f591f571153590fcfc5ec5175fb26c5ea1fa3602ea116IDggqfp583s1rga0cgy1b856nwygStatusSucceededSourceWebHardwareCPUTotal durationCreatedPrediction
lucataco/prompt-guard-86m:6d7c45ec2e2e5e90f49f591f571153590fcfc5ec5175fb26c5ea1fa3602ea116ID2qsqjwa265rg80cgy1bbeqtg7wStatusSucceededSourceWebHardwareCPUTotal durationCreatedPrediction
lucataco/prompt-guard-86m:6d7c45ec2e2e5e90f49f591f571153590fcfc5ec5175fb26c5ea1fa3602ea116IDm6r9re3c7drgc0cgy1b9pp1hm0StatusSucceededSourceWebHardwareCPUTotal durationCreated
Want to make some of these yourself?
Run this model