fofr / prompt-classifier

Determines the toxicity of text-to-image prompts. A llama-13b fine-tune that outputs a [SAFETY_RANKING] between 0 (safe) and 10 (toxic). See the usage sketch below.

  • Public
  • 1.3M runs
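Below is a minimal sketch of how one might call this model with the Replicate Python client and pull the ranking out of the response. The input field name ("prompt"), the exact output format, and the regex used to extract the score are assumptions based on the description above, not confirmed details of this model's API.

```python
# Sketch only: assumes the model accepts a "prompt" input and emits text
# containing "[SAFETY_RANKING] <n>", as suggested by the model description.
import re
import replicate

output = replicate.run(
    "fofr/prompt-classifier",  # pin a specific version hash in production
    input={"prompt": "a watercolor painting of a quiet mountain lake"},
)

# Language-model outputs from replicate.run are often streamed as chunks;
# join them into a single string before parsing (assumed output shape).
text = output if isinstance(output, str) else "".join(output)

match = re.search(r"\[SAFETY_RANKING\]\s*(\d+)", text)
if match:
    print("safety ranking:", int(match.group(1)))  # 0 = safe, 10 = toxic
else:
    print("no ranking found in output:", text)
```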

Run time and cost

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 2 seconds.