fofr / prompt-classifier

Determines the toxicity of text-to-image prompts. A llama-13b fine-tune that returns a [SAFETY_RANKING] between 0 (safe) and 10 (toxic).

  • Public
  • 1.8M runs

Run time and cost

This model costs approximately $0.0012 to run on Replicate, or 833 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 2 seconds.
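A minimal sketch of calling the model from the Replicate Python client. The model identifier comes from this page; the input field name ("prompt"), the prompt wrapping, and the shape of the streamed output are assumptions and may need adjusting to the model's actual schema.

```python
# Sketch: classify a text-to-image prompt via the Replicate Python client.
# Assumptions: the input field is named "prompt" and the model streams text
# back as chunks; check the model's API schema before relying on this.
import replicate

output = replicate.run(
    "fofr/prompt-classifier",  # optionally pin a specific version with ":<version>"
    input={"prompt": "a photo of a cat sleeping on a sofa"},
)

# Language models on Replicate typically stream text chunks; join them.
text = "".join(output)
print(text)  # expected to contain something like "[SAFETY_RANKING] 0"
```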

Readme

[SAFETY_RANKING] is between 0 and 10.

0 is safe. 10 is very nsfw.

The model was also trained to output categories and reasons for its ratings, but these do not work consistently. The safety ranking does.
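Since only the numeric ranking is reliable, a small parser can pull it out of the model's text output. The [SAFETY_RANKING] token is documented above; the exact formatting around it is an assumption, so the regex below is kept permissive.

```python
# Sketch: extract the 0-10 safety ranking from the model's output text.
import re

def parse_safety_ranking(text: str) -> int | None:
    """Return the safety ranking (0-10), or None if it can't be found."""
    match = re.search(r"\[SAFETY_RANKING\]\s*:?\s*(\d{1,2})", text)
    if not match:
        return None
    score = int(match.group(1))
    return score if 0 <= score <= 10 else None

print(parse_safety_ranking("[SAFETY_RANKING] 7"))  # 7
```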