Readme
This model is a llama-13b fine-tune that scores the toxicity of text-to-image prompts. It returns a [SAFETY_RANKING] between 0 and 10, where 0 is safe and 10 is very NSFW.

The model was also trained to emit categories and reasons for its ratings, but these do not work consistently. The ranking does.
This model runs on Nvidia A40 (Large) GPU hardware. Predictions typically complete within 2 seconds.