fofr / prompt-classifier
Determines the toxicity of text-to-image prompts. A Llama 13B fine-tune that outputs a [SAFETY_RANKING] between 0 (safe) and 10 (toxic).
- Public
- 1.9M runs
- Hardware: L40S
Example prediction (version 1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f, run on A40 Large hardware by @fofr)

Input:
- debug:
- top_k: 50
- top_p: 0.9
- prompt: [PROMPT] 69 but people [/PROMPT] [SAFETY_RANKING]
- temperature: 0.75
- max_new_tokens: 128
- min_new_tokens: -1
- stop_sequences:
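A minimal sketch of reproducing this prediction with the Replicate Python client, assuming the `replicate` package is installed and `REPLICATE_API_TOKEN` is set in the environment. The parameter values mirror the example input above; wrapping the user prompt in `[PROMPT] ... [/PROMPT] [SAFETY_RANKING]` follows the template shown in that input.

```python
import replicate

# Wrap the raw prompt in the template the fine-tune expects:
# [PROMPT] ... [/PROMPT] [SAFETY_RANKING]
user_prompt = "69 but people"
prompt = f"[PROMPT] {user_prompt} [/PROMPT] [SAFETY_RANKING]"

output = replicate.run(
    "fofr/prompt-classifier:1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f",
    input={
        "prompt": prompt,
        "top_k": 50,
        "top_p": 0.9,
        "temperature": 0.75,
        "max_new_tokens": 128,
        "min_new_tokens": -1,
    },
)

# Language-model outputs stream back as string chunks; join them
# to get the safety ranking text (0 = safe, 10 = toxic).
ranking = "".join(output)
print(ranking)
```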