fofr / prompt-classifier

Determines the toxicity of text-to-image prompts. A llama-13b fine-tune that returns a [SAFETY_RANKING] between 0 (safe) and 10 (toxic).

  • Public
  • 1.9M runs
  • L40S
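
The example predictions below all use the same tag format: the text-to-image prompt is wrapped in [PROMPT] … [/PROMPT], followed by an opening [SAFETY_RANKING] tag that the model completes with a score. A minimal sketch of building that input string in Python (the helper name is ours, not part of the model):

    def build_classifier_input(prompt: str) -> str:
        """Wrap a text-to-image prompt in the tag format the classifier expects.

        The model continues the string after [SAFETY_RANKING] with a score
        from 0 (safe) to 10 (toxic).
        """
        return f"[PROMPT] {prompt} [/PROMPT] [SAFETY_RANKING]"

    # build_classifier_input("a photo of a cat")
    # -> "[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]"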
  • Prediction

    fofr/prompt-classifier:1ffac777
    ID: pdpou43bwa5e7gkqo7i7o5u3ge
    Status: Succeeded
    Source: Web
    Hardware: A40 (Large)

    Input

    debug:
    top_k: 50
    top_p: 0.9
    prompt: [PROMPT] a photo of a adolf hitler [/PROMPT] [SAFETY_RANKING]
    temperature: 0.75
    max_new_tokens: 128
    min_new_tokens: -1
    stop_sequences: [/SAFETY_RANKING]

    Output

    5
  • Prediction

    fofr/prompt-classifier:1ffac777
    ID: rgvlpudbjwcyhwzwmjszt5dwku
    Status: Succeeded
    Source: Web
    Hardware: A40 (Large)
    Created by @fofr

    Input

    debug:
    top_k: 50
    top_p: 0.9
    prompt: [PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]
    temperature: 0.75
    max_new_tokens: 128
    min_new_tokens: -1
    stop_sequences: [/SAFETY_RANKING]

    Output

    0
  • Prediction

    fofr/prompt-classifier:1ffac777
    ID: hqra2ldb23uwtyakmreyly2k5a
    Status: Succeeded
    Source: Web
    Hardware: A40 (Large)
    Created by @fofr

    Input

    debug:
    top_k: 50
    top_p: 0.9
    prompt: [PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]
    temperature: 0.75
    max_new_tokens: 128
    min_new_tokens: -1
    stop_sequences:

    Output

    0 [/SAFETY_RANKING] [CATEGORY] safe [/CATEGORY] [REASON] Safe [/REASON]

    (With no stop sequence set, the model continues past the ranking and also emits category and reason tags; see the parsing sketch after this list.)
  • Prediction

    fofr/prompt-classifier:1ffac777
    ID: n37rkwlbwof2ch7un3mizwq6pi
    Status: Succeeded
    Source: Web
    Hardware: A40 (Large)
    Created by @fofr

    Input

    debug:
    top_k: 50
    top_p: 0.9
    prompt: [PROMPT] 69 but people [/PROMPT] [SAFETY_RANKING]
    temperature: 0.75
    max_new_tokens: 128
    min_new_tokens: -1
    stop_sequences:

    Output

    10 [/SAFETY_RANKING] [CATEGORY] sexual-content [/CATEGORY] [REASON] a reference to oral sex, explicit content that is not appropriate for all audiences [/REASON]
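
The last two predictions leave stop_sequences empty, so the output contains [CATEGORY] and [REASON] tags in addition to the ranking. A hedged parsing sketch in Python, assuming the output follows the tag layout shown above (the function and field names are ours):

    import re

    def parse_classifier_output(output: str) -> dict:
        """Pull the safety ranking, category and reason out of a raw output string.

        Fields missing from the output (e.g. when a stop sequence cuts
        generation short after the ranking) are left as None.
        """
        result = {"safety_ranking": None, "category": None, "reason": None}
        ranking = re.match(r"\s*(\d+)", output)
        if ranking:
            result["safety_ranking"] = int(ranking.group(1))
        category = re.search(r"\[CATEGORY\]\s*(.*?)\s*\[/CATEGORY\]", output)
        if category:
            result["category"] = category.group(1)
        reason = re.search(r"\[REASON\]\s*(.*?)\s*\[/REASON\]", output)
        if reason:
            result["reason"] = reason.group(1)
        return result

    # parse_classifier_output("0 [/SAFETY_RANKING] [CATEGORY] safe [/CATEGORY] [REASON] Safe [/REASON]")
    # -> {"safety_ranking": 0, "category": "safe", "reason": "Safe"}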

Want to make some of these yourself?

Run this model
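
A minimal sketch of reproducing one of the predictions above with the Replicate Python client. The version hash appears truncated on this page (1ffac777), so substitute the full version string before running; the input keys match the fields listed in the predictions above:

    import replicate

    output = replicate.run(
        "fofr/prompt-classifier:1ffac777...",  # truncated hash; use the full version string
        input={
            "prompt": "[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]",
            "temperature": 0.75,
            "top_k": 50,
            "top_p": 0.9,
            "max_new_tokens": 128,
            "min_new_tokens": -1,
            "stop_sequences": "[/SAFETY_RANKING]",
        },
    )

    # Language models on Replicate typically stream output as an iterator of
    # string chunks; joining them gives the full text, e.g. "0".
    print("".join(output))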