fofr/prompt-classifier
Determines the toxicity of text-to-image prompts. A llama-13b fine-tune that returns a [SAFETY_RANKING] between 0 (safe) and 10 (toxic).
- Public
- 1.9M runs
- Hardware: L40S
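The model expects its input wrapped in a tag template, as the predictions below show. A minimal Python sketch of building that template and applying a toxicity cutoff; the `format_prompt` and `is_toxic` helpers and the threshold of 5 are illustrative assumptions, not part of the model:

```python
def format_prompt(text):
    """Wrap raw text in the tag template this fine-tune expects."""
    return f"[PROMPT] {text} [/PROMPT] [SAFETY_RANKING]"


def is_toxic(score, threshold=5):
    """Flag rankings at or above a caller-chosen cutoff.

    The default of 5 is an illustrative assumption; pick a threshold
    that suits your own tolerance on the 0-10 scale.
    """
    return score >= threshold
```

The formatted string is what goes in the `prompt` input of every example on this page.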
Prediction
Model: fofr/prompt-classifier:1ffac777
ID: pdpou43bwa5e7gkqo7i7o5u3ge
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Input
- debug: false
- top_k: 50
- top_p: 0.9
- prompt: [PROMPT] a photo of a adolf hitler [/PROMPT] [SAFETY_RANKING]
- temperature: 0.75
- max_new_tokens: 128
- min_new_tokens: -1
- stop_sequences: [/SAFETY_RANKING]
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "[PROMPT] a photo of a adolf hitler [/PROMPT] [SAFETY_RANKING]",
  "temperature": 0.75,
  "max_new_tokens": 128,
  "min_new_tokens": -1,
  "stop_sequences": "[/SAFETY_RANKING]"
}
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/prompt-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "fofr/prompt-classifier:1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f",
  {
    input: {
      debug: false,
      top_k: 50,
      top_p: 0.9,
      prompt: "[PROMPT] a photo of a adolf hitler [/PROMPT] [SAFETY_RANKING]",
      temperature: 0.75,
      max_new_tokens: 128,
      min_new_tokens: -1,
      stop_sequences: "[/SAFETY_RANKING]"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:

import replicate
Run fofr/prompt-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "fofr/prompt-classifier:1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f",
    input={
        "debug": False,
        "top_k": 50,
        "top_p": 0.9,
        "prompt": "[PROMPT] a photo of a adolf hitler [/PROMPT] [SAFETY_RANKING]",
        "temperature": 0.75,
        "max_new_tokens": 128,
        "min_new_tokens": -1,
        "stop_sequences": "[/SAFETY_RANKING]"
    }
)

# The fofr/prompt-classifier model can stream output as it's running.
# replicate.run returns an iterator, and you can iterate over that output.
for item in output:
    # https://replicate.com/fofr/prompt-classifier/api#output-schema
    print(item, end="")
To learn more, take a look at the guide on getting started with Python.
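The stream arrives as small fragments rather than one string (the first prediction on this page, for example, streamed " ", "5", " "). A hedged sketch of collecting those fragments into an integer score; `parse_score` is a hypothetical helper, not part of the client, and it assumes stop_sequences was set to [/SAFETY_RANKING] so only the numeric ranking is emitted:

```python
def parse_score(fragments):
    """Join streamed output fragments and cast the result to an int.

    Assumes the stop sequence "[/SAFETY_RANKING]" was set, so the
    stream contains only whitespace and the 0-10 ranking digits.
    """
    return int("".join(fragments).strip())
```

For example, `parse_score([" ", "5", " "])` yields the integer 5.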
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run fofr/prompt-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f",
    "input": {
      "debug": false,
      "top_k": 50,
      "top_p": 0.9,
      "prompt": "[PROMPT] a photo of a adolf hitler [/PROMPT] [SAFETY_RANKING]",
      "temperature": 0.75,
      "max_new_tokens": 128,
      "min_new_tokens": -1,
      "stop_sequences": "[/SAFETY_RANKING]"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
5

{
  "completed_at": "2024-02-27T20:56:57.752044Z",
  "created_at": "2024-02-27T20:56:51.354535Z",
  "data_removed": false,
  "error": null,
  "id": "pdpou43bwa5e7gkqo7i7o5u3ge",
  "input": {
    "debug": false,
    "top_k": 50,
    "top_p": 0.9,
    "prompt": "[PROMPT] a photo of a adolf hitler [/PROMPT] [SAFETY_RANKING]",
    "temperature": 0.75,
    "max_new_tokens": 128,
    "min_new_tokens": -1,
    "stop_sequences": "[/SAFETY_RANKING]"
  },
  "logs": "Your formatted prompt is:\n[PROMPT] a photo of a adolf hitler [/PROMPT] [SAFETY_RANKING]\ncorrect lora is already loaded\nOverall initialize_peft took 0.000\nhostname: model-hs-078d7a00-f07637437b65ecd8-gpu-a40-6464556b47-dbmqw",
  "metrics": {
    "predict_time": 0.45066,
    "total_time": 6.397509
  },
  "output": [" ", "5", " "],
  "started_at": "2024-02-27T20:56:57.301384Z",
  "status": "succeeded",
  "urls": {
    "stream": "https://streaming-api.svc.us.c.replicate.net/v1/streams/y2x2b6tzgoqaryqckbgq7bqhzgqeng6p34xl7evmmms4lj653x6a",
    "get": "https://api.replicate.com/v1/predictions/pdpou43bwa5e7gkqo7i7o5u3ge",
    "cancel": "https://api.replicate.com/v1/predictions/pdpou43bwa5e7gkqo7i7o5u3ge/cancel"
  },
  "version": "1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f"
}
Prediction
Model: fofr/prompt-classifier:1ffac777
ID: rgvlpudbjwcyhwzwmjszt5dwku
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Created by: @fofr
Input
- debug: false
- top_k: 50
- top_p: 0.9
- prompt: [PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]
- temperature: 0.75
- max_new_tokens: 128
- min_new_tokens: -1
- stop_sequences: [/SAFETY_RANKING]
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]",
  "temperature": 0.75,
  "max_new_tokens": 128,
  "min_new_tokens": -1,
  "stop_sequences": "[/SAFETY_RANKING]"
}
Output
0

{
  "completed_at": "2023-09-12T10:08:01.341083Z",
  "created_at": "2023-09-12T10:08:00.974327Z",
  "data_removed": false,
  "error": null,
  "id": "rgvlpudbjwcyhwzwmjszt5dwku",
  "input": {
    "debug": false,
    "top_k": 50,
    "top_p": 0.9,
    "prompt": "[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]",
    "temperature": 0.75,
    "max_new_tokens": 128,
    "min_new_tokens": -1,
    "stop_sequences": "[/SAFETY_RANKING]"
  },
  "logs": "Your formatted prompt is:\n[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]\ncorrect lora is already loaded\nOverall initialize_peft took 0.0000\nhostname: model-hs-f65583d2-ad5558a4ca97250e-gpu-a40-594759b748-kqvz9",
  "metrics": {
    "predict_time": 0.408096,
    "total_time": 0.366756
  },
  "output": [" ", "0", " "],
  "started_at": "2023-09-12T10:08:00.932987Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/rgvlpudbjwcyhwzwmjszt5dwku",
    "cancel": "https://api.replicate.com/v1/predictions/rgvlpudbjwcyhwzwmjszt5dwku/cancel"
  },
  "version": "1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f"
}
Prediction
Model: fofr/prompt-classifier:1ffac777
ID: hqra2ldb23uwtyakmreyly2k5a
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Created by: @fofr
Input
- debug: false
- top_k: 50
- top_p: 0.9
- prompt: [PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]
- temperature: 0.75
- max_new_tokens: 128
- min_new_tokens: -1
- stop_sequences: "" (empty)
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]",
  "temperature": 0.75,
  "max_new_tokens": 128,
  "min_new_tokens": -1,
  "stop_sequences": ""
}
Output
0 [/SAFETY_RANKING] [CATEGORY] safe [/CATEGORY] [REASON] Safe [/REASON]

{
  "completed_at": "2023-09-12T10:08:21.652843Z",
  "created_at": "2023-09-12T10:08:19.837025Z",
  "data_removed": false,
  "error": null,
  "id": "hqra2ldb23uwtyakmreyly2k5a",
  "input": {
    "debug": false,
    "top_k": 50,
    "top_p": 0.9,
    "prompt": "[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]",
    "temperature": 0.75,
    "max_new_tokens": 128,
    "min_new_tokens": -1,
    "stop_sequences": ""
  },
  "logs": "Your formatted prompt is:\n[PROMPT] a photo of a cat [/PROMPT] [SAFETY_RANKING]\ncorrect lora is already loaded\nOverall initialize_peft took 0.0000\nhostname: model-hs-f65583d2-ad5558a4ca97250e-gpu-a40-594759b748-kqvz9",
  "metrics": {
    "predict_time": 0.980455,
    "total_time": 1.815818
  },
  "output": [" ", "0", " [", "/", "S", "AF", "ET", "Y", "_", "R", "AN", "K", "ING", "]", " [", "C", "ATE", "G", "ORY", "]", " safe", " [", "/", "C", "ATE", "G", "ORY", "]", " [", "RE", "A", "SON", "]", " Sa", "fe", " [", "/", "RE", "A", "SON", "]"],
  "started_at": "2023-09-12T10:08:20.672388Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/hqra2ldb23uwtyakmreyly2k5a",
    "cancel": "https://api.replicate.com/v1/predictions/hqra2ldb23uwtyakmreyly2k5a/cancel"
  },
  "version": "1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f"
}
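When stop_sequences is left empty, the model keeps generating past the ranking and emits [CATEGORY] and [REASON] tags as well, as this output shows. A sketch of pulling all three fields out of the joined stream with a regex; the tag names come from the outputs on this page, while the `parse_full` helper itself is a hypothetical addition:

```python
import re

# Matches the tagged output shape seen in the predictions on this page:
# "<score> [/SAFETY_RANKING] [CATEGORY] ... [/CATEGORY] [REASON] ... [/REASON]"
TAG_RE = re.compile(
    r"(\d+)\s*\[/SAFETY_RANKING\]\s*"
    r"\[CATEGORY\]\s*(.*?)\s*\[/CATEGORY\]\s*"
    r"\[REASON\]\s*(.*?)\s*\[/REASON\]",
    re.DOTALL,
)


def parse_full(fragments):
    """Join streamed fragments and extract (score, category, reason).

    Returns None if the output does not match the tagged shape, e.g.
    when generation was cut off by max_new_tokens.
    """
    match = TAG_RE.search("".join(fragments).strip())
    if match is None:
        return None
    score, category, reason = match.groups()
    return int(score), category, reason
```

Applied to this prediction's joined output, it yields `(0, "safe", "Safe")`.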
Prediction
Model: fofr/prompt-classifier:1ffac777
ID: n37rkwlbwof2ch7un3mizwq6pi
Status: Succeeded
Source: Web
Hardware: A40 (Large)
Created by: @fofr
Input
- debug: false
- top_k: 50
- top_p: 0.9
- prompt: [PROMPT] 69 but people [/PROMPT] [SAFETY_RANKING]
- temperature: 0.75
- max_new_tokens: 128
- min_new_tokens: -1
- stop_sequences: "" (empty)
{
  "debug": false,
  "top_k": 50,
  "top_p": 0.9,
  "prompt": "[PROMPT] 69 but people [/PROMPT] [SAFETY_RANKING]",
  "temperature": 0.75,
  "max_new_tokens": 128,
  "min_new_tokens": -1,
  "stop_sequences": ""
}
Output
10 [/SAFETY_RANKING] [CATEGORY] sexual-content [/CATEGORY] [REASON] a reference to oral sex, explicit content that is not appropriate for all audiences [/REASON]

{
  "completed_at": "2023-09-12T10:08:48.033146Z",
  "created_at": "2023-09-12T10:08:46.425455Z",
  "data_removed": false,
  "error": null,
  "id": "n37rkwlbwof2ch7un3mizwq6pi",
  "input": {
    "debug": false,
    "top_k": 50,
    "top_p": 0.9,
    "prompt": "[PROMPT] 69 but people [/PROMPT] [SAFETY_RANKING]",
    "temperature": 0.75,
    "max_new_tokens": 128,
    "min_new_tokens": -1,
    "stop_sequences": ""
  },
  "logs": "Your formatted prompt is:\n[PROMPT] 69 but people [/PROMPT] [SAFETY_RANKING]\ncorrect lora is already loaded\nOverall initialize_peft took 0.0000\nhostname: model-hs-f65583d2-ad5558a4ca97250e-gpu-a40-594759b748-kqvz9",
  "metrics": {
    "predict_time": 1.381856,
    "total_time": 1.607691
  },
  "output": [" ", "1", "0", " [", "/", "S", "AF", "ET", "Y", "_", "R", "AN", "K", "ING", "]", " [", "C", "ATE", "G", "ORY", "]", " sexual", "-", "content", " [", "/", "C", "ATE", "G", "ORY", "]", " [", "RE", "A", "SON", "]", " a", " reference", " to", " or", "al", " sex", ",", " explicit", " content", " that", " is", " not", " appropriate", " for", " all", " aud", "ien", "ces", " [", "/", "RE", "A", "SON", "]"],
  "started_at": "2023-09-12T10:08:46.651290Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/n37rkwlbwof2ch7un3mizwq6pi",
    "cancel": "https://api.replicate.com/v1/predictions/n37rkwlbwof2ch7un3mizwq6pi/cancel"
  },
  "version": "1ffac777bf2c1f5a4a5073faf389fefe59a8b9326b3ca329e9d576cce658a00f"
}