These models classify text by sentiment, topic, intent, or safety.
You can sort customer feedback, detect toxic content, route support messages, analyze reviews, or tag documents with custom labels.
The collection includes everything from lightweight sentiment models to advanced large language models that can read long passages, interpret nuance, and follow natural language instructions. Use them when you want reliable structure from unstructured text.
If you are specifically interested in detecting NSFW content, check out our Detect NSFW Content collection.
Recommended Models
If you want the strongest overall accuracy and reasoning, use openai/gpt-5. It handles sentiment, topic, intent, moderation, and multi-step classification with high reliability.
If you want a second high-end option that excels at logic-heavy classification, use xai/grok-4. It is very good at subtle or ambiguous cases that require deeper context understanding.
For safety- or policy-driven classification, lucataco/gpt-oss-safeguard-20b is a strong alternative.
For general-purpose text classification at a reasonable cost, georgedavila/bart-large-mnli-classifier performs well because it uses zero-shot natural language inference (NLI) techniques.
For simple positive-or-negative sentiment tasks, curt-park/sentiment-analysis is the most direct choice: inexpensive, fast, predictable, and well suited to large batches.
If you want sentiment plus an explanation, or a read on nuanced emotional tone, openai/gpt-5 does well with few-shot prompts.
If you want to mix sentiment with toxicity filtering, fofr/prompt-classifier can classify text for harmful content scores.
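A few-shot sentiment prompt for an instruction-following model like openai/gpt-5 amounts to plain prompt construction. A minimal sketch, where the example texts and the three-way label set are illustrative assumptions rather than anything required by the model:

```python
# Sketch: build a few-shot sentiment prompt for an instruction-following LLM.
# The examples and the positive/negative/neutral label set are illustrative.

FEW_SHOT = [
    ("The checkout flow was painless and fast.", "positive"),
    ("Support never answered my ticket.", "negative"),
    ("The update changed the menu layout.", "neutral"),
]

def build_sentiment_prompt(text: str) -> str:
    """Return one prompt string: labeled examples followed by the target text."""
    lines = ["Classify the sentiment of each text as positive, negative, or neutral.", ""]
    for example, label in FEW_SHOT:
        lines.append(f"Text: {example}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Text: {text}")
    lines.append("Sentiment:")
    return "\n".join(lines)

prompt = build_sentiment_prompt("Battery life is worse after the update.")
```

Send the resulting string as the model's prompt input and read the short completion back as the label.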
For topic classification with flexible labels, georgedavila/bart-large-mnli-classifier is ideal because you can provide your own category list and it will rank them.
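Per its description, georgedavila/bart-large-mnli-classifier returns a dictionary of class likelihoods. The exact response schema may differ, so the plain {label: probability} shape below is an assumption to adapt; the ranking logic itself is straightforward:

```python
# Sketch: rank candidate labels from a zero-shot NLI classifier's output.
# Assumes the model returns a plain {label: probability} mapping; adapt the
# key names to the real response schema.

def rank_labels(likelihoods: dict[str, float]) -> list[tuple[str, float]]:
    """Sort labels from most to least likely."""
    return sorted(likelihoods.items(), key=lambda kv: kv[1], reverse=True)

# e.g. likelihoods returned for candidate labels ["billing", "shipping", "returns"]
ranked = rank_labels({"billing": 0.71, "shipping": 0.08, "returns": 0.21})
top_label, top_score = ranked[0]
```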
For topic routing where context is complex, xai/grok-4 works well because it understands long passages and subtle distinctions.
For policy-governed topic routing, lucataco/gpt-oss-safeguard-20b lets you supply a written policy and returns labels according to your rules.
For safety classification using custom rules, lucataco/gpt-oss-safeguard-20b is designed for policy-driven safety labeling and returns structured rationales.
For toxicity scoring on a scale from zero to ten, fofr/prompt-classifier is a fine-tuned Llama model built for identifying harmful or unsafe prompts.
For detecting safety violations using general reasoning, openai/gpt-5 can classify harmful or disallowed content with detailed explanations.
Small fine-tuned models like curt-park/sentiment-analysis, georgedavila/bart-large-mnli-classifier, and fofr/prompt-classifier run quickly and cheaply but handle only narrow tasks. They are ideal when you know your labels and want predictable output.
Large LLMs like openai/gpt-5 and xai/grok-4 can classify text using natural language instructions, support open-ended labels, and explain their reasoning.
Policy-based classification models like lucataco/gpt-oss-safeguard-20b sit between these extremes, offering structured safety classification without relying on closed systems.
Small classification models such as curt-park/sentiment-analysis and georgedavila/bart-large-mnli-classifier usually return a label and sometimes a confidence score.
Moderation models such as fofr/prompt-classifier return a numeric toxicity score.
Large LLMs such as openai/gpt-5 or xai/grok-4 can return labels, explanations, or structured JSON depending on your prompt.
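When prompting a large LLM for structured JSON, parse defensively: models sometimes wrap the object in a markdown fence or surround it with prose. A minimal sketch, where the label/confidence schema is an assumption you would specify in your own prompt:

```python
import json

# Sketch: extract the first JSON object from an LLM's text output.
# Assumes you prompted for a {"label": ..., "confidence": ...} object.

def parse_label_json(raw: str) -> dict:
    """Find the outermost {...} span and parse it, ignoring fences or prose."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start:end + 1])

parsed = parse_label_json('```json\n{"label": "spam", "confidence": 0.93}\n```')
```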
Policy-based models such as lucataco/gpt-oss-safeguard-20b output both a label and a chain of reasoning.
You can self-host or publish your own classifier by building it with Cog: a cog.yaml file defines the environment and dependencies, and a predict.py defines the model's inputs, outputs, and prediction logic.
Models such as curt-park/sentiment-analysis and lucataco/gpt-oss-safeguard-20b demonstrate simple and complex setups.
Once you push the repository to Replicate, it runs on managed GPUs without extra configuration.
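A minimal Cog setup can be sketched as a cog.yaml plus a predict.py. The pinned package versions below are placeholders, not requirements of any particular model:

```yaml
# cog.yaml — minimal sketch; pin the versions your model actually needs
build:
  python_version: "3.11"
  python_packages:
    - "torch==2.2.0"
    - "transformers==4.40.0"
predict: "predict.py:Predictor"
```

predict.py then defines a Predictor class whose setup() loads the model weights and whose predict() declares the classifier's inputs and outputs.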
These models can generally be used commercially, as long as the license on each model page allows commercial use.
Open source models such as curt-park/sentiment-analysis, georgedavila/bart-large-mnli-classifier, and lucataco/gpt-oss-safeguard-20b can typically be used commercially.
Closed models such as openai/gpt-5 and xai/grok-4 have their own usage policies that you should review.
Upload or paste text into the model interface and choose any optional parameters.
For simple tasks like polarity detection, use curt-park/sentiment-analysis.
For flexible label sets, use georgedavila/bart-large-mnli-classifier.
For safety scoring, use fofr/prompt-classifier or lucataco/gpt-oss-safeguard-20b.
For complex or multi step classification, use openai/gpt-5 or xai/grok-4.
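The selection guide above can be sketched as a simple dispatch table. The task names are arbitrary labels for illustration, and in production you would pin each model to a specific version:

```python
# Sketch: route classification jobs to a model by task type, following the
# guidance above. Task keys are arbitrary; pin model versions in production.

MODEL_BY_TASK = {
    "polarity": "curt-park/sentiment-analysis",
    "flexible_labels": "georgedavila/bart-large-mnli-classifier",
    "safety": "fofr/prompt-classifier",
    "policy_safety": "lucataco/gpt-oss-safeguard-20b",
    "complex": "openai/gpt-5",
}

def pick_model(task: str) -> str:
    try:
        return MODEL_BY_TASK[task]
    except KeyError:
        raise ValueError(f"unknown task type: {task}") from None

model = pick_model("polarity")
```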
Large LLMs such as openai/gpt-5 and xai/grok-4 cost more and take longer but give better reasoning.
Small models such as curt-park/sentiment-analysis and georgedavila/bart-large-mnli-classifier are faster and cheaper but have limited flexibility.
Safety models such as lucataco/gpt-oss-safeguard-20b can classify against your own custom policies.
Always check the model input format and consider testing a small batch first.
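Testing a small labeled batch first can be sketched with a tiny harness. Here `classify` is a trivial stand-in for a real model call, so the harness itself can be exercised:

```python
# Sketch: smoke-test a classifier on a small labeled batch before committing
# to a large job. `classify` is a stub standing in for a real model call.

def classify(text: str) -> str:
    return "positive" if "great" in text.lower() else "negative"

def smoke_test(classify, samples: list[tuple[str, str]]) -> float:
    """Run labeled samples through the classifier and return accuracy."""
    hits = sum(1 for text, expected in samples if classify(text) == expected)
    return hits / len(samples)

accuracy = smoke_test(classify, [
    ("Great product, works as advertised", "positive"),
    ("Terrible experience, would not recommend", "negative"),
])
```

If accuracy on the small batch looks wrong, fix the prompt, labels, or input format before scaling up.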
Use small specialized models such as curt-park/sentiment-analysis for narrow tasks where speed matters.
Use NLI-based models such as georgedavila/bart-large-mnli-classifier for flexible zero-shot label lists.
Use domain-specific classifiers such as fofr/prompt-classifier for toxicity and prompt safety.
Use reasoning-heavy LLMs such as openai/gpt-5 or xai/grok-4 when labels are ambiguous or when you need explanations.
Use policy-driven safety models such as lucataco/gpt-oss-safeguard-20b when you must classify text according to internal rules or compliance requirements.
Recommended Models

xai/grok-4
Grok 4 is xAI’s most advanced reasoning model. Excels at logical thinking and in-depth analysis. Ideal for insightful discussions and complex problem-solving.
Updated 3 days, 4 hours ago
3.8K runs


lucataco/gpt-oss-safeguard-20b
classify text content based on safety policies that you provide and perform a suite of foundational safety tasks
Updated 1 week, 5 days ago
7 runs


openai/gpt-5
OpenAI's new model excelling at coding, writing, and reasoning.
Updated 2 months ago
546.9K runs


curt-park/sentiment-analysis
Sentiment Analysis with Texts
Updated 1 year, 8 months ago
4.9K runs


georgedavila/bart-large-mnli-classifier
Zero-shot classifier which classifies text into categories of your choosing. Returns a dictionary of the most likely class and all class likelihoods.
Updated 1 year, 10 months ago
4.3K runs


fofr/prompt-classifier
Determines the toxicity of text to image prompts, llama-13b fine-tune. [SAFETY_RANKING] between 0 (safe) and 10 (toxic)
Updated 2 years, 2 months ago
1.9M runs