georgedavila / bart-large-mnli-classifier
Zero-shot classifier that classifies text into categories of your choosing. Returns a dictionary with the most likely class and the likelihood of every class.
- Public
- 3.6K runs
- T4 hardware
- GitHub
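Under the hood, bart-large-mnli treats classification as natural-language inference: the input text is the premise, each candidate label is turned into a hypothesis, and the per-label entailment scores are normalized into the probabilities you see in `allClasses`. A minimal sketch of that normalization step (the raw scores below are made-up placeholders, not real model outputs):

```python
import math

def normalize_scores(entailment_scores: dict) -> dict:
    """Softmax raw per-label entailment scores into probabilities that sum to 1."""
    m = max(entailment_scores.values())  # subtract the max for numerical stability
    exps = {label: math.exp(s - m) for label, s in entailment_scores.items()}
    total = sum(exps.values())
    return {label: e / total for label, e in exps.items()}

# Hypothetical entailment scores for two candidate labels
scores = {"Cooking Instructions": -1.2, "Question about Astronomy": 2.8}
probs = normalize_scores(scores)
most_likely = max(probs, key=probs.get)
```

This mirrors the shape of the model's output: a probability per label plus the argmax as the winning class.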
Prediction
georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db
ID: xltgkejb32yjrdsnoiym7jtjei · Status: Succeeded · Source: Web · Hardware: T4
Input
- labels: Cooking Instructions, Question about Astronomy
- text2classify: how big is the galaxy?
{
  "labels": "Cooking Instructions, Question about Astronomy",
  "text2classify": "how big is the galaxy?"
}
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run georgedavila/bart-large-mnli-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db",
  {
    input: {
      labels: "Cooking Instructions, Question about Astronomy",
      text2classify: "how big is the galaxy?"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run georgedavila/bart-large-mnli-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db",
    input={
        "labels": "Cooking Instructions, Question about Astronomy",
        "text2classify": "how big is the galaxy?"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
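Note that `labels` is a single comma-separated string, not a JSON array. If you build the label set dynamically, a small helper keeps the format consistent (this is an illustrative sketch, not part of the client library):

```python
def build_labels(categories):
    """Join a list of category names into the comma-separated string the model expects."""
    return ", ".join(c.strip() for c in categories)

labels = build_labels(["Cooking Instructions", "Question about Astronomy"])
# labels == "Cooking Instructions, Question about Astronomy"
```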
Run georgedavila/bart-large-mnli-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db",
    "input": {
      "labels": "Cooking Instructions, Question about Astronomy",
      "text2classify": "how big is the galaxy?"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{
  "allClasses": {
    "Cooking Instructions": 0.01837284117937088,
    "Question about Astronomy": 0.9816271662712097
  },
  "mostLikelyClass": "Question about Astronomy"
}

Full prediction record:

{
  "completed_at": "2024-01-03T01:32:14.822533Z",
  "created_at": "2024-01-03T01:32:08.445212Z",
  "data_removed": false,
  "error": null,
  "id": "xltgkejb32yjrdsnoiym7jtjei",
  "input": {
    "labels": "Cooking Instructions, Question about Astronomy",
    "text2classify": "how big is the galaxy?"
  },
  "logs": null,
  "metrics": {
    "predict_time": 6.363915,
    "total_time": 6.377321
  },
  "output": {
    "allClasses": {
      "Cooking Instructions": 0.01837284117937088,
      "Question about Astronomy": 0.9816271662712097
    },
    "mostLikelyClass": "Question about Astronomy"
  },
  "started_at": "2024-01-03T01:32:08.458618Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/xltgkejb32yjrdsnoiym7jtjei",
    "cancel": "https://api.replicate.com/v1/predictions/xltgkejb32yjrdsnoiym7jtjei/cancel"
  },
  "version": "d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db"
}
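The returned output is a plain dictionary, so downstream code can read the winner and rank every class directly. A small sketch using the result above:

```python
output = {
    "allClasses": {
        "Cooking Instructions": 0.01837284117937088,
        "Question about Astronomy": 0.9816271662712097,
    },
    "mostLikelyClass": "Question about Astronomy",
}

# Rank every class by likelihood, highest first
ranked = sorted(output["allClasses"].items(), key=lambda kv: kv[1], reverse=True)
top_label, top_score = ranked[0]
assert top_label == output["mostLikelyClass"]
```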
Prediction
georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db
ID: r2p7wyrbtp7angxbhjgc4fmvhi · Status: Succeeded · Source: Web · Hardware: T4
Input
- labels: Directions, Food, Car Advice
- text2classify: How do i get to the mall?
{
  "labels": "Directions, Food, Car Advice",
  "text2classify": "How do i get to the mall?"
}
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run georgedavila/bart-large-mnli-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db",
  {
    input: {
      labels: "Directions, Food, Car Advice",
      text2classify: "How do i get to the mall?"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run georgedavila/bart-large-mnli-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db",
    input={
        "labels": "Directions, Food, Car Advice",
        "text2classify": "How do i get to the mall?"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run georgedavila/bart-large-mnli-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db",
    "input": {
      "labels": "Directions, Food, Car Advice",
      "text2classify": "How do i get to the mall?"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{
  "allClasses": {
    "Food": 0.018019305542111397,
    "Car Advice": 0.058397695422172546,
    "Directions": 0.9235830307006836
  },
  "mostLikelyClass": "Directions"
}

Full prediction record:

{
  "completed_at": "2024-01-03T02:05:24.181237Z",
  "created_at": "2024-01-03T02:05:20.076776Z",
  "data_removed": false,
  "error": null,
  "id": "r2p7wyrbtp7angxbhjgc4fmvhi",
  "input": {
    "labels": "Directions, Food, Car Advice",
    "text2classify": "How do i get to the mall?"
  },
  "logs": null,
  "metrics": {
    "predict_time": 4.085678,
    "total_time": 4.104461
  },
  "output": {
    "allClasses": {
      "Food": 0.018019305542111397,
      "Car Advice": 0.058397695422172546,
      "Directions": 0.9235830307006836
    },
    "mostLikelyClass": "Directions"
  },
  "started_at": "2024-01-03T02:05:20.095559Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/r2p7wyrbtp7angxbhjgc4fmvhi",
    "cancel": "https://api.replicate.com/v1/predictions/r2p7wyrbtp7angxbhjgc4fmvhi/cancel"
  },
  "version": "d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db"
}
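With more than two labels it can be useful to accept the prediction only when it clears a confidence threshold. A sketch using the three-class result above (the 0.5 cutoff is an arbitrary choice for illustration, not a model recommendation):

```python
all_classes = {
    "Food": 0.018019305542111397,
    "Car Advice": 0.058397695422172546,
    "Directions": 0.9235830307006836,
}

THRESHOLD = 0.5  # arbitrary cutoff for "confident enough"

# Accept the top class only if its likelihood clears the threshold
best_label = max(all_classes, key=all_classes.get)
label = best_label if all_classes[best_label] >= THRESHOLD else None
```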
Prediction
georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db
ID: wwjcl6rbtjkwtyy43knkb462fm · Status: Succeeded · Source: Web · Hardware: T4
Input
- labels: Cooking Instructions, Question about Astronomy
- text2classify: Add salt to boiling water to prevent pasta from sticking together
{
  "labels": "Cooking Instructions, Question about Astronomy",
  "text2classify": "Add salt to boiling water to prevent pasta from sticking together"
}
Install Replicate’s Node.js client library:
npm install replicate
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run georgedavila/bart-large-mnli-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db",
  {
    input: {
      labels: "Cooking Instructions, Question about Astronomy",
      text2classify: "Add salt to boiling water to prevent pasta from sticking together"
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Import the client:
import replicate
Run georgedavila/bart-large-mnli-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db",
    input={
        "labels": "Cooking Instructions, Question about Astronomy",
        "text2classify": "Add salt to boiling water to prevent pasta from sticking together"
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
Run georgedavila/bart-large-mnli-classifier using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "georgedavila/bart-large-mnli-classifier:d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db",
    "input": {
      "labels": "Cooking Instructions, Question about Astronomy",
      "text2classify": "Add salt to boiling water to prevent pasta from sticking together"
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Output
{
  "allClasses": {
    "Cooking Instructions": 0.9597448110580444,
    "Question about Astronomy": 0.04025513678789139
  },
  "mostLikelyClass": "Cooking Instructions"
}

Full prediction record:

{
  "completed_at": "2024-01-03T01:30:08.099531Z",
  "created_at": "2024-01-03T01:28:02.416962Z",
  "data_removed": false,
  "error": null,
  "id": "wwjcl6rbtjkwtyy43knkb462fm",
  "input": {
    "labels": "Cooking Instructions, Question about Astronomy",
    "text2classify": "Add salt to boiling water to prevent pasta from sticking together"
  },
  "logs": "(config.json, model.safetensors, and tokenizer download progress bars omitted)",
  "metrics": {
    "predict_time": 12.641566,
    "total_time": 125.682569
  },
  "output": {
    "allClasses": {
      "Cooking Instructions": 0.9597448110580444,
      "Question about Astronomy": 0.04025513678789139
    },
    "mostLikelyClass": "Cooking Instructions"
  },
  "started_at": "2024-01-03T01:29:55.457965Z",
  "status": "succeeded",
  "urls": {
    "get": "https://api.replicate.com/v1/predictions/wwjcl6rbtjkwtyy43knkb462fm",
    "cancel": "https://api.replicate.com/v1/predictions/wwjcl6rbtjkwtyy43knkb462fm/cancel"
  },
  "version": "d929487cf059f96a17752ebe55ae5a85b2e8be6cd627078e49c6caa2fd4213db"
}
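The predict_time metric in a prediction record matches the gap between its started_at and completed_at timestamps, so you can recompute latency from the record yourself. A sketch (note that datetime.fromisoformat in Pythons before 3.11 does not accept a trailing Z, hence the replace):

```python
from datetime import datetime

def duration_seconds(started_at: str, completed_at: str) -> float:
    """Compute elapsed seconds between two ISO-8601 timestamps from a prediction record."""
    parse = lambda s: datetime.fromisoformat(s.replace("Z", "+00:00"))
    return (parse(completed_at) - parse(started_at)).total_seconds()

elapsed = duration_seconds("2024-01-03T01:29:55.457965Z", "2024-01-03T01:30:08.099531Z")
# elapsed ≈ 12.64 seconds, matching the predict_time metric above
```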
Want to make some of these yourself?
Run this model