hudsongraeme/cybertruck
SDXL trained on a small cybertruck dataset
- Public
- 534 runs
- Hardware: L40S
- SDXL fine-tune
Prediction
hudsongraeme/cybertruck:4e7b9292
- ID: 45a5lflblmaagf3bif7o6oe4fe
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)

Input
- width: 1024
- height: 1024
- prompt: A photo of TOK driving on the highway
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "A photo of TOK driving on the highway", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library:

npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
Import and set up the client:

import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da",
  {
    input: {
      width: 1024,
      height: 1024,
      prompt: "A photo of TOK driving on the highway",
      refine: "no_refiner",
      scheduler: "K_EULER",
      lora_scale: 0.6,
      num_outputs: 1,
      guidance_scale: 7.5,
      apply_watermark: true,
      high_noise_frac: 0.8,
      negative_prompt: "",
      prompt_strength: 0.8,
      num_inference_steps: 50
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
Import the client:

import replicate
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da",
    input={
        "width": 1024,
        "height": 1024,
        "prompt": "A photo of TOK driving on the highway",
        "refine": "no_refiner",
        "scheduler": "K_EULER",
        "lora_scale": 0.6,
        "num_outputs": 1,
        "guidance_scale": 7.5,
        "apply_watermark": True,
        "high_noise_frac": 0.8,
        "negative_prompt": "",
        "prompt_strength": 0.8,
        "num_inference_steps": 50
    }
)
print(output)
To learn more, take a look at the guide on getting started with Python.
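The model returns a list of delivery URLs rather than the image bytes themselves, so a common follow-up step is downloading each URL to disk. A minimal sketch using only the standard library; the `output_filename` and `save_outputs` helpers and the `outputs/` directory are illustrative names, not part of the Replicate client:

```python
import os
import urllib.request

def output_filename(url):
    # Derive a local file name from a delivery URL,
    # e.g. ".../out-0.png" -> "out-0.png"
    return url.rstrip("/").rsplit("/", 1)[-1]

def save_outputs(urls, dest_dir="outputs"):
    # Download each generated image into dest_dir, returning local paths
    os.makedirs(dest_dir, exist_ok=True)
    paths = []
    for url in urls:
        path = os.path.join(dest_dir, output_filename(url))
        urllib.request.urlretrieve(url, path)
        paths.append(path)
    return paths
```

With `num_outputs: 1` the returned list holds a single URL, so `save_outputs(output)` would write one `out-0.png` locally.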
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{ "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK driving on the highway", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
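Without the `Prefer: wait` header the API returns the prediction record immediately, before generation finishes, along with a `urls.get` endpoint (visible in the output JSON below each example) that you can poll until the status becomes terminal. A hedged sketch of that polling loop using only the standard library; the `poll_prediction` helper is an illustrative name:

```python
import json
import time
import urllib.request

# Statuses after which the record will no longer change
TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def poll_prediction(get_url, token, interval=2.0):
    # Re-fetch the prediction record until it reaches a terminal status
    request = urllib.request.Request(
        get_url, headers={"Authorization": f"Bearer {token}"}
    )
    while True:
        with urllib.request.urlopen(request) as response:
            prediction = json.load(response)
        if prediction["status"] in TERMINAL_STATUSES:
            return prediction
        time.sleep(interval)
```

For this model, `get_url` would be the `urls.get` value from the create response, e.g. `https://api.replicate.com/v1/predictions/45a5lflblmaagf3bif7o6oe4fe`.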
You can run this model locally using Cog. First, install Cog:

brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="A photo of TOK driving on the highway"' \
  -i 'refine="no_refiner"' \
  -i 'scheduler="K_EULER"' \
  -i 'lora_scale=0.6' \
  -i 'num_outputs=1' \
  -i 'guidance_scale=7.5' \
  -i 'apply_watermark=true' \
  -i 'high_noise_frac=0.8' \
  -i 'negative_prompt=""' \
  -i 'prompt_strength=0.8' \
  -i 'num_inference_steps=50'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{ "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK driving on the highway", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
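The same local endpoint can be called from Python instead of curl. This sketch assumes the Docker container above is already running and that, like the hosted API, the local Cog server returns a JSON body; the `predict_local` helper is an illustrative name:

```python
import json
import urllib.request

def predict_local(inputs, url="http://localhost:5000/predictions"):
    # POST an input dict to the local Cog server started with `docker run`
    body = json.dumps({"input": inputs}).encode("utf-8")
    request = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response)

# Example payload mirroring the curl command above
payload = {
    "prompt": "A photo of TOK driving on the highway",
    "width": 1024,
    "height": 1024,
}
```

Calling `predict_local(payload)` would then return the prediction record as a Python dict.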
Output
{ "completed_at": "2023-09-23T21:23:45.783145Z", "created_at": "2023-09-23T21:23:30.031531Z", "data_removed": false, "error": null, "id": "45a5lflblmaagf3bif7o6oe4fe", "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK driving on the highway", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": null, "metrics": { "predict_time": 14.781138, "total_time": 15.751614 }, "output": [ "https://replicate.delivery/pbxt/OsS0hG6ZDkrNLRvOR0pmE1lc3aY0mKnARFQflLGxZElwSozIA/out-0.png" ], "started_at": "2023-09-23T21:23:31.002007Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/45a5lflblmaagf3bif7o6oe4fe", "cancel": "https://api.replicate.com/v1/predictions/45a5lflblmaagf3bif7o6oe4fe/cancel" }, "version": "8d6b87c2a6de0c8ff54ac79aaf9951eeedc74e310cbb44b31a68de3423c2cd67" }
Prediction
hudsongraeme/cybertruck:4e7b9292
- ID: sufsxo3bv3lm4bbccbppov77sm
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)

Input
- width: 1024
- height: 1024
- prompt: A photo of TOK driving on the highway
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "A photo of TOK driving on the highway", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Output
{ "completed_at": "2023-09-23T23:32:51.118546Z", "created_at": "2023-09-23T23:32:32.769023Z", "data_removed": false, "error": null, "id": "sufsxo3bv3lm4bbccbppov77sm", "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK driving on the highway", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": null, "metrics": { "predict_time": 16.597095, "total_time": 18.349523 }, "output": [ "https://pbxt.replicate.delivery/Pi4HbOyYV5pdOl8seLI8TfZaq1AnHLaqRZKXPtZk9XqiekOjA/out-0.png" ], "started_at": "2023-09-23T23:32:34.521451Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/sufsxo3bv3lm4bbccbppov77sm", "cancel": "https://api.replicate.com/v1/predictions/sufsxo3bv3lm4bbccbppov77sm/cancel" }, "version": "50ef505f835eb26967d7f3df96103ee0a90d51eeaea60bf7c2372e6ef70b0d06" }
Prediction
hudsongraeme/cybertruck:4e7b9292
- ID: a4vh2h3btuawwjusgmntecwoxm
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)

Input
- width: 1024
- height: 1024
- prompt: A photo of TOK drifting
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 4
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "A photo of TOK drifting", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 4, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Output
{ "completed_at": "2023-10-06T17:38:40.838180Z", "created_at": "2023-10-06T17:37:38.680365Z", "data_removed": false, "error": null, "id": "a4vh2h3btuawwjusgmntecwoxm", "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK drifting", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 4, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 880\nEnsuring enough disk space...\nFree disk space: 3320234479616\nDownloading weights: https://pbxt.replicate.delivery/4h6fsYXIdRXfrkTavb4PfmCshSz7LnHFKsOyVMJXHSWa1kOjA/trained_model.tar\nb''\nDownloaded weights in 0.4663057327270508 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: A photo of <s0><s1> drifting\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]/root/.pyenv/versions/3.9.18/lib/python3.9/site-packages/torch/nn/modules/conv.py:459: UserWarning: Applied workaround for CuDNN issue, install nvrtc.so (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:80.)\nreturn F.conv2d(input, weight, bias, self.stride,\n 2%|▏ | 1/50 [00:01<01:21, 1.67s/it]\n 4%|▍ | 2/50 [00:02<01:02, 1.31s/it]\n 6%|▌ | 3/50 [00:03<00:56, 1.20s/it]\n 8%|▊ | 4/50 [00:04<00:52, 1.15s/it]\n 10%|█ | 5/50 [00:05<00:50, 1.12s/it]\n 12%|█▏ | 6/50 [00:06<00:48, 1.10s/it]\n 14%|█▍ | 7/50 [00:08<00:46, 1.09s/it]\n 16%|█▌ | 8/50 [00:09<00:45, 1.08s/it]\n 18%|█▊ | 9/50 [00:10<00:44, 1.08s/it]\n 20%|██ | 10/50 [00:11<00:42, 1.07s/it]\n 22%|██▏ | 11/50 [00:12<00:41, 1.07s/it]\n 24%|██▍ | 12/50 [00:13<00:40, 1.07s/it]\n 26%|██▌ | 13/50 [00:14<00:39, 1.06s/it]\n 28%|██▊ | 14/50 [00:15<00:38, 1.06s/it]\n 30%|███ | 15/50 [00:16<00:37, 1.06s/it]\n 32%|███▏ | 16/50 [00:17<00:36, 1.06s/it]\n 34%|███▍ | 17/50 [00:18<00:35, 1.06s/it]\n 36%|███▌ | 18/50 [00:19<00:33, 1.06s/it]\n 38%|███▊ | 19/50 [00:20<00:32, 1.06s/it]\n 40%|████ | 20/50 [00:21<00:31, 
1.06s/it]\n 42%|████▏ | 21/50 [00:22<00:30, 1.06s/it]\n 44%|████▍ | 22/50 [00:23<00:29, 1.06s/it]\n 46%|████▌ | 23/50 [00:25<00:28, 1.06s/it]\n 48%|████▊ | 24/50 [00:26<00:27, 1.06s/it]\n 50%|█████ | 25/50 [00:27<00:26, 1.06s/it]\n 52%|█████▏ | 26/50 [00:28<00:25, 1.07s/it]\n 54%|█████▍ | 27/50 [00:29<00:24, 1.06s/it]\n 56%|█████▌ | 28/50 [00:30<00:23, 1.07s/it]\n 58%|█████▊ | 29/50 [00:31<00:22, 1.07s/it]\n 60%|██████ | 30/50 [00:32<00:21, 1.07s/it]\n 62%|██████▏ | 31/50 [00:33<00:20, 1.07s/it]\n 64%|██████▍ | 32/50 [00:34<00:19, 1.07s/it]\n 66%|██████▌ | 33/50 [00:35<00:18, 1.07s/it]\n 68%|██████▊ | 34/50 [00:36<00:17, 1.07s/it]\n 70%|███████ | 35/50 [00:37<00:16, 1.07s/it]\n 72%|███████▏ | 36/50 [00:38<00:14, 1.07s/it]\n 74%|███████▍ | 37/50 [00:40<00:13, 1.07s/it]\n 76%|███████▌ | 38/50 [00:41<00:12, 1.07s/it]\n 78%|███████▊ | 39/50 [00:42<00:11, 1.07s/it]\n 80%|████████ | 40/50 [00:43<00:10, 1.07s/it]\n 82%|████████▏ | 41/50 [00:44<00:09, 1.07s/it]\n 84%|████████▍ | 42/50 [00:45<00:08, 1.07s/it]\n 86%|████████▌ | 43/50 [00:46<00:07, 1.07s/it]\n 88%|████████▊ | 44/50 [00:47<00:06, 1.07s/it]\n 90%|█████████ | 45/50 [00:48<00:05, 1.07s/it]\n 92%|█████████▏| 46/50 [00:49<00:04, 1.07s/it]\n 94%|█████████▍| 47/50 [00:50<00:03, 1.07s/it]\n 96%|█████████▌| 48/50 [00:51<00:02, 1.07s/it]\n 98%|█████████▊| 49/50 [00:52<00:01, 1.07s/it]\n100%|██████████| 50/50 [00:53<00:00, 1.07s/it]\n100%|██████████| 50/50 [00:53<00:00, 1.08s/it]", "metrics": { "predict_time": 61.773797, "total_time": 62.157815 }, "output": [ "https://pbxt.replicate.delivery/ewjeYerw1mBtgoXLONoMRBXX7BfV2POdfAT6BV4fOeBoPwv1IA/out-0.png", "https://pbxt.replicate.delivery/Jl7HQbx0Jl6FGVEk0jhk0vuLe8l75EtfiY8Z1eTZ8dgfBebNC/out-1.png", "https://pbxt.replicate.delivery/uLex6VJOQT3VPaXgwqUhCZSTp7VzkfEfJV63gSPVMygABftGB/out-2.png", "https://pbxt.replicate.delivery/edmvfKyCSohuL0NWJ4gIshyZqsLWenFw3D8TdNMfqlyCCebNC/out-3.png" ], "started_at": "2023-10-06T17:37:39.064383Z", "status": "succeeded", "urls": { "get": 
"https://api.replicate.com/v1/predictions/a4vh2h3btuawwjusgmntecwoxm", "cancel": "https://api.replicate.com/v1/predictions/a4vh2h3btuawwjusgmntecwoxm/cancel" }, "version": "50ef505f835eb26967d7f3df96103ee0a90d51eeaea60bf7c2372e6ef70b0d06" }
Prediction
hudsongraeme/cybertruck:4e7b9292
- ID: tygkpdtbs7omzsxbsdzcmvuteq
- Status: Succeeded
- Source: Web
- Hardware: A40 (Large)

Input
- width: 1024
- height: 1024
- prompt: A photo of TOK driving in deep water
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 4
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "A photo of TOK driving in deep water", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 4, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Output
{ "completed_at": "2023-10-14T18:12:42.279987Z", "created_at": "2023-10-14T18:11:37.698719Z", "data_removed": false, "error": null, "id": "tygkpdtbs7omzsxbsdzcmvuteq", "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK driving in deep water", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 4, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 56998\nEnsuring enough disk space...\nFree disk space: 3072252948480\nDownloading weights: https://pbxt.replicate.delivery/4h6fsYXIdRXfrkTavb4PfmCshSz7LnHFKsOyVMJXHSWa1kOjA/trained_model.tar\nb'Downloaded 186 MB bytes in 3.357s (55 MB/s)\\nExtracted 186 MB in 0.069s (2.7 GB/s)\\n'\nDownloaded weights in 3.7890095710754395 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: A photo of <s0><s1> driving in deep water\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:01<00:51, 1.06s/it]\n 4%|▍ | 2/50 [00:02<00:50, 1.06s/it]\n 6%|▌ | 3/50 [00:03<00:49, 1.06s/it]\n 8%|▊ | 4/50 [00:04<00:48, 1.06s/it]\n 10%|█ | 5/50 [00:05<00:47, 1.06s/it]\n 12%|█▏ | 6/50 [00:06<00:46, 1.06s/it]\n 14%|█▍ | 7/50 [00:07<00:45, 1.06s/it]\n 16%|█▌ | 8/50 [00:08<00:44, 1.06s/it]\n 18%|█▊ | 9/50 [00:09<00:43, 1.06s/it]\n 20%|██ | 10/50 [00:10<00:42, 1.06s/it]\n 22%|██▏ | 11/50 [00:11<00:41, 1.06s/it]\n 24%|██▍ | 12/50 [00:12<00:40, 1.06s/it]\n 26%|██▌ | 13/50 [00:13<00:39, 1.06s/it]\n 28%|██▊ | 14/50 [00:14<00:38, 1.06s/it]\n 30%|███ | 15/50 [00:15<00:37, 1.06s/it]\n 32%|███▏ | 16/50 [00:16<00:36, 1.06s/it]\n 34%|███▍ | 17/50 [00:18<00:35, 1.06s/it]\n 36%|███▌ | 18/50 [00:19<00:33, 1.06s/it]\n 38%|███▊ | 19/50 [00:20<00:32, 1.06s/it]\n 40%|████ | 20/50 [00:21<00:31, 1.06s/it]\n 42%|████▏ | 21/50 [00:22<00:30, 1.06s/it]\n 44%|████▍ | 22/50 [00:23<00:29, 1.06s/it]\n 46%|████▌ | 23/50 [00:24<00:28, 1.06s/it]\n 48%|████▊ | 
24/50 [00:25<00:27, 1.06s/it]\n 50%|█████ | 25/50 [00:26<00:26, 1.06s/it]\n 52%|█████▏ | 26/50 [00:27<00:25, 1.06s/it]\n 54%|█████▍ | 27/50 [00:28<00:24, 1.06s/it]\n 56%|█████▌ | 28/50 [00:29<00:23, 1.06s/it]\n 58%|█████▊ | 29/50 [00:30<00:22, 1.06s/it]\n 60%|██████ | 30/50 [00:31<00:21, 1.06s/it]\n 62%|██████▏ | 31/50 [00:32<00:20, 1.06s/it]\n 64%|██████▍ | 32/50 [00:33<00:19, 1.06s/it]\n 66%|██████▌ | 33/50 [00:34<00:18, 1.06s/it]\n 68%|██████▊ | 34/50 [00:36<00:16, 1.06s/it]\n 70%|███████ | 35/50 [00:37<00:15, 1.06s/it]\n 72%|███████▏ | 36/50 [00:38<00:14, 1.06s/it]\n 74%|███████▍ | 37/50 [00:39<00:13, 1.06s/it]\n 76%|███████▌ | 38/50 [00:40<00:12, 1.06s/it]\n 78%|███████▊ | 39/50 [00:41<00:11, 1.06s/it]\n 80%|████████ | 40/50 [00:42<00:10, 1.06s/it]\n 82%|████████▏ | 41/50 [00:43<00:09, 1.06s/it]\n 84%|████████▍ | 42/50 [00:44<00:08, 1.06s/it]\n 86%|████████▌ | 43/50 [00:45<00:07, 1.06s/it]\n 88%|████████▊ | 44/50 [00:46<00:06, 1.06s/it]\n 90%|█████████ | 45/50 [00:47<00:05, 1.06s/it]\n 92%|█████████▏| 46/50 [00:48<00:04, 1.06s/it]\n 94%|█████████▍| 47/50 [00:49<00:03, 1.06s/it]\n 96%|█████████▌| 48/50 [00:50<00:02, 1.06s/it]\n 98%|█████████▊| 49/50 [00:51<00:01, 1.06s/it]\n100%|██████████| 50/50 [00:53<00:00, 1.06s/it]\n100%|██████████| 50/50 [00:53<00:00, 1.06s/it]", "metrics": { "predict_time": 62.753609, "total_time": 64.581268 }, "output": [ "https://pbxt.replicate.delivery/oMpU4u6Gkia4OJX9gt241dNBHIf0ypqBpAZymtwZDhGMYE3IA/out-0.png", "https://pbxt.replicate.delivery/MZASGPBfhuSydK3JAFfeWe3ctXqPbaUQ532o3BWHJdMmBj4GB/out-1.png", "https://pbxt.replicate.delivery/1KyfqI3y0gXODKSNPU11zORefKBSKxhRleb7ap3Pg98kBj4GB/out-2.png", "https://pbxt.replicate.delivery/JzVZ00lMEL6UIhlwqZLJNwhQRO8oHj0hezaL4z5TMtNNYE3IA/out-3.png" ], "started_at": "2023-10-14T18:11:39.526378Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/tygkpdtbs7omzsxbsdzcmvuteq", "cancel": "https://api.replicate.com/v1/predictions/tygkpdtbs7omzsxbsdzcmvuteq/cancel" 
}, "version": "50ef505f835eb26967d7f3df96103ee0a90d51eeaea60bf7c2372e6ef70b0d06" }
Prediction
hudsongraeme/cybertruck:4e7b9292 · ID: kzl2ectb3a5ytmbfq22sicbrre · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: A photo of TOK drifting in the desert, surreal, twilight
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: Two
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "A photo of TOK drifting in the desert, surreal, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "Two", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library: npm install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client: import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", { input: { width: 1024, height: 1024, prompt: "A photo of TOK drifting in the desert, surreal, twilight", refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.6, num_outputs: 1, guidance_scale: 7.5, apply_watermark: true, high_noise_frac: 0.8, negative_prompt: "Two", prompt_strength: 0.8, num_inference_steps: 50 } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client: import replicate
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", input={ "width": 1024, "height": 1024, "prompt": "A photo of TOK drifting in the desert, surreal, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True, "high_noise_frac": 0.8, "negative_prompt": "Two", "prompt_strength": 0.8, "num_inference_steps": 50 } ) print(output)
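`replicate.run` returns the model output, here a list with one URL per requested image (`num_outputs`). A minimal sketch of turning that list into local filenames before downloading; the `name_outputs` helper and its prefix are illustrative, not part of the client library:

```python
# Illustrative only: `output` stands in for the list returned by replicate.run().
output = ["https://replicate.delivery/pbxt/example/out-0.png"]

def name_outputs(urls, prefix="cybertruck"):
    """Map each output URL to a local filename like cybertruck-0.png."""
    return {f"{prefix}-{i}.png": url for i, url in enumerate(urls)}

for filename, url in name_outputs(output).items():
    print(filename, "->", url)
```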
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK drifting in the desert, surreal, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "Two", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
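If you drop the `Prefer: wait` header (or the prediction outlives the wait window), the POST returns immediately and you poll the prediction's `urls.get` endpoint until its `status` is terminal (`succeeded`, `failed`, or `canceled`). A sketch of that loop with the HTTP call abstracted as a `fetch` callable, so it can be any authenticated GET that returns the parsed JSON; the function name and defaults are illustrative:

```python
import time

TERMINAL_STATUSES = {"succeeded", "failed", "canceled"}

def wait_for_prediction(fetch, get_url, interval=1.0, max_polls=120):
    """Poll `get_url` via `fetch` until the prediction reaches a terminal status.

    `fetch` is any callable taking a URL and returning the prediction as a
    dict (e.g. an authenticated HTTP GET that parses the JSON body).
    """
    for _ in range(max_polls):
        prediction = fetch(get_url)
        if prediction["status"] in TERMINAL_STATUSES:
            return prediction
        time.sleep(interval)
    raise TimeoutError(f"prediction at {get_url} did not finish in time")
```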
You can run this model locally using Cog. First, install Cog: brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da \ -i 'width=1024' \ -i 'height=1024' \ -i 'prompt="A photo of TOK drifting in the desert, surreal, twilight"' \ -i 'refine="no_refiner"' \ -i 'scheduler="K_EULER"' \ -i 'lora_scale=0.6' \ -i 'num_outputs=1' \ -i 'guidance_scale=7.5' \ -i 'apply_watermark=true' \ -i 'high_noise_frac=0.8' \ -i 'negative_prompt="Two"' \ -i 'prompt_strength=0.8' \ -i 'num_inference_steps=50'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da
curl -s -X POST \ -H "Content-Type: application/json" \ -d $'{ "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK drifting in the desert, surreal, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "Two", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \ http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2023-10-19T04:28:18.187170Z", "created_at": "2023-10-19T04:28:00.033381Z", "data_removed": false, "error": null, "id": "kzl2ectb3a5ytmbfq22sicbrre", "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK drifting in the desert, surreal, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "Two", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 63783\nEnsuring enough disk space...\nFree disk space: 1519262740480\nDownloading weights: https://pbxt.replicate.delivery/LlSIASeBycWpOS1KVqfe31ZjIKdk7Ho3RblERn4k3PgfGXeNC/trained_model.tar\nb'Downloaded 186 MB bytes in 0.342s (544 MB/s)\\nExtracted 186 MB in 0.073s (2.6 GB/s)\\n'\nDownloaded weights in 0.5161697864532471 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: A photo of <s0><s1> drifting in the desert, surreal, twilight\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:00<00:13, 3.68it/s]\n 4%|▍ | 2/50 [00:00<00:13, 3.67it/s]\n 6%|▌ | 3/50 [00:00<00:12, 3.68it/s]\n 8%|▊ | 4/50 [00:01<00:12, 3.68it/s]\n 10%|█ | 5/50 [00:01<00:12, 3.68it/s]\n 12%|█▏ | 6/50 [00:01<00:11, 3.69it/s]\n 14%|█▍ | 7/50 [00:01<00:11, 3.69it/s]\n 16%|█▌ | 8/50 [00:02<00:11, 3.68it/s]\n 18%|█▊ | 9/50 [00:02<00:11, 3.68it/s]\n 20%|██ | 10/50 [00:02<00:10, 3.68it/s]\n 22%|██▏ | 11/50 [00:02<00:10, 3.68it/s]\n 24%|██▍ | 12/50 [00:03<00:10, 3.68it/s]\n 26%|██▌ | 13/50 [00:03<00:10, 3.68it/s]\n 28%|██▊ | 14/50 [00:03<00:09, 3.68it/s]\n 30%|███ | 15/50 [00:04<00:09, 3.68it/s]\n 32%|███▏ | 16/50 [00:04<00:09, 3.68it/s]\n 34%|███▍ | 17/50 [00:04<00:08, 3.68it/s]\n 36%|███▌ | 18/50 [00:04<00:08, 3.68it/s]\n 38%|███▊ | 19/50 [00:05<00:08, 3.68it/s]\n 40%|████ | 20/50 [00:05<00:08, 3.67it/s]\n 42%|████▏ | 21/50 [00:05<00:07, 3.67it/s]\n 44%|████▍ | 22/50 [00:05<00:07, 3.67it/s]\n 46%|████▌ | 
23/50 [00:06<00:07, 3.67it/s]\n 48%|████▊ | 24/50 [00:06<00:07, 3.67it/s]\n 50%|█████ | 25/50 [00:06<00:06, 3.67it/s]\n 52%|█████▏ | 26/50 [00:07<00:06, 3.67it/s]\n 54%|█████▍ | 27/50 [00:07<00:06, 3.66it/s]\n 56%|█████▌ | 28/50 [00:07<00:05, 3.67it/s]\n 58%|█████▊ | 29/50 [00:07<00:05, 3.67it/s]\n 60%|██████ | 30/50 [00:08<00:05, 3.67it/s]\n 62%|██████▏ | 31/50 [00:08<00:05, 3.67it/s]\n 64%|██████▍ | 32/50 [00:08<00:04, 3.67it/s]\n 66%|██████▌ | 33/50 [00:08<00:04, 3.67it/s]\n 68%|██████▊ | 34/50 [00:09<00:04, 3.67it/s]\n 70%|███████ | 35/50 [00:09<00:04, 3.67it/s]\n 72%|███████▏ | 36/50 [00:09<00:03, 3.66it/s]\n 74%|███████▍ | 37/50 [00:10<00:03, 3.66it/s]\n 76%|███████▌ | 38/50 [00:10<00:03, 3.67it/s]\n 78%|███████▊ | 39/50 [00:10<00:03, 3.66it/s]\n 80%|████████ | 40/50 [00:10<00:02, 3.67it/s]\n 82%|████████▏ | 41/50 [00:11<00:02, 3.67it/s]\n 84%|████████▍ | 42/50 [00:11<00:02, 3.66it/s]\n 86%|████████▌ | 43/50 [00:11<00:01, 3.66it/s]\n 88%|████████▊ | 44/50 [00:11<00:01, 3.66it/s]\n 90%|█████████ | 45/50 [00:12<00:01, 3.66it/s]\n 92%|█████████▏| 46/50 [00:12<00:01, 3.66it/s]\n 94%|█████████▍| 47/50 [00:12<00:00, 3.66it/s]\n 96%|█████████▌| 48/50 [00:13<00:00, 3.66it/s]\n 98%|█████████▊| 49/50 [00:13<00:00, 3.66it/s]\n100%|██████████| 50/50 [00:13<00:00, 3.66it/s]\n100%|██████████| 50/50 [00:13<00:00, 3.67it/s]", "metrics": { "predict_time": 16.846425, "total_time": 18.153789 }, "output": [ "https://replicate.delivery/pbxt/9gYNUmuzFuaLI9jyPWGw9MfdgqcNnkpfLgwIEL5ePVYDTMfGB/out-0.png" ], "started_at": "2023-10-19T04:28:01.340745Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/kzl2ectb3a5ytmbfq22sicbrre", "cancel": "https://api.replicate.com/v1/predictions/kzl2ectb3a5ytmbfq22sicbrre/cancel" }, "version": "4cb80e2be47c463f65976fdad5f90179e5c613728a7ab30f723dd9c51a0a1ec9" }
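The `metrics` block gives a quick sanity check on throughput: `predict_time` covers weight download and model load as well as sampling, so steps divided by `predict_time` (about 3.0 it/s here) comes out a little below the ~3.67 it/s the progress bar reports for the denoising loop alone. A small sketch (the helper name is illustrative):

```python
def steps_per_second(predict_time, num_inference_steps):
    """Overall denoising throughput implied by the prediction metrics.

    Slightly understates the progress-bar rate because predict_time also
    includes weight download and model loading.
    """
    return num_inference_steps / predict_time

# Values taken from the prediction response above.
rate = steps_per_second(16.846425, 50)
print(f"{rate:.2f} it/s overall")
```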
Prediction
hudsongraeme/cybertruck:4e7b9292 · ID: 7v2bhctbcqteuewzvmf4zjv6hy · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: A photo of TOK driving in deep water
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.66
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "A photo of TOK driving in deep water", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.66, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library: npm install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client: import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", { input: { width: 1024, height: 1024, prompt: "A photo of TOK driving in deep water", refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.66, num_outputs: 1, guidance_scale: 7.5, apply_watermark: true, high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.8, num_inference_steps: 50 } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client: import replicate
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", input={ "width": 1024, "height": 1024, "prompt": "A photo of TOK driving in deep water", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.66, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } ) print(output)
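The only change from the earlier deep-water prediction is lora_scale (0.66 vs 0.6). When comparing LoRA strengths like this, it helps to generate the candidate inputs programmatically; a hedged sketch where the `lora_sweep` helper and the base-input dict are illustrative, not part of the client library:

```python
# Base input mirroring the values used in the prediction above.
BASE_INPUT = {
    "width": 1024,
    "height": 1024,
    "prompt": "A photo of TOK driving in deep water",
    "refine": "no_refiner",
    "scheduler": "K_EULER",
    "num_outputs": 1,
    "guidance_scale": 7.5,
    "apply_watermark": True,
    "high_noise_frac": 0.8,
    "negative_prompt": "",
    "prompt_strength": 0.8,
    "num_inference_steps": 50,
}

def lora_sweep(scales):
    """One input dict per LoRA scale, each ready to pass to replicate.run()."""
    return [{**BASE_INPUT, "lora_scale": s} for s in scales]

inputs = lora_sweep([0.6, 0.66, 0.8])
```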
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK driving in deep water", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.66, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog: brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da \ -i 'width=1024' \ -i 'height=1024' \ -i 'prompt="A photo of TOK driving in deep water"' \ -i 'refine="no_refiner"' \ -i 'scheduler="K_EULER"' \ -i 'lora_scale=0.66' \ -i 'num_outputs=1' \ -i 'guidance_scale=7.5' \ -i 'apply_watermark=true' \ -i 'high_noise_frac=0.8' \ -i 'negative_prompt=""' \ -i 'prompt_strength=0.8' \ -i 'num_inference_steps=50'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da
curl -s -X POST \ -H "Content-Type: application/json" \ -d $'{ "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK driving in deep water", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.66, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \ http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2023-10-19T04:32:16.858001Z", "created_at": "2023-10-19T04:32:00.158306Z", "data_removed": false, "error": null, "id": "7v2bhctbcqteuewzvmf4zjv6hy", "input": { "width": 1024, "height": 1024, "prompt": "A photo of TOK driving in deep water", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.66, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 40328\nEnsuring enough disk space...\nFree disk space: 2440714092544\nDownloading weights: https://pbxt.replicate.delivery/LlSIASeBycWpOS1KVqfe31ZjIKdk7Ho3RblERn4k3PgfGXeNC/trained_model.tar\nb'Downloaded 186 MB bytes in 0.229s (812 MB/s)\\nExtracted 186 MB in 0.049s (3.8 GB/s)\\n'\nDownloaded weights in 0.38337254524230957 seconds\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: A photo of <s0><s1> driving in deep water\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:00<00:13, 3.66it/s]\n 4%|▍ | 2/50 [00:00<00:13, 3.65it/s]\n 6%|▌ | 3/50 [00:00<00:12, 3.65it/s]\n 8%|▊ | 4/50 [00:01<00:12, 3.65it/s]\n 10%|█ | 5/50 [00:01<00:12, 3.65it/s]\n 12%|█▏ | 6/50 [00:01<00:12, 3.65it/s]\n 14%|█▍ | 7/50 [00:01<00:11, 3.65it/s]\n 16%|█▌ | 8/50 [00:02<00:11, 3.64it/s]\n 18%|█▊ | 9/50 [00:02<00:11, 3.64it/s]\n 20%|██ | 10/50 [00:02<00:10, 3.64it/s]\n 22%|██▏ | 11/50 [00:03<00:10, 3.65it/s]\n 24%|██▍ | 12/50 [00:03<00:10, 3.65it/s]\n 26%|██▌ | 13/50 [00:03<00:10, 3.65it/s]\n 28%|██▊ | 14/50 [00:03<00:09, 3.65it/s]\n 30%|███ | 15/50 [00:04<00:09, 3.64it/s]\n 32%|███▏ | 16/50 [00:04<00:09, 3.64it/s]\n 34%|███▍ | 17/50 [00:04<00:09, 3.64it/s]\n 36%|███▌ | 18/50 [00:04<00:08, 3.64it/s]\n 38%|███▊ | 19/50 [00:05<00:08, 3.64it/s]\n 40%|████ | 20/50 [00:05<00:08, 3.64it/s]\n 42%|████▏ | 21/50 [00:05<00:07, 3.64it/s]\n 44%|████▍ | 22/50 [00:06<00:07, 3.64it/s]\n 46%|████▌ | 23/50 [00:06<00:07, 3.64it/s]\n 48%|████▊ | 
24/50 [00:06<00:07, 3.64it/s]\n 50%|█████ | 25/50 [00:06<00:06, 3.64it/s]\n 52%|█████▏ | 26/50 [00:07<00:06, 3.64it/s]\n 54%|█████▍ | 27/50 [00:07<00:06, 3.64it/s]\n 56%|█████▌ | 28/50 [00:07<00:06, 3.64it/s]\n 58%|█████▊ | 29/50 [00:07<00:05, 3.64it/s]\n 60%|██████ | 30/50 [00:08<00:05, 3.64it/s]\n 62%|██████▏ | 31/50 [00:08<00:05, 3.64it/s]\n 64%|██████▍ | 32/50 [00:08<00:04, 3.64it/s]\n 66%|██████▌ | 33/50 [00:09<00:04, 3.63it/s]\n 68%|██████▊ | 34/50 [00:09<00:04, 3.64it/s]\n 70%|███████ | 35/50 [00:09<00:04, 3.64it/s]\n 72%|███████▏ | 36/50 [00:09<00:03, 3.64it/s]\n 74%|███████▍ | 37/50 [00:10<00:03, 3.63it/s]\n 76%|███████▌ | 38/50 [00:10<00:03, 3.64it/s]\n 78%|███████▊ | 39/50 [00:10<00:03, 3.64it/s]\n 80%|████████ | 40/50 [00:10<00:02, 3.64it/s]\n 82%|████████▏ | 41/50 [00:11<00:02, 3.63it/s]\n 84%|████████▍ | 42/50 [00:11<00:02, 3.64it/s]\n 86%|████████▌ | 43/50 [00:11<00:01, 3.64it/s]\n 88%|████████▊ | 44/50 [00:12<00:01, 3.64it/s]\n 90%|█████████ | 45/50 [00:12<00:01, 3.63it/s]\n 92%|█████████▏| 46/50 [00:12<00:01, 3.64it/s]\n 94%|█████████▍| 47/50 [00:12<00:00, 3.64it/s]\n 96%|█████████▌| 48/50 [00:13<00:00, 3.64it/s]\n 98%|█████████▊| 49/50 [00:13<00:00, 3.63it/s]\n100%|██████████| 50/50 [00:13<00:00, 3.63it/s]\n100%|██████████| 50/50 [00:13<00:00, 3.64it/s]", "metrics": { "predict_time": 15.954989, "total_time": 16.699695 }, "output": [ "https://replicate.delivery/pbxt/4dguZeeVeaEfAS0ioqAL0H8shwFaPopeejdApxfCmHsfQNmvRA/out-0.png" ], "started_at": "2023-10-19T04:32:00.903012Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/7v2bhctbcqteuewzvmf4zjv6hy", "cancel": "https://api.replicate.com/v1/predictions/7v2bhctbcqteuewzvmf4zjv6hy/cancel" }, "version": "4cb80e2be47c463f65976fdad5f90179e5c613728a7ab30f723dd9c51a0a1ec9" }
Prediction
hudsongraeme/cybertruck:4e7b9292 · ID: ez7rob3b7327waexiqmcwrq7qe · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: A photo of a TOK driving extremely fast off road, twilight
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.8
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.6
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "width": 1024, "height": 1024, "prompt": "A photo of a TOK driving extremely fast off road, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.8, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.6, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Install Replicate’s Node.js client library: npm install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client: import Replicate from "replicate"; const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN, });
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", { input: { width: 1024, height: 1024, prompt: "A photo of a TOK driving extremely fast off road, twilight", refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.8, num_outputs: 1, guidance_scale: 7.5, apply_watermark: true, high_noise_frac: 0.6, negative_prompt: "", prompt_strength: 0.8, num_inference_steps: 50 } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library: pip install replicate
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client: import replicate
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", input={ "width": 1024, "height": 1024, "prompt": "A photo of a TOK driving extremely fast off road, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.8, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True, "high_noise_frac": 0.6, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable: export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", "input": { "width": 1024, "height": 1024, "prompt": "A photo of a TOK driving extremely fast off road, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.8, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.6, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog: brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da \ -i 'width=1024' \ -i 'height=1024' \ -i 'prompt="A photo of a TOK driving extremely fast off road, twilight"' \ -i 'refine="no_refiner"' \ -i 'scheduler="K_EULER"' \ -i 'lora_scale=0.8' \ -i 'num_outputs=1' \ -i 'guidance_scale=7.5' \ -i 'apply_watermark=true' \ -i 'high_noise_frac=0.6' \ -i 'negative_prompt=""' \ -i 'prompt_strength=0.8' \ -i 'num_inference_steps=50'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da
curl -s -X POST \ -H "Content-Type: application/json" \ -d $'{ "input": { "width": 1024, "height": 1024, "prompt": "A photo of a TOK driving extremely fast off road, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.8, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.6, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \ http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2023-10-19T04:44:57.472564Z", "created_at": "2023-10-19T04:44:39.496140Z", "data_removed": false, "error": null, "id": "ez7rob3b7327waexiqmcwrq7qe", "input": { "width": 1024, "height": 1024, "prompt": "A photo of a TOK driving extremely fast off road, twilight", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.8, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.6, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 46920\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: A photo of a <s0><s1> driving extremely fast off road, twilight\ntxt2img mode\n 0%| | 0/50 [00:00<?, ?it/s]\n 2%|▏ | 1/50 [00:00<00:13, 3.66it/s]\n 4%|▍ | 2/50 [00:00<00:13, 3.65it/s]\n 6%|▌ | 3/50 [00:00<00:12, 3.64it/s]\n 8%|▊ | 4/50 [00:01<00:12, 3.63it/s]\n 10%|█ | 5/50 [00:01<00:12, 3.64it/s]\n 12%|█▏ | 6/50 [00:01<00:12, 3.65it/s]\n 14%|█▍ | 7/50 [00:01<00:11, 3.64it/s]\n 16%|█▌ | 8/50 [00:02<00:11, 3.64it/s]\n 18%|█▊ | 9/50 [00:02<00:11, 3.64it/s]\n 20%|██ | 10/50 [00:02<00:10, 3.65it/s]\n 22%|██▏ | 11/50 [00:03<00:10, 3.65it/s]\n 24%|██▍ | 12/50 [00:03<00:10, 3.65it/s]\n 26%|██▌ | 13/50 [00:03<00:10, 3.64it/s]\n 28%|██▊ | 14/50 [00:03<00:09, 3.65it/s]\n 30%|███ | 15/50 [00:04<00:09, 3.65it/s]\n 32%|███▏ | 16/50 [00:04<00:09, 3.64it/s]\n 34%|███▍ | 17/50 [00:04<00:09, 3.64it/s]\n 36%|███▌ | 18/50 [00:04<00:08, 3.65it/s]\n 38%|███▊ | 19/50 [00:05<00:08, 3.65it/s]\n 40%|████ | 20/50 [00:05<00:08, 3.65it/s]\n 42%|████▏ | 21/50 [00:05<00:07, 3.65it/s]\n 44%|████▍ | 22/50 [00:06<00:07, 3.65it/s]\n 46%|████▌ | 23/50 [00:06<00:07, 3.65it/s]\n 48%|████▊ | 24/50 [00:06<00:07, 3.65it/s]\n 50%|█████ | 25/50 [00:06<00:06, 3.65it/s]\n 52%|█████▏ | 26/50 [00:07<00:06, 3.64it/s]\n 54%|█████▍ | 27/50 [00:07<00:06, 3.64it/s]\n 56%|█████▌ | 28/50 [00:07<00:06, 3.64it/s]\n 58%|█████▊ | 29/50 [00:07<00:05, 3.64it/s]\n 60%|██████ | 30/50 
[00:08<00:05, 3.65it/s]\n 62%|██████▏ | 31/50 [00:08<00:05, 3.64it/s]\n 64%|██████▍ | 32/50 [00:08<00:04, 3.64it/s]\n 66%|██████▌ | 33/50 [00:09<00:04, 3.64it/s]\n 68%|██████▊ | 34/50 [00:09<00:04, 3.64it/s]\n 70%|███████ | 35/50 [00:09<00:04, 3.64it/s]\n 72%|███████▏ | 36/50 [00:09<00:03, 3.64it/s]\n 74%|███████▍ | 37/50 [00:10<00:03, 3.64it/s]\n 76%|███████▌ | 38/50 [00:10<00:03, 3.65it/s]\n 78%|███████▊ | 39/50 [00:10<00:03, 3.64it/s]\n 80%|████████ | 40/50 [00:10<00:02, 3.64it/s]\n 82%|████████▏ | 41/50 [00:11<00:02, 3.64it/s]\n 84%|████████▍ | 42/50 [00:11<00:02, 3.64it/s]\n 86%|████████▌ | 43/50 [00:11<00:01, 3.64it/s]\n 88%|████████▊ | 44/50 [00:12<00:01, 3.64it/s]\n 90%|█████████ | 45/50 [00:12<00:01, 3.64it/s]\n 92%|█████████▏| 46/50 [00:12<00:01, 3.64it/s]\n 94%|█████████▍| 47/50 [00:12<00:00, 3.64it/s]\n 96%|█████████▌| 48/50 [00:13<00:00, 3.64it/s]\n 98%|█████████▊| 49/50 [00:13<00:00, 3.64it/s]\n100%|██████████| 50/50 [00:13<00:00, 3.64it/s]\n100%|██████████| 50/50 [00:13<00:00, 3.64it/s]", "metrics": { "predict_time": 16.411233, "total_time": 17.976424 }, "output": [ "https://replicate.delivery/pbxt/zXSBarRHjAZmOF28DK50UepzqILT5t8nIK2y3weziJzIZmvRA/out-0.png" ], "started_at": "2023-10-19T04:44:41.061331Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/ez7rob3b7327waexiqmcwrq7qe", "cancel": "https://api.replicate.com/v1/predictions/ez7rob3b7327waexiqmcwrq7qe/cancel" }, "version": "4cb80e2be47c463f65976fdad5f90179e5c613728a7ab30f723dd9c51a0a1ec9" }
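As a back-of-the-envelope check on the numbers above: the progress log holds a steady ≈3.64 it/s, so 50 denoising steps take about 13.7 s, and the remainder of the 16.4 s `predict_time` goes to loading the fine-tuned LoRA weights. A small sketch of that arithmetic (the 3.64 it/s figure is read off the log above; the split is an estimate, not something the API reports):

```python
# Estimate denoising wall time from the tqdm rate in the prediction log,
# then attribute the rest of predict_time to setup (model/LoRA loading).

def estimate_denoise_seconds(num_steps: int, its_per_sec: float) -> float:
    """Steps divided by the steady-state iteration rate."""
    return num_steps / its_per_sec

denoise = estimate_denoise_seconds(50, 3.64)   # ~13.7 s
overhead = 16.411233 - denoise                 # predict_time from the output JSON
print(f"denoising ~= {denoise:.1f}s, setup overhead ~= {overhead:.1f}s")
```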
Prediction
hudsongraeme/cybertruck:4e7b9292 · ID: fth6ndlbcusfcykjavzip56pyy · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: A photo of black TOK parked in a dark garage, headlights
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.75
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.8
- num_inference_steps: 50
{ "image": "https://replicate.delivery/pbxt/Jj0KGYKURkm5OQYujRDCqTF2w24KSdXQ2cJfjnsXA4B4nPt6/1d052dc9dc5f0479b2aad37b283bf9dc.jpg", "width": 1024, "height": 1024, "prompt": "A photo of black TOK parked in a dark garage, headlights", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.75, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", { input: { image: "https://replicate.delivery/pbxt/Jj0KGYKURkm5OQYujRDCqTF2w24KSdXQ2cJfjnsXA4B4nPt6/1d052dc9dc5f0479b2aad37b283bf9dc.jpg", width: 1024, height: 1024, prompt: "A photo of black TOK parked in a dark garage, headlights", refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.75, num_outputs: 1, guidance_scale: 7.5, apply_watermark: true, high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.8, num_inference_steps: 50 } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", input={ "image": "https://replicate.delivery/pbxt/Jj0KGYKURkm5OQYujRDCqTF2w24KSdXQ2cJfjnsXA4B4nPt6/1d052dc9dc5f0479b2aad37b283bf9dc.jpg", "width": 1024, "height": 1024, "prompt": "A photo of black TOK parked in a dark garage, headlights", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.75, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } ) print(output)
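For this model, `replicate.run` returns the output as a list of image URLs (as in the output JSON further down; newer client versions may wrap these in file objects instead, so treat the plain-string form as an assumption). A small stdlib helper can derive a local filename before downloading:

```python
from urllib.parse import urlparse
from pathlib import PurePosixPath

def output_filename(url: str) -> str:
    """Last path segment of a replicate.delivery URL, e.g. 'out-0.png'."""
    return PurePosixPath(urlparse(url).path).name

# The URL below is this prediction's actual output; to save it locally you
# could pass it to urllib.request.urlretrieve(url, output_filename(url)).
url = "https://replicate.delivery/pbxt/nAjQUFG36mLjERZhZ6FozpcQPf8mZG0jFHt1nI2Wf9mmtmvRA/out-0.png"
print(output_filename(url))  # out-0.png
```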
To learn more, take a look at the guide on getting started with Python.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", "input": { "image": "https://replicate.delivery/pbxt/Jj0KGYKURkm5OQYujRDCqTF2w24KSdXQ2cJfjnsXA4B4nPt6/1d052dc9dc5f0479b2aad37b283bf9dc.jpg", "width": 1024, "height": 1024, "prompt": "A photo of black TOK parked in a dark garage, headlights", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.75, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \ https://api.replicate.com/v1/predictions
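The same HTTP call can be built with Python's standard library. This is a sketch, not the official client: it constructs the request (with the `Prefer: wait` header, which asks the API to block until the prediction finishes or times out) but leaves the actual send commented out, and it includes only a subset of the inputs for brevity:

```python
import json
import os
import urllib.request

API_URL = "https://api.replicate.com/v1/predictions"

# Same shape as the curl payload above; REPLICATE_API_TOKEN is assumed
# to be set in the environment.
payload = {
    "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da",
    "input": {
        "prompt": "A photo of black TOK parked in a dark garage, headlights",
        "width": 1024,
        "height": 1024,
    },
}

req = urllib.request.Request(
    API_URL,
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {os.environ.get('REPLICATE_API_TOKEN', '')}",
        "Content-Type": "application/json",
        "Prefer": "wait",  # block until the prediction completes (or times out)
    },
    method="POST",
)
# resp = urllib.request.urlopen(req)  # uncomment to actually send the request
```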
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog: brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da \ -i 'image="https://replicate.delivery/pbxt/Jj0KGYKURkm5OQYujRDCqTF2w24KSdXQ2cJfjnsXA4B4nPt6/1d052dc9dc5f0479b2aad37b283bf9dc.jpg"' \ -i 'width=1024' \ -i 'height=1024' \ -i 'prompt="A photo of black TOK parked in a dark garage, headlights"' \ -i 'refine="no_refiner"' \ -i 'scheduler="K_EULER"' \ -i 'lora_scale=0.75' \ -i 'num_outputs=1' \ -i 'guidance_scale=7.5' \ -i 'apply_watermark=true' \ -i 'high_noise_frac=0.8' \ -i 'negative_prompt=""' \ -i 'prompt_strength=0.8' \ -i 'num_inference_steps=50'
To learn more, take a look at the Cog documentation.
Alternatively, run the model as a local HTTP server with Docker, then send it prediction requests:
docker run -d -p 5000:5000 --gpus=all r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da
curl -s -X POST \ -H "Content-Type: application/json" \ -d $'{ "input": { "image": "https://replicate.delivery/pbxt/Jj0KGYKURkm5OQYujRDCqTF2w24KSdXQ2cJfjnsXA4B4nPt6/1d052dc9dc5f0479b2aad37b283bf9dc.jpg", "width": 1024, "height": 1024, "prompt": "A photo of black TOK parked in a dark garage, headlights", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.75, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 } }' \ http://localhost:5000/predictions
Output
{ "completed_at": "2023-10-19T05:06:47.158295Z", "created_at": "2023-10-19T05:06:35.851119Z", "data_removed": false, "error": null, "id": "fth6ndlbcusfcykjavzip56pyy", "input": { "image": "https://replicate.delivery/pbxt/Jj0KGYKURkm5OQYujRDCqTF2w24KSdXQ2cJfjnsXA4B4nPt6/1d052dc9dc5f0479b2aad37b283bf9dc.jpg", "width": 1024, "height": 1024, "prompt": "A photo of black TOK parked in a dark garage, headlights", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.75, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 29052\nEnsuring enough disk space...\nFree disk space: 2127912267776\nDownloading weights: https://pbxt.replicate.delivery/LlSIASeBycWpOS1KVqfe31ZjIKdk7Ho3RblERn4k3PgfGXeNC/trained_model.tar\nb'Downloaded 186 MB bytes in 1.610s (116 MB/s)\\nExtracted 186 MB in 0.067s (2.8 GB/s)\\n'\nDownloaded weights in 1.8020985126495361 seconds\nLoading fine-tuned model\nDoes not have Unet. 
assume we are using LoRA\nLoading Unet LoRA\nPrompt: A photo of black <s0><s1> parked in a dark garage, headlights\nimg2img mode\n 0%| | 0/40 [00:00<?, ?it/s]\n 2%|▎ | 1/40 [00:00<00:07, 5.31it/s]\n 5%|▌ | 2/40 [00:00<00:05, 7.10it/s]\n 8%|▊ | 3/40 [00:00<00:04, 8.00it/s]\n 10%|█ | 4/40 [00:00<00:04, 8.52it/s]\n 12%|█▎ | 5/40 [00:00<00:03, 8.86it/s]\n 15%|█▌ | 6/40 [00:00<00:03, 9.07it/s]\n 18%|█▊ | 7/40 [00:00<00:03, 9.24it/s]\n 20%|██ | 8/40 [00:00<00:03, 9.36it/s]\n 22%|██▎ | 9/40 [00:01<00:03, 9.42it/s]\n 25%|██▌ | 10/40 [00:01<00:03, 9.38it/s]\n 28%|██▊ | 11/40 [00:01<00:03, 9.45it/s]\n 30%|███ | 12/40 [00:01<00:02, 9.50it/s]\n 32%|███▎ | 13/40 [00:01<00:02, 9.53it/s]\n 35%|███▌ | 14/40 [00:01<00:02, 9.57it/s]\n 38%|███▊ | 15/40 [00:01<00:02, 9.59it/s]\n 40%|████ | 16/40 [00:01<00:02, 9.61it/s]\n 42%|████▎ | 17/40 [00:01<00:02, 9.61it/s]\n 45%|████▌ | 18/40 [00:01<00:02, 9.62it/s]\n 48%|████▊ | 19/40 [00:02<00:02, 9.63it/s]\n 50%|█████ | 20/40 [00:02<00:02, 9.65it/s]\n 52%|█████▎ | 21/40 [00:02<00:01, 9.62it/s]\n 55%|█████▌ | 22/40 [00:02<00:01, 9.58it/s]\n 57%|█████▊ | 23/40 [00:02<00:01, 9.52it/s]\n 60%|██████ | 24/40 [00:02<00:01, 9.50it/s]\n 62%|██████▎ | 25/40 [00:02<00:01, 9.51it/s]\n 65%|██████▌ | 26/40 [00:02<00:01, 9.55it/s]\n 68%|██████▊ | 27/40 [00:02<00:01, 9.58it/s]\n 70%|███████ | 28/40 [00:03<00:01, 9.60it/s]\n 72%|███████▎ | 29/40 [00:03<00:01, 9.60it/s]\n 75%|███████▌ | 30/40 [00:03<00:01, 9.61it/s]\n 78%|███████▊ | 31/40 [00:03<00:00, 9.61it/s]\n 80%|████████ | 32/40 [00:03<00:00, 9.61it/s]\n 82%|████████▎ | 33/40 [00:03<00:00, 9.62it/s]\n 85%|████████▌ | 34/40 [00:03<00:00, 9.63it/s]\n 88%|████████▊ | 35/40 [00:03<00:00, 9.29it/s]\n 90%|█████████ | 36/40 [00:03<00:00, 9.38it/s]\n 92%|█████████▎| 37/40 [00:03<00:00, 9.28it/s]\n 95%|█████████▌| 38/40 [00:04<00:00, 9.38it/s]\n 98%|█████████▊| 39/40 [00:04<00:00, 9.47it/s]\n100%|██████████| 40/40 [00:04<00:00, 9.35it/s]\n100%|██████████| 40/40 [00:04<00:00, 9.33it/s]", "metrics": { 
"predict_time": 7.886112, "total_time": 11.307176 }, "output": [ "https://replicate.delivery/pbxt/nAjQUFG36mLjERZhZ6FozpcQPf8mZG0jFHt1nI2Wf9mmtmvRA/out-0.png" ], "started_at": "2023-10-19T05:06:39.272183Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/fth6ndlbcusfcykjavzip56pyy", "cancel": "https://api.replicate.com/v1/predictions/fth6ndlbcusfcykjavzip56pyy/cancel" }, "version": "4cb80e2be47c463f65976fdad5f90179e5c613728a7ab30f723dd9c51a0a1ec9" }
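Note that this img2img run logged only 40 denoising steps even though `num_inference_steps` was 50. That is consistent with the usual diffusers img2img scheduling, where `prompt_strength` skips the first part of the schedule and only the final fraction is actually run (an assumption about this model's internals, but it matches the logs on this page: 40 steps at strength 0.8, 45 at 0.9):

```python
# Effective step count in img2img mode: only the last prompt_strength
# fraction of the noise schedule is denoised.

def effective_img2img_steps(num_inference_steps: int, prompt_strength: float) -> int:
    return min(int(num_inference_steps * prompt_strength), num_inference_steps)

print(effective_img2img_steps(50, 0.8))  # 40, as in the log above
print(effective_img2img_steps(50, 0.9))  # 45, as in the autumn-trees run below
```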
Prediction
hudsongraeme/cybertruck:4e7b9292 · ID: oytbs4tbvyzcr4a7tlacbupkve · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: A photo of TOK parked near autumn trees, bright
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.9
- num_inference_steps: 50
{ "image": "https://replicate.delivery/pbxt/Jj0cCRNS9EyPSZXLyACPoZ4nB3pmKfd0wK6czgkwVk3EGq6J/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", "width": 1024, "height": 1024, "prompt": "A photo of TOK parked near autumn trees, bright", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.9, "num_inference_steps": 50 }
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", { input: { image: "https://replicate.delivery/pbxt/Jj0cCRNS9EyPSZXLyACPoZ4nB3pmKfd0wK6czgkwVk3EGq6J/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", width: 1024, height: 1024, prompt: "A photo of TOK parked near autumn trees, bright", refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.6, num_outputs: 1, guidance_scale: 7.5, apply_watermark: true, high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.9, num_inference_steps: 50 } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", input={ "image": "https://replicate.delivery/pbxt/Jj0cCRNS9EyPSZXLyACPoZ4nB3pmKfd0wK6czgkwVk3EGq6J/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", "width": 1024, "height": 1024, "prompt": "A photo of TOK parked near autumn trees, bright", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.9, "num_inference_steps": 50 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", "input": { "image": "https://replicate.delivery/pbxt/Jj0cCRNS9EyPSZXLyACPoZ4nB3pmKfd0wK6czgkwVk3EGq6J/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", "width": 1024, "height": 1024, "prompt": "A photo of TOK parked near autumn trees, bright", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.9, "num_inference_steps": 50 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Run this to download the model and run it in your local environment:
cog predict r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da \ -i 'image="https://replicate.delivery/pbxt/Jj0cCRNS9EyPSZXLyACPoZ4nB3pmKfd0wK6czgkwVk3EGq6J/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg"' \ -i 'width=1024' \ -i 'height=1024' \ -i 'prompt="A photo of TOK parked near autumn trees, bright"' \ -i 'refine="no_refiner"' \ -i 'scheduler="K_EULER"' \ -i 'lora_scale=0.6' \ -i 'num_outputs=1' \ -i 'guidance_scale=7.5' \ -i 'apply_watermark=true' \ -i 'high_noise_frac=0.8' \ -i 'negative_prompt=""' \ -i 'prompt_strength=0.9' \ -i 'num_inference_steps=50'
To learn more, take a look at the Cog documentation.
Alternatively, run the model as a local HTTP server with Docker, then send it prediction requests:
docker run -d -p 5000:5000 --gpus=all r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da
curl -s -X POST \ -H "Content-Type: application/json" \ -d $'{ "input": { "image": "https://replicate.delivery/pbxt/Jj0cCRNS9EyPSZXLyACPoZ4nB3pmKfd0wK6czgkwVk3EGq6J/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", "width": 1024, "height": 1024, "prompt": "A photo of TOK parked near autumn trees, bright", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.9, "num_inference_steps": 50 } }' \ http://localhost:5000/predictions
Output
{ "completed_at": "2023-10-19T05:26:24.598509Z", "created_at": "2023-10-19T05:25:31.421448Z", "data_removed": false, "error": null, "id": "oytbs4tbvyzcr4a7tlacbupkve", "input": { "image": "https://replicate.delivery/pbxt/Jj0cCRNS9EyPSZXLyACPoZ4nB3pmKfd0wK6czgkwVk3EGq6J/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", "width": 1024, "height": 1024, "prompt": "A photo of TOK parked near autumn trees, bright", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.9, "num_inference_steps": 50 }, "logs": "Using seed: 60623\nEnsuring enough disk space...\nFree disk space: 2073072680960\nDownloading weights: https://pbxt.replicate.delivery/LlSIASeBycWpOS1KVqfe31ZjIKdk7Ho3RblERn4k3PgfGXeNC/trained_model.tar\nb'Downloaded 186 MB bytes in 0.295s (630 MB/s)\\nExtracted 186 MB in 0.074s (2.5 GB/s)\\n'\nDownloaded weights in 0.5329921245574951 seconds\nLoading fine-tuned model\nDoes not have Unet. 
assume we are using LoRA\nLoading Unet LoRA\nPrompt: A photo of <s0><s1> parked near autumn trees, bright\nimg2img mode\n/usr/local/lib/python3.9/site-packages/torch/nn/modules/conv.py:459: UserWarning: Applied workaround for CuDNN issue, install nvrtc.so (Triggered internally at ../aten/src/ATen/native/cudnn/Conv_v8.cpp:80.)\nreturn F.conv2d(input, weight, bias, self.stride,\n 0%| | 0/45 [00:00<?, ?it/s]\n 2%|▏ | 1/45 [00:00<00:09, 4.53it/s]\n 4%|▍ | 2/45 [00:00<00:09, 4.72it/s]\n 7%|▋ | 3/45 [00:00<00:08, 4.80it/s]\n 9%|▉ | 4/45 [00:00<00:08, 4.83it/s]\n 11%|█ | 5/45 [00:01<00:08, 4.85it/s]\n 13%|█▎ | 6/45 [00:01<00:08, 4.86it/s]\n 16%|█▌ | 7/45 [00:01<00:07, 4.87it/s]\n 18%|█▊ | 8/45 [00:01<00:07, 4.87it/s]\n 20%|██ | 9/45 [00:01<00:07, 4.87it/s]\n 22%|██▏ | 10/45 [00:02<00:07, 4.87it/s]\n 24%|██▍ | 11/45 [00:02<00:06, 4.87it/s]\n 27%|██▋ | 12/45 [00:02<00:06, 4.87it/s]\n 29%|██▉ | 13/45 [00:02<00:06, 4.87it/s]\n 31%|███ | 14/45 [00:02<00:06, 4.88it/s]\n 33%|███▎ | 15/45 [00:03<00:06, 4.88it/s]\n 36%|███▌ | 16/45 [00:03<00:05, 4.88it/s]\n 38%|███▊ | 17/45 [00:03<00:05, 4.88it/s]\n 40%|████ | 18/45 [00:03<00:05, 4.88it/s]\n 42%|████▏ | 19/45 [00:03<00:05, 4.88it/s]\n 44%|████▍ | 20/45 [00:04<00:05, 4.88it/s]\n 47%|████▋ | 21/45 [00:04<00:04, 4.87it/s]\n 49%|████▉ | 22/45 [00:04<00:04, 4.87it/s]\n 51%|█████ | 23/45 [00:04<00:04, 4.87it/s]\n 53%|█████▎ | 24/45 [00:04<00:04, 4.86it/s]\n 56%|█████▌ | 25/45 [00:05<00:04, 4.86it/s]\n 58%|█████▊ | 26/45 [00:05<00:03, 4.86it/s]\n 60%|██████ | 27/45 [00:05<00:03, 4.86it/s]\n 62%|██████▏ | 28/45 [00:05<00:03, 4.86it/s]\n 64%|██████▍ | 29/45 [00:05<00:03, 4.87it/s]\n 67%|██████▋ | 30/45 [00:06<00:03, 4.87it/s]\n 69%|██████▉ | 31/45 [00:06<00:02, 4.87it/s]\n 71%|███████ | 32/45 [00:06<00:02, 4.87it/s]\n 73%|███████▎ | 33/45 [00:06<00:02, 4.87it/s]\n 76%|███████▌ | 34/45 [00:06<00:02, 4.86it/s]\n 78%|███████▊ | 35/45 [00:07<00:02, 4.86it/s]\n 80%|████████ | 36/45 [00:07<00:01, 4.87it/s]\n 82%|████████▏ | 37/45 [00:07<00:01, 
4.87it/s]\n 84%|████████▍ | 38/45 [00:07<00:01, 4.86it/s]\n 87%|████████▋ | 39/45 [00:08<00:01, 4.86it/s]\n 89%|████████▉ | 40/45 [00:08<00:01, 4.86it/s]\n 91%|█████████ | 41/45 [00:08<00:00, 4.86it/s]\n 93%|█████████▎| 42/45 [00:08<00:00, 4.86it/s]\n 96%|█████████▌| 43/45 [00:08<00:00, 4.86it/s]\n 98%|█████████▊| 44/45 [00:09<00:00, 4.86it/s]\n100%|██████████| 45/45 [00:09<00:00, 4.86it/s]\n100%|██████████| 45/45 [00:09<00:00, 4.86it/s]", "metrics": { "predict_time": 19.455914, "total_time": 53.177061 }, "output": [ "https://replicate.delivery/pbxt/fZZVI6wfNdp0TkfZI9fphfjCqZbOGY16fXzZNSTY77ZHAw5bE/out-0.png" ], "started_at": "2023-10-19T05:26:05.142595Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/oytbs4tbvyzcr4a7tlacbupkve", "cancel": "https://api.replicate.com/v1/predictions/oytbs4tbvyzcr4a7tlacbupkve/cancel" }, "version": "4cb80e2be47c463f65976fdad5f90179e5c613728a7ab30f723dd9c51a0a1ec9" }
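The timestamps in the output JSON are ISO-8601 strings with a trailing `Z`; the wall-clock total is `completed_at - created_at`, which includes queueing and cold-start weight downloads, not just `predict_time` (which is why this run's 53.2 s total dwarfs its 19.5 s predict time). A minimal sketch of that calculation, using the timestamps from the prediction above:

```python
from datetime import datetime

def total_seconds(created_at: str, completed_at: str) -> float:
    """Wall-clock duration between two Replicate ISO-8601 'Z' timestamps."""
    fmt = "%Y-%m-%dT%H:%M:%S.%fZ"
    return (datetime.strptime(completed_at, fmt)
            - datetime.strptime(created_at, fmt)).total_seconds()

# Timestamps from the oytbs4tb... prediction's output JSON.
print(total_seconds("2023-10-19T05:25:31.421448Z", "2023-10-19T05:26:24.598509Z"))
```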
Prediction
hudsongraeme/cybertruck:4e7b9292 · ID: ie7exjdbst5s7glm4ztmphl4ra · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- width: 1024
- height: 1024
- prompt: A photo of TOK parked near autumn trees, bright
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: ""
- prompt_strength: 0.9
- num_inference_steps: 50
{ "image": "https://replicate.delivery/pbxt/Jj0dqHETJuYPLKBcgHXzVU4rnxrImP7qlCo1PByeoC7UzXht/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", "width": 1024, "height": 1024, "prompt": "A photo of TOK parked near autumn trees, bright", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.9, "num_inference_steps": 50 }
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", { input: { image: "https://replicate.delivery/pbxt/Jj0dqHETJuYPLKBcgHXzVU4rnxrImP7qlCo1PByeoC7UzXht/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", width: 1024, height: 1024, prompt: "A photo of TOK parked near autumn trees, bright", refine: "no_refiner", scheduler: "K_EULER", lora_scale: 0.6, num_outputs: 1, guidance_scale: 7.5, apply_watermark: true, high_noise_frac: 0.8, negative_prompt: "", prompt_strength: 0.9, num_inference_steps: 50 } } ); console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run( "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", input={ "image": "https://replicate.delivery/pbxt/Jj0dqHETJuYPLKBcgHXzVU4rnxrImP7qlCo1PByeoC7UzXht/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", "width": 1024, "height": 1024, "prompt": "A photo of TOK parked near autumn trees, bright", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": True, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.9, "num_inference_steps": 50 } ) print(output)
To learn more, take a look at the guide on getting started with Python.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \ -H "Authorization: Bearer $REPLICATE_API_TOKEN" \ -H "Content-Type: application/json" \ -H "Prefer: wait" \ -d $'{ "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da", "input": { "image": "https://replicate.delivery/pbxt/Jj0dqHETJuYPLKBcgHXzVU4rnxrImP7qlCo1PByeoC7UzXht/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", "width": 1024, "height": 1024, "prompt": "A photo of TOK parked near autumn trees, bright", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.9, "num_inference_steps": 50 } }' \ https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da \
  -i 'image="https://replicate.delivery/pbxt/Jj0dqHETJuYPLKBcgHXzVU4rnxrImP7qlCo1PByeoC7UzXht/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg"' \
  -i 'width=1024' \
  -i 'height=1024' \
  -i 'prompt="A photo of TOK parked near autumn trees, bright"' \
  -i 'refine="no_refiner"' \
  -i 'scheduler="K_EULER"' \
  -i 'lora_scale=0.6' \
  -i 'num_outputs=1' \
  -i 'guidance_scale=7.5' \
  -i 'apply_watermark=true' \
  -i 'high_noise_frac=0.8' \
  -i 'negative_prompt=""' \
  -i 'prompt_strength=0.9' \
  -i 'num_inference_steps=50'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "image": "https://replicate.delivery/pbxt/Jj0dqHETJuYPLKBcgHXzVU4rnxrImP7qlCo1PByeoC7UzXht/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg",
      "width": 1024,
      "height": 1024,
      "prompt": "A photo of TOK parked near autumn trees, bright",
      "refine": "no_refiner",
      "scheduler": "K_EULER",
      "lora_scale": 0.6,
      "num_outputs": 1,
      "guidance_scale": 7.5,
      "apply_watermark": true,
      "high_noise_frac": 0.8,
      "negative_prompt": "",
      "prompt_strength": 0.9,
      "num_inference_steps": 50
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2023-10-19T05:27:28.520545Z", "created_at": "2023-10-19T05:27:15.715537Z", "data_removed": false, "error": null, "id": "ie7exjdbst5s7glm4ztmphl4ra", "input": { "image": "https://replicate.delivery/pbxt/Jj0dqHETJuYPLKBcgHXzVU4rnxrImP7qlCo1PByeoC7UzXht/empty-autumn-road-with-trees-in-a-row-on-the-edges-photo.jpg", "width": 1024, "height": 1024, "prompt": "A photo of TOK parked near autumn trees, bright", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.9, "num_inference_steps": 50 }, "logs": "Using seed: 45283\nLoading fine-tuned model\nDoes not have Unet. assume we are using LoRA\nLoading Unet LoRA\nPrompt: A photo of <s0><s1> parked near autumn trees, bright\nimg2img mode\n 0%| | 0/45 [00:00<?, ?it/s]\n 2%|▏ | 1/45 [00:00<00:09, 4.87it/s]\n 4%|▍ | 2/45 [00:00<00:08, 4.89it/s]\n 7%|▋ | 3/45 [00:00<00:08, 4.90it/s]\n 9%|▉ | 4/45 [00:00<00:08, 4.90it/s]\n 11%|█ | 5/45 [00:01<00:08, 4.91it/s]\n 13%|█▎ | 6/45 [00:01<00:07, 4.91it/s]\n 16%|█▌ | 7/45 [00:01<00:07, 4.91it/s]\n 18%|█▊ | 8/45 [00:01<00:07, 4.91it/s]\n 20%|██ | 9/45 [00:01<00:07, 4.91it/s]\n 22%|██▏ | 10/45 [00:02<00:07, 4.91it/s]\n 24%|██▍ | 11/45 [00:02<00:06, 4.91it/s]\n 27%|██▋ | 12/45 [00:02<00:06, 4.90it/s]\n 29%|██▉ | 13/45 [00:02<00:06, 4.90it/s]\n 31%|███ | 14/45 [00:02<00:06, 4.90it/s]\n 33%|███▎ | 15/45 [00:03<00:06, 4.90it/s]\n 36%|███▌ | 16/45 [00:03<00:05, 4.90it/s]\n 38%|███▊ | 17/45 [00:03<00:05, 4.90it/s]\n 40%|████ | 18/45 [00:03<00:05, 4.90it/s]\n 42%|████▏ | 19/45 [00:03<00:05, 4.90it/s]\n 44%|████▍ | 20/45 [00:04<00:05, 4.90it/s]\n 47%|████▋ | 21/45 [00:04<00:04, 4.90it/s]\n 49%|████▉ | 22/45 [00:04<00:04, 4.90it/s]\n 51%|█████ | 23/45 [00:04<00:04, 4.90it/s]\n 53%|█████▎ | 24/45 [00:04<00:04, 4.90it/s]\n 56%|█████▌ | 25/45 [00:05<00:04, 4.90it/s]\n 58%|█████▊ | 26/45 [00:05<00:03, 4.90it/s]\n 60%|██████ | 27/45 
[00:05<00:03, 4.90it/s]\n 62%|██████▏ | 28/45 [00:05<00:03, 4.90it/s]\n 64%|██████▍ | 29/45 [00:05<00:03, 4.90it/s]\n 67%|██████▋ | 30/45 [00:06<00:03, 4.90it/s]\n 69%|██████▉ | 31/45 [00:06<00:02, 4.90it/s]\n 71%|███████ | 32/45 [00:06<00:02, 4.90it/s]\n 73%|███████▎ | 33/45 [00:06<00:02, 4.89it/s]\n 76%|███████▌ | 34/45 [00:06<00:02, 4.89it/s]\n 78%|███████▊ | 35/45 [00:07<00:02, 4.89it/s]\n 80%|████████ | 36/45 [00:07<00:01, 4.90it/s]\n 82%|████████▏ | 37/45 [00:07<00:01, 4.89it/s]\n 84%|████████▍ | 38/45 [00:07<00:01, 4.89it/s]\n 87%|████████▋ | 39/45 [00:07<00:01, 4.89it/s]\n 89%|████████▉ | 40/45 [00:08<00:01, 4.89it/s]\n 91%|█████████ | 41/45 [00:08<00:00, 4.89it/s]\n 93%|█████████▎| 42/45 [00:08<00:00, 4.89it/s]\n 96%|█████████▌| 43/45 [00:08<00:00, 4.88it/s]\n 98%|█████████▊| 44/45 [00:08<00:00, 4.88it/s]\n100%|██████████| 45/45 [00:09<00:00, 4.88it/s]\n100%|██████████| 45/45 [00:09<00:00, 4.90it/s]", "metrics": { "predict_time": 11.500505, "total_time": 12.805008 }, "output": [ "https://replicate.delivery/pbxt/ROReWlr8OEyWRaPwhXje7g7P0eaB7RTzAXpxUAH94kGACOfGB/out-0.png" ], "started_at": "2023-10-19T05:27:17.020040Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/ie7exjdbst5s7glm4ztmphl4ra", "cancel": "https://api.replicate.com/v1/predictions/ie7exjdbst5s7glm4ztmphl4ra/cancel" }, "version": "4cb80e2be47c463f65976fdad5f90179e5c613728a7ab30f723dd9c51a0a1ec9" }
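A detail worth noting in the logs: although num_inference_steps is 50, the progress bar only runs to 45. In img2img pipelines the effective step count is typically the requested steps scaled by prompt_strength (here 50 × 0.9 = 45; the inpainting run further down, with strength 0.8, shows 40 steps). A quick check, assuming that scaling rule holds for this model:

```python
# Assumed img2img/inpainting behavior: the pipeline skips the early part of
# the noise schedule, running about int(num_inference_steps * prompt_strength)
# denoising steps. This is an inference from the logs, not documented behavior.
def effective_steps(num_inference_steps, prompt_strength):
    return int(num_inference_steps * prompt_strength)

print(effective_steps(50, 0.9))  # 45, matching the progress bar above
print(effective_steps(50, 0.8))  # 40, matching the inpainting run below
```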
Prediction
hudsongraeme/cybertruck:4e7b9292
ID: 6wryzfdbq4kav5qnnoe5j7fntm · Status: Succeeded · Source: Web · Hardware: A40 (Large)
Input
- width: 1168
- height: 1456
- prompt: A photo of TOK driving uphill on a fall road, small
- refine: no_refiner
- scheduler: K_EULER
- lora_scale: 0.6
- num_outputs: 1
- guidance_scale: 7.5
- apply_watermark: true
- high_noise_frac: 0.8
- negative_prompt: (empty)
- prompt_strength: 0.8
- num_inference_steps: 50
{
  "mask": "https://replicate.delivery/pbxt/JmX4xSsEBHuHuTQfTbPvxGmx67IchCjnhzZ2izPY3t1vSItu/colorado%20mask.jpeg",
  "image": "https://replicate.delivery/pbxt/JmX4xpB2uxVcsdICC4VGkpjZtrhVMrxhuLkXu7mz0BRi48kV/colorado.jpeg",
  "width": 1168,
  "height": 1456,
  "prompt": "A photo of TOK driving uphill on a fall road, small",
  "refine": "no_refiner",
  "scheduler": "K_EULER",
  "lora_scale": 0.6,
  "num_outputs": 1,
  "guidance_scale": 7.5,
  "apply_watermark": true,
  "high_noise_frac": 0.8,
  "negative_prompt": "",
  "prompt_strength": 0.8,
  "num_inference_steps": 50
}
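Note the non-square 1168×1456 canvas: SDXL-based pipelines generally require width and height to be multiples of 8, and both values here are. A small helper, our own and not part of the model's API, for snapping arbitrary sizes down to valid ones:

```python
def snap_to_multiple(value, base=8):
    """Round a dimension down to the nearest multiple of `base` (assumed
    SDXL constraint; check the model schema for the authoritative rule)."""
    return (value // base) * base

# The inpainting example's dimensions are already valid:
print(snap_to_multiple(1168))  # 1168
print(snap_to_multiple(1456))  # 1456
print(snap_to_multiple(1170))  # 1168
```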
Install Replicate’s Node.js client library:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import and set up the client:
import Replicate from "replicate";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da",
  {
    input: {
      mask: "https://replicate.delivery/pbxt/JmX4xSsEBHuHuTQfTbPvxGmx67IchCjnhzZ2izPY3t1vSItu/colorado%20mask.jpeg",
      image: "https://replicate.delivery/pbxt/JmX4xpB2uxVcsdICC4VGkpjZtrhVMrxhuLkXu7mz0BRi48kV/colorado.jpeg",
      width: 1168,
      height: 1456,
      prompt: "A photo of TOK driving uphill on a fall road, small",
      refine: "no_refiner",
      scheduler: "K_EULER",
      lora_scale: 0.6,
      num_outputs: 1,
      guidance_scale: 7.5,
      apply_watermark: true,
      high_noise_frac: 0.8,
      negative_prompt: "",
      prompt_strength: 0.8,
      num_inference_steps: 50
    }
  }
);
console.log(output);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate’s Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Import the client:
import replicate
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
    "hudsongraeme/cybertruck:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da",
    input={
        "mask": "https://replicate.delivery/pbxt/JmX4xSsEBHuHuTQfTbPvxGmx67IchCjnhzZ2izPY3t1vSItu/colorado%20mask.jpeg",
        "image": "https://replicate.delivery/pbxt/JmX4xpB2uxVcsdICC4VGkpjZtrhVMrxhuLkXu7mz0BRi48kV/colorado.jpeg",
        "width": 1168,
        "height": 1456,
        "prompt": "A photo of TOK driving uphill on a fall road, small",
        "refine": "no_refiner",
        "scheduler": "K_EULER",
        "lora_scale": 0.6,
        "num_outputs": 1,
        "guidance_scale": 7.5,
        "apply_watermark": True,
        "high_noise_frac": 0.8,
        "negative_prompt": "",
        "prompt_strength": 0.8,
        "num_inference_steps": 50
    }
)
print(output)
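The `output` here is a list of image URLs, one per num_outputs. To save them locally you need filenames; a sketch that derives them from the URL path — the example URL is the one from this page's Output section, and the `prediction-{i}-` naming scheme is our own (the actual download via urllib or requests is left out):

```python
from urllib.parse import urlparse
import posixpath

def local_name(url, index):
    """Build a local filename from an output URL (hypothetical scheme:
    'prediction-<index>-<basename>')."""
    base = posixpath.basename(urlparse(url).path)
    return f"prediction-{index}-{base}"

urls = ["https://pbxt.replicate.delivery/H9139fZwtlwqBaa43VFIbmHSbZ9sdeypfuUfCBNP3K4ljhLHB/out-0.png"]
for i, url in enumerate(urls):
    print(local_name(url, i))  # prediction-0-out-0.png
```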
To learn more, take a look at the guide on getting started with Python.
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run hudsongraeme/cybertruck using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da",
    "input": {
      "mask": "https://replicate.delivery/pbxt/JmX4xSsEBHuHuTQfTbPvxGmx67IchCjnhzZ2izPY3t1vSItu/colorado%20mask.jpeg",
      "image": "https://replicate.delivery/pbxt/JmX4xpB2uxVcsdICC4VGkpjZtrhVMrxhuLkXu7mz0BRi48kV/colorado.jpeg",
      "width": 1168,
      "height": 1456,
      "prompt": "A photo of TOK driving uphill on a fall road, small",
      "refine": "no_refiner",
      "scheduler": "K_EULER",
      "lora_scale": 0.6,
      "num_outputs": 1,
      "guidance_scale": 7.5,
      "apply_watermark": true,
      "high_noise_frac": 0.8,
      "negative_prompt": "",
      "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
You can run this model locally using Cog. First, install Cog:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da \
  -i 'mask="https://replicate.delivery/pbxt/JmX4xSsEBHuHuTQfTbPvxGmx67IchCjnhzZ2izPY3t1vSItu/colorado%20mask.jpeg"' \
  -i 'image="https://replicate.delivery/pbxt/JmX4xpB2uxVcsdICC4VGkpjZtrhVMrxhuLkXu7mz0BRi48kV/colorado.jpeg"' \
  -i 'width=1168' \
  -i 'height=1456' \
  -i 'prompt="A photo of TOK driving uphill on a fall road, small"' \
  -i 'refine="no_refiner"' \
  -i 'scheduler="K_EULER"' \
  -i 'lora_scale=0.6' \
  -i 'num_outputs=1' \
  -i 'guidance_scale=7.5' \
  -i 'apply_watermark=true' \
  -i 'high_noise_frac=0.8' \
  -i 'negative_prompt=""' \
  -i 'prompt_strength=0.8' \
  -i 'num_inference_steps=50'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/hudsongraeme/cybertruck@sha256:4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da
curl -s -X POST \
  -H "Content-Type: application/json" \
  -d $'{
    "input": {
      "mask": "https://replicate.delivery/pbxt/JmX4xSsEBHuHuTQfTbPvxGmx67IchCjnhzZ2izPY3t1vSItu/colorado%20mask.jpeg",
      "image": "https://replicate.delivery/pbxt/JmX4xpB2uxVcsdICC4VGkpjZtrhVMrxhuLkXu7mz0BRi48kV/colorado.jpeg",
      "width": 1168,
      "height": 1456,
      "prompt": "A photo of TOK driving uphill on a fall road, small",
      "refine": "no_refiner",
      "scheduler": "K_EULER",
      "lora_scale": 0.6,
      "num_outputs": 1,
      "guidance_scale": 7.5,
      "apply_watermark": true,
      "high_noise_frac": 0.8,
      "negative_prompt": "",
      "prompt_strength": 0.8,
      "num_inference_steps": 50
    }
  }' \
  http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
Output
{ "completed_at": "2023-10-29T03:40:42.728031Z", "created_at": "2023-10-29T03:40:15.344044Z", "data_removed": false, "error": null, "id": "6wryzfdbq4kav5qnnoe5j7fntm", "input": { "mask": "https://replicate.delivery/pbxt/JmX4xSsEBHuHuTQfTbPvxGmx67IchCjnhzZ2izPY3t1vSItu/colorado%20mask.jpeg", "image": "https://replicate.delivery/pbxt/JmX4xpB2uxVcsdICC4VGkpjZtrhVMrxhuLkXu7mz0BRi48kV/colorado.jpeg", "width": 1168, "height": 1456, "prompt": "A photo of TOK driving uphill on a fall road, small", "refine": "no_refiner", "scheduler": "K_EULER", "lora_scale": 0.6, "num_outputs": 1, "guidance_scale": 7.5, "apply_watermark": true, "high_noise_frac": 0.8, "negative_prompt": "", "prompt_strength": 0.8, "num_inference_steps": 50 }, "logs": "Using seed: 26024\nEnsuring enough disk space...\nFree disk space: 1836465987584\nDownloading weights: https://pbxt.replicate.delivery/HmTiYrm88X4NE1eYUAFfH6lq3paI09mBt3rKQsmNYlf1JdijA/trained_model.tar\nb'Downloaded 186 MB bytes in 0.340s (547 MB/s)\\nExtracted 186 MB in 0.058s (3.2 GB/s)\\n'\nDownloaded weights in 0.5388760566711426 seconds\nLoading fine-tuned model\nDoes not have Unet. 
assume we are using LoRA\nLoading Unet LoRA\nPrompt: A photo of <s0><s1> driving uphill on a fall road, small\ninpainting mode\n 0%| | 0/40 [00:00<?, ?it/s]\n 2%|▎ | 1/40 [00:00<00:18, 2.11it/s]\n 5%|▌ | 2/40 [00:00<00:18, 2.11it/s]\n 8%|▊ | 3/40 [00:01<00:17, 2.11it/s]\n 10%|█ | 4/40 [00:01<00:17, 2.11it/s]\n 12%|█▎ | 5/40 [00:02<00:16, 2.11it/s]\n 15%|█▌ | 6/40 [00:02<00:16, 2.11it/s]\n 18%|█▊ | 7/40 [00:03<00:15, 2.11it/s]\n 20%|██ | 8/40 [00:03<00:15, 2.11it/s]\n 22%|██▎ | 9/40 [00:04<00:14, 2.11it/s]\n 25%|██▌ | 10/40 [00:04<00:14, 2.11it/s]\n 28%|██▊ | 11/40 [00:05<00:13, 2.11it/s]\n 30%|███ | 12/40 [00:05<00:13, 2.11it/s]\n 32%|███▎ | 13/40 [00:06<00:12, 2.11it/s]\n 35%|███▌ | 14/40 [00:06<00:12, 2.11it/s]\n 38%|███▊ | 15/40 [00:07<00:11, 2.11it/s]\n 40%|████ | 16/40 [00:07<00:11, 2.11it/s]\n 42%|████▎ | 17/40 [00:08<00:10, 2.10it/s]\n 45%|████▌ | 18/40 [00:08<00:10, 2.10it/s]\n 48%|████▊ | 19/40 [00:09<00:09, 2.10it/s]\n 50%|█████ | 20/40 [00:09<00:09, 2.10it/s]\n 52%|█████▎ | 21/40 [00:09<00:09, 2.10it/s]\n 55%|█████▌ | 22/40 [00:10<00:08, 2.10it/s]\n 57%|█████▊ | 23/40 [00:10<00:08, 2.10it/s]\n 60%|██████ | 24/40 [00:11<00:07, 2.10it/s]\n 62%|██████▎ | 25/40 [00:11<00:07, 2.10it/s]\n 65%|██████▌ | 26/40 [00:12<00:06, 2.10it/s]\n 68%|██████▊ | 27/40 [00:12<00:06, 2.10it/s]\n 70%|███████ | 28/40 [00:13<00:05, 2.10it/s]\n 72%|███████▎ | 29/40 [00:13<00:05, 2.10it/s]\n 75%|███████▌ | 30/40 [00:14<00:04, 2.10it/s]\n 78%|███████▊ | 31/40 [00:14<00:04, 2.10it/s]\n 80%|████████ | 32/40 [00:15<00:03, 2.09it/s]\n 82%|████████▎ | 33/40 [00:15<00:03, 2.09it/s]\n 85%|████████▌ | 34/40 [00:16<00:02, 2.09it/s]\n 88%|████████▊ | 35/40 [00:16<00:02, 2.09it/s]\n 90%|█████████ | 36/40 [00:17<00:01, 2.09it/s]\n 92%|█████████▎| 37/40 [00:17<00:01, 2.09it/s]\n 95%|█████████▌| 38/40 [00:18<00:00, 2.09it/s]\n 98%|█████████▊| 39/40 [00:18<00:00, 2.09it/s]\n100%|██████████| 40/40 [00:19<00:00, 2.09it/s]\n100%|██████████| 40/40 [00:19<00:00, 2.10it/s]", "metrics": { "predict_time": 
25.348003, "total_time": 27.383987 }, "output": [ "https://pbxt.replicate.delivery/H9139fZwtlwqBaa43VFIbmHSbZ9sdeypfuUfCBNP3K4ljhLHB/out-0.png" ], "started_at": "2023-10-29T03:40:17.380028Z", "status": "succeeded", "urls": { "get": "https://api.replicate.com/v1/predictions/6wryzfdbq4kav5qnnoe5j7fntm", "cancel": "https://api.replicate.com/v1/predictions/6wryzfdbq4kav5qnnoe5j7fntm/cancel" }, "version": "4e7b92920cf8bbec4862ccad2f905d83430d1ee54f47261d52e055aeadf6f9da" }