Readme
…
Model description
…
Intended use
…
Ethical considerations
…
Caveats and recommendations
…
LoRA & Openjourney v4
Run this model in Node.js. First, install Replicate's client library:
npm install replicate
Then set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
import { writeFile } from "node:fs/promises";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run zhouzhengjun/lora_openjourney_v4 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"zhouzhengjun/lora_openjourney_v4:f8e5074f993f6852679bdac9f604590827f11698fdbfc3f68a1f0c3395b46db6",
{
input: {
width: 512,
height: 512,
prompt: "(((masterpiece))),(((bestquality))),((ultra-detailed)),(illustration),((anextremelydelicateandbeautiful)),dynamicangle,floating,(beautifuldetailedeyes),(detailedlight) (1girl), solo , floating_hair,glowingeyes,green hair,greeneyes <1>, twintails, halterdress,",
lora_urls: "https://replicate.delivery/pbxt/Mf7QBwNXrehQ3k6GwMPpi8bqy0cer9x1NqogXVWylWC9l6YhA/tmp28kwa2ceclexz5tc90001zun1iy5b8x3wzip.safetensors",
scheduler: "K_EULER_ANCESTRAL",
lora_scales: "0.6",
num_outputs: 1,
guidance_scale: 3.25,
negative_prompt: "easynegative, bad-picture-chill-75v",
prompt_strength: 0.9,
num_inference_steps: 31
}
}
);
// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"
// To write the file to disk:
await writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
To run the model from Python, install Replicate's Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run zhouzhengjun/lora_openjourney_v4 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"zhouzhengjun/lora_openjourney_v4:f8e5074f993f6852679bdac9f604590827f11698fdbfc3f68a1f0c3395b46db6",
input={
"width": 512,
"height": 512,
"prompt": "(((masterpiece))),(((bestquality))),((ultra-detailed)),(illustration),((anextremelydelicateandbeautiful)),dynamicangle,floating,(beautifuldetailedeyes),(detailedlight) (1girl), solo , floating_hair,glowingeyes,green hair,greeneyes <1>, twintails, halterdress,",
"lora_urls": "https://replicate.delivery/pbxt/Mf7QBwNXrehQ3k6GwMPpi8bqy0cer9x1NqogXVWylWC9l6YhA/tmp28kwa2ceclexz5tc90001zun1iy5b8x3wzip.safetensors",
"scheduler": "K_EULER_ANCESTRAL",
"lora_scales": "0.6",
"num_outputs": 1,
"guidance_scale": 3.25,
"negative_prompt": "easynegative, bad-picture-chill-75v",
"prompt_strength": 0.9,
"num_inference_steps": 31
}
)
# To access the file URL:
print(output[0].url())
#=> "http://example.com"
# To write the file to disk:
with open("my-image.png", "wb") as file:
file.write(output[0].read())
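The num_outputs input controls how many images are generated per run. When it is greater than 1, output is a list with one file per image; a minimal sketch (assuming the same client setup as above) for saving each one:
# Sketch only: save every generated image when num_outputs > 1.
# `output` is the list returned by replicate.run(...) above.
for i, image in enumerate(output):
    with open(f"out-{i}.png", "wb") as f:
        f.write(image.read())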
To learn more, take a look at the guide on getting started with Python.
To call the HTTP API directly, set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run zhouzhengjun/lora_openjourney_v4 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "zhouzhengjun/lora_openjourney_v4:f8e5074f993f6852679bdac9f604590827f11698fdbfc3f68a1f0c3395b46db6",
"input": {
"width": 512,
"height": 512,
"prompt": "(((masterpiece))),(((bestquality))),((ultra-detailed)),(illustration),((anextremelydelicateandbeautiful)),dynamicangle,floating,(beautifuldetailedeyes),(detailedlight) (1girl), solo , floating_hair,glowingeyes,green hair,greeneyes <1>, twintails, halterdress,",
"lora_urls": "https://replicate.delivery/pbxt/Mf7QBwNXrehQ3k6GwMPpi8bqy0cer9x1NqogXVWylWC9l6YhA/tmp28kwa2ceclexz5tc90001zun1iy5b8x3wzip.safetensors",
"scheduler": "K_EULER_ANCESTRAL",
"lora_scales": "0.6",
"num_outputs": 1,
"guidance_scale": 3.25,
"negative_prompt": "easynegative, bad-picture-chill-75v",
"prompt_strength": 0.9,
"num_inference_steps": 31
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
Install Cog with Homebrew:
brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/zhouzhengjun/lora_openjourney_v4@sha256:f8e5074f993f6852679bdac9f604590827f11698fdbfc3f68a1f0c3395b46db6 \
-i 'width=512' \
-i 'height=512' \
-i 'prompt="(((masterpiece))),(((bestquality))),((ultra-detailed)),(illustration),((anextremelydelicateandbeautiful)),dynamicangle,floating,(beautifuldetailedeyes),(detailedlight) (1girl), solo , floating_hair,glowingeyes,green hair,greeneyes <1>, twintails, halterdress,"' \
-i 'lora_urls="https://replicate.delivery/pbxt/Mf7QBwNXrehQ3k6GwMPpi8bqy0cer9x1NqogXVWylWC9l6YhA/tmp28kwa2ceclexz5tc90001zun1iy5b8x3wzip.safetensors"' \
-i 'scheduler="K_EULER_ANCESTRAL"' \
-i 'lora_scales="0.6"' \
-i 'num_outputs=1' \
-i 'guidance_scale=3.25' \
-i 'negative_prompt="easynegative, bad-picture-chill-75v"' \
-i 'prompt_strength=0.9' \
-i 'num_inference_steps=31'
To learn more, take a look at the Cog documentation.
Alternatively, run this to download the model with Docker and serve it as a local HTTP server:
docker run -d -p 5000:5000 --gpus=all r8.im/zhouzhengjun/lora_openjourney_v4@sha256:f8e5074f993f6852679bdac9f604590827f11698fdbfc3f68a1f0c3395b46db6
curl -s -X POST \
-H "Content-Type: application/json" \
-d $'{
"input": {
"width": 512,
"height": 512,
"prompt": "(((masterpiece))),(((bestquality))),((ultra-detailed)),(illustration),((anextremelydelicateandbeautiful)),dynamicangle,floating,(beautifuldetailedeyes),(detailedlight) (1girl), solo , floating_hair,glowingeyes,green hair,greeneyes <1>, twintails, halterdress,",
"lora_urls": "https://replicate.delivery/pbxt/Mf7QBwNXrehQ3k6GwMPpi8bqy0cer9x1NqogXVWylWC9l6YhA/tmp28kwa2ceclexz5tc90001zun1iy5b8x3wzip.safetensors",
"scheduler": "K_EULER_ANCESTRAL",
"lora_scales": "0.6",
"num_outputs": 1,
"guidance_scale": 3.25,
"negative_prompt": "easynegative, bad-picture-chill-75v",
"prompt_strength": 0.9,
"num_inference_steps": 31
}
}' \
http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
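Once the container is running, you can also send the same request from Python instead of curl. A rough sketch using the requests library (the shortened prompt and reduced input set are assumptions for illustration, not part of the original example):
import requests

# Sketch only: call the local Cog server started by the docker command above.
response = requests.post(
    "http://localhost:5000/predictions",
    json={
        "input": {
            "width": 512,
            "height": 512,
            "prompt": "a detailed illustration of a girl with green hair",  # placeholder prompt
            "scheduler": "K_EULER_ANCESTRAL",
            "num_outputs": 1,
            "guidance_scale": 3.25,
            "num_inference_steps": 31,
        }
    },
)
response.raise_for_status()
prediction = response.json()
print(prediction["status"])  # "succeeded" once generation finishes
print(prediction["output"])  # generated image(s)
For reference, the JSON below shows an example prediction returned by Replicate's hosted API for the inputs used throughout this page.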
{
"completed_at": "2023-04-20T10:51:38.537121Z",
"created_at": "2023-04-20T10:51:34.616238Z",
"data_removed": false,
"error": null,
"id": "xbwykzzdrzaczdkydodvcqmkka",
"input": {
"width": "512",
"height": "512",
"prompt": "(((masterpiece))),(((bestquality))),((ultra-detailed)),(illustration),((anextremelydelicateandbeautiful)),dynamicangle,floating,(beautifuldetailedeyes),(detailedlight) (1girl), solo , floating_hair,glowingeyes,green hair,greeneyes <1>, twintails, halterdress,",
"lora_urls": "https://replicate.delivery/pbxt/Mf7QBwNXrehQ3k6GwMPpi8bqy0cer9x1NqogXVWylWC9l6YhA/tmp28kwa2ceclexz5tc90001zun1iy5b8x3wzip.safetensors",
"scheduler": "K_EULER_ANCESTRAL",
"lora_scales": "0.6",
"num_outputs": 1,
"guidance_scale": 3.25,
"negative_prompt": "easynegative, bad-picture-chill-75v",
"prompt_strength": 0.9,
"num_inference_steps": 31
},
"logs": "Using seed: 8651\nGenerating image of 512 x 512 with prompt: (((masterpiece))),(((bestquality))),((ultra-detailed)),(illustration),((anextremelydelicateandbeautiful)),dynamicangle,floating,(beautifuldetailedeyes),(detailedlight) (1girl), solo , floating_hair,glowingeyes,green hair,greeneyes <1>, twintails, halterdress,\nThe requested LoRAs are loaded.\nThe config attributes {'clip_sample_range': 1.0} were passed to EulerDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\nThe config attributes {'clip_sample_range': 1.0} were passed to EulerAncestralDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.\nThe following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['twintails, halterdress,']\n 0%| | 0/31 [00:00<?, ?it/s]\n 6%|▋ | 2/31 [00:00<00:02, 11.13it/s]\n 13%|█▎ | 4/31 [00:00<00:02, 11.14it/s]\n 19%|█▉ | 6/31 [00:00<00:02, 11.23it/s]\n 26%|██▌ | 8/31 [00:00<00:02, 11.25it/s]\n 32%|███▏ | 10/31 [00:00<00:01, 11.30it/s]\n 39%|███▊ | 12/31 [00:01<00:01, 11.33it/s]\n 45%|████▌ | 14/31 [00:01<00:01, 11.02it/s]\n 52%|█████▏ | 16/31 [00:01<00:01, 11.10it/s]\n 58%|█████▊ | 18/31 [00:01<00:01, 11.18it/s]\n 65%|██████▍ | 20/31 [00:01<00:00, 11.23it/s]\n 71%|███████ | 22/31 [00:01<00:00, 11.22it/s]\n 77%|███████▋ | 24/31 [00:02<00:00, 11.21it/s]\n 84%|████████▍ | 26/31 [00:02<00:00, 11.11it/s]\n 90%|█████████ | 28/31 [00:02<00:00, 11.04it/s]\n 97%|█████████▋| 30/31 [00:02<00:00, 11.11it/s]\n100%|██████████| 31/31 [00:02<00:00, 11.16it/s]",
"metrics": {
"predict_time": 3.802227,
"total_time": 3.920883
},
"output": [
"https://replicate.delivery/pbxt/xSvYgfGIXEzHOK2fUfIkksiIhanhqOm5xDt3TJQ1VkCyZZnhA/out-0.png"
],
"started_at": "2023-04-20T10:51:34.734894Z",
"status": "succeeded",
"urls": {
"get": "https://api.replicate.com/v1/predictions/xbwykzzdrzaczdkydodvcqmkka",
"cancel": "https://api.replicate.com/v1/predictions/xbwykzzdrzaczdkydodvcqmkka/cancel"
},
"version": "f8e5074f993f6852679bdac9f604590827f11698fdbfc3f68a1f0c3395b46db6"
}
Using seed: 8651
Generating image of 512 x 512 with prompt: (((masterpiece))),(((bestquality))),((ultra-detailed)),(illustration),((anextremelydelicateandbeautiful)),dynamicangle,floating,(beautifuldetailedeyes),(detailedlight) (1girl), solo , floating_hair,glowingeyes,green hair,greeneyes <1>, twintails, halterdress,
The requested LoRAs are loaded.
The config attributes {'clip_sample_range': 1.0} were passed to EulerDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
The config attributes {'clip_sample_range': 1.0} were passed to EulerAncestralDiscreteScheduler, but are not expected and will be ignored. Please verify your scheduler_config.json configuration file.
The following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['twintails, halterdress,']
0%| | 0/31 [00:00<?, ?it/s]
6%|▋ | 2/31 [00:00<00:02, 11.13it/s]
13%|█▎ | 4/31 [00:00<00:02, 11.14it/s]
19%|█▉ | 6/31 [00:00<00:02, 11.23it/s]
26%|██▌ | 8/31 [00:00<00:02, 11.25it/s]
32%|███▏ | 10/31 [00:00<00:01, 11.30it/s]
39%|███▊ | 12/31 [00:01<00:01, 11.33it/s]
45%|████▌ | 14/31 [00:01<00:01, 11.02it/s]
52%|█████▏ | 16/31 [00:01<00:01, 11.10it/s]
58%|█████▊ | 18/31 [00:01<00:01, 11.18it/s]
65%|██████▍ | 20/31 [00:01<00:00, 11.23it/s]
71%|███████ | 22/31 [00:01<00:00, 11.22it/s]
77%|███████▋ | 24/31 [00:02<00:00, 11.21it/s]
84%|████████▍ | 26/31 [00:02<00:00, 11.11it/s]
90%|█████████ | 28/31 [00:02<00:00, 11.04it/s]
97%|█████████▋| 30/31 [00:02<00:00, 11.11it/s]
100%|██████████| 31/31 [00:02<00:00, 11.16it/s]
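The urls.get endpoint in the response above can be polled to check on a prediction. With the Python client, a minimal sketch (reusing the prediction id from the example) looks like this:
import replicate

# Sketch only: fetch the example prediction by id and inspect its state.
prediction = replicate.predictions.get("xbwykzzdrzaczdkydodvcqmkka")
print(prediction.status)  # e.g. "succeeded"
print(prediction.output)  # list of generated image URLs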
This model costs approximately $0.011 to run on Replicate, or 90 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
This model runs on Nvidia A100 (80GB) GPU hardware. Predictions typically complete within 9 seconds.
…
…
…
…
…