Readme
This model doesn't have a readme.
Make pictures of an AI character named 0_1.webp
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
const replicate = new Replicate({
auth: process.env.REPLICATE_API_TOKEN,
});
Run fofr/0_1-webp using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
"fofr/0_1-webp:e927742a5f430e7e36f3a646ced840cebb4c59e00e1bd1993e068a7f97a85fd7",
{
input: {
model: "dev",
prompt: "a portrait photo of 0_1 as a charismatic female speaker at conference, captured gesturing mid-speech on stage. She is wearing a light grey sweater. She's also wearing a simple black lanyard hanging around her neck. The lanyard badge has the text \"Replicate\". She has perfect eyes.\n\nBehind her, there is a blurred background with a white banner containing logos and text (including replicate), a professional conference setting.",
go_fast: false,
lora_scale: 0.9,
megapixels: "1",
num_outputs: 1,
aspect_ratio: "3:4",
output_format: "webp",
guidance_scale: 2.5,
output_quality: 90,
prompt_strength: 0.8,
extra_lora_scale: 1,
num_inference_steps: 28
}
}
);
// To access the file URL:
console.log(output[0].url());

// To write the file to disk (add `import { writeFile } from "node:fs/promises";` at the top):
await writeFile("my-image.webp", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
Install Replicate's Python client library:
pip install replicate
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import replicate
Run fofr/0_1-webp using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
output = replicate.run(
"fofr/0_1-webp:e927742a5f430e7e36f3a646ced840cebb4c59e00e1bd1993e068a7f97a85fd7",
input={
"model": "dev",
"prompt": "a portrait photo of 0_1 as a charismatic female speaker at conference, captured gesturing mid-speech on stage. She is wearing a light grey sweater. She's also wearing a simple black lanyard hanging around her neck. The lanyard badge has the text \"Replicate\". She has perfect eyes.\n\nBehind her, there is a blurred background with a white banner containing logos and text (including replicate), a professional conference setting.",
"go_fast": False,
"lora_scale": 0.9,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "3:4",
"output_format": "webp",
"guidance_scale": 2.5,
"output_quality": 90,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
)
print(output)
To learn more, take a look at the guide on getting started with Python.
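The Python example above only prints the raw output. One way to save each generated image locally is to derive a filename from its delivery URL and download it. A minimal sketch; `output_filename` is a hypothetical helper, not part of the replicate client:

```python
from pathlib import PurePosixPath
from urllib.parse import urlparse

def output_filename(url: str, index: int = 0) -> str:
    """Derive a local filename from a Replicate delivery URL,
    falling back to a numbered name if the URL has no basename."""
    name = PurePosixPath(urlparse(url).path).name
    return name or f"out-{index}.webp"

# Depending on your client version, items in `output` are URL strings or
# FileOutput objects; str() yields the URL either way, so you could then:
# for i, item in enumerate(output):
#     urllib.request.urlretrieve(str(item), output_filename(str(item), i))

print(output_filename("https://replicate.delivery/yhqm/example/out-0.webp"))  # out-0.webp
```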
Set the REPLICATE_API_TOKEN environment variable:

export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
Run fofr/0_1-webp using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"version": "e927742a5f430e7e36f3a646ced840cebb4c59e00e1bd1993e068a7f97a85fd7",
"input": {
"model": "dev",
"prompt": "a portrait photo of 0_1 as a charismatic female speaker at conference, captured gesturing mid-speech on stage. She is wearing a light grey sweater. She\'s also wearing a simple black lanyard hanging around her neck. The lanyard badge has the text \\"Replicate\\". She has perfect eyes.\\n\\nBehind her, there is a blurred background with a white banner containing logos and text (including replicate), a professional conference setting.",
"go_fast": false,
"lora_scale": 0.9,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "3:4",
"output_format": "webp",
"guidance_scale": 2.5,
"output_quality": 90,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
}' \
https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
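With the `Prefer: wait` header, the response body is the prediction object itself. A minimal sketch (in Python; `extract_outputs` is a hypothetical helper) of checking the status and pulling out the output URLs:

```python
import json

def extract_outputs(prediction_json: str) -> list:
    """Return the output URLs from a prediction response,
    raising if the prediction did not succeed."""
    prediction = json.loads(prediction_json)
    if prediction["status"] != "succeeded":
        raise RuntimeError(prediction.get("error") or prediction["status"])
    return prediction["output"]

sample = '{"status": "succeeded", "error": null, "output": ["https://replicate.delivery/x/out-0.webp"]}'
print(extract_outputs(sample))  # ['https://replicate.delivery/x/out-0.webp']
```

Without `Prefer: wait`, the initial response typically has status `starting`, and you would poll the prediction's `urls.get` endpoint until it reaches a terminal state.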
brew install cog
If you don’t have Homebrew, there are other installation options available.
Run this to download the model and run it in your local environment:
cog predict r8.im/fofr/0_1-webp@sha256:e927742a5f430e7e36f3a646ced840cebb4c59e00e1bd1993e068a7f97a85fd7 \
-i 'model="dev"' \
-i $'prompt="a portrait photo of 0_1 as a charismatic female speaker at conference, captured gesturing mid-speech on stage. She is wearing a light grey sweater. She\'s also wearing a simple black lanyard hanging around her neck. The lanyard badge has the text \\"Replicate\\". She has perfect eyes.\\n\\nBehind her, there is a blurred background with a white banner containing logos and text (including replicate), a professional conference setting."' \
-i 'go_fast=false' \
-i 'lora_scale=0.9' \
-i 'megapixels="1"' \
-i 'num_outputs=1' \
-i 'aspect_ratio="3:4"' \
-i 'output_format="webp"' \
-i 'guidance_scale=2.5' \
-i 'output_quality=90' \
-i 'prompt_strength=0.8' \
-i 'extra_lora_scale=1' \
-i 'num_inference_steps=28'
To learn more, take a look at the Cog documentation.
Run this to download the model and run it in your local environment:
docker run -d -p 5000:5000 --gpus=all r8.im/fofr/0_1-webp@sha256:e927742a5f430e7e36f3a646ced840cebb4c59e00e1bd1993e068a7f97a85fd7
curl -s -X POST \
-H "Content-Type: application/json" \
-d $'{
"input": {
"model": "dev",
"prompt": "a portrait photo of 0_1 as a charismatic female speaker at conference, captured gesturing mid-speech on stage. She is wearing a light grey sweater. She\'s also wearing a simple black lanyard hanging around her neck. The lanyard badge has the text \\"Replicate\\". She has perfect eyes.\\n\\nBehind her, there is a blurred background with a white banner containing logos and text (including replicate), a professional conference setting.",
"go_fast": false,
"lora_scale": 0.9,
"megapixels": "1",
"num_outputs": 1,
"aspect_ratio": "3:4",
"output_format": "webp",
"guidance_scale": 2.5,
"output_quality": 90,
"prompt_strength": 0.8,
"extra_lora_scale": 1,
"num_inference_steps": 28
}
}' \
http://localhost:5000/predictions
To learn more, take a look at the Cog documentation.
An example prediction made with these inputs:
{
"completed_at": "2024-08-20T14:40:07.946202Z",
"created_at": "2024-08-20T14:39:47.586000Z",
"data_removed": false,
"error": null,
"id": "sthe3aedg9rm00che0xbk475xw",
"input": {
"model": "dev",
"prompt": "a portrait photo of 0_1 as a charismatic female speaker at conference, captured gesturing mid-speech on stage. She is wearing a light grey sweater. She's also wearing a simple black lanyard hanging around her neck. The lanyard badge has the text \"Replicate\". She has perfect eyes.\n\nBehind her, there is a blurred background with a white banner containing logos and text (including replicate), a professional conference setting.",
"lora_scale": 0.9,
"num_outputs": 1,
"aspect_ratio": "3:4",
"output_format": "webp",
"guidance_scale": 2.5,
"output_quality": 90,
"num_inference_steps": 28
},
"logs": "Using seed: 2697\nPrompt: a portrait photo of 0_1 as a charismatic female speaker at conference, captured gesturing mid-speech on stage. She is wearing a light grey sweater. She's also wearing a simple black lanyard hanging around her neck. The lanyard badge has the text \"Replicate\". She has perfect eyes.\nBehind her, there is a blurred background with a white banner containing logos and text (including replicate), a professional conference setting.\ntxt2img mode\nUsing dev model\nLoading LoRA weights\nLoRA weights loaded successfully\nThe following part of your input was truncated because CLIP can only handle sequences up to 77 tokens: ['text ( including replicate ), a professional conference setting.']\n 0%| | 0/28 [00:00<?, ?it/s]\n 4%|▎ | 1/28 [00:00<00:07, 3.55it/s]\n 7%|▋ | 2/28 [00:00<00:06, 3.97it/s]\n 11%|█ | 3/28 [00:00<00:06, 3.77it/s]\n 14%|█▍ | 4/28 [00:01<00:06, 3.68it/s]\n 18%|█▊ | 5/28 [00:01<00:06, 3.64it/s]\n 21%|██▏ | 6/28 [00:01<00:06, 3.61it/s]\n 25%|██▌ | 7/28 [00:01<00:05, 3.59it/s]\n 29%|██▊ | 8/28 [00:02<00:05, 3.58it/s]\n 32%|███▏ | 9/28 [00:02<00:05, 3.58it/s]\n 36%|███▌ | 10/28 [00:02<00:05, 3.57it/s]\n 39%|███▉ | 11/28 [00:03<00:04, 3.57it/s]\n 43%|████▎ | 12/28 [00:03<00:04, 3.56it/s]\n 46%|████▋ | 13/28 [00:03<00:04, 3.57it/s]\n 50%|█████ | 14/28 [00:03<00:03, 3.56it/s]\n 54%|█████▎ | 15/28 [00:04<00:03, 3.56it/s]\n 57%|█████▋ | 16/28 [00:04<00:03, 3.56it/s]\n 61%|██████ | 17/28 [00:04<00:03, 3.56it/s]\n 64%|██████▍ | 18/28 [00:05<00:02, 3.55it/s]\n 68%|██████▊ | 19/28 [00:05<00:02, 3.55it/s]\n 71%|███████▏ | 20/28 [00:05<00:02, 3.56it/s]\n 75%|███████▌ | 21/28 [00:05<00:01, 3.56it/s]\n 79%|███████▊ | 22/28 [00:06<00:01, 3.56it/s]\n 82%|████████▏ | 23/28 [00:06<00:01, 3.56it/s]\n 86%|████████▌ | 24/28 [00:06<00:01, 3.56it/s]\n 89%|████████▉ | 25/28 [00:06<00:00, 3.56it/s]\n 93%|█████████▎| 26/28 [00:07<00:00, 3.56it/s]\n 96%|█████████▋| 27/28 [00:07<00:00, 3.56it/s]\n100%|██████████| 28/28 [00:07<00:00, 
3.56it/s]\n100%|██████████| 28/28 [00:07<00:00, 3.58it/s]",
"metrics": {
"predict_time": 16.173941447,
"total_time": 20.360202
},
"output": [
"https://replicate.delivery/yhqm/zNGay29l8SqjGBbhaPhXxfW3txrcpUbFcEgGPws6eBXHzlUTA/out-0.webp"
],
"started_at": "2024-08-20T14:39:51.772260Z",
"status": "succeeded",
"urls": {
"get": "https://api.replicate.com/v1/predictions/sthe3aedg9rm00che0xbk475xw",
"cancel": "https://api.replicate.com/v1/predictions/sthe3aedg9rm00che0xbk475xw/cancel"
},
"version": "e927742a5f430e7e36f3a646ced840cebb4c59e00e1bd1993e068a7f97a85fd7"
}
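The timestamps and metrics in the prediction above fit together: `total_time` is the span from `created_at` to `completed_at`, and the gap between `created_at` and `started_at` is time spent queued and booting before the prediction began. A quick check of that arithmetic:

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # datetime.fromisoformat() before Python 3.11 rejects a trailing "Z".
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

created = parse_ts("2024-08-20T14:39:47.586000Z")
started = parse_ts("2024-08-20T14:39:51.772260Z")
completed = parse_ts("2024-08-20T14:40:07.946202Z")

total = (completed - created).total_seconds()
setup = (started - created).total_seconds()
print(total)  # 20.360202, matching "total_time"
print(setup)  # 4.18626 seconds of queue/boot before the prediction started
```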
This model costs approximately $0.015 to run on Replicate, or 66 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.
This model runs on Nvidia H100 GPU hardware. Predictions typically complete within 10 seconds.
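The pricing note above can be turned into a rough budgeting helper. A sketch only; the $0.015 figure is approximate and actual cost varies with your inputs:

```python
COST_PER_RUN = 0.015  # approximate USD per run, per the pricing note above

def estimated_cost(num_runs: int) -> float:
    """Rough cost estimate for a batch of runs at the quoted rate."""
    return num_runs * COST_PER_RUN

print(int(1 / COST_PER_RUN))          # roughly 66 runs per dollar
print(round(estimated_cost(100), 2))  # cost of a 100-image batch
```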
This model is warm. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.