Custom model for yagyesh4
Prompt for generated image. If you include the `trigger_word` used in the training process you are more likely to activate the trained object, style, or concept in the resulting image.
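As a minimal illustration, the snippet below simply builds a prompt string around the trigger word; it assumes the trigger word for this custom model is "yagyesh4", inferred from the example prompt further down this page.

# Sketch only: include the trigger word in the prompt so the trained
# concept is activated. Assumption: the trigger word is "yagyesh4",
# taken from the example prompt shown later on this page.
TRIGGER_WORD = "yagyesh4"

prompt = f"a portrait photo of {TRIGGER_WORD}, soft studio lighting"
print(prompt)  # -> "a portrait photo of yagyesh4, soft studio lighting"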
Input image for image to image or inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
Image mask for image inpainting mode. If provided, aspect_ratio, width, and height inputs are ignored.
Aspect ratio for the generated image. If custom is selected, the height and width inputs below are used, and generation runs in bf16 mode.
Default: "1:1"
Height of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
Width of generated image. Only works if `aspect_ratio` is set to custom. Will be rounded to nearest multiple of 16. Incompatible with fast generation
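The rounding described above can be sketched in a couple of lines. This is only an illustration of the arithmetic, not the model's internal code, and the exact tie-breaking behaviour for values halfway between multiples is an assumption.

# Illustration only: custom height/width are rounded to the nearest multiple of 16.
def round_to_multiple_of_16(value: int) -> int:
    return round(value / 16) * 16

print(round_to_multiple_of_16(777))  # -> 784
print(round_to_multiple_of_16(500))  # -> 496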
Prompt strength when using img2img. 1.0 corresponds to full destruction of the information in the input image.
Default: 0.8
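As a rough sketch of how prompt_strength fits into an image-to-image call with the Python client shown later on this page. The input name "image" and the local file "photo.jpg" are assumptions for illustration only.

import replicate

# Sketch of an image-to-image call. Assumptions: the init-image input is
# named "image", and "photo.jpg" is a hypothetical local file.
output = replicate.run(
    "kwiktwikteam/custom_model_4:5533e8312a944c68506d27fedfc1c18b5d6e897d1a46bef6643fb3d21c3081a0",
    input={
        "prompt": "give me yagyesh4 ",
        "image": open("photo.jpg", "rb"),  # assumed input name
        "prompt_strength": 0.8,            # 1.0 would discard the input image entirely
    },
)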
Which model to run inference with. The dev model performs best with around 28 inference steps, while the schnell model needs only 4 steps.
Default: "dev"
Number of outputs to generate
Default: 1
Number of denoising steps. More steps can give more detailed images, but take longer.
Default: 28
Guidance scale for the diffusion process. Lower values can give more realistic images. Good values to try are 2, 2.5, 3 and 3.5
Default: 3
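The sketch below contrasts the two model settings described above using the Python client shown later on this page; the step counts and guidance value simply restate the guidance given in this section, and whether guidance_scale affects schnell is not stated here, so it is omitted in that call.

import replicate

VERSION = "kwiktwikteam/custom_model_4:5533e8312a944c68506d27fedfc1c18b5d6e897d1a46bef6643fb3d21c3081a0"

# dev: best quality at around 28 denoising steps; guidance of 2-3.5 works well.
dev_output = replicate.run(VERSION, input={
    "prompt": "give me yagyesh4 ",
    "model": "dev",
    "num_inference_steps": 28,
    "guidance_scale": 3,
})

# schnell: much faster, needs only about 4 steps.
schnell_output = replicate.run(VERSION, input={
    "prompt": "give me yagyesh4 ",
    "model": "schnell",
    "num_inference_steps": 4,
})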
Random seed. Set for reproducible generation
Format of the output images
Default: "webp"
Quality when saving the output images, from 0 to 100. 100 is best quality, 0 is lowest quality. Not relevant for .png outputs
Default: 80
This model’s safety checker can’t be disabled when running on the website. Learn more about platform safety on Replicate.
Disable safety checker for generated images.
Default: false
Run faster predictions with a model optimized for speed (currently fp8-quantized); disable to run in the original bf16 precision.
Approximate number of megapixels for generated image
Default: "1"
Determines how strongly the main LoRA should be applied. Sane results lie between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
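In other words, the scale that is actually applied depends on go_fast. The helper below only illustrates that arithmetic and is not part of the model's API.

# Illustration of the effective LoRA scale described above: with go_fast
# enabled, a 1.5x multiplier is applied to the value you pass in.
def effective_lora_scale(lora_scale: float, go_fast: bool) -> float:
    return lora_scale * 1.5 if go_fast else lora_scale

print(effective_lora_scale(1.0, go_fast=False))  # 1.0
print(effective_lora_scale(1.0, go_fast=True))   # 1.5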
Load LoRA weights. Supports Replicate models in the format <owner>/<model-name> or <owner>/<model-name>/<version>, HuggingFace URLs in the format huggingface.co/<owner>/<model-name>, CivitAI URLs in the format civitai.com/models/<id>[/<model-name>], or arbitrary .safetensors URLs from the Internet. For example, 'fofr/flux-pixar-cars'.
Determines how strongly the extra LoRA should be applied. Sane results lie between 0 and 1 for base inference. For go_fast we apply a 1.5x multiplier to this value; we've generally seen good performance when scaling the base value by that amount. You may still need to experiment to find the best value for your particular LoRA.
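A sketch of stacking a second LoRA on top of the trained one with the Python client. The input name "extra_lora" is an assumption based on the extra_lora_scale input above, and "fofr/flux-pixar-cars" is simply the example reference given earlier.

import replicate

# Sketch: load an additional LoRA alongside the trained one.
# Assumption: the weights described above are passed via an input named "extra_lora".
output = replicate.run(
    "kwiktwikteam/custom_model_4:5533e8312a944c68506d27fedfc1c18b5d6e897d1a46bef6643fb3d21c3081a0",
    input={
        "prompt": "give me yagyesh4 ",
        "extra_lora": "fofr/flux-pixar-cars",  # assumed input name; example from above
        "extra_lora_scale": 0.8,
    },
)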
Run this model in Node.js with one line of code:
npm install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>
Find your API token in your account settings.
import Replicate from "replicate";
import fs from "node:fs";

const replicate = new Replicate({
  auth: process.env.REPLICATE_API_TOKEN,
});
Run kwiktwikteam/custom_model_4 using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.
const output = await replicate.run(
  "kwiktwikteam/custom_model_4:5533e8312a944c68506d27fedfc1c18b5d6e897d1a46bef6643fb3d21c3081a0",
  {
    input: {
      model: "dev",
      prompt: "give me yagyesh4 ",
      go_fast: false,
      lora_scale: 1,
      megapixels: "1",
      num_outputs: 1,
      aspect_ratio: "1:1",
      output_format: "webp",
      guidance_scale: 3,
      output_quality: 80,
      prompt_strength: 0.8,
      extra_lora_scale: 1,
      num_inference_steps: 28
    }
  }
);

// To access the file URL:
console.log(output[0].url()); //=> "http://example.com"

// To write the file to disk:
fs.writeFile("my-image.png", output[0]);
To learn more, take a look at the guide on getting started with Node.js.
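Run this model in Python. First, install Replicate's Python client library: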
pip install replicate
import replicate
output = replicate.run(
    "kwiktwikteam/custom_model_4:5533e8312a944c68506d27fedfc1c18b5d6e897d1a46bef6643fb3d21c3081a0",
    input={
        "model": "dev",
        "prompt": "give me yagyesh4 ",
        "go_fast": False,
        "lora_scale": 1,
        "megapixels": "1",
        "num_outputs": 1,
        "aspect_ratio": "1:1",
        "output_format": "webp",
        "guidance_scale": 3,
        "output_quality": 80,
        "prompt_strength": 0.8,
        "extra_lora_scale": 1,
        "num_inference_steps": 28
    }
)

# To access the file URL:
print(output[0].url())  #=> "http://example.com"

# To write the file to disk:
with open("my-image.png", "wb") as file:
    file.write(output[0].read())
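When num_outputs is greater than 1, output is a list. A small extension of the snippet above writes every file, assuming the same .read() interface used above:

# Sketch: write every generated image when num_outputs > 1.
for i, item in enumerate(output):
    with open(f"my-image-{i}.webp", "wb") as file:
        file.write(item.read())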
To learn more, take a look at the guide on getting started with Python.
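You can also call the model directly over HTTP with curl: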
curl -s -X POST \
  -H "Authorization: Bearer $REPLICATE_API_TOKEN" \
  -H "Content-Type: application/json" \
  -H "Prefer: wait" \
  -d $'{
    "version": "kwiktwikteam/custom_model_4:5533e8312a944c68506d27fedfc1c18b5d6e897d1a46bef6643fb3d21c3081a0",
    "input": {
      "model": "dev",
      "prompt": "give me yagyesh4 ",
      "go_fast": false,
      "lora_scale": 1,
      "megapixels": "1",
      "num_outputs": 1,
      "aspect_ratio": "1:1",
      "output_format": "webp",
      "guidance_scale": 3,
      "output_quality": 80,
      "prompt_strength": 0.8,
      "extra_lora_scale": 1,
      "num_inference_steps": 28
    }
  }' \
  https://api.replicate.com/v1/predictions
To learn more, take a look at Replicate’s HTTP API reference docs.
{ "completed_at": "2025-01-08T16:34:56.382755Z", "created_at": "2025-01-08T16:34:48.135000Z", "data_removed": false, "error": null, "id": "cjrhney4rxrmc0cm8v2aexnryr", "input": { "model": "dev", "prompt": "give me yagyesh4 ", "go_fast": false, "lora_scale": 1, "megapixels": "1", "num_outputs": 1, "aspect_ratio": "1:1", "output_format": "webp", "guidance_scale": 3, "output_quality": 80, "prompt_strength": 0.8, "extra_lora_scale": 1, "num_inference_steps": 28 }, "logs": "2025-01-08 16:34:48.142 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2025-01-08 16:34:48.142 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 89%|████████▉ | 270/304 [00:00<00:00, 2662.11it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2647.43it/s]\n2025-01-08 16:34:48.257 | SUCCESS | fp8.lora_loading:unload_loras:564 - LoRAs unloaded in 0.12s\nfree=28786974556160\nDownloading weights\n2025-01-08T16:34:48Z | INFO | [ Initiating ] chunk_size=150M dest=/tmp/tmpk0jo_xl1/weights url=https://replicate.delivery/xezq/DSywwtPM9eyTQC7KR3rpbbZ7aTiqNfCWmSkpKKfPZJuuL8FoA/trained_model.tar\n2025-01-08T16:34:50Z | INFO | [ Complete ] dest=/tmp/tmpk0jo_xl1/weights size=\"172 MB\" total_elapsed=1.828s url=https://replicate.delivery/xezq/DSywwtPM9eyTQC7KR3rpbbZ7aTiqNfCWmSkpKKfPZJuuL8FoA/trained_model.tar\nDownloaded weights in 1.85s\n2025-01-08 16:34:50.111 | INFO | fp8.lora_loading:convert_lora_weights:498 - Loading LoRA weights for /src/weights-cache/a79b2aceb606d0ae\n2025-01-08 16:34:50.181 | INFO | fp8.lora_loading:convert_lora_weights:519 - LoRA weights loaded\n2025-01-08 16:34:50.181 | DEBUG | fp8.lora_loading:apply_lora_to_model:574 - Extracting keys\n2025-01-08 16:34:50.182 | DEBUG | fp8.lora_loading:apply_lora_to_model:581 - Keys extracted\nApplying LoRA: 0%| | 0/304 [00:00<?, ?it/s]\nApplying LoRA: 89%|████████▉ | 270/304 [00:00<00:00, 2666.71it/s]\nApplying LoRA: 100%|██████████| 304/304 [00:00<00:00, 2651.13it/s]\n2025-01-08 16:34:50.297 | SUCCESS | fp8.lora_loading:load_lora:539 - LoRA applied in 0.19s\nUsing seed: 53873\n0it [00:00, ?it/s]\n1it [00:00, 8.36it/s]\n2it [00:00, 5.86it/s]\n3it [00:00, 5.35it/s]\n4it [00:00, 5.13it/s]\n5it [00:00, 5.02it/s]\n6it [00:01, 4.96it/s]\n7it [00:01, 4.92it/s]\n8it [00:01, 4.89it/s]\n9it [00:01, 4.87it/s]\n10it [00:01, 4.85it/s]\n11it [00:02, 4.84it/s]\n12it [00:02, 4.84it/s]\n13it [00:02, 4.84it/s]\n14it [00:02, 4.83it/s]\n15it [00:03, 4.83it/s]\n16it [00:03, 4.82it/s]\n17it [00:03, 4.82it/s]\n18it [00:03, 4.82it/s]\n19it [00:03, 4.82it/s]\n20it [00:04, 4.82it/s]\n21it [00:04, 4.82it/s]\n22it [00:04, 4.82it/s]\n23it [00:04, 4.82it/s]\n24it [00:04, 4.82it/s]\n25it [00:05, 4.82it/s]\n26it [00:05, 4.82it/s]\n27it [00:05, 4.82it/s]\n28it [00:05, 4.83it/s]\n28it [00:05, 4.90it/s]\nTotal safe images: 1 out of 1", "metrics": { "predict_time": 8.239552857, "total_time": 8.247755 }, "output": [ "https://replicate.delivery/xezq/3G5P47X2N9LFKFT4DtnIosLXCrad4WN0jSXInlNYMOJMbxAF/out-0.webp" ], "started_at": "2025-01-08T16:34:48.143202Z", "status": "succeeded", "urls": { "stream": "https://stream.replicate.com/v1/files/bcwr-ppxkfcucum5qgprne73xfgznjgu6ypymybxfvmzmrkfzkb3yf2ga", "get": "https://api.replicate.com/v1/predictions/cjrhney4rxrmc0cm8v2aexnryr", "cancel": "https://api.replicate.com/v1/predictions/cjrhney4rxrmc0cm8v2aexnryr/cancel" }, "version": "5533e8312a944c68506d27fedfc1c18b5d6e897d1a46bef6643fb3d21c3081a0" }
This model runs on Nvidia H100 GPU hardware. We don't yet have enough runs of this model to provide performance information.
This model doesn't have a readme.
This model is booted and ready for API calls.
This model runs on H100 hardware, which costs $0.001525 per second.