Run pipeline-examples/in-context-lora using Replicate’s API. Check out the model's schema for an overview of inputs and outputs. With the Node.js client:
import Replicate from "replicate";
import { writeFile } from "node:fs/promises";

const replicate = new Replicate();

const input = {
  prompt: "this logo is applied as a trademark on the shoulder of a chrome robot, the robot is standing on a wet city street, against a distant sunset"
};

const output = await replicate.run("pipeline-examples/in-context-lora", { input });

// To access the file URL:
console.log(output.url()); //=> "http://example.com"

// To write the file to disk:
await writeFile("my-image.png", output);
The same prediction with the Python client:
import replicate

output = replicate.run(
    "pipeline-examples/in-context-lora",
    input={
        "prompt": "this logo is applied as a trademark on the shoulder of a chrome robot, the robot is standing on a wet city street, against a distant sunset"
    }
)

# To access the file URL:
print(output.url())
#=> "http://example.com"

# To write the file to disk:
with open("my-image.png", "wb") as file:
    file.write(output.read())
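For reference, the client call above boils down to a single HTTP POST. A minimal sketch of building that request with only the Python standard library (the request is constructed but not sent here; sending it requires a valid REPLICATE_API_TOKEN in your environment):

```python
import json
import os
import urllib.request

# Request body matching the client call above.
body = {
    "input": {
        "prompt": "this logo is applied as a trademark on the shoulder of a chrome robot, the robot is standing on a wet city street, against a distant sunset"
    }
}

req = urllib.request.Request(
    "https://api.replicate.com/v1/models/pipeline-examples/in-context-lora/predictions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Authorization": f"Bearer {os.environ.get('REPLICATE_API_TOKEN', '')}",
        "Content-Type": "application/json",
        "Prefer": "wait",  # block until the prediction finishes (or times out)
    },
    method="POST",
)

# urllib.request.urlopen(req) would send it and return the prediction JSON.
```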
You can also call the HTTP API directly with cURL:
curl -s -X POST \
-H "Authorization: Bearer $REPLICATE_API_TOKEN" \
-H "Content-Type: application/json" \
-H "Prefer: wait" \
-d $'{
"input": {
"prompt": "this logo is applied as a trademark on the shoulder of a chrome robot, the robot is standing on a wet city street, against a distant sunset"
}
}' \
https://api.replicate.com/v1/models/pipeline-examples/in-context-lora/predictions
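With the `Prefer: wait` header, the response body is the finished prediction object as JSON. A minimal sketch of extracting the output URL, assuming the prediction succeeded and the model returns a single file URL (the `id` and URL below are placeholders, not real values):

```python
import json

# Hypothetical response body; its shape follows Replicate's prediction object.
response_body = '''
{
  "id": "abc123",
  "status": "succeeded",
  "output": "https://example.com/out-0.png"
}
'''

prediction = json.loads(response_body)
if prediction["status"] == "succeeded":
    image_url = prediction["output"]
    print(image_url)
```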