[
"Where was this photo taken from? Answer: The Golden Gate Bridge is a suspension bridge spanning"
]
{
"completed_at": "2024-10-04T20:01:40.790704Z",
"created_at": "2024-10-04T19:58:08.882000Z",
"data_removed": false,
"error": null,
"id": "8hq58v3fy9rgp0cjb4e8svy9zm",
"input": {
"image": "https://replicate.delivery/pbxt/LjoVjObT8FOT8vQFsPfOoxr17sRMDQMihn2C4bzMec3BkDHo/IMG_3310.jpeg",
"top_p": 0.95,
"prompt": "Where was this photo taken from?",
"temperature": 0.3
},
"logs": "/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:601: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.3` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.\nwarnings.warn(\n/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:606: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.95` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.\nwarnings.warn(\n/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/utils.py:1220: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.\nwarnings.warn(",
"metrics": {
"predict_time": 1.9958411040000001,
"total_time": 211.908704
},
"output": [
"Where was this photo taken from? Answer: The Golden Gate Bridge is a suspension bridge spanning"
],
"started_at": "2024-10-04T20:01:38.794863Z",
"status": "succeeded",
"urls": {
"get": "https://api.replicate.com/v1/predictions/8hq58v3fy9rgp0cjb4e8svy9zm",
"cancel": "https://api.replicate.com/v1/predictions/8hq58v3fy9rgp0cjb4e8svy9zm/cancel"
},
"version": "d48ad671cbc5f6e0c848f455ac2ca7280953fe1cf4039a010968f1cb19b0936f"
}
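To reproduce this prediction programmatically, the Replicate Python client can be used. A minimal sketch, assuming `pip install replicate` and a `REPLICATE_API_TOKEN` environment variable; "owner/model" is a hypothetical placeholder since this page excerpt does not name the model, while the version hash and inputs are copied from the record above.

import replicate

# Run the same prediction as the record above. "owner/model" is a
# hypothetical placeholder; the version hash and inputs come from the JSON.
output = replicate.run(
    "owner/model:d48ad671cbc5f6e0c848f455ac2ca7280953fe1cf4039a010968f1cb19b0936f",
    input={
        "image": "https://replicate.delivery/pbxt/LjoVjObT8FOT8vQFsPfOoxr17sRMDQMihn2C4bzMec3BkDHo/IMG_3310.jpeg",
        "prompt": "Where was this photo taken from?",
        "top_p": 0.95,
        "temperature": 0.3,
    },
)
print(output)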
Run time and cost
This model runs on Nvidia L40S GPU hardware.
We don't yet have enough runs of this model to provide performance information.
This model is cold. You'll get a fast response if the model is warm and already running, and a slower response if the model is cold and starting up.
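The gap between predict_time (about 2 s) and total_time (about 212 s) in the metrics above is exactly this cold boot: the prediction waited while a worker started up. One way to absorb that wait is to poll the prediction by id, which is what the "get" URL in the urls field is for. A minimal polling sketch with the Replicate Python client, using the prediction id from the record above:

import time
import replicate

# replicate.predictions.get corresponds to the "get" URL in the record.
# Poll until the prediction leaves the starting/processing states; a cold
# boot shows up as a long "starting" phase before predict_time begins.
prediction = replicate.predictions.get("8hq58v3fy9rgp0cjb4e8svy9zm")
while prediction.status not in ("succeeded", "failed", "canceled"):
    time.sleep(2)        # cold boots can take minutes, so poll gently
    prediction.reload()  # re-fetch the latest state from the API
print(prediction.status, prediction.output)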
Logs (8hq58v3fy9rgp0cjb4e8svy9zm)
Succeeded
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:601: UserWarning: `do_sample` is set to `False`. However, `temperature` is set to `0.3` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `temperature`.
warnings.warn(
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/configuration_utils.py:606: UserWarning: `do_sample` is set to `False`. However, `top_p` is set to `0.95` -- this flag is only used in sample-based generation modes. You should set `do_sample=True` or unset `top_p`.
warnings.warn(
/root/.pyenv/versions/3.10.15/lib/python3.10/site-packages/transformers/generation/utils.py:1220: UserWarning: Using the model-agnostic default `max_length` (=20) to control the generation length. We recommend setting `max_new_tokens` to control the maximum length of the generation.
warnings.warn(
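These warnings mean the sampling parameters were silently ignored: temperature and top_p only apply when do_sample=True, and the default max_length of 20 is what truncates the answer mid-sentence. A sketch of a transformers generate() call with the flags the warnings ask for; the checkpoint is a placeholder (the page does not identify the underlying model), with BLIP used purely to make the example runnable.

import requests
from PIL import Image
from transformers import AutoProcessor, BlipForConditionalGeneration

# Hypothetical checkpoint, a stand-in for whatever model this page wraps.
name = "Salesforce/blip-image-captioning-base"
processor = AutoProcessor.from_pretrained(name)
model = BlipForConditionalGeneration.from_pretrained(name)

url = "https://replicate.delivery/pbxt/LjoVjObT8FOT8vQFsPfOoxr17sRMDQMihn2C4bzMec3BkDHo/IMG_3310.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, text="Where was this photo taken from?", return_tensors="pt")

output_ids = model.generate(
    **inputs,
    do_sample=True,        # required for temperature/top_p to take effect
    temperature=0.3,
    top_p=0.95,
    max_new_tokens=256,    # replaces the model-agnostic max_length=20 default
)
print(processor.decode(output_ids[0], skip_special_tokens=True))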