bluematter/basedlabs-watermark-remover
Public · 6.9K runs
Run bluematter/basedlabs-watermark-remover with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
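As a rough sketch of what a direct HTTP call might look like (the exact client code comes from the Playground tab; the model version ID, input URL, and token handling below are placeholders, not values taken from this page):

```python
import json
import os
import urllib.request

# Hypothetical request payload following the input schema documented below.
# "MODEL_VERSION_ID" is a placeholder: copy the real version ID from the model page.
payload = {
    "version": "MODEL_VERSION_ID",
    "input": {
        "video": "https://example.com/input.mp4",  # required field (placeholder URL)
        "mode": "video_inpainting",
        "fp16": True,
    },
}

token = os.environ.get("REPLICATE_API_TOKEN")
if token:
    # Replicate's predictions endpoint; only called when a token is configured.
    req = urllib.request.Request(
        "https://api.replicate.com/v1/predictions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        prediction = json.load(resp)
        print(prediction["status"])
```

The client libraries wrap this same request, so the payload shape is what matters: a `version` plus an `input` object keyed by the field names in the schema below.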
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| video | string | | Input video |
| mask | string | | Mask for video inpainting. Can be a static image (jpg, png) or a video (avi, mp4). If not provided, will auto-select based on video aspect ratio (16:9 landscape or 9:16 portrait). |
| return_input_video | boolean | False | Return the input video in the output. |
| resize_ratio | number | 1 | Resize scale for processing video. |
| height | integer | -1 | Height of the processing video. |
| width | integer | -1 | Width of the processing video. |
| mask_dilation | integer | 4 | Mask dilation for video and flow masking. |
| ref_stride | integer | 10 | Stride of global reference frames. |
| neighbor_length | integer | 5 | Length of local neighboring frames. Lower = faster, higher = smoother. |
| subvideo_length | integer | 20 | Length of sub-video for long video inference. Lower = faster & less memory. |
| raft_iter | integer | 20 | Iterations for RAFT inference. |
| mode | string | video_inpainting | Modes: video inpainting / video outpainting. Video inpainting requires a mask; video outpainting requires scale_h and scale_w, and the mask is ignored. |
| scale_h | number | 1 | Outpainting scale of height for video_outpainting mode. |
| scale_w | number | 1 | Outpainting scale of width for video_outpainting mode. |
| save_fps | integer | 24 | Frames per second of the saved output video. |
| fp16 | boolean | True | Use fp16 (half precision) during inference. Enabled by default to save memory. |
{
"type": "object",
"title": "Input",
"required": [
"video"
],
"properties": {
"fp16": {
"type": "boolean",
"title": "Fp16",
"default": true,
"x-order": 15,
"description": "Use fp16 (half precision) during inference. Enabled by default to save memory."
},
"mask": {
"type": "string",
"title": "Mask",
"format": "uri",
"x-order": 1,
"description": "Mask for video inpainting. Can be a static image (jpg, png) or a video (avi, mp4). If not provided, will auto-select based on video aspect ratio (16:9 landscape or 9:16 portrait)."
},
"mode": {
"enum": [
"video_inpainting",
"video_outpainting"
],
"type": "string",
"title": "mode",
"description": "Modes: video inpainting / video outpainting. If you want to do video inpainting, you need a mask. For video outpainting, you need to set scale_h and scale_w, and mask is ignored.",
"default": "video_inpainting",
"x-order": 11
},
"video": {
"type": "string",
"title": "Video",
"format": "uri",
"x-order": 0,
"description": "Input video"
},
"width": {
"type": "integer",
"title": "Width",
"default": -1,
"x-order": 5,
"description": "Width of the processing video."
},
"height": {
"type": "integer",
"title": "Height",
"default": -1,
"x-order": 4,
"description": "Height of the processing video."
},
"scale_h": {
"type": "number",
"title": "Scale H",
"default": 1,
"x-order": 12,
"description": "Outpainting scale of height for video_outpainting mode."
},
"scale_w": {
"type": "number",
"title": "Scale W",
"default": 1,
"x-order": 13,
"description": "Outpainting scale of width for video_outpainting mode."
},
"save_fps": {
"type": "integer",
"title": "Save Fps",
"default": 24,
"x-order": 14,
"description": "Frames per second."
},
"raft_iter": {
"type": "integer",
"title": "Raft Iter",
"default": 20,
"x-order": 10,
"description": "Iterations for RAFT inference."
},
"ref_stride": {
"type": "integer",
"title": "Ref Stride",
"default": 10,
"x-order": 7,
"description": "Stride of global reference frames."
},
"resize_ratio": {
"type": "number",
"title": "Resize Ratio",
"default": 1,
"x-order": 3,
"description": "Resize scale for processing video."
},
"mask_dilation": {
"type": "integer",
"title": "Mask Dilation",
"default": 4,
"x-order": 6,
"description": "Mask dilation for video and flow masking."
},
"neighbor_length": {
"type": "integer",
"title": "Neighbor Length",
"default": 5,
"x-order": 8,
"description": "Length of local neighboring frames. Lower = faster, higher = smoother."
},
"subvideo_length": {
"type": "integer",
"title": "Subvideo Length",
"default": 20,
"x-order": 9,
"description": "Length of sub-video for long video inference. Lower = faster & less memory."
},
"return_input_video": {
"type": "boolean",
"title": "Return Input Video",
"default": false,
"x-order": 2,
"description": "Return the input video in the output."
}
}
}
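One way to read this schema in practice: any field you omit falls back to its default, `video` is the only required field, and in `video_outpainting` mode the mask is ignored. A minimal sketch of that resolution logic, with the defaults transcribed from the schema above (the helper function itself is illustrative, not part of the API):

```python
# Defaults transcribed from the input schema above; "video" is required and has none,
# and "mask" has no default (it is auto-selected server-side when omitted).
DEFAULTS = {
    "fp16": True,
    "mode": "video_inpainting",
    "width": -1,
    "height": -1,
    "scale_h": 1,
    "scale_w": 1,
    "save_fps": 24,
    "raft_iter": 20,
    "ref_stride": 10,
    "resize_ratio": 1,
    "mask_dilation": 4,
    "neighbor_length": 5,
    "subvideo_length": 20,
    "return_input_video": False,
}

def build_input(user_input: dict) -> dict:
    """Merge user-supplied fields over the schema defaults, enforcing the required field."""
    if "video" not in user_input:
        raise ValueError("'video' is required")
    if user_input.get("mode") == "video_outpainting" and "mask" in user_input:
        # Per the schema description, mask is ignored in outpainting mode.
        user_input = {k: v for k, v in user_input.items() if k != "mask"}
    return {**DEFAULTS, **user_input}

resolved = build_input({"video": "https://example.com/clip.mp4", "save_fps": 30})
print(resolved["save_fps"], resolved["fp16"])  # 30 True
```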
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
{
"type": "array",
"items": {
"type": "string",
"format": "uri"
},
"title": "Output"
}
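The output is a JSON array of URI strings pointing at result files. A minimal sketch of consuming it, using a made-up sample response (the URL is a placeholder, not real output):

```python
import json

# Hypothetical raw response matching the output schema: an array of URI strings.
raw = '["https://example.com/outputs/result.mp4"]'

output = json.loads(raw)
assert isinstance(output, list) and all(isinstance(u, str) for u in output)

for uri in output:
    # Each item is a URL to a result file; download or store it as needed.
    print(uri)
```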