nicolascoutureau/ac
Public · 23.8K runs
Run nicolascoutureau/ac with an API
Use one of our client libraries to get started quickly. Clicking on a library will take you to the Playground tab where you can tweak different inputs, see the results, and copy the corresponding code to use in your own project.
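As a minimal sketch of such a call, the snippet below uses the Python `replicate` client (assumed installed via `pip install replicate`, with a `REPLICATE_API_TOKEN` in the environment). The example URL is a placeholder; the input fields and defaults come from the input schema on this page.

```python
# Sketch: run nicolascoutureau/ac via the Replicate Python client.
# Assumes `pip install replicate` and a REPLICATE_API_TOKEN in the environment;
# the model identifier comes from this page, and the inputs follow the input
# schema below (only `video` is required, the rest fall back to their defaults).
import os

input_payload = {
    "video": "https://example.com/horizontal-clip.mp4",  # required: URI of the source video
    "aspect_ratio": "9:16",      # one of "9:16", "1:1", "16:9"
    "speed_preset": "balanced",  # one of "quality", "balanced", "fast"
    "detect_speaker": True,      # TalkNet ASD active-speaker focus (requires GPU)
    "tracking_mode": "smooth",   # one of "smooth", "static", "fast"
    "debug_overlay": False,
}

# Only call the API when credentials are actually configured.
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate

    # Per the output schema, the result is a single URI string.
    output = replicate.run("nicolascoutureau/ac", input=input_payload)
    print(output)
```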
Input schema
The fields you can use to run this model with an API. If you don't give a value for a field, its default value will be used.
| Field | Type | Default value | Description |
|---|---|---|---|
| video | string | | Input horizontal video to convert to vertical format |
| aspect_ratio | string | 9:16 | Output aspect ratio |
| speed_preset | string | balanced | Processing preset. Controls detection speed plus output encode quality. Fast uses a smaller YOLO model, lighter analysis, and a more compressed output. |
| detect_speaker | boolean | True | Detect and focus on the active speaker using TalkNet ASD (audio-visual neural network). When multiple people are detected, focuses on who is talking. Requires GPU. |
| tracking_mode | string | smooth | Camera tracking mode. Smooth = cinematic OpusClip-like movement. Static = fixed per-scene. |
| debug_overlay | boolean | False | Draw debug info on video (scene, strategy, speaker, person details, zoom). |
```json
{
  "type": "object",
  "title": "Input",
  "required": [
    "video"
  ],
  "properties": {
    "video": {
      "type": "string",
      "title": "Video",
      "format": "uri",
      "x-order": 0,
      "description": "Input horizontal video to convert to vertical format"
    },
    "aspect_ratio": {
      "enum": [
        "9:16",
        "1:1",
        "16:9"
      ],
      "type": "string",
      "title": "aspect_ratio",
      "description": "Output aspect ratio",
      "default": "9:16",
      "x-order": 1
    },
    "speed_preset": {
      "enum": [
        "quality",
        "balanced",
        "fast"
      ],
      "type": "string",
      "title": "speed_preset",
      "description": "Processing preset. Controls detection speed plus output encode quality. Fast uses a smaller YOLO model, lighter analysis, and a more compressed output.",
      "default": "balanced",
      "x-order": 2
    },
    "debug_overlay": {
      "type": "boolean",
      "title": "Debug Overlay",
      "default": false,
      "x-order": 5,
      "description": "Draw debug info on video (scene, strategy, speaker, person details, zoom)."
    },
    "tracking_mode": {
      "enum": [
        "smooth",
        "static",
        "fast"
      ],
      "type": "string",
      "title": "tracking_mode",
      "description": "Camera tracking mode. Smooth = cinematic OpusClip-like movement. Static = fixed per-scene.",
      "default": "smooth",
      "x-order": 4
    },
    "detect_speaker": {
      "type": "boolean",
      "title": "Detect Speaker",
      "default": true,
      "x-order": 3,
      "description": "Detect and focus on the active speaker using TalkNet ASD (audio-visual neural network). When multiple people are detected, focuses on who is talking. Requires GPU."
    }
  }
}
```
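The schema above can also be applied client-side before submitting a request. The following is a hypothetical helper (not part of the model or the Replicate client) that fills in defaults, enforces the required `video` field, and checks enum values against this input schema:

```python
# Hypothetical client-side validator for the input schema above.
# SCHEMA is a condensed copy of the schema on this page.

SCHEMA = {
    "required": ["video"],
    "properties": {
        "video": {"type": "string"},
        "aspect_ratio": {"type": "string", "enum": ["9:16", "1:1", "16:9"], "default": "9:16"},
        "speed_preset": {"type": "string", "enum": ["quality", "balanced", "fast"], "default": "balanced"},
        "detect_speaker": {"type": "boolean", "default": True},
        "tracking_mode": {"type": "string", "enum": ["smooth", "static", "fast"], "default": "smooth"},
        "debug_overlay": {"type": "boolean", "default": False},
    },
}

def prepare_input(user_input: dict) -> dict:
    """Merge user input with schema defaults; raise ValueError on bad input."""
    props = SCHEMA["properties"]
    unknown = set(user_input) - set(props)
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    for field in SCHEMA["required"]:
        if field not in user_input:
            raise ValueError(f"missing required field: {field}")
    merged = {}
    for name, spec in props.items():
        if name in user_input:
            merged[name] = user_input[name]
        elif "default" in spec:
            merged[name] = spec["default"]
    for name, spec in props.items():
        if "enum" in spec and merged.get(name) not in spec["enum"]:
            raise ValueError(f"{name} must be one of {spec['enum']}")
    return merged
```

This mirrors how the API itself behaves: omitted fields take their schema defaults, and only `video` must be supplied.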
Output schema
The shape of the response you’ll get when you run this model with an API.
Schema
```json
{
  "type": "string",
  "title": "Output",
  "format": "uri"
}
```
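Since the output is a single URI, the finished vertical video can be fetched with nothing but the standard library. A small sketch (the URL in the usage comment is a placeholder for whatever URI the API returns):

```python
# Sketch: save the model's output URI to a local file using only the stdlib.
import urllib.request

def save_output(url: str, path: str) -> int:
    """Download `url` to `path`; return the number of bytes written."""
    with urllib.request.urlopen(url) as resp:
        data = resp.read()
    with open(path, "wb") as f:
        f.write(data)
    return len(data)

# Usage (placeholder URL standing in for the returned output URI):
# save_output("https://replicate.delivery/.../output.mp4", "vertical.mp4")
```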