cudanexus / tcvc

TCVC-Temporally-Consistent-Video-Colorization

  • Public
  • 23 runs
  • T4
  • GitHub
  • Paper

Input

pip install replicate
Set the REPLICATE_API_TOKEN environment variable:
export REPLICATE_API_TOKEN=<paste-your-token-here>

Find your API token in your account settings.

Import the client:
import replicate

Run cudanexus/tcvc using Replicate’s API. Check out the model's schema for an overview of inputs and outputs.

output = replicate.run(
    "cudanexus/tcvc:c0e982b3d39b516845a5217b211ce8373d698e94ff5662c94c842f4f2f8046fd",
    input={}  # fill in the inputs listed in the model's schema
)
print(output)
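In practice the `input` dict must carry the model's inputs, which for a video colorization model will include a video file. A minimal sketch of wrapping the call in a helper is below; the `"video"` key is an assumption for illustration, so check the model's schema for the actual input names before using it.

```python
MODEL = "cudanexus/tcvc:c0e982b3d39b516845a5217b211ce8373d698e94ff5662c94c842f4f2f8046fd"

def build_input(video_file):
    # "video" is a hypothetical parameter name -- consult the
    # model's schema for the real input keys and types.
    return {"video": video_file}

def colorize(client, video_path):
    # client is a replicate.Client (or the replicate module itself);
    # requires REPLICATE_API_TOKEN to be set in the environment.
    with open(video_path, "rb") as f:
        return client.run(MODEL, input=build_input(f))
```

Separating input assembly from the API call makes it easy to validate the request locally before spending a prediction.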

To learn more, take a look at the guide on getting started with Python.


Run time and cost

This model runs on Nvidia T4 GPU hardware. We don't yet have enough runs of this model to provide performance information.

Readme

TCVC-Temporally-Consistent-Video-Colorization

Temporally Consistent Video Colorization with Deep Feature Propagation and Self-regularization Learning.

Brief Introduction

Video colorization is a challenging and highly ill-posed problem. Although recent years have witnessed remarkable progress in single-image colorization, video colorization has received relatively little research attention, and existing methods suffer from either severe flickering artifacts (temporal inconsistency) or unsatisfying colorization quality.

We address this problem from a new perspective, jointly considering colorization and temporal consistency in a unified framework. Specifically, we propose a novel temporally consistent video colorization framework (TCVC). TCVC propagates frame-level deep features bidirectionally to enhance the temporal consistency of colorization. Furthermore, TCVC introduces a self-regularization learning (SRL) scheme that minimizes the difference between predictions obtained at different time steps. SRL requires no ground-truth color videos for training and further improves temporal consistency.

Experiments demonstrate that our method not only produces visually pleasing colorized videos but also achieves clearly better temporal consistency than state-of-the-art methods.
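The two core ideas — bidirectional deep feature propagation and a self-regularization penalty — can be illustrated with a toy NumPy sketch. This is a simplified stand-in under stated assumptions (per-frame features flattened to vectors, a fixed blending weight), not the paper's actual network:

```python
import numpy as np

def propagate_bidirectional(feats, alpha=0.5):
    """Toy bidirectional feature propagation over per-frame features
    of shape (T, C): a forward pass and a backward pass each blend the
    current frame's features with the propagated state, and the two
    passes are averaged so every frame sees both past and future."""
    T = len(feats)
    fwd = np.empty_like(feats)
    bwd = np.empty_like(feats)
    state = feats[0]
    for t in range(T):                 # forward (past -> future)
        state = alpha * feats[t] + (1 - alpha) * state
        fwd[t] = state
    state = feats[-1]
    for t in reversed(range(T)):       # backward (future -> past)
        state = alpha * feats[t] + (1 - alpha) * state
        bwd[t] = state
    return 0.5 * (fwd + bwd)

def self_regularization_loss(pred_a, pred_b):
    """SRL-style penalty: mean squared difference between predictions
    of the same frames obtained with different time steps. Note it
    needs no ground-truth color video."""
    return float(np.mean((pred_a - pred_b) ** 2))
```

A static sequence (identical features in every frame) passes through propagation unchanged, and two identical prediction sets incur zero self-regularization loss, which is the behavior the scheme is designed to encourage.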