lucataco / deep3d

Deep3D: Real-Time end-to-end 2D-to-3D Video Conversion, based on deep learning

  • Public
  • 425 runs
  • GitHub
  • License

Run time and cost

This model costs approximately $0.083 to run on Replicate, or 12 runs per $1, but this varies depending on your inputs. It is also open source and you can run it on your own computer with Docker.

This model runs on Nvidia L40S GPU hardware. Predictions typically complete within 86 seconds. The predict time for this model varies significantly based on the inputs.
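As a sanity check, the stated price is roughly consistent with per-second GPU billing. The L40S per-second rate below is an assumption for illustration, not a figure taken from this page:

```python
# Rough cost check for this model on Replicate L40S hardware.
# ASSUMPTION: the per-second L40S rate; consult Replicate's pricing page
# for the actual current figure.
L40S_RATE_PER_SEC = 0.000975  # USD per second (assumed)
TYPICAL_PREDICT_SEC = 86      # typical predict time from this page

cost_per_run = L40S_RATE_PER_SEC * TYPICAL_PREDICT_SEC
runs_per_dollar = 1 / cost_per_run

print(f"~${cost_per_run:.3f} per run, ~{runs_per_dollar:.0f} runs per $1")
```

With these assumptions the result lands near the quoted $0.083 per run and 12 runs per $1.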

Readme

Deep3D

Real-Time end-to-end 2D-to-3D Video Conversion, based on deep learning.
Inspired by piiswrong/deep3d, we rebuilt the network in PyTorch and optimized it in the time domain for faster inference. So, try it and enjoy your own 3D movies.
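The original Deep3D approach synthesizes the right-eye view with a selection layer: the network predicts a per-pixel probability distribution over disparity shifts, and the output is the probability-weighted sum of horizontally shifted copies of the left image. A minimal NumPy sketch of that idea (the shift direction and edge padding are illustrative assumptions, not this repo's exact implementation):

```python
import numpy as np

def selection_layer(left, disp_prob):
    """Deep3D-style selection layer (illustrative sketch).

    left:      (H, W, 3) left-eye image
    disp_prob: (D, H, W) per-pixel softmax over D disparity shifts;
               sums to 1 over axis 0
    Returns a synthesized right-eye view as the probability-weighted
    sum of horizontally shifted copies of the left image.
    """
    D, H, W = disp_prob.shape
    right = np.zeros_like(left, dtype=np.float64)
    for d in range(D):
        # Shift the left image d pixels to the left (disparity d),
        # edge-padding the newly exposed right border.
        shifted = np.pad(left[:, d:, :], ((0, 0), (0, d), (0, 0)), mode="edge")
        right += disp_prob[d][:, :, None] * shifted
    return right
```

With a one-hot disparity map (all probability mass at shift d), the output is exactly the left image shifted by d pixels, which makes the layer easy to unit-test.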


Left is input video and right is output video with parallax.

More examples:
- wood-1080p
- Journey to the West (86)

Inference speed

| Plan | 360p (FPS) | 720p (FPS) | 1080p (FPS) | 4K (FPS) |
| --- | --- | --- | --- | --- |
| GPU (2080 Ti) | 84 | 87 | 77 | 26 |
| CPU (Xeon Platinum 8260) | 27.7 | 14.1 | 7.2 | 2.0 |
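The FPS figures convert directly to per-frame latency (1000 / FPS milliseconds), which is a quick way to check the real-time claim against the GPU row of the benchmark:

```python
# Per-frame latency implied by the 2080 Ti benchmark figures above.
gpu_fps = {"360p": 84, "720p": 87, "1080p": 77, "4k": 26}

for res, fps in gpu_fps.items():
    ms = 1000.0 / fps
    realtime_30 = fps >= 30  # at or above 30 FPS playback?
    print(f"{res}: {ms:.1f} ms/frame, real-time at 30 FPS: {realtime_30}")
```

By this measure the GPU keeps up with 30 FPS video through 1080p, while 4K (about 38 ms/frame) falls short of real-time playback.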

Run Deep3D

Prerequisites

Get Pre-Trained Models

You can download pre-trained models from: [Google Drive] [Baidu Cloud, extraction code xxo0]
Notes:
- The 360p model gives the best results.
- The published models are not inference-optimized.
- Models are still in training; 1080p and 4K models will be uploaded in the future.

Acknowledgements

This code borrows heavily from [deep3d] and [DeepMosaics].