zylim0702 / bokeh_prediction

Bokeh Prediction is a hybrid bokeh rendering framework that combines a neural renderer with a classical one. It generates high-resolution, adjustable bokeh effects from a single image, even with a potentially imperfect disparity map.


Run time and cost

This model costs approximately $0.079 per run on Replicate, or about 12 runs per $1, but this varies depending on your inputs. It is also open source, so you can run it on your own computer with Docker.

This model runs on Nvidia A40 GPU hardware. Predictions typically complete within 137 seconds, though predict time varies significantly with the inputs.
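As a sketch of how you might call this model with the official `replicate` Python client: the `replicate.run` call itself is the client's real API, but the input parameter names below (`image`, `blur_size`) are assumptions for illustration — check the model's Input schema on Replicate for the actual names, and pin a version hash in practice.

```python
import os

# Hypothetical input parameters -- consult the model's Input schema
# on replicate.com for the real names and types.
model_input = {
    "image": "https://example.com/photo.jpg",  # hypothetical input image URL
    "blur_size": 30,                           # hypothetical blur-strength parameter
}

# Only call the API when credentials are available.
if os.environ.get("REPLICATE_API_TOKEN"):
    import replicate  # pip install replicate

    output = replicate.run("zylim0702/bokeh_prediction", input=model_input)
    print(output)
```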

Readme

Experience breathtaking bokeh effects, leveraging the advanced BokehMe model – a fusion of neural and classical rendering techniques. Elevate your photos with adjustable blur, focal plane, and aperture shape, achieving high-resolution, photorealistic results. Explore the future of image enhancement now!

https://github.com/JuewenPeng/BokehMe

Abstract: BokehMe introduces a groundbreaking hybrid bokeh rendering framework that seamlessly integrates neural rendering with classical physically motivated rendering. Designed to enhance single images with potentially imperfect disparity maps, BokehMe produces high-resolution, photo-realistic bokeh effects. Offering adjustable blur size, focal plane, and aperture shape, the framework addresses errors from classical scattering-based methods through a two-stage approach. The classical renderer generates a scattering-based result, while a dynamic multi-scale neural renderer corrects errors efficiently, handling arbitrary blur sizes and imperfect disparity inputs. Experiments demonstrate BokehMe’s superior performance on synthetic and real image data, further validated by a user study.

Method: BokehMe’s framework combines a Classical Renderer and a Neural Renderer. The Classical Renderer uses a scattering-based method. The Neural Renderer consists of two networks: ARNet adaptively resizes the inputs and generates a low-resolution bokeh image together with an error map, and IUNet then iteratively upsamples the bokeh image without quality loss. Meanwhile, the error map is upsampled bilinearly, and it guides a seamless fusion of the results from both renderers.
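The fusion step above can be sketched as a per-pixel blend: where the error map is high (the classical scattering result is unreliable, e.g. near depth discontinuities), the neural result dominates. This is a minimal illustrative sketch, not the official implementation.

```python
def fuse(classical, neural, error_map):
    """Blend two rendered images per pixel using an error map in [0, 1].

    A high error value means the classical scattering result is untrusted
    there, so the neural renderer's correction takes over.
    """
    return [
        [e * n + (1.0 - e) * c for c, n, e in zip(row_c, row_n, row_e)]
        for row_c, row_n, row_e in zip(classical, neural, error_map)
    ]


# Toy 2x2 single-channel example.
classical = [[0.2, 0.4], [0.6, 0.8]]
neural = [[0.3, 0.5], [0.7, 0.9]]
error_map = [[0.0, 1.0], [0.5, 0.0]]  # trust classical / neural / half-half / classical

fused = fuse(classical, neural, error_map)
```

With this error map, the top-left pixel keeps the classical value (0.2), the top-right takes the neural value (0.5), and the bottom-left averages the two (0.65).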

Dataset: The BLB dataset, rendered with Blender 2.93, accompanies BokehMe. It comprises 10 scenes, each including an all-in-focus image, a disparity map, a stack of bokeh images with varying blur amounts and refocused disparities, and a parameter file. Additionally, 15 corrupted disparity maps per scene, produced by Gaussian blur, dilation, and erosion, are provided. All 3D Blender models are sourced from Blender Cloud. Access the BLB dataset on Google Drive or Baidu Netdisk.
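The corrupted disparity maps are described as coming from Gaussian blur, dilation, and erosion; the exact kernels and parameters are not given here, so the following is a hedged sketch using simple 3x3 morphological operations on a disparity grid to illustrate what dilation and erosion do to a depth boundary.

```python
def dilate(disparity):
    """3x3 max filter: thickens foreground (larger-disparity) regions."""
    h, w = len(disparity), len(disparity[0])
    return [
        [
            max(
                disparity[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            )
            for x in range(w)
        ]
        for y in range(h)
    ]


def erode(disparity):
    """3x3 min filter: shrinks foreground regions."""
    h, w = len(disparity), len(disparity[0])
    return [
        [
            min(
                disparity[ny][nx]
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
            )
            for x in range(w)
        ]
        for y in range(h)
    ]


# A single foreground pixel grows into a 3x3 block under dilation,
# shifting the apparent depth boundary outward.
disp = [[0.0] * 5 for _ in range(5)]
disp[2][2] = 1.0
dilated = dilate(disp)
```

Such boundary shifts are exactly the kind of disparity error the neural renderer is meant to correct.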

Citation:

@inproceedings{Peng2022BokehMe,
  title = {BokehMe: When Neural Rendering Meets Classical Rendering},
  author = {Peng, Juewen and Cao, Zhiguo and Luo, Xianrui and Lu, Hao and Xian, Ke and Zhang, Jianming},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2022}
}