
LBNet-Pytorch: Lightweight Bimodal Network for Single-Image Super-Resolution via Symmetric CNN and Recursive Transformer

Official PyTorch implementation of the paper "Lightweight Bimodal Network for Single-Image Super-Resolution via Symmetric CNN and Recursive Transformer".

Performance

Our LBNet is trained on RGB images, but following previous work, we report PSNR/SSIM only on the Y channel.

| Model   | Scale | Params | Multi-Adds | Set5         | Set14        | B100         | Urban100     | Manga109     |
|---------|-------|--------|------------|--------------|--------------|--------------|--------------|--------------|
| LBNet-T | x2    | 404K   | 49.0G      | 37.95/0.9602 | 33.53/0.9168 | 32.07/0.8983 | 31.91/0.9253 | 38.59/0.9768 |
| LBNet   | x2    | 731K   | 153.2G     | 38.05/0.9607 | 33.65/0.9177 | 32.16/0.8994 | 32.30/0.9291 | 38.88/0.9775 |
| LBNet-T | x3    | 407K   | 22.0G      | 34.33/0.9264 | 30.25/0.8402 | 29.05/0.8042 | 28.06/0.8485 | 33.48/0.9433 |
| LBNet   | x3    | 736K   | 68.4G      | 34.47/0.9277 | 30.38/0.8417 | 29.13/0.8061 | 28.42/0.8559 | 33.82/0.9460 |
| LBNet-T | x4    | 410K   | 12.6G      | 32.08/0.8933 | 28.54/0.7802 | 27.54/0.7358 | 26.00/0.7819 | 30.37/0.9059 |
| LBNet   | x4    | 742K   | 38.9G      | 32.29/0.8960 | 28.68/0.7832 | 27.62/0.7382 | 26.27/0.7906 | 30.76/0.9111 |
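For reference, Y-channel PSNR is typically computed by converting the RGB output to luma with the ITU-R BT.601 coefficients and (in many SR codebases, including EDSR-style pipelines) shaving a border equal to the scale factor. The snippet below is a minimal sketch of that convention, not the exact evaluation code from this repository; the function names and the `shave` parameter are illustrative.

```python
import numpy as np

def rgb_to_y(img):
    """Convert an RGB image (H, W, 3) with values in [0, 255] to the
    luma (Y) channel using the ITU-R BT.601 convention common in
    super-resolution evaluation (Y in [16, 235])."""
    img = img.astype(np.float64)
    return 16.0 + (65.481 * img[..., 0]
                   + 128.553 * img[..., 1]
                   + 24.966 * img[..., 2]) / 255.0

def psnr_y(sr, hr, shave=0):
    """PSNR between two RGB images, measured on the Y channel only.
    `shave` crops a border before comparison (often set to the scale)."""
    y_sr, y_hr = rgb_to_y(sr), rgb_to_y(hr)
    if shave > 0:
        y_sr = y_sr[shave:-shave, shave:-shave]
        y_hr = y_hr[shave:-shave, shave:-shave]
    mse = np.mean((y_sr - y_hr) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(255.0 ** 2 / mse)
```

SSIM is handled the same way: convert both images to Y first, then compare single-channel maps.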

Model complexity

LBNet achieves a better trade-off among model size, performance, inference speed, and Multi-Adds.

Acknowledgements

This code is built on EDSR (PyTorch) and DRN. We thank the authors for sharing their codes.

Citation

If you use any part of this code in your research, please cite our paper:

@article{gao2022lightweight,
  title={Lightweight Bimodal Network for Single-Image Super-Resolution via Symmetric CNN and Recursive Transformer},
  author={Gao, Guangwei and Wang, Zhengxue and Li, Juncheng and Li, Wenjie and Yu, Yi and Zeng, Tieyong},
  journal={arXiv preprint arXiv:2204.13286},
  year={2022}
}
