
This repository hosts a fully trained EDSR (Enhanced Deep Residual Networks for Single Image Super-Resolution) model. At the time of publication, EDSR surpassed the performance of the then-available state-of-the-art super-resolution models.

Spaces link - https://huggingface.co/spaces/keras-io/EDSR

Paper Link - https://arxiv.org/pdf/1707.02921

Keras Example link - https://keras.io/examples/vision/edsr/

The model was trained for 500 epochs with 200 steps per epoch.

Enhanced Deep Residual Networks for Single Image Super-Resolution

Introduction

This repository contains a trained model based on the Enhanced Deep Residual Networks for Single Image Super-Resolution paper. It was trained for 500 epochs with 200 steps per epoch, resulting in a high-quality super-resolution model.
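As a quick usage reference, the snippet below is a minimal sketch of loading this checkpoint with huggingface_hub's Keras helper and upscaling a single image. The 150 x 150 input size and the repository id keras-io/EDSR come from this card; the file names and preprocessing details are assumptions.

```python
# Minimal usage sketch (not from the original card): load the checkpoint from
# the Hub and upscale one image. File names and preprocessing are assumptions.
import numpy as np
import tensorflow as tf
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("keras-io/EDSR")

# Load a low-resolution RGB image and resize it to the expected 150 x 150 input.
lowres = tf.image.decode_image(tf.io.read_file("input.png"), channels=3)
lowres = tf.image.resize(lowres, (150, 150))

# The model expects a batch dimension; clip the output back to valid pixel values.
superres = model.predict(lowres[None, ...])[0]
superres = np.clip(superres, 0, 255).astype(np.uint8)
tf.io.write_file("output.png", tf.image.encode_png(superres))
```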

Dataset Used

The model was trained on the DIV2K dataset, which is a newly proposed high-quality (2K resolution) image dataset for image restoration tasks. The DIV2K dataset consists of 800 training images, 100 validation images, and 100 test images.
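For reference, the linked Keras example loads DIV2K through TensorFlow Datasets; the sketch below assumes the bicubic x4 configuration.

```python
# A minimal sketch of loading DIV2K via TensorFlow Datasets, as in the linked
# Keras example; the "bicubic_x4" configuration is an assumption here.
import tensorflow_datasets as tfds

# Each element is a (low-resolution, high-resolution) image pair.
div2k_train = tfds.load("div2k/bicubic_x4", split="train", as_supervised=True)
div2k_valid = tfds.load("div2k/bicubic_x4", split="validation", as_supervised=True)

for lowres, highres in div2k_train.take(1):
    print(lowres.shape, highres.shape)
```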

Architecture

The Enhanced Deep Residual Networks for Single Image Super-Resolution paper presents an enhanced deep super-resolution network (EDSR) and a new multi-scale deep super-resolution system (MDSR) that outperform current state-of-the-art SR methods. The EDSR model optimizes performance by analyzing and removing unnecessary modules to simplify the network architecture. The MDSR system is a multi-scale architecture that shares most of the parameters across different scales, using significantly fewer parameters compared with multiple single-scale models but showing comparable performance.
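To make the architectural point concrete, here is a minimal, illustrative Keras sketch of an EDSR-style network: residual blocks with batch normalization removed, a long skip connection around the residual body, and sub-pixel (depth-to-space) upsampling. The layer counts and filter sizes below are assumptions, not the exact configuration of this trained checkpoint.

```python
# Illustrative EDSR-style network sketch; sizes are assumptions, not the
# exact configuration of the trained checkpoint in this repository.
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def edsr_residual_block(inputs, filters=64):
    """Conv -> ReLU -> Conv with an identity skip connection and no batch norm."""
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    return layers.Add()([inputs, x])

def build_edsr(num_blocks=16, filters=64, upscale_factor=4):
    """Stack residual blocks, then upsample with a sub-pixel (depth-to-space) layer."""
    inputs = keras.Input(shape=(None, None, 3))
    x = head = layers.Conv2D(filters, 3, padding="same")(inputs)
    for _ in range(num_blocks):
        x = edsr_residual_block(x, filters)
    x = layers.Conv2D(filters, 3, padding="same")(x)
    x = layers.Add()([head, x])  # long skip connection around the residual body
    # Sub-pixel upsampling: expand channels, then rearrange them into space.
    x = layers.Conv2D(filters * upscale_factor**2, 3, padding="same")(x)
    x = layers.Lambda(lambda t: tf.nn.depth_to_space(t, upscale_factor))(x)
    outputs = layers.Conv2D(3, 3, padding="same")(x)
    return keras.Model(inputs, outputs)
```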

Metrics

The model was evaluated using PSNR (Peak Signal-to-Noise Ratio), which measures the quality of the reconstructed image relative to the original. The model achieved a PSNR of approximately 31 dB, which is a high-quality result.
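PSNR can be computed directly with TensorFlow's built-in helper; the snippet below is a minimal sketch assuming 8-bit images in the 0-255 range.

```python
# A minimal sketch of the PSNR metric, assuming 8-bit images in the 0-255 range.
import tensorflow as tf

def psnr(original, reconstructed):
    """Peak Signal-to-Noise Ratio in dB; higher values mean a closer reconstruction."""
    return tf.image.psnr(
        tf.cast(original, tf.float32), tf.cast(reconstructed, tf.float32), max_val=255
    )
```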

TODO:

Hack to make this work for any image size: the model currently takes inputs of size 150 x 150. We pad the input image with transparent pixels so that its sides become a multiple of 150, chop it into 150 x 150 sub-images, upscale each sub-image, and stitch the results back together (see the sketch below).

The output image might look a bit off, because each sub-image is upscaled without any data about the other sub-images. This approach assumes that each sub-image contains enough information about its surroundings on its own.
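Below is a minimal sketch of that pad / tile / upscale / stitch procedure. The 150 x 150 tile size and x4 scale factor follow the description above; zero padding stands in for the transparent pixels (the model takes 3-channel RGB input), the padding only extends the image to a multiple of the tile size rather than forcing a square, and the helper names are assumptions.

```python
# Sketch of the pad / tile / upscale / stitch hack described above.
# Tile size and scale follow the card; zero padding and helper names are assumptions.
import numpy as np

TILE = 150
SCALE = 4

def upscale_any_size(model, image):
    """Pad an H x W x 3 image to a multiple of TILE, upscale each tile, stitch."""
    h, w, _ = image.shape
    pad_h = (TILE - h % TILE) % TILE
    pad_w = (TILE - w % TILE) % TILE
    padded = np.pad(image, ((0, pad_h), (0, pad_w), (0, 0)), mode="constant")

    ph, pw, _ = padded.shape
    out = np.zeros((ph * SCALE, pw * SCALE, 3), dtype=np.float32)
    for y in range(0, ph, TILE):
        for x in range(0, pw, TILE):
            tile = padded[y:y + TILE, x:x + TILE].astype(np.float32)
            # Each tile is upscaled independently, which is why seams can appear.
            pred = model.predict(tile[None, ...])[0]
            out[y * SCALE:(y + TILE) * SCALE, x * SCALE:(x + TILE) * SCALE] = pred

    # Drop the region that corresponds to the padding before returning.
    return out[:h * SCALE, :w * SCALE]
```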
