XiangZ committed
Commit d5dac52 • 1 Parent(s): eaa3be5

Update README.md

Files changed (1)
  1. README.md +3 -62
README.md CHANGED
@@ -3,10 +3,10 @@ tags:
- HiT-SR
- image super-resolution
- transformer
+ - efficient transformer
---
-
<h1>
- HiT-SR: Hierarchical Transformer <br> for Efficient Image Super-Resolution
+ HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution
</h1>

<h3><a href="https://github.com/XiangZ-0/HiT-SR">[Github]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EVsrOr1-PFFMsXxiRHEmKeoBSH6DPkTuN2GRmEYsl9bvDQ?e=f9wGUO">[Paper]</a> | <a href="https://1drv.ms/b/c/de821e161e64ce08/EYmRy-QOjPdFsMRT_ElKQqABYzoIIfDtkt9hofZ5YY_GjQ?e=2Iapqf">[Supp]</a> | <a href="https://www.youtube.com/watch?v=9rO0pjmmjZg">[Video]</a> | <a href="https://1drv.ms/f/c/de821e161e64ce08/EuE6xW-sN-hFgkIa6J-Y8gkB9b4vDQZQ01r1ZP1lmzM0vQ?e=aIRfCQ">[Visual Results]</a> </h3>
@@ -14,65 +14,6 @@ tags:

HiT-SR is a general strategy to improve transformer-based SR methods. We apply our HiT-SR approach to improve [SwinIR-Light](https://github.com/JingyunLiang/SwinIR), [SwinIR-NG](https://github.com/rami0205/NGramSwin) and [SRFormer-Light](https://github.com/HVision-NKU/SRFormer), corresponding to our HiT-SIR, HiT-SNG, and HiT-SRF. Compared with the original structure, our improved models achieve better SR performance while reducing computational burdens.

- ## 🚀 Models
- For each HiT-SR model, we provide 2x, 3x, and 4x upscaling versions:
- | Repo Name | Model | Upscale |
- |---------------------|---------|---------|
- | `XiangZ/hit-sir-2x` | HiT-SIR | 2x |
- | `XiangZ/hit-sir-3x` | HiT-SIR | 3x |
- | `XiangZ/hit-sir-4x` | HiT-SIR | 4x |
- | `XiangZ/hit-sng-2x` | HiT-SNG | 2x |
- | `XiangZ/hit-sng-3x` | HiT-SNG | 3x |
- | `XiangZ/hit-sng-4x` | HiT-SNG | 4x |
- | `XiangZ/hit-srf-2x` | HiT-SRF | 2x |
- | `XiangZ/hit-srf-3x` | HiT-SRF | 3x |
- | `XiangZ/hit-srf-4x` | HiT-SRF | 4x |
-
-
- ## 🛠️ Setup
- Install the dependencies under the working directory (use hit-srf-4x as an example):
- ```
- git clone https://huggingface.co/XiangZ/hit-srf-4x
- cd hit-srf-4x
- pip install -r requirements.txt
- ```
-
- ## 🚀 Usage
-
- To test the model:
- ```
- from hit_sir_arch import HiT_SIR
- from hit_sng_arch import HiT_SNG
- from hit_srf_arch import HiT_SRF
- import cv2
-
- # use GPU (True) or CPU (False)
- cuda_flag = True
-
- # initialize model (change model and upscale according to your setting)
- model = HiT_SRF(upscale=4)
-
- # load model (change repo_name according to your setting)
- repo_name = "XiangZ/hit-srf-4x"
- model = model.from_pretrained(repo_name)
- if cuda_flag:
-     model.cuda()
-
- # test and save results
- image_path = "path-to-input-image"
- sr_results = model.infer_image(image_path, cuda=cuda_flag)
- cv2.imwrite("path-to-output-location", sr_results)
- ```
-
- ## 📎 Citation
-
- If you find the code helpful in your research or work, please consider citing the following paper.
-
- ```
- @inproceedings{zhang2024hitsr,
-   title={HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution},
-   author={Zhang, Xiang and Zhang, Yulun and Yu, Fisher},
-   booktitle={ECCV},
-   year={2024}
- }
- ```
+
+ 🤗 Please refer to https://huggingface.co/XiangZ/hit-sr for usage.
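For reference, the usage that this commit removes from the README boils down to the short sketch below. It assumes the per-model repos (e.g. `XiangZ/hit-srf-4x`), the `hit_srf_arch` module, and the `infer_image` helper described above remain available after this change; the input and output paths are placeholders.

```python
# Condensed sketch of the removed usage example; hit_srf_arch, HiT_SRF,
# from_pretrained, and infer_image are assumed to still behave as the
# removed README describes. Paths are placeholders.
import cv2
import torch
from hit_srf_arch import HiT_SRF

# Choose the architecture and upscale factor that match the repo being loaded.
model = HiT_SRF(upscale=4)
model = model.from_pretrained("XiangZ/hit-srf-4x")

# Use the GPU when one is available, otherwise stay on the CPU.
use_cuda = torch.cuda.is_available()
if use_cuda:
    model.cuda()

# infer_image takes an image path and returns the super-resolved result.
sr_result = model.infer_image("path-to-input-image", cuda=use_cuda)
cv2.imwrite("path-to-output-location", sr_result)
```

If the consolidated model card at https://huggingface.co/XiangZ/hit-sr documents a different entry point, that page takes precedence.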
 