diff --git a/.gitattributes b/.gitattributes
index c7d9f3332a950355d5a77d85000f05e6f45435ea..152e3af6d9fead045d4a11ef7bfc92d9a28aab3d 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -32,3 +32,5 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
+pretrained_models/angle_model.hdf5 filter=lfs diff=lfs merge=lfs -text
+pretrained_models/length_model.hdf5 filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
index 0c642d6f1d146e6f661c2c730c2fb6faf69f5d23..f2631885e86ba3ee31bc0fe148fed72346ad618f 100644
--- a/README.md
+++ b/README.md
@@ -1,13 +1,34 @@
----
-title: Deep Blind Motion Deblurring
-emoji: 🐠
-colorFrom: blue
-colorTo: green
-sdk: gradio
-sdk_version: 3.16.2
-app_file: app.py
-pinned: false
-license: apache-2.0
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# Blind Motion Deblurring for Legible License Plates using Deep Learning
+
+This project uses deep learning to estimate the length and angle parameters of the point-spread function (PSF) responsible for motion blur in an image. The estimation is achieved by training a deep CNN on the fast Fourier transforms of the blurred images. By training on enough random examples of motion-blurred images, the model learns to estimate any kind of motion blur (up to a certain blur degree), making this a truly blind motion-deblurring approach. Once the model has estimated the blur length and angle, the image can easily be deblurred using Wiener deconvolution. This technique has many applications; we used it specifically to deblur license plates and make them legible. The images below demonstrate our model in action: although some artifacts are introduced, the model deblurs the images to the point where the license plates are legible.
+
+
+
+
+
+
+
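The pipeline described above can be sketched with NumPy alone: build a linear motion-blur PSF from a (length, angle) pair, and invert the blur with Wiener deconvolution once those two parameters are known. This is a minimal illustration under assumed names, not the repository's exact code; in particular the noise-to-signal constant `K` and both function names are our own.

```python
import numpy as np

def motion_blur_kernel(length, angle_deg):
    """Linear motion-blur PSF: a line of `length` pixels at `angle_deg`,
    normalised to sum to 1. Angles wrap modulo 180 degrees, since lines
    at angle a and a + 180 produce the same PSF."""
    size = length if length % 2 == 1 else length + 1
    kernel = np.zeros((size, size))
    c = size // 2
    theta = np.deg2rad(angle_deg)
    # rasterise the line through the kernel centre
    for t in np.linspace(-(length - 1) / 2, (length - 1) / 2, 2 * length):
        x = int(round(c + t * np.cos(theta)))
        y = int(round(c - t * np.sin(theta)))
        kernel[y, x] = 1.0
    return kernel / kernel.sum()

def wiener_deblur(blurred, psf, K=0.01):
    """Wiener deconvolution in the frequency domain. `K` is an assumed
    noise-to-signal ratio; the PSF is zero-padded to the image size and
    centred so the restored image is not shifted."""
    padded = np.zeros_like(blurred, dtype=np.float64)
    ph, pw = psf.shape
    padded[:ph, :pw] = psf
    padded = np.roll(padded, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(padded)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + K) * G
    return np.real(np.fft.ifft2(F_hat))
```

The CNN never sees the kernel directly: it is trained on the log-magnitude Fourier spectrum of the blurred image (e.g. `np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))`), whose streak orientation and ripple spacing encode the blur angle and length.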
+## Package Requirements:-
+1. Python 3
+2. NumPy
+3. OpenCV 4
+4. TensorFlow 2
+5. h5py
+6. imutils
+7. progressbar
+8. scikit-learn
+
+## How to Run Code:-
+
+### Training the length and angle models:-
+
+1. Download a dataset of images from [here](https://cocodataset.org/#download). Download at least 20,000 images to train the models optimally. (We used the COCO dataset to train our model, but any other dataset of general images will also suffice.)
+2. Use the create_blurred.py script to generate the motion-blurred dataset: ```python create_blurred.py -i <input_dir> -o <output_dir> [-m ...]```. The output directory for the images must already exist. The script blurs each image with a random blur length and angle; the ranges for both can be changed on lines 38-39. It also generates a JSON file storing the blur-length and blur-angle labels. Note that blur angles of 180 degrees and above wrap around (e.g. 240 becomes 240-180 = 60), since this does not affect the PSF and significantly reduces the number of classes.
+3. Use the create_fft.py script to generate the fast-Fourier-transform images of the blurred images for training: ```python create_fft.py -i <input_dir> -o <output_dir>```. The input directory is the folder where the blurred images are stored; the output directory must be created manually.
+4. Use the build_dataset.py script to generate the HDF5 dataset used for training. We use this to overcome the bottleneck of working with a large number of images in memory. Run the script as ```python build_dataset.py -m -i -to