Model compression method?

#7
by dang3tion - opened

Hi, I would like to know which method you used to compress your model after training: quantization or pruning? After I trained my own model with VITS, the checkpoint size is very large. Thank you.

We only use the generator during inference, see: https://huggingface.co/spaces/ntt123/Vietnam-male-voice-TTS/blob/98d4aa0d690fa82f350ef8d9aa01407bdf45f47e/app.py#L190
So what I did is remove all the other data from the checkpoint:

import torch

# load the full training checkpoint and keep only the generator weights ("net_g"),
# dropping the discriminator and optimizer states that make the file large
ckpt = torch.load(ckpts_path, map_location="cpu")
torch.save({"net_g": ckpt["net_g"]}, new_ckpts_path)
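To use the slimmed checkpoint later, you only need to restore the generator's state dict. A minimal sketch, assuming net_g is your already-instantiated generator module (e.g. the VITS SynthesizerTrn), which is not shown here:

import torch

# load the slimmed checkpoint and restore only the generator weights
ckpt = torch.load(new_ckpts_path, map_location="cpu")
net_g.load_state_dict(ckpt["net_g"])
net_g.eval()  # switch to inference mode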
