We compressed SmolLMs to make 135 variations of them (see https://huggingface.co/PrunaAI?search_models=smolLM) with different quantization configurations using pruna (https://docs.pruna.ai/en/latest/). We wrote a blog post to summarize our findings (see https://www.pruna.ai/blog/smollm2-smaller-faster): small LMs can be made even smaller! :)