chrisc36 committed
Commit 8d29a4d
1 Parent(s): c106b4d

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -27,7 +27,7 @@ Molmo is a family of open vision-language models developed by the Allen Institut
 Molmo models are trained on PixMo, a dataset of 1 million, highly-curated image-text pairs.
 It has state-of-the-art performance among multimodal models with a similar size while being fully open-source.
 You can find all models in the Molmo family [here](https://huggingface.co/collections/allenai/molmo-66f379e6fe3b8ef090a8ca19).
-**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog).
+**Learn more** about the Molmo family [in our announcement blog post](https://molmo.allenai.org/blog) or the [paper](https://huggingface.co/papers/2409.17146).
 
 MolmoE-1B is a multimodal Mixture-of-Experts LLM with 1.5B active and 7.2B total parameters based on [OLMoE-1B-7B-0924](https://huggingface.co/allenai/OLMoE-1B-7B-0924).
 It nearly matches the performance of GPT-4V on both academic benchmarks and human evaluation, and achieves state-of-the-art performance among similarly-sized open multimodal models.
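Since the README this commit edits describes a model hosted on the Hub, here is a minimal sketch of loading it with Hugging Face Transformers. The repo id `allenai/MolmoE-1B-0924` and the need for `trust_remote_code=True` are assumptions based on typical Molmo model cards, not something stated in this commit.

```python
# Minimal sketch: load MolmoE-1B with Hugging Face Transformers.
# Assumptions (not from this commit): the checkpoint lives at
# "allenai/MolmoE-1B-0924" and ships custom modeling code, so
# trust_remote_code=True is required to load it.
from transformers import AutoModelForCausalLM, AutoProcessor

repo_id = "allenai/MolmoE-1B-0924"  # assumed repo id

processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    trust_remote_code=True,
    torch_dtype="auto",  # use the dtype stored in the checkpoint
    device_map="auto",   # place weights on whatever devices are available
)
```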