xurantju committed on
Commit
16e12c6
β€’
1 Parent(s): 5f9c2cd

Update README.md

Files changed (1): README.md +2 -2
README.md CHANGED
@@ -16,7 +16,7 @@ In the v1.5 (08/2024) release, we present a series of XGen-MM models including:
 - [🤗 xGen-MM-instruct](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-singleimg-r-v1.5): `xgen-mm-phi3-mini-instruct-singleimg-r-v1.5`
 - [🤗 xGen-MM-instruct-dpo](https://huggingface.co/Salesforce/xgen-mm-phi3-mini-instruct-dpo-r-v1.5): `xgen-mm-phi3-mini-instruct-dpo-r-v1.5`
 
-In addition to the models, our team also released a series of datasets for multi-modal pre-training, including:
+<!--In addition to the models, our team also released a series of datasets for multi-modal pre-training, including:
 - [🍃 MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens](https://arxiv.org/abs/2406.11271)
 - [🤗 BLIP3-OCR-200M](https://huggingface.co/datasets/Salesforce/blip3-ocr-200m): a dataset with dense OCR annotations.
 - [🤗 BLIP3-GROUNDING-50M](https://huggingface.co/datasets/Salesforce/blip3-grounding-50m): a dataset for enhancing the ability to ground semantic concepts in images.
@@ -27,7 +27,7 @@ For more details, check out our [tech report](https://arxiv.org/pdf/2408.08872),
 
 # Data
 The instruct model is fine-tuned on a mixture of around 1 million samples from multiple domains. All the fine-tuning data are from public sources, most of which are covered in [The Cauldron](https://huggingface.co/datasets/HuggingFaceM4/the_cauldron).
-
+-->
 # Results
 
 ### Single-image benchmarks