1S-Lab, Nanyang Technological University
2Microsoft Research, Redmond
This weight is for **initializing training for Otter-MPT7B**. It is directly converted from [Openflamingov2](https://huggingface.co/openflamingo/OpenFlamingo-9B-vitl-mpt7b); we added an `<answer>` token for Otter's downstream instruction tuning.
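For reference, registering a special token with a Hugging Face tokenizer generally looks like the sketch below. This is illustrative only: the released weights already include the token, so you do not need to repeat this step, and the base tokenizer name here is an assumption.

```python
from transformers import AutoTokenizer

# Hypothetical sketch: how a special token such as <answer> is typically
# added to a tokenizer. The released Otter weights already ship with it.
tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")  # assumed base tokenizer
num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<answer>"]})
print(f"Added {num_added} special token(s); vocab size is now {len(tokenizer)}")

# After adding tokens, the model's embedding matrix must be resized to match:
# model.resize_token_embeddings(len(tokenizer))
```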
You can load and try this model as follows:
```python
import torch
import transformers

# Import path follows the Otter repo (https://github.com/Luodian/Otter);
# it may differ across versions (e.g. `from otter_ai import ...`).
from otter.modeling_otter import OtterForConditionalGeneration

# Map the requested precision to a torch dtype (bf16 requires Ampere+ GPUs;
# fp16 works on older GPUs; fp32 is the safest but most memory-hungry).
load_bit = "bf16"
precision = {}
if load_bit == "bf16":
    precision["torch_dtype"] = torch.bfloat16
elif load_bit == "fp16":
    precision["torch_dtype"] = torch.float16
elif load_bit == "fp32":
    precision["torch_dtype"] = torch.float32

# `device_map="sequential"` fills the available GPUs in order.
model = OtterForConditionalGeneration.from_pretrained(
    "luodian/OTTER-9B-LA-InContext", device_map="sequential", **precision
)
model.text_tokenizer.padding_side = "left"  # left-pad for batched generation
tokenizer = model.text_tokenizer
image_processor = transformers.CLIPImageProcessor()
model.eval()
```
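Once loaded, inference follows the OpenFlamingo-style interface. The sketch below is an assumption based on the demos in the Otter repo: the image URL is a placeholder, and the exact prompt template and `generate` arguments may differ in your version.

```python
import requests
from PIL import Image

# Hypothetical inference sketch, assuming the OpenFlamingo-style API used in
# the Otter repo demos; adapt the prompt format and arguments to your version.
image = Image.open(requests.get("https://example.com/demo.jpg", stream=True).raw)

# vision_x shape: (batch, num_media, num_frames, channels, height, width)
vision_x = (
    image_processor.preprocess([image], return_tensors="pt")["pixel_values"]
    .unsqueeze(1)
    .unsqueeze(0)
)

prompt = "<image>User: what does the image describe? GPT:<answer>"
lang_x = tokenizer([prompt], return_tensors="pt")

generated = model.generate(
    vision_x=vision_x.to(model.device, dtype=model.dtype),
    lang_x=lang_x["input_ids"].to(model.device),
    attention_mask=lang_x["attention_mask"].to(model.device),
    max_new_tokens=256,
    num_beams=3,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```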
Leave us a message if you run into any errors or have any questions. You can follow the [Otter code](https://github.com/Luodian/Otter) (see the training section) to further tune your model on top of this checkpoint.