---
license: mit
---
Bo Li*,1
Yuanhan Zhang*,1
Liangyu Chen*,1
Jinghao Wang*,1
Fanyi Pu*,1
Jingkang Yang1 Chunyuan Li2 Ziwei Liu1
1S-Lab, Nanyang Technological University
2Microsoft Research, Redmond
These weights are for initializing training of Otter-MPT7B. They are directly converted from OpenFlamingo v2, with an added `<answer>` token for Otter's downstream instruction tuning.
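For reference, registering such a special token with a Hugging Face tokenizer typically looks like the sketch below. This is not the actual conversion script, and the base checkpoint name is a placeholder:

```python
from transformers import AutoTokenizer

# Hypothetical example: register <answer> as an additional special token
# so the tokenizer never splits it. The checkpoint name is a placeholder.
tokenizer = AutoTokenizer.from_pretrained("mosaicml/mpt-7b")
num_added = tokenizer.add_special_tokens({"additional_special_tokens": ["<answer>"]})

# Any model tied to this tokenizer must then resize its input embeddings,
# e.g. model.resize_token_embeddings(len(tokenizer)).
```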
You can load and try this model as follows:
```python
import torch
import transformers
from otter_ai import OtterForConditionalGeneration  # import path depends on your Otter repo version

# Choose the numeric precision for loading the weights.
load_bit = "bf16"
precision = {}
if load_bit == "bf16":
    precision["torch_dtype"] = torch.bfloat16
elif load_bit == "fp16":
    precision["torch_dtype"] = torch.float16
elif load_bit == "fp32":
    precision["torch_dtype"] = torch.float32

model = OtterForConditionalGeneration.from_pretrained("luodian/OTTER-9B-LA-InContext", device_map="sequential", **precision)
model.text_tokenizer.padding_side = "left"
tokenizer = model.text_tokenizer
image_processor = transformers.CLIPImageProcessor()
model.eval()
```
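Once loaded, generation follows the OpenFlamingo-style interface: vision inputs go in as a `(batch, num_media, num_frames, channels, height, width)` tensor, and the prompt uses Otter's `<image> ... <answer>` instruction template. Below is a minimal single-image sketch; the COCO image URL and the exact prompt wording are illustrative assumptions:

```python
import requests
from PIL import Image

# Illustrative example image (any RGB image works).
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Shape (batch=1, num_media=1, num_frames=1, C, H, W).
vision_x = image_processor.preprocess([image], return_tensors="pt")["pixel_values"].unsqueeze(1).unsqueeze(0)

# Otter's instruction template; <answer> marks where the response starts.
prompt = "<image>User: What is in this image? GPT:<answer>"
lang_x = tokenizer([prompt], return_tensors="pt")

generated = model.generate(
    vision_x=vision_x.to(model.device, dtype=precision["torch_dtype"]),
    lang_x=lang_x["input_ids"].to(model.device),
    attention_mask=lang_x["attention_mask"].to(model.device),
    max_new_tokens=256,
    num_beams=3,
    no_repeat_ngram_size=3,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```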
Leave us a message if you run into any errors or have questions. You can follow the Otter codebase (see the training section) to further tune your own model on top of these weights.