PandaLM: Reproducible and Automated Language Model Assessment

Our GitHub repo: https://github.com/WeOpenML/PandaLM

If you encounter tokenizer errors, load the tokenizer with the slow implementation: `AutoTokenizer.from_pretrained('WeOpenML/PandaLM-7B-v1', use_fast=False)`.
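
Below is a minimal loading sketch using the standard `transformers` causal-LM API. Only the `use_fast=False` tokenizer setting comes from this card; the model class, BF16 dtype, and `device_map` choice are assumptions for illustration, and the prompt is a placeholder (see the GitHub repo for PandaLM's actual evaluation prompt format).

```python
# Minimal loading sketch, assuming a standard transformers causal-LM setup.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "WeOpenML/PandaLM-7B-v1"

# Force the slow (SentencePiece) tokenizer, as recommended above.
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # weights are published in BF16
    device_map="auto",           # requires `accelerate`; adjust for your hardware
)

# Placeholder prompt; use the comparison prompt format from the PandaLM repo.
prompt = "..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```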

Model size: 6.74B parameters (safetensors, BF16).
