---
license: llama3
language:
  - en
  - zh
---

# Llama3-8B-Chinese-Chat-ExPO

This is the extrapolated (ExPO) model built from shenzhi-wang/Llama3-8B-Chinese-Chat and meta-llama/Meta-Llama-3-8B-Instruct, following the paper "Weak-to-Strong Extrapolation Expedites Alignment".

Specifically, we obtain this model by extrapolating from the weights of the SFT and DPO/RLHF checkpoints, achieving improved alignment with human preferences.
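
Below is a minimal sketch of what such weight extrapolation can look like, assuming the update rule `theta_expo = theta_dpo + alpha * (theta_dpo - theta_sft)`; the coefficient `alpha`, the output path, and the exact checkpoints used here are illustrative assumptions, not the settings used to produce this model.

```python
# Hedged sketch of ExPO-style weight extrapolation between two checkpoints
# that share the same architecture. Model names and alpha are assumptions.
import torch
from transformers import AutoModelForCausalLM

sft_name = "meta-llama/Meta-Llama-3-8B-Instruct"   # weaker (SFT-like) checkpoint
dpo_name = "shenzhi-wang/Llama3-8B-Chinese-Chat"   # aligned (DPO/RLHF) checkpoint
alpha = 0.3                                        # extrapolation strength (assumed value)

sft = AutoModelForCausalLM.from_pretrained(sft_name, torch_dtype=torch.bfloat16)
dpo = AutoModelForCausalLM.from_pretrained(dpo_name, torch_dtype=torch.bfloat16)

sft_state = sft.state_dict()
expo_state = {}
for name, dpo_param in dpo.state_dict().items():
    # Move each parameter past the aligned checkpoint, away from the SFT weights.
    expo_state[name] = dpo_param + alpha * (dpo_param - sft_state[name])

dpo.load_state_dict(expo_state)
dpo.save_pretrained("Llama3-8B-Chinese-Chat-ExPO")  # hypothetical output directory
```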