Update generation_config.json #3
by alugowski - opened
Pull in the upstream second stop token.
Fixes an issue where inference does not stop.
See upstream: https://huggingface.co/meta-llama/Meta-Llama-3-70B-Instruct/blob/main/generation_config.json
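For reference, the relevant change is in the `eos_token_id` field: the upstream Meta-Llama-3-Instruct `generation_config.json` lists two end-of-sequence token IDs, `128001` (`<|end_of_text|>`) and `128009` (`<|eot_id|>`), rather than a single one. A minimal sketch of the updated field (other keys in the file left as-is):

```json
{
  "eos_token_id": [128001, 128009]
}
```

Without `128009` in the list, generation can run past the assistant turn marker and never stop, which is the behavior this PR fixes.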
casperhansen changed pull request status to closed