
Something goes wrong when the latest code generates longer text

#19
by lvkaokao - opened

When I use the latest code for inference, it can't generate longer text, as shown in the attached screenshot (image.png).

however, when I revert to the previous version, it works.

I have similar issues too. BTW, how can I revert to the previous version? Where can I find the revision info?

Hi @lvkaokao, could you please share the code and torch version you are using? As a hunch: if you are altering config.max_seq_len, that will initialize the attn_bias to that shape at model init time, and you won't be able to generate sequences longer than that. So if you'd like to generate sequences of length K, make sure config.max_seq_len is set to a value greater than K.
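To illustrate the constraint described above, here is a minimal sketch (the class and function names are illustrative stand-ins, not the real llm-foundry API): the attention bias is allocated to max_seq_len at model init, so the prompt plus generated tokens must fit inside it.

```python
from dataclasses import dataclass


@dataclass
class MPTConfig:
    """Stand-in for the real MPT config object (illustrative only)."""
    max_seq_len: int = 2048  # attn_bias is built to this size at model init


def attn_bias_shape(config, n_heads=32):
    # MPT-style models allocate the attention bias once, at init time,
    # with a trailing (max_seq_len x max_seq_len) shape.
    return (1, n_heads, config.max_seq_len, config.max_seq_len)


def check_generation(config, prompt_len, max_new_tokens):
    # Total sequence length must stay within the pre-allocated bias.
    total = prompt_len + max_new_tokens
    if total > config.max_seq_len:
        raise ValueError(
            f"requested {total} tokens but max_seq_len={config.max_seq_len}; "
            "raise config.max_seq_len before loading the model"
        )
    return total
```

The key point is that raising max_seq_len after the model is initialized does nothing; it has to be set on the config before from_pretrained builds the model.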

The versions, which can be passed using the revision kwarg, are here
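For reverting to an earlier version, a pinned load might look like the sketch below. The helper name is hypothetical, and the revision value is a placeholder you would replace with an actual commit hash, tag, or branch from the repo's revision history on the Hub.

```python
def load_pinned_mpt(revision):
    """Sketch: load mosaicml/mpt-7b pinned to a specific code revision."""
    from transformers import AutoModelForCausalLM  # requires transformers installed

    return AutoModelForCausalLM.from_pretrained(
        "mosaicml/mpt-7b",
        revision=revision,       # a commit hash, tag, or branch from the Hub
        trust_remote_code=True,  # MPT ships custom modeling code in the repo
    )
```

Passing revision this way fetches the model code and weights exactly as they were at that commit, which is what "reverting to the previous version" amounts to here.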

@abhi-mosaic and @sam-mosaic, could you show me an example of how to run this model on Colab's CPU?

daking changed discussion status to closed
