Maximum number of tokens processed at a time
#4
by ashrma · opened
How many tokens (max) can the model consume at a time so that it is able to generate a response without breaking up?
For example, GPT-3 can consume 2048 tokens at once.
2048 tokens.
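A minimal sketch of keeping a prompt within that limit. Note the `split()`-based "tokenizer" and the `truncate_to_context` helper are stand-ins for illustration only; a real application should use the model's own tokenizer, whose token counts differ from whitespace word counts.

```python
MAX_TOKENS = 2048  # context limit discussed above

def truncate_to_context(text: str, max_tokens: int = MAX_TOKENS) -> str:
    """Clip a prompt to at most max_tokens tokens.

    Whitespace splitting is a crude approximation of real
    tokenization, used here only to keep the sketch self-contained.
    """
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return text
    # Keep the first max_tokens tokens; some applications would
    # instead keep the tail so the most recent context survives.
    return " ".join(tokens[:max_tokens])

prompt = " ".join(["word"] * 3000)
clipped = truncate_to_context(prompt)
print(len(clipped.split()))  # 2048
```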
Thanks, got it!
ashrma changed discussion status to closed