Increasing context length for language input
#39 · opened by chrishoertnagl
The model currently supports a 4k context window, which is a bit low for some multi-turn conversation use cases. So I have two questions:
- Do you plan on increasing the context window in the near future?
- How would you recommend doing it (maybe pointers to research or GitHub repos) if I have multi-turn text + image conversation data?
Thanks :)
Hey @chrishoertnagl
- Yes, we plan to increase the context window in our upcoming release.
- Maybe implement a memory mechanism that encodes past images and text responses separately, then feeds a summary or extracted features to the model; see the sketch below for one way this could look.
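
For illustration, here is a minimal Python sketch of that idea, assuming a rolling memory that keeps the last few turns verbatim and compresses older turns (including image captions or extracted features) into a running summary. The `Turn` and `ConversationMemory` classes and the `summarize` callable are hypothetical helpers, not part of this repo or any specific library:

```python
# Minimal sketch: keep recent turns verbatim, fold older turns into a
# compressed summary so the assembled prompt stays within the 4k window.
# All class and function names here are illustrative assumptions.

from dataclasses import dataclass, field

RECENT_TURNS_TO_KEEP = 4  # keep the last few turns untouched


@dataclass
class Turn:
    role: str                          # "user" or "assistant"
    text: str
    image_caption: str | None = None   # stand-in for extracted image features


@dataclass
class ConversationMemory:
    turns: list[Turn] = field(default_factory=list)
    summary: str = ""                  # compressed record of older turns

    def add(self, turn: Turn) -> None:
        self.turns.append(turn)

    def build_prompt(self, summarize) -> str:
        """Fold turns beyond the recent window into the running summary,
        then assemble a compact prompt for the model."""
        if len(self.turns) > RECENT_TURNS_TO_KEEP:
            old = self.turns[:-RECENT_TURNS_TO_KEEP]
            self.turns = self.turns[-RECENT_TURNS_TO_KEEP:]
            old_text = "\n".join(
                f"{t.role}: {t.text}"
                + (f" [image: {t.image_caption}]" if t.image_caption else "")
                for t in old
            )
            # `summarize` is any text summarizer (it could even be the same
            # model); it compresses older turns instead of dropping them.
            self.summary = summarize(self.summary + "\n" + old_text)

        recent = "\n".join(f"{t.role}: {t.text}" for t in self.turns)
        return f"Conversation so far (summary): {self.summary}\n\n{recent}"
```

The trade-off is the usual one for summary-based memory: you lose fine-grained detail from older turns (especially raw image content) in exchange for fitting long conversations into a fixed window, so how you caption or feature-encode past images matters a lot for quality.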