Gemini 2.0 Flash Thinking
Implement the Gemini 2.0 Flash Thinking model with Gradio
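A minimal sketch of what such a Gradio app might look like. This is not the author's code: it assumes the `google-generativeai` and `gradio` packages are installed, that a `GEMINI_API_KEY` environment variable is set, and that the model id `gemini-2.0-flash-thinking-exp` is available; the `history_to_prompt` helper is hypothetical.

```python
"""Sketch: Gradio chat UI for a Gemini "thinking" model (assumptions above)."""
import os


def history_to_prompt(history, message):
    # Hypothetical helper: flatten (user, assistant) turn pairs plus the
    # new message into one plain-text prompt.
    lines = []
    for user_turn, bot_turn in history:
        lines.append(f"User: {user_turn}")
        lines.append(f"Assistant: {bot_turn}")
    lines.append(f"User: {message}")
    return "\n".join(lines)


def main():
    import gradio as gr                      # heavy deps imported lazily
    import google.generativeai as genai

    genai.configure(api_key=os.environ["GEMINI_API_KEY"])
    model = genai.GenerativeModel("gemini-2.0-flash-thinking-exp")

    def respond(message, history):
        # Each turn sends the flattened conversation to the model and
        # returns the generated text to the chat widget.
        reply = model.generate_content(history_to_prompt(history, message))
        return reply.text

    gr.ChatInterface(respond, title="Gemini 2.0 Flash Thinking").launch()


if __name__ == "__main__":
    main()
```

The heavy imports live inside `main()` so the prompt-building helper can be used or tested without Gradio or an API key present.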
Hey Nishith, I had one query: are you using the Mistral inference client for generating the output? If not, then how are you able to generate such coherent output with an open-weight model?
Hey Nishith, I have one doubt: is a GPU mandatory for running text-generation model inference, especially with the Mistral model? I am running on a 16 GB CPU using Spaces, but the code just doesn't execute.
Wow bro, thank you so much, this is really gold!
Thank you so much, Nishith, that was really helpful. Can you tell me which exact model you are using for this function?