---
license: gemma
base_model: google/gemma-2-27b-it
---

GGUF [llama.cpp](https://github.com/ggerganov/llama.cpp) quantized version of:

- Original model: [gemma-2-27b-it](https://huggingface.co/google/gemma-2-27b-it)
- Model creator: [Google](https://huggingface.co/google)
- [License](https://www.kaggle.com/models/google/gemma/license/consent?verifyToken=CfDJ8OV3w-Vr_2dIpZxXY9wVZZnpWKdFS3kJvSU2XkwpfOZICBFcOxoYJFb12HJj1BQs9FHgrjqpbEoqYjxdMwgaew-eH8JJmsLOgj56rjNeDFWaxTA36ggVQ1RJsKmH0mbl74o1qgioqSV5ktl-J5ebL9ep3JmOojU1HdBDSScB6WyGDSIuAcw8MWuy9LEE74Ze)
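
A quant file from a repo like this one can also be fetched programmatically. Below is a minimal sketch using the `huggingface_hub` client; the repo id and file name are placeholders, not part of this release, so substitute the actual quant you want.

```python
# Minimal sketch: fetching a single GGUF quant file from the Hub.
# Assumes `pip install huggingface_hub`; repo_id and filename are
# placeholders -- substitute this repo's id and a real quant file.
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="your-username/gemma-2-27b-it-GGUF",  # placeholder repo id
    filename="gemma-2-27b-it-Q4_K_M.gguf",        # placeholder quant file
)
print(local_path)  # path to the file in the local Hugging Face cache
```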

## Recommended Prompt Format (Gemma)

Gemma uses only `user` and `model` turns; there is no separate system role, so any context or instructions for the model go at the start of the first user turn:

```
<start_of_turn>user
Provide some context and/or instructions to the model.

The user’s message goes here<end_of_turn>
<start_of_turn>model
AI message goes here<end_of_turn>
```
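
The turn format above can also be assembled in code. Below is a minimal sketch using the llama-cpp-python bindings (an assumption; any GGUF-capable runtime works), with a hypothetical quant file name and illustrative generation settings:

```python
# Minimal sketch: running this GGUF quant with the Gemma turn format.
# Assumes `pip install llama-cpp-python`; the file name is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-2-27b-it-Q4_K_M.gguf",  # hypothetical quant file
    n_ctx=8192,  # Gemma 2 supports an 8K context window
)

# Instructions and the user message share the first user turn; the
# prompt then ends with an opened model turn for the reply.
prompt = (
    "<start_of_turn>user\n"
    "You are a concise technical assistant.\n\n"
    "Explain in one sentence what GGUF quantization does.<end_of_turn>\n"
    "<start_of_turn>model\n"
)

# Stop on <end_of_turn> so generation ends when the model closes its turn.
result = llm(prompt, max_tokens=128, stop=["<end_of_turn>"])
print(result["choices"][0]["text"])
```

Alternatively, `llm.create_chat_completion(messages=[...])` applies the chat template embedded in the GGUF metadata, so the turns do not have to be formatted by hand.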

Quant Version: [b3450](https://github.com/ggerganov/llama.cpp/releases/tag/b3450) with [imatrix](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384)