---
base_model: BEE-spoke-data/smol_llama-101M-midjourney-messages
datasets:
- pszemraj/midjourney-messages-cleaned
inference: false
license: apache-2.0
metrics:
- accuracy
model_creator: BEE-spoke-data
model_name: smol_llama-101M-midjourney-messages
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
widget:
- example_title: avocado chair
  text: avocado chair
- example_title: potato
  text: A mysterious potato
---
|
# BEE-spoke-data/smol_llama-101M-midjourney-messages-GGUF
|
|
|
Quantized GGUF model files for [smol_llama-101M-midjourney-messages](https://huggingface.co/BEE-spoke-data/smol_llama-101M-midjourney-messages) from [BEE-spoke-data](https://huggingface.co/BEE-spoke-data).
|
|
|
|
|
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [smol_llama-101m-midjourney-messages.fp16.gguf](https://huggingface.co/afrideva/smol_llama-101M-midjourney-messages-GGUF/resolve/main/smol_llama-101m-midjourney-messages.fp16.gguf) | fp16 | 203.28 MB |
| [smol_llama-101m-midjourney-messages.q2_k.gguf](https://huggingface.co/afrideva/smol_llama-101M-midjourney-messages-GGUF/resolve/main/smol_llama-101m-midjourney-messages.q2_k.gguf) | q2_k | 50.93 MB |
| [smol_llama-101m-midjourney-messages.q3_k_m.gguf](https://huggingface.co/afrideva/smol_llama-101M-midjourney-messages-GGUF/resolve/main/smol_llama-101m-midjourney-messages.q3_k_m.gguf) | q3_k_m | 57.06 MB |
| [smol_llama-101m-midjourney-messages.q4_k_m.gguf](https://huggingface.co/afrideva/smol_llama-101M-midjourney-messages-GGUF/resolve/main/smol_llama-101m-midjourney-messages.q4_k_m.gguf) | q4_k_m | 65.40 MB |
| [smol_llama-101m-midjourney-messages.q5_k_m.gguf](https://huggingface.co/afrideva/smol_llama-101M-midjourney-messages-GGUF/resolve/main/smol_llama-101m-midjourney-messages.q5_k_m.gguf) | q5_k_m | 74.34 MB |
| [smol_llama-101m-midjourney-messages.q6_k.gguf](https://huggingface.co/afrideva/smol_llama-101M-midjourney-messages-GGUF/resolve/main/smol_llama-101m-midjourney-messages.q6_k.gguf) | q6_k | 83.83 MB |
| [smol_llama-101m-midjourney-messages.q8_0.gguf](https://huggingface.co/afrideva/smol_llama-101M-midjourney-messages-GGUF/resolve/main/smol_llama-101m-midjourney-messages.q8_0.gguf) | q8_0 | 108.35 MB |
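These files can be run locally with any GGUF-compatible runtime. Below is a minimal sketch using `huggingface_hub` and `llama-cpp-python`; the choice of the q4_k_m file and the sampling settings are arbitrary, not recommendations from the model author.

```python
# Sketch: download one GGUF file from this repo and generate with llama-cpp-python.
# Assumes `pip install huggingface_hub llama-cpp-python`; quant choice is arbitrary.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the q4_k_m quant (any file from the table above works the same way).
model_path = hf_hub_download(
    repo_id="afrideva/smol_llama-101M-midjourney-messages-GGUF",
    filename="smol_llama-101m-midjourney-messages.q4_k_m.gguf",
)

# 101M-parameter model: a small context window and CPU-only inference are fine.
llm = Llama(model_path=model_path, n_ctx=1024, verbose=False)

# Expand a partial Midjourney-style prompt.
out = llm("avocado chair", max_tokens=64, temperature=0.8, repeat_penalty=1.1)
print(out["choices"][0]["text"])
```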
|
|
|
|
|
|
|
## Original Model Card:
|
# smol_llama-101M-midjourney-messages
|
|
|
Given a partial prompt for a text-to-image model, this model generates additional relevant text to flesh it out into a full prompt.
|
|
|
|
|
![example](https://i.imgur.com/f2hzgq1.png)
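A minimal usage sketch with the `transformers` pipeline and the original (non-quantized) checkpoint; the generation settings are illustrative assumptions, not documented defaults.

```python
# Sketch: expand a partial text-to-image prompt with the original checkpoint.
# Sampling parameters below are illustrative, not taken from the model card.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="BEE-spoke-data/smol_llama-101M-midjourney-messages",
)

result = generator(
    "A mysterious potato",  # partial prompt to be expanded
    max_new_tokens=48,
    do_sample=True,
    temperature=0.8,
    repetition_penalty=1.1,
)
print(result[0]["generated_text"])
```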
|
|
|
## Model description
|
|
|
This model is a fine-tuned version of [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA) on the `pszemraj/midjourney-messages-cleaned` dataset.

It achieves the following results on the evaluation set:
- Loss: 2.8431
- Accuracy: 0.4682
|
|
|
|
|
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (mirrored in the sketch after this list):
- learning_rate: 0.00025
- train_batch_size: 4
- eval_batch_size: 4
- seed: 17056
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.95) and epsilon=1e-08
- lr_scheduler_type: inverse_sqrt
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 1.0
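Note that the total_train_batch_size of 64 is simply train_batch_size (4) × gradient_accumulation_steps (16). As a rough reconstruction (the original training script is not part of this card), these settings map approximately onto Hugging Face `TrainingArguments` as follows:

```python
# Sketch: approximate TrainingArguments mirroring the hyperparameters listed above.
# This is a reconstruction for reference, not the author's actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="smol_llama-101M-midjourney-messages",  # placeholder path
    learning_rate=2.5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,  # 4 x 16 = 64 effective batch size
    seed=17056,
    adam_beta1=0.9,
    adam_beta2=0.95,
    adam_epsilon=1e-8,
    lr_scheduler_type="inverse_sqrt",
    warmup_ratio=0.05,
    num_train_epochs=1.0,
)
```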