| modelId<br>string, 5-134 chars | author<br>string, 2-42 chars | last_modified<br>unknown | downloads<br>int64, 0-223M | likes<br>int64, 0-8.08k | library_name<br>351 classes | tags<br>sequence, 1-4.05k items | pipeline_tag<br>53 classes | createdAt<br>unknown | card<br>string, 11-1.01M chars |
|---|---|---|---|---|---|---|---|---|---|
roa7n/gpt2-human_nontata_promoters-randomized_10_layers_3e-05_lr_2_e | roa7n | "2023-09-29T19:54:02Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-29T19:54:00Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
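The card gives no usage snippet; a minimal loading sketch for a PEFT adapter like this one, resolving the base model from the adapter's own config, could look as follows. The causal-LM head is an assumption inferred from the GPT-2 base; if the adapter carries a classification head (as the repo name suggests), swap in `AutoModelForSequenceClassification`:
```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

adapter_id = "roa7n/gpt2-human_nontata_promoters-randomized_10_layers_3e-05_lr_2_e"

# The adapter config records which base model it was trained against.
config = PeftConfig.from_pretrained(adapter_id)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Attach the PEFT weights on top of the base model.
model = PeftModel.from_pretrained(base, adapter_id)
```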
|
abenius/1e1dfb6d-e2b4-45d5-ae4b-1ae6fdbfe2c8 | abenius | "2025-02-07T23:19:40Z" | 9 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:adapter:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"license:llama3",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-07T22:19:04Z" | ---
library_name: peft
license: llama3
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1e1dfb6d-e2b4-45d5-ae4b-1ae6fdbfe2c8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 27e56d84165570db_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/27e56d84165570db_train_data.json
type:
field_input: language
field_instruction: url
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: abenius/1e1dfb6d-e2b4-45d5-ae4b-1ae6fdbfe2c8
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0002
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 10
lora_alpha: 16
lora_dropout: 0.2
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 600
micro_batch_size: 2
mlflow_experiment_name: /tmp/27e56d84165570db_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 25193e97-cead-4165-a46d-2d7f31533a7b
wandb_project: Gradients-On-12
wandb_run: your_name
wandb_runid: 25193e97-cead-4165-a46d-2d7f31533a7b
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 1e1dfb6d-e2b4-45d5-ae4b-1ae6fdbfe2c8
This model is a fine-tuned version of [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0291
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 600
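(For reference, the total train batch size above follows directly from the other settings: train_batch_size × gradient_accumulation_steps = 2 × 4 = 8.)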
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.0566 | 0.5156 | 600 | 2.0291 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mognog/learn_hf_food_not_food_text_classifier-distilbert-base-uncased | mognog | "2025-02-03T11:34:30Z" | 22 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-27T13:21:35Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: learn_hf_food_not_food_text_classifier-distilbert-base-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn_hf_food_not_food_text_classifier-distilbert-base-uncased
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0001
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: ADAMW_TORCH with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0389 | 1.0 | 7 | 0.0039 | 1.0 |
| 0.0026 | 2.0 | 14 | 0.0008 | 1.0 |
| 0.0008 | 3.0 | 21 | 0.0004 | 1.0 |
| 0.0004 | 4.0 | 28 | 0.0002 | 1.0 |
| 0.0003 | 5.0 | 35 | 0.0002 | 1.0 |
| 0.0002 | 6.0 | 42 | 0.0002 | 1.0 |
| 0.0002 | 7.0 | 49 | 0.0001 | 1.0 |
| 0.0002 | 8.0 | 56 | 0.0001 | 1.0 |
| 0.0002 | 9.0 | 63 | 0.0001 | 1.0 |
| 0.0002 | 10.0 | 70 | 0.0001 | 1.0 |
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
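Although the card lists no usage, a minimal inference sketch with the standard `pipeline` API might look like this (the input sentence is illustrative; the labels come from the model's own config):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint and classify a sentence.
classifier = pipeline(
    "text-classification",
    model="mognog/learn_hf_food_not_food_text_classifier-distilbert-base-uncased",
)
print(classifier("A steaming bowl of pho with fresh basil and lime."))
```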
|
molto/ft_0112_korean | molto | "2024-01-15T00:45:15Z" | 54 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-xls-r-300m",
"base_model:finetune:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-01-12T01:05:01Z" | ---
license: apache-2.0
base_model: facebook/wav2vec2-xls-r-300m
tags:
- generated_from_trainer
model-index:
- name: ft_0112_korean
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ft_0112_korean
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6163
- Cer: 0.1655
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 66.0473 | 0.03 | 100 | 126.2500 | 1.0 |
| 39.2751 | 0.05 | 200 | 76.4439 | 1.0 |
| 24.2617 | 0.07 | 300 | 36.6274 | 1.0 |
| 10.2253 | 0.1 | 400 | 7.8025 | 1.0 |
| 4.9219 | 0.12 | 500 | 5.8257 | 1.0 |
| 4.7709 | 0.15 | 600 | 5.2597 | 1.0 |
| 4.7545 | 0.17 | 700 | 5.3516 | 1.0 |
| 4.701 | 0.2 | 800 | 5.2238 | 1.0 |
| 4.6753 | 0.23 | 900 | 5.1713 | 1.0 |
| 4.6339 | 0.25 | 1000 | 5.1546 | 1.0 |
| 4.6107 | 0.28 | 1100 | 5.0488 | 1.0 |
| 4.6086 | 0.3 | 1200 | 4.8149 | 1.0 |
| 4.5324 | 0.33 | 1300 | 4.7533 | 1.0 |
| 4.4797 | 0.35 | 1400 | 4.6892 | 1.0 |
| 4.4485 | 0.38 | 1500 | 4.5327 | 1.0 |
| 4.3794 | 0.4 | 1600 | 4.3797 | 0.9999 |
| 4.1549 | 0.42 | 1700 | 4.2075 | 0.9838 |
| 3.9647 | 0.45 | 1800 | 3.8729 | 0.9647 |
| 3.621 | 0.47 | 1900 | 3.3229 | 0.6854 |
| 3.3163 | 0.5 | 2000 | 2.9646 | 0.5646 |
| 3.0668 | 0.53 | 2100 | 2.7178 | 0.5608 |
| 2.8248 | 0.55 | 2200 | 2.4843 | 0.4937 |
| 2.7238 | 0.57 | 2300 | 2.3321 | 0.4736 |
| 2.614 | 0.6 | 2400 | 2.2513 | 0.4650 |
| 2.4994 | 0.62 | 2500 | 2.1655 | 0.4538 |
| 2.4431 | 0.65 | 2600 | 2.0785 | 0.4355 |
| 2.3307 | 0.68 | 2700 | 1.9603 | 0.4169 |
| 2.2495 | 0.7 | 2800 | 1.9026 | 0.4134 |
| 2.1647 | 0.72 | 2900 | 1.8152 | 0.4009 |
| 2.1075 | 0.75 | 3000 | 1.7521 | 0.3849 |
| 2.0577 | 0.78 | 3100 | 1.7004 | 0.3781 |
| 1.9935 | 0.8 | 3200 | 1.6226 | 0.3666 |
| 1.9391 | 0.82 | 3300 | 1.6097 | 0.3604 |
| 1.9295 | 0.85 | 3400 | 1.5416 | 0.3526 |
| 1.8759 | 0.88 | 3500 | 1.5227 | 0.3583 |
| 1.8316 | 0.9 | 3600 | 1.4791 | 0.3484 |
| 1.7531 | 0.93 | 3700 | 1.4472 | 0.3415 |
| 1.7413 | 0.95 | 3800 | 1.4178 | 0.3363 |
| 1.6609 | 0.97 | 3900 | 1.3587 | 0.3256 |
| 1.6986 | 1.0 | 4000 | 1.3396 | 0.3208 |
| 1.6189 | 1.02 | 4100 | 1.3253 | 0.3187 |
| 1.5853 | 1.05 | 4200 | 1.2929 | 0.3109 |
| 1.5153 | 1.07 | 4300 | 1.2691 | 0.3106 |
| 1.5259 | 1.1 | 4400 | 1.2500 | 0.3012 |
| 1.4916 | 1.12 | 4500 | 1.2151 | 0.2977 |
| 1.4113 | 1.15 | 4600 | 1.1796 | 0.2930 |
| 1.452 | 1.18 | 4700 | 1.1857 | 0.2928 |
| 1.3879 | 1.2 | 4800 | 1.1830 | 0.2915 |
| 1.4164 | 1.23 | 4900 | 1.1725 | 0.2920 |
| 1.4692 | 1.25 | 5000 | 1.1171 | 0.2794 |
| 1.346 | 1.27 | 5100 | 1.0858 | 0.2745 |
| 1.3964 | 1.3 | 5200 | 1.0644 | 0.2712 |
| 1.3359 | 1.32 | 5300 | 1.0585 | 0.2694 |
| 1.2769 | 1.35 | 5400 | 1.0290 | 0.2614 |
| 1.2741 | 1.38 | 5500 | 1.0356 | 0.2604 |
| 1.2257 | 1.4 | 5600 | 1.0167 | 0.2607 |
| 1.2416 | 1.43 | 5700 | 1.0074 | 0.2558 |
| 1.2376 | 1.45 | 5800 | 0.9889 | 0.2524 |
| 1.2048 | 1.48 | 5900 | 0.9649 | 0.2464 |
| 1.1335 | 1.5 | 6000 | 0.9580 | 0.2488 |
| 1.1946 | 1.52 | 6100 | 0.9503 | 0.2471 |
| 1.1926 | 1.55 | 6200 | 0.9467 | 0.2494 |
| 1.1451 | 1.57 | 6300 | 0.9202 | 0.2408 |
| 1.1426 | 1.6 | 6400 | 0.9018 | 0.2359 |
| 1.1569 | 1.62 | 6500 | 0.9216 | 0.2362 |
| 1.1093 | 1.65 | 6600 | 0.9433 | 0.2414 |
| 1.1258 | 1.68 | 6700 | 0.8986 | 0.2291 |
| 1.1024 | 1.7 | 6800 | 0.8838 | 0.2305 |
| 1.0567 | 1.73 | 6900 | 0.8916 | 0.2298 |
| 1.0928 | 1.75 | 7000 | 0.8855 | 0.2294 |
| 1.0526 | 1.77 | 7100 | 0.8592 | 0.2237 |
| 1.0236 | 1.8 | 7200 | 0.8433 | 0.2209 |
| 1.0454 | 1.82 | 7300 | 0.8382 | 0.2214 |
| 1.0252 | 1.85 | 7400 | 0.8252 | 0.2173 |
| 1.0404 | 1.88 | 7500 | 0.8190 | 0.2148 |
| 1.0326 | 1.9 | 7600 | 0.8067 | 0.2155 |
| 1.0008 | 1.93 | 7700 | 0.8081 | 0.2161 |
| 0.9814 | 1.95 | 7800 | 0.8061 | 0.2152 |
| 0.9664 | 1.98 | 7900 | 0.8147 | 0.2155 |
| 1.0032 | 2.0 | 8000 | 0.8232 | 0.2128 |
| 0.9274 | 2.02 | 8100 | 0.7951 | 0.2118 |
| 0.9115 | 2.05 | 8200 | 0.7857 | 0.2105 |
| 0.9339 | 2.08 | 8300 | 0.7722 | 0.2069 |
| 0.8553 | 2.1 | 8400 | 0.7603 | 0.2070 |
| 0.8671 | 2.12 | 8500 | 0.7927 | 0.2099 |
| 0.9067 | 2.15 | 8600 | 0.7511 | 0.2013 |
| 0.8507 | 2.17 | 8700 | 0.7763 | 0.2029 |
| 0.899 | 2.2 | 8800 | 0.7579 | 0.2026 |
| 0.8061 | 2.23 | 8900 | 0.7561 | 0.2014 |
| 0.8191 | 2.25 | 9000 | 0.7590 | 0.2024 |
| 0.8084 | 2.27 | 9100 | 0.7394 | 0.1972 |
| 0.8163 | 2.3 | 9200 | 0.7404 | 0.1941 |
| 0.8189 | 2.33 | 9300 | 0.7340 | 0.1955 |
| 0.8639 | 2.35 | 9400 | 0.7331 | 0.1950 |
| 0.8218 | 2.38 | 9500 | 0.7347 | 0.1959 |
| 0.8221 | 2.4 | 9600 | 0.7098 | 0.1922 |
| 0.7725 | 2.42 | 9700 | 0.7264 | 0.1923 |
| 0.7882 | 2.45 | 9800 | 0.7079 | 0.1875 |
| 0.7786 | 2.48 | 9900 | 0.7131 | 0.1913 |
| 0.7734 | 2.5 | 10000 | 0.7079 | 0.1912 |
| 0.7834 | 2.52 | 10100 | 0.6944 | 0.1896 |
| 0.78 | 2.55 | 10200 | 0.6980 | 0.1879 |
| 0.7602 | 2.58 | 10300 | 0.7076 | 0.1894 |
| 0.7415 | 2.6 | 10400 | 0.6946 | 0.1857 |
| 0.7791 | 2.62 | 10500 | 0.7025 | 0.1887 |
| 0.7357 | 2.65 | 10600 | 0.6949 | 0.1885 |
| 0.7102 | 2.67 | 10700 | 0.6978 | 0.1895 |
| 0.7395 | 2.7 | 10800 | 0.6893 | 0.1859 |
| 0.7301 | 2.73 | 10900 | 0.6847 | 0.1857 |
| 0.7492 | 2.75 | 11000 | 0.7063 | 0.1863 |
| 0.7372 | 2.77 | 11100 | 0.6917 | 0.1857 |
| 0.7474 | 2.8 | 11200 | 0.6843 | 0.1845 |
| 0.6727 | 2.83 | 11300 | 0.6628 | 0.1775 |
| 0.7342 | 2.85 | 11400 | 0.6729 | 0.1797 |
| 0.6599 | 2.88 | 11500 | 0.6631 | 0.1797 |
| 0.7209 | 2.9 | 11600 | 0.6658 | 0.1795 |
| 0.7222 | 2.92 | 11700 | 0.6741 | 0.1807 |
| 0.7124 | 2.95 | 11800 | 0.6722 | 0.1828 |
| 0.7304 | 2.98 | 11900 | 0.6606 | 0.1782 |
| 0.7234 | 3.0 | 12000 | 0.6499 | 0.1753 |
| 0.6857 | 3.02 | 12100 | 0.6547 | 0.1751 |
| 0.6238 | 3.05 | 12200 | 0.6615 | 0.1771 |
| 0.6495 | 3.08 | 12300 | 0.6499 | 0.1764 |
| 0.6219 | 3.1 | 12400 | 0.6558 | 0.1752 |
| 0.6684 | 3.12 | 12500 | 0.6479 | 0.1752 |
| 0.6455 | 3.15 | 12600 | 0.6574 | 0.1741 |
| 0.6414 | 3.17 | 12700 | 0.6489 | 0.1755 |
| 0.6619 | 3.2 | 12800 | 0.6527 | 0.1754 |
| 0.6303 | 3.23 | 12900 | 0.6462 | 0.1743 |
| 0.6525 | 3.25 | 13000 | 0.6505 | 0.1731 |
| 0.6347 | 3.27 | 13100 | 0.6432 | 0.1713 |
| 0.6206 | 3.3 | 13200 | 0.6495 | 0.1746 |
| 0.6445 | 3.33 | 13300 | 0.6328 | 0.1706 |
| 0.6097 | 3.35 | 13400 | 0.6329 | 0.1689 |
| 0.6151 | 3.38 | 13500 | 0.6473 | 0.1730 |
| 0.5948 | 3.4 | 13600 | 0.6413 | 0.1714 |
| 0.5949 | 3.42 | 13700 | 0.6377 | 0.1712 |
| 0.6402 | 3.45 | 13800 | 0.6295 | 0.1692 |
| 0.6607 | 3.48 | 13900 | 0.6287 | 0.1694 |
| 0.6219 | 3.5 | 14000 | 0.6357 | 0.1704 |
| 0.61 | 3.52 | 14100 | 0.6392 | 0.1715 |
| 0.5974 | 3.55 | 14200 | 0.6315 | 0.1687 |
| 0.5839 | 3.58 | 14300 | 0.6359 | 0.1689 |
| 0.6017 | 3.6 | 14400 | 0.6316 | 0.1673 |
| 0.6091 | 3.62 | 14500 | 0.6284 | 0.1686 |
| 0.6565 | 3.65 | 14600 | 0.6304 | 0.1684 |
| 0.6179 | 3.67 | 14700 | 0.6259 | 0.1661 |
| 0.5813 | 3.7 | 14800 | 0.6310 | 0.1672 |
| 0.5802 | 3.73 | 14900 | 0.6250 | 0.1667 |
| 0.6035 | 3.75 | 15000 | 0.6284 | 0.1666 |
| 0.5569 | 3.77 | 15100 | 0.6203 | 0.1651 |
| 0.5712 | 3.8 | 15200 | 0.6207 | 0.1660 |
| 0.546 | 3.83 | 15300 | 0.6246 | 0.1661 |
| 0.5602 | 3.85 | 15400 | 0.6206 | 0.1656 |
| 0.591 | 3.88 | 15500 | 0.6179 | 0.1650 |
| 0.5972 | 3.9 | 15600 | 0.6164 | 0.1653 |
| 0.6168 | 3.92 | 15700 | 0.6174 | 0.1660 |
| 0.5957 | 3.95 | 15800 | 0.6164 | 0.1657 |
| 0.5754 | 3.98 | 15900 | 0.6163 | 0.1657 |
| 0.5686 | 4.0 | 16000 | 0.6163 | 0.1655 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.13.0
- Tokenizers 0.15.0
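A minimal inference sketch for this checkpoint (the audio filename is a placeholder; XLS-R models expect 16 kHz mono input):
```python
from transformers import pipeline

# "korean_sample.wav" is a placeholder file path.
asr = pipeline("automatic-speech-recognition", model="molto/ft_0112_korean")
print(asr("korean_sample.wav")["text"])
```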
|
waldie/Free_Sydney_V2_13b_HF-5bpw-h6-exl2 | waldie | "2023-10-28T18:45:08Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llm",
"llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-28T18:22:22Z" | ---
tags:
- llm
- llama
- llama2
---
Quant of [FPHam's](https://huggingface.co/FPHam) [Free_Sydney_V2_13b_HF](https://huggingface.co/FPHam/Free_Sydney_V2_13b_HF) (5bpw, h6, exl2).
Wikitext was used as the calibration dataset. |
RobertML/edge-zk | RobertML | "2024-09-12T00:41:12Z" | 30 | 0 | diffusers | [
"diffusers",
"safetensors",
"stablediffusionapi.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | "2024-09-12T00:35:39Z" | ---
license: creativeml-openrail-m
tags:
- stablediffusionapi.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# NewDream-SDXL 2.0 API Inference

## Get API Key
Get an API key from [Stable Diffusion API](http://stablediffusionapi.com/); no payment is needed.
Replace the key in the code below and change **model_id** to "newdream-sdxl-20".
Coding in PHP, Node, Java, etc.? Have a look at the docs for more code examples: [View docs](https://stablediffusionapi.com/docs)
Try model for free: [Generate Images](https://stablediffusionapi.com/models/newdream-sdxl-20)
Model link: [View model](https://stablediffusionapi.com/models/newdream-sdxl-20)
Credits: [View credits](https://civitai.com/?query=NewDream-SDXL%202.0)
View all models: [View Models](https://stablediffusionapi.com/models)
```python
import requests
import json

url = "https://stablediffusionapi.com/api/v4/dreambooth"

payload = json.dumps({
    "key": "your_api_key",
    "model_id": "newdream-sdxl-20",
    "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
    "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
    "width": "512",
    "height": "512",
    "samples": "1",
    "num_inference_steps": "30",
    "safety_checker": "no",
    "enhance_prompt": "yes",
    "seed": None,
    "guidance_scale": 7.5,
    "multi_lingual": "no",
    "panorama": "no",
    "self_attention": "no",
    "upscale": "no",
    "embeddings": "embeddings_model_id",
    "lora": "lora_model_id",
    "webhook": None,
    "track_id": None
})

headers = {
    'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)
print(response.text)
```
> Use coupon code **DMGG0RBN** to get 25% off. |
stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5 | stefan-it | "2023-10-26T11:17:29Z" | 7 | 0 | flair | [
"flair",
"pytorch",
"tensorboard",
"token-classification",
"sequence-tagger-model",
"de",
"base_model:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"base_model:finetune:dbmdz/bert-base-historic-multilingual-64k-td-cased",
"license:mit",
"region:us"
] | token-classification | "2023-10-25T20:34:56Z" | ---
language: de
license: mit
tags:
- flair
- token-classification
- sequence-tagger-model
base_model: dbmdz/bert-base-historic-multilingual-64k-td-cased
widget:
- text: In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka
ungiltig erklärt , weil sie keinen Wohnort aufwiesen .
---
# Fine-tuned Flair Model on German NewsEye NER Dataset (HIPE-2022)
This Flair model was fine-tuned on the
[German NewsEye](https://github.com/hipe-eval/HIPE-2022-data/blob/main/documentation/README-newseye.md)
NER Dataset using hmBERT 64k as backbone LM.
The NewsEye dataset comprises diachronic historical newspaper material published between 1850 and 1950
in French, German, Finnish, and Swedish.
More information can be found [here](https://dl.acm.org/doi/abs/10.1145/3404835.3463255).
The following NEs were annotated: `PER`, `LOC`, `ORG` and `HumanProd`.
# Results
We performed a hyper-parameter search over the following parameters with 5 different seeds per configuration:
* Batch Sizes: `[4, 8]`
* Learning Rates: `[3e-05, 5e-05]`
We report the micro F1-score on the development set:
| Configuration | Seed 1 | Seed 2 | Seed 3 | Seed 4 | Seed 5 | Average |
|-------------------|--------------|--------------|--------------|--------------|------------------|-----------------|
| `bs8-e10-lr3e-05` | [0.3931][1] | [0.4248][2] | [0.4127][3] | [0.3938][4] | [0.4187][5] | 0.4086 ± 0.0145 |
| `bs4-e10-lr3e-05` | [0.338][6] | [0.4183][7] | [0.4041][8] | [0.4384][9] | [0.3974][10] | 0.3992 ± 0.0377 |
| `bs8-e10-lr5e-05` | [0.3861][11] | [0.3757][12] | [0.3764][13] | [0.4099][14] | [0.3593][15] | 0.3815 ± 0.0186 |
| `bs4-e10-lr5e-05` | [0.3813][16] | [0.0][17] | [0.3339][18] | [0.2489][19] | [**0.2931**][20] | 0.2514 ± 0.1489 |
[1]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[2]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[3]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[4]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[5]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[6]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-1
[7]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-2
[8]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-3
[9]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-4
[10]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr3e-05-poolingfirst-layers-1-crfFalse-5
[11]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[12]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[13]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[14]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[15]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs8-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
[16]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-1
[17]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-2
[18]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-3
[19]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-4
[20]: https://hf.co/stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5
The [training log](training.log) and TensorBoard logs (not available for hmBERT Base model) are also uploaded to the model hub.
More information about fine-tuning can be found [here](https://github.com/stefan-it/hmBench).
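For completeness, a minimal sketch of loading this tagger with Flair's standard API (the example sentence is the widget text from the card's metadata):
```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned tagger straight from the Hub.
tagger = SequenceTagger.load(
    "stefan-it/hmbench-newseye-de-hmbert_64k-bs4-wsFalse-e10-lr5e-05-poolingfirst-layers-1-crfFalse-5"
)

sentence = Sentence(
    "In Teltsch und Jarmeritz wurden die abgegebenen Stimmen für Genossen Krapka ungiltig erklärt ."
)
tagger.predict(sentence)
for span in sentence.get_spans("ner"):
    print(span)
```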
# Acknowledgements
We thank [Luisa März](https://github.com/LuisaMaerz), [Katharina Schmid](https://github.com/schmika) and
[Erion Çano](https://github.com/erionc) for their fruitful discussions about Historic Language Models.
Research supported with Cloud TPUs from Google's [TPU Research Cloud](https://sites.research.google/trc/about/) (TRC).
Many Thanks for providing access to the TPUs ❤️
|
Sviatoslavs/ppo-Huggy | Sviatoslavs | "2023-08-21T10:20:12Z" | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | "2023-08-21T10:19:59Z" | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents' official environments, go to https://huggingface.co/unity
2. Find your model_id: Sviatoslavs/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
mort1k/unit_1 | mort1k | "2023-07-09T11:55:43Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-09T11:55:23Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.67 +/- 17.89
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the file actually stored in the repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by the card.
checkpoint = load_from_hub(repo_id="mort1k/unit_1", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
triet1102/xlm-roberta-base-finetuned-panx-de | triet1102 | "2023-11-15T20:46:19Z" | 111 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2022-05-19T20:21:21Z" | ---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
base_model: xlm-roberta-base
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
type: token-classification
name: Token Classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- type: f1
value: 0.8620945214069894
name: F1
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2575 | 1.0 | 525 | 0.1621 | 0.8292 |
| 0.1287 | 2.0 | 1050 | 0.1378 | 0.8526 |
| 0.0831 | 3.0 | 1575 | 0.1372 | 0.8621 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
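A minimal NER sketch for this checkpoint (the example sentence is illustrative; aggregation merges word pieces into entity spans):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="triet1102/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```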
|
tuanna08go/1ce2f9aa-dc6b-43e5-a73a-8e16e86ff313 | tuanna08go | "2025-01-22T19:11:59Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/mistral-7b-v0.3",
"base_model:adapter:unsloth/mistral-7b-v0.3",
"license:apache-2.0",
"region:us"
] | null | "2025-01-22T19:01:34Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/mistral-7b-v0.3
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 1ce2f9aa-dc6b-43e5-a73a-8e16e86ff313
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/mistral-7b-v0.3
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 2c39dfbdf81446bf_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/2c39dfbdf81446bf_train_data.json
type:
field_instruction: prompt
field_output: chosen
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: tuanna08go/1ce2f9aa-dc6b-43e5-a73a-8e16e86ff313
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/2c39dfbdf81446bf_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 4ab8e58e-3ef7-4884-93dc-10c8afeecae2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 4ab8e58e-3ef7-4884-93dc-10c8afeecae2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 1ce2f9aa-dc6b-43e5-a73a-8e16e86ff313
This model is a fine-tuned version of [unsloth/mistral-7b-v0.3](https://huggingface.co/unsloth/mistral-7b-v0.3) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5786
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: ADAMW_BNB with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 1.6357 |
| 4.2423 | 0.0177 | 10 | 0.8864 |
| 2.7791 | 0.0355 | 20 | 0.6591 |
| 2.5311 | 0.0532 | 30 | 0.5968 |
| 2.4674 | 0.0710 | 40 | 0.5841 |
| 2.2938 | 0.0887 | 50 | 0.5786 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mergekit-community/mergekit-dare_ties-ymiqjtz | mergekit-community | "2024-04-16T14:47:33Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:merge:MaziyarPanahi/Calme-7B-Instruct-v0.9",
"base_model:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:merge:Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp",
"base_model:amazingvince/Not-WizardLM-2-7B",
"base_model:merge:amazingvince/Not-WizardLM-2-7B",
"base_model:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"base_model:merge:cognitivecomputations/dolphin-2.8-mistral-7b-v02",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-16T14:42:02Z" | ---
base_model:
- MaziyarPanahi/Calme-7B-Instruct-v0.9
- amazingvince/Not-WizardLM-2-7B
- Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
- cognitivecomputations/dolphin-2.8-mistral-7b-v02
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [amazingvince/Not-WizardLM-2-7B](https://huggingface.co/amazingvince/Not-WizardLM-2-7B) as a base.
### Models Merged
The following models were included in the merge:
* [MaziyarPanahi/Calme-7B-Instruct-v0.9](https://huggingface.co/MaziyarPanahi/Calme-7B-Instruct-v0.9)
* [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp)
* [cognitivecomputations/dolphin-2.8-mistral-7b-v02](https://huggingface.co/cognitivecomputations/dolphin-2.8-mistral-7b-v02)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: MaziyarPanahi/Calme-7B-Instruct-v0.9
parameters:
density: 0.53
weight: 0.33333333
- model: cognitivecomputations/dolphin-2.8-mistral-7b-v02
parameters:
density: 0.53
weight: 0.33333333
- model: Weyaxi/OpenHermes-2.5-neural-chat-v3-3-Slerp
parameters:
density: 0.53
weight: 0.33333333
merge_method: dare_ties
base_model: amazingvince/Not-WizardLM-2-7B
parameters:
normalize: false
int8_mask: true
dtype: float16
```
|
CombinHorizon/zetasepic-abliteratedV2-Qwen2.5-32B-Inst-BaseMerge-TIES | CombinHorizon | "2024-12-07T05:10:34Z" | 326 | 9 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"qwen2.5",
"TIES",
"conversational",
"en",
"arxiv:2306.01708",
"arxiv:2407.10671",
"base_model:Qwen/Qwen2.5-32B",
"base_model:merge:Qwen/Qwen2.5-32B",
"base_model:zetasepic/Qwen2.5-32B-Instruct-abliterated-v2",
"base_model:merge:zetasepic/Qwen2.5-32B-Instruct-abliterated-v2",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-07T04:31:23Z" | ---
base_model:
- Qwen/Qwen2.5-32B
- zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
library_name: transformers
tags:
- mergekit
- merge
- qwen2.5
- TIES
license: apache-2.0
language:
- en
pipeline_tag: text-generation
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
### Models Merged
The following models were included in the merge:
* [zetasepic/Qwen2.5-32B-Instruct-abliterated-v2](https://huggingface.co/zetasepic/Qwen2.5-32B-Instruct-abliterated-v2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
parameters:
weight: 1
density: 1
merge_method: ties
base_model: Qwen/Qwen2.5-32B
parameters:
weight: 1
density: 1
normalize: true
int8_mask: true
dtype: bfloat16
```
## Citations
The merge is based on the technique posted [here](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-14b/discussions/1#67098eecdf3b26954feb2eab).
```
@misc{qwen2.5,
title = {Qwen2.5: A Party of Foundation Models},
url = {https://qwenlm.github.io/blog/qwen2.5/},
author = {Qwen Team},
month = {September},
year = {2024}
}
@article{qwen2,
title={Qwen2 Technical Report},
author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
journal={arXiv preprint arXiv:2407.10671},
year={2024}
}
```
|
Bharathdamu/wav2vec2-large-xls-r-300m-hindi-colab | Bharathdamu | "2021-11-23T09:32:23Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-03-02T23:29:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
aapoliakova/bsf_cls | aapoliakova | "2024-10-01T16:23:57Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-01T16:23:20Z" | ---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: bsf_cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bsf_cls
This model was trained from scratch on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
|
damgomz/ft_4_17e6_base_x8 | damgomz | "2024-06-20T18:47:43Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-06-19T16:11:57Z" | ---
language: en
tags:
- text-classification
pipeline_tag: text-classification
widget:
- text: GEPS Techno is the pioneer of hybridization of renewable energies at sea.
We imagine, design and commercialize innovative off-grid systems that aim to generate
power at sea, stabilize and collect data. The success of our low power platforms
WAVEPEAL enabled us to scale-up the device up to WAVEGEM, the 150-kW capacity
platform.
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 99386.57754325868 |
| Emissions (Co2eq in kg) | 0.0601403347600176 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 1.173311165798372 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.1035265820205213 |
| Consumed energy (kWh) | 1.2768377478188917 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.19131916177077296 |
| Emissions (Co2eq in kg) | 0.03892640953777632 |
## Note
June 19, 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | albert-base-v2 |
| model_name | ft_4_17e6_base_x8 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1.7e-05 |
| batch_size | 4 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 29328 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | F-beta Score |
|---|---|---|---|
| 0 | 0.000000 | 0.710423 | 0.179513 |
| 1 | 0.309255 | 0.271906 | 0.890011 |
| 2 | 0.229729 | 0.252263 | 0.912838 |
| 3 | 0.187732 | 0.257567 | 0.920398 |
| 4 | 0.152391 | 0.274794 | 0.916626 |
| 5 | 0.124337 | 0.293389 | 0.918958 |
| 6 | 0.097752 | 0.291398 | 0.904726 |
|
abdullah2/clothes_shop_chatbot_LoRA | abdullah2 | "2024-06-23T02:26:38Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-21T09:13:53Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** abdullah2
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
zelk12/MT-Gen6fix-C-gemma-2-ItARv0.5-9B | zelk12 | "2025-02-02T10:42:17Z" | 22 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:IlyaGusev/gemma-2-9b-it-abliterated",
"base_model:merge:IlyaGusev/gemma-2-9b-it-abliterated",
"base_model:recoilme/recoilme-gemma-2-9B-v0.5",
"base_model:merge:recoilme/recoilme-gemma-2-9B-v0.5",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-02T10:35:45Z" | ---
base_model:
- recoilme/recoilme-gemma-2-9B-v0.5
- IlyaGusev/gemma-2-9b-it-abliterated
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [SLERP](https://en.wikipedia.org/wiki/Slerp) merge method.
### Models Merged
The following models were included in the merge:
* [recoilme/recoilme-gemma-2-9B-v0.5](https://huggingface.co/recoilme/recoilme-gemma-2-9B-v0.5)
* [IlyaGusev/gemma-2-9b-it-abliterated](https://huggingface.co/IlyaGusev/gemma-2-9b-it-abliterated)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: IlyaGusev/gemma-2-9b-it-abliterated
- model: recoilme/recoilme-gemma-2-9B-v0.5
merge_method: slerp
base_model: IlyaGusev/gemma-2-9b-it-abliterated
dtype: bfloat16
parameters:
t: 0.25
```
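For reference, SLERP interpolates each pair of weight tensors along the great-circle arc between them: slerp(w0, w1; t) = sin((1-t)·θ)/sin(θ)·w0 + sin(t·θ)/sin(θ)·w1, where θ is the angle between the flattened tensors. With t: 0.25, the merged weights sit a quarter of the way from the base model (IlyaGusev/gemma-2-9b-it-abliterated) toward recoilme-gemma-2-9B-v0.5.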
|
arunjayapal/LunarLander | arunjayapal | "2023-11-25T04:18:08Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-24T13:42:37Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.95 +/- 16.49
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; adjust it to the file actually stored in the repo):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by the card.
checkpoint = load_from_hub(repo_id="arunjayapal/LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Dhika/leaves | Dhika | "2023-05-24T17:04:12Z" | 15 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-05-24T14:33:55Z" | ---
license: apache-2.0
tags:
- image-classification
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: leaves
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: defect
type: imagefolder
config: Dhika--Leaves
split: validation
args: Dhika--Leaves
metrics:
- name: Accuracy
type: accuracy
value: 1.0
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# leaves
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the defect dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0012
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2249 | 1.25 | 10 | 0.0323 | 1.0 |
| 0.0177 | 2.5 | 20 | 0.0112 | 1.0 |
| 0.0086 | 3.75 | 30 | 0.0075 | 1.0 |
| 0.0063 | 5.0 | 40 | 0.0059 | 1.0 |
| 0.0051 | 6.25 | 50 | 0.0050 | 1.0 |
| 0.0045 | 7.5 | 60 | 0.0044 | 1.0 |
| 0.004 | 8.75 | 70 | 0.0040 | 1.0 |
| 0.0036 | 10.0 | 80 | 0.0036 | 1.0 |
| 0.0033 | 11.25 | 90 | 0.0034 | 1.0 |
| 0.0031 | 12.5 | 100 | 0.0031 | 1.0 |
| 0.0028 | 13.75 | 110 | 0.0029 | 1.0 |
| 0.0026 | 15.0 | 120 | 0.0027 | 1.0 |
| 0.0025 | 16.25 | 130 | 0.0025 | 1.0 |
| 0.0023 | 17.5 | 140 | 0.0024 | 1.0 |
| 0.0022 | 18.75 | 150 | 0.0023 | 1.0 |
| 0.0021 | 20.0 | 160 | 0.0021 | 1.0 |
| 0.002 | 21.25 | 170 | 0.0020 | 1.0 |
| 0.0019 | 22.5 | 180 | 0.0019 | 1.0 |
| 0.0018 | 23.75 | 190 | 0.0019 | 1.0 |
| 0.0017 | 25.0 | 200 | 0.0018 | 1.0 |
| 0.0016 | 26.25 | 210 | 0.0017 | 1.0 |
| 0.0016 | 27.5 | 220 | 0.0017 | 1.0 |
| 0.0015 | 28.75 | 230 | 0.0016 | 1.0 |
| 0.0015 | 30.0 | 240 | 0.0015 | 1.0 |
| 0.0014 | 31.25 | 250 | 0.0015 | 1.0 |
| 0.0014 | 32.5 | 260 | 0.0015 | 1.0 |
| 0.0013 | 33.75 | 270 | 0.0014 | 1.0 |
| 0.0013 | 35.0 | 280 | 0.0014 | 1.0 |
| 0.0013 | 36.25 | 290 | 0.0014 | 1.0 |
| 0.0013 | 37.5 | 300 | 0.0013 | 1.0 |
| 0.0012 | 38.75 | 310 | 0.0013 | 1.0 |
| 0.0012 | 40.0 | 320 | 0.0013 | 1.0 |
| 0.0012 | 41.25 | 330 | 0.0013 | 1.0 |
| 0.0012 | 42.5 | 340 | 0.0013 | 1.0 |
| 0.0012 | 43.75 | 350 | 0.0012 | 1.0 |
| 0.0012 | 45.0 | 360 | 0.0012 | 1.0 |
| 0.0011 | 46.25 | 370 | 0.0012 | 1.0 |
| 0.0012 | 47.5 | 380 | 0.0012 | 1.0 |
| 0.0011 | 48.75 | 390 | 0.0012 | 1.0 |
| 0.0011 | 50.0 | 400 | 0.0012 | 1.0 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
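A minimal inference sketch for this classifier (the image path is a placeholder):
```python
from transformers import pipeline

# "leaf_sample.jpg" is a placeholder image path.
classifier = pipeline("image-classification", model="Dhika/leaves")
print(classifier("leaf_sample.jpg"))
```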
|
VictorGil75/autotrain-rm-soccer_class-56881131860 | VictorGil75 | "2023-05-09T16:45:00Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:VictorGil75/autotrain-data-rm-soccer_class",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-09T16:43:58Z" | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- VictorGil75/autotrain-data-rm-soccer_class
co2_eq_emissions:
emissions: 0.4133097011272339
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 56881131860
- CO2 Emissions (in grams): 0.4133
## Validation Metrics
- Loss: 0.064
- Accuracy: 0.985
- Precision: 0.990
- Recall: 0.980
- AUC: 0.995
- F1: 0.985
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/VictorGil75/autotrain-rm-soccer_class-56881131860
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("VictorGil75/autotrain-rm-soccer_class-56881131860", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("VictorGil75/autotrain-rm-soccer_class-56881131860", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` |
mradermacher/ZEUS-8B-V7-GGUF | mradermacher | "2024-12-11T10:28:33Z" | 48 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"function calling",
"roleplay",
"conversational",
"en",
"base_model:T145/ZEUS-8B-V7",
"base_model:quantized:T145/ZEUS-8B-V7",
"license:llama3.1",
"endpoints_compatible",
"region:us"
] | null | "2024-12-10T23:48:33Z" | ---
base_model: T145/ZEUS-8B-V7
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- mergekit
- merge
- function calling
- roleplay
- conversational
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/T145/ZEUS-8B-V7
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/ZEUS-8B-V7-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q4_0_4_4.gguf) | Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ZEUS-8B-V7-GGUF/resolve/main/ZEUS-8B-V7.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
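To try one of the quants above from Python, one option is `llama-cpp-python`, which can fetch a GGUF straight from the Hub (a sketch; pick any filename from the table):
```python
from llama_cpp import Llama

# Q4_K_M is the "fast, recommended" middle ground from the table above.
llm = Llama.from_pretrained(
    repo_id="mradermacher/ZEUS-8B-V7-GGUF",
    filename="ZEUS-8B-V7.Q4_K_M.gguf",
)
out = llm("The capital of France is", max_tokens=16)
print(out["choices"][0]["text"])
```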
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
tingting/llama3_8binstruct_lora_model_balanced_Data_160 | tingting | "2024-05-02T14:12:05Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-02T14:11:56Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
GummyC/llama2-qlora-finetunined-french | GummyC | "2023-09-06T09:09:52Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-06T09:09:35Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
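For reference, the settings above map onto a `transformers` `BitsAndBytesConfig` roughly as follows (a sketch of the equivalent config object, not the original training script):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the quantization values listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```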
### Framework versions
- PEFT 0.6.0.dev0
|
vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2 | vnktrmnb | "2023-07-13T11:56:45Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"question-answering",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-07-12T09:50:00Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3530
- Train End Logits Accuracy: 0.6339
- Train Start Logits Accuracy: 0.6471
- Validation Loss: 0.9662
- Validation End Logits Accuracy: 0.7197
- Validation Start Logits Accuracy: 0.7298
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
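A minimal usage sketch (an assumption, not from the original card); the repo ships TensorFlow weights, so the pipeline is pinned to `framework="tf"`:

```python
from transformers import pipeline

# Extractive question answering with the fine-tuned multilingual BERT.
qa = pipeline(
    "question-answering",
    model="vnktrmnb/bert-base-multilingual-cased-finetuned-SQUAD2",
    framework="tf",
)
print(qa(question="Where is the Eiffel Tower?", context="The Eiffel Tower is in Paris."))
```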
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 11957, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.3530 | 0.6339 | 0.6471 | 0.9662 | 0.7197 | 0.7298 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
nhungphammmmm/23e1fdff-c859-4ddc-b758-b99c8c5a5d7a | nhungphammmmm | "2025-01-18T23:16:09Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2.5-1.5B",
"base_model:adapter:unsloth/Qwen2.5-1.5B",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-18T23:02:48Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2.5-1.5B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 23e1fdff-c859-4ddc-b758-b99c8c5a5d7a
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2.5-1.5B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- b74aeaef39e0a566_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/b74aeaef39e0a566_train_data.json
type:
field_instruction: abstract
field_output: title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: nhungphammmmm/23e1fdff-c859-4ddc-b758-b99c8c5a5d7a
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/b74aeaef39e0a566_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7fc40eed-6bc5-4668-b817-a908f0a659fe
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7fc40eed-6bc5-4668-b817-a908f0a659fe
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 23e1fdff-c859-4ddc-b758-b99c8c5a5d7a
This model is a fine-tuned version of [unsloth/Qwen2.5-1.5B](https://huggingface.co/unsloth/Qwen2.5-1.5B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5882
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4394 | 0.0248 | 200 | 1.5882 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
habulaj/zeke1 | habulaj | "2024-03-29T21:22:53Z" | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-03-29T21:22:47Z" | ---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Zeke Abuh
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - zeke1
These are LoRA adaptation weights for [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0). The weights were trained on the instance prompt "Zeke Abuh" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
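A minimal inference sketch (an assumption, not from the original card): load the SDXL base model, attach these LoRA weights, and prompt with the instance token.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Base SDXL pipeline plus the DreamBooth LoRA weights from this repo.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("habulaj/zeke1")
image = pipe("a photo of Zeke Abuh", num_inference_steps=30).images[0]
image.save("zeke.png")
```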
|
Mantis-VL/mantis-8b-idefics2-video-eval-95k-mantis-2epoch_4096 | Mantis-VL | "2024-05-27T17:10:12Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"idefics2",
"image-text-to-text",
"generated_from_trainer",
"base_model:TIGER-Lab/Mantis-8B-Idefics2",
"base_model:finetune:TIGER-Lab/Mantis-8B-Idefics2",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2024-05-27T03:51:18Z" | ---
license: apache-2.0
base_model: TIGER-Lab/Mantis-8B-Idefics2
tags:
- generated_from_trainer
model-index:
- name: mantis-8b-idefics2-video-eval-95k-mantis-2epoch_4096
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/dongfu/Mantis/runs/zbuvx23e)
# mantis-8b-idefics2-video-eval-95k-mantis-2epoch_4096
This model is a fine-tuned version of [TIGER-Lab/Mantis-8B-Idefics2](https://huggingface.co/TIGER-Lab/Mantis-8B-Idefics2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
|
nat-hunt/f6527ea0-dec1-4ba4-b828-18f2aa739c0c | nat-hunt | "2025-01-13T06:03:54Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Qwen2-1.5B-Instruct",
"base_model:adapter:unsloth/Qwen2-1.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-13T06:03:13Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/Qwen2-1.5B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f6527ea0-dec1-4ba4-b828-18f2aa739c0c
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Qwen2-1.5B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 18f939094955d5d1_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/18f939094955d5d1_train_data.json
type:
field_instruction: full_question
field_output: full_answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/f6527ea0-dec1-4ba4-b828-18f2aa739c0c
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/18f939094955d5d1_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 8f0dc8e3-50ce-403a-9083-b2a58e58506a
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 8f0dc8e3-50ce-403a-9083-b2a58e58506a
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f6527ea0-dec1-4ba4-b828-18f2aa739c0c
This model is a fine-tuned version of [unsloth/Qwen2-1.5B-Instruct](https://huggingface.co/unsloth/Qwen2-1.5B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0183 | 1 | nan |
| 0.0 | 0.0548 | 3 | nan |
| 0.0 | 0.1096 | 6 | nan |
| 0.0 | 0.1644 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
tingting/mistral7binstruct02_lora_model_balanced_Data_400 | tingting | "2024-05-02T14:21:53Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-02T14:21:40Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** tingting
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
havinash-ai/df2ca28f-8c86-4119-bbba-c66a941c1b09 | havinash-ai | "2025-01-22T11:40:51Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-1.7B-Instruct",
"base_model:adapter:unsloth/SmolLM2-1.7B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-22T11:37:06Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-1.7B-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: df2ca28f-8c86-4119-bbba-c66a941c1b09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-1.7B-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 38b1156500832d5f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/38b1156500832d5f_train_data.json
type:
field_instruction: instruction
field_output: output
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: havinash-ai/df2ca28f-8c86-4119-bbba-c66a941c1b09
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/38b1156500832d5f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 39d33bf1-ff44-4817-9122-582b1c78d1cc
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 39d33bf1-ff44-4817-9122-582b1c78d1cc
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# df2ca28f-8c86-4119-bbba-c66a941c1b09
This model is a fine-tuned version of [unsloth/SmolLM2-1.7B-Instruct](https://huggingface.co/unsloth/SmolLM2-1.7B-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0002 | 1 | nan |
| 0.0 | 0.0005 | 3 | nan |
| 0.0 | 0.0010 | 6 | nan |
| 0.0 | 0.0015 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
gokulsrinivasagan/bert_base_lda_100_stsb | gokulsrinivasagan | "2024-11-22T14:36:23Z" | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/bert_base_lda_100",
"base_model:finetune:gokulsrinivasagan/bert_base_lda_100",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-22T14:34:33Z" | ---
library_name: transformers
language:
- en
base_model: gokulsrinivasagan/bert_base_lda_100
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- spearmanr
model-index:
- name: bert_base_lda_100_stsb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE STSB
type: glue
args: stsb
metrics:
- name: Spearmanr
type: spearmanr
value: .nan
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_base_lda_100_stsb
This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100](https://huggingface.co/gokulsrinivasagan/bert_base_lda_100) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3354
- Pearson: nan
- Spearmanr: nan
- Combined Score: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 6.0379 | 1.0 | 23 | 2.8532 | nan | nan | nan |
| 2.286 | 2.0 | 46 | 2.6158 | nan | nan | nan |
| 2.1985 | 3.0 | 69 | 2.3354 | nan | nan | nan |
| 2.1934 | 4.0 | 92 | 2.4655 | nan | nan | nan |
| 2.1771 | 5.0 | 115 | 2.5613 | nan | nan | nan |
| 2.1903 | 6.0 | 138 | 2.3448 | nan | nan | nan |
| 2.2164 | 7.0 | 161 | 3.0915 | nan | nan | nan |
| 2.2509 | 8.0 | 184 | 2.3759 | nan | nan | nan |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
mbahrsnc/mini-mcqueen-Q4_K_M-GGUF | mbahrsnc | "2024-07-19T00:00:44Z" | 5 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"cognitivecomputations/TinyDolphin-2.8-1.1b",
"TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"llama-cpp",
"gguf-my-repo",
"base_model:mbahrsnc/mini-mcqueen",
"base_model:quantized:mbahrsnc/mini-mcqueen",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-18T23:48:58Z" | ---
base_model: mbahrsnc/mini-mcqueen
tags:
- merge
- mergekit
- lazymergekit
- cognitivecomputations/TinyDolphin-2.8-1.1b
- TinyLlama/TinyLlama-1.1B-Chat-v1.0
- llama-cpp
- gguf-my-repo
---
# mbahrsnc/mini-mcqueen-Q4_K_M-GGUF
This model was converted to GGUF format from [`mbahrsnc/mini-mcqueen`](https://huggingface.co/mbahrsnc/mini-mcqueen) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/mbahrsnc/mini-mcqueen) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo mbahrsnc/mini-mcqueen-Q4_K_M-GGUF --hf-file mini-mcqueen-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo mbahrsnc/mini-mcqueen-Q4_K_M-GGUF --hf-file mini-mcqueen-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo mbahrsnc/mini-mcqueen-Q4_K_M-GGUF --hf-file mini-mcqueen-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo mbahrsnc/mini-mcqueen-Q4_K_M-GGUF --hf-file mini-mcqueen-q4_k_m.gguf -c 2048
```
|
zgold5670/distilbert-base-uncased-finetuned-cola | zgold5670 | "2023-08-28T08:35:02Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-08-09T09:48:31Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5357575991513603
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8336
- Matthews Correlation: 0.5358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5218 | 1.0 | 535 | 0.4680 | 0.4901 |
| 0.3482 | 2.0 | 1070 | 0.5303 | 0.4931 |
| 0.2321 | 3.0 | 1605 | 0.6078 | 0.5207 |
| 0.1778 | 4.0 | 2140 | 0.7810 | 0.5341 |
| 0.1262 | 5.0 | 2675 | 0.8336 | 0.5358 |
### Framework versions
- Transformers 4.32.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.4
- Tokenizers 0.13.3
|
innthomas/ppo-LunarLander-v2 | innthomas | "2023-11-26T22:09:50Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-26T22:09:29Z" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 272.91 +/- 23.36
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub and load it (the filename is an assumption)
checkpoint = load_from_hub("innthomas/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
ai-medical/fine_tuned_deepseek_v1_empathy | ai-medical | "2025-02-03T11:54:24Z" | 16 | 0 | peft | [
"peft",
"safetensors",
"base_model:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"base_model:adapter:deepseek-ai/DeepSeek-R1-Distill-Llama-8B",
"region:us"
] | null | "2025-01-26T14:53:27Z" | ---
base_model: deepseek-ai/DeepSeek-R1-Distill-Llama-8B
library_name: peft
---
# Model Card for Fine-Tuned DeepSeek V1 Empath
## Model Summary
Fine-Tuned DeepSeek V1 Empath is a large language model fine-tuned to enhance emotional understanding and generate needs-based responses. This model is designed for use in psychology, therapy, conflict resolution, human-computer interaction, and online moderation.
## Model Details
### Model Description
- **Developed by:** AI Medical in collaboration with Ruslanmv.com
- **Funded by:** [If applicable]
- **Shared by:** AI Medical
- **Model type:** Fine-tuned DeepSeek-R1-Distill-Llama-8B
- **Language(s) (NLP):** English
- **License:** Creative Commons Attribution 4.0 International License (CC BY 4.0)
- **Fine-tuned from model:** deepseek-ai/DeepSeek-R1-Distill-Llama-8B
### Model Sources
- **Repository:** [Hugging Face Model Repository](https://huggingface.co/ai-medical/fine_tuned_deepseek_v1_empathy)
- **Demo:** [Empathy Chatbot v1](https://huggingface.co/spaces/ruslanmv/Empathy_Chatbot_v1)
## Uses
### Direct Use
- **Psychology & Therapy:** Assisting professionals in understanding and responding empathetically to patient emotions.
- **Conflict Resolution:** Helping mediators decode emotional expressions and address underlying needs.
- **Human-Computer Interaction:** Enhancing chatbots and virtual assistants with emotionally aware responses.
- **Social Media Moderation:** Reducing toxicity and improving online discourse through need-based responses.
- **Education:** Supporting emotional intelligence training and communication skill development.
### Downstream Use
- Fine-tuning for specialized applications in mental health, conflict resolution, or AI-driven assistance.
- Integration into virtual therapists, mental health applications, and online support systems.
### Out-of-Scope Use
- Not a substitute for professional psychological evaluation or medical treatment.
- Not suitable for high-risk applications requiring absolute accuracy in emotional interpretation.
## Bias, Risks, and Limitations
- **Bias:** As with any NLP model, biases may exist due to the dataset and training methodology.
- **Risk of Misinterpretation:** Emotional expressions are subjective and may be misclassified in complex scenarios.
- **Generalization Limitations:** May not fully capture cultural and contextual variations in emotional expressions.
### Recommendations
Users should verify outputs before applying them in professional or high-stakes settings. Continuous evaluation and user feedback are recommended.
## How to Get Started with the Model
```python
from transformers import pipeline
model_name = "ai-medical/fine_tuned_deepseek_v1_empathy"
model = pipeline("text-generation", model=model_name)
prompt = "I feel betrayed."
response = model(prompt, max_length=50)
print(response)
```
## Training Details
### Training Data
- **Dataset:** Annotated dataset mapping evaluative expressions to emotions and needs.
- **Annotations:** 1,500+ labeled examples linking expressions to emotional states and corresponding needs.
### Training Procedure
#### Preprocessing
- Tokenized using Hugging Face `transformers` library.
- Augmented with synonym variations and paraphrased sentences.
#### Training Hyperparameters
- **Training regime:** Mixed precision training using QLoRA.
- **Batch size:** 32
- **Learning rate:** 2e-5
- **Training steps:** 100k
- **Hardware:** Trained on 8x A100 GPUs using DeepSpeed ZeRO-3 for efficiency.
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
- Held-out dataset containing unseen evaluative expressions.
#### Factors
- Performance across different emotional expression categories.
- Sensitivity to nuanced phrasing and variations.
#### Metrics
- **Accuracy:** Measures correct classification of emotions and needs.
- **Precision & Recall:** Evaluates the balance between capturing true emotions and avoiding false positives.
- **F1-Score:** Measures the balance between precision and recall.
### Results
- **Accuracy:** 89.5%
- **F1-Score:** 87.2%
- **Latency:** <500ms response time
## Environmental Impact
- **Hardware Type:** A100 GPUs
- **Training Time:** 120 hours
- **Carbon Emitted:** Estimated using [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).
## Technical Specifications
### Model Architecture and Objective
- Base Model: DeepSeek-R1-Distill-Llama-8B
- Fine-tuned using QLoRA for parameter-efficient training.
### Compute Infrastructure
- **Hardware:** AWS spot instances (8x A100 GPUs)
- **Software:** Hugging Face `transformers`, DeepSpeed, PyTorch
## Citation
If you use this model, please cite:
```bibtex
@misc{ai-medical_2025,
author = {AI Medical, ruslanmv.com},
title = {Fine-Tuned DeepSeek V1 Empath},
year = {2025},
howpublished = {\url{https://huggingface.co/ai-medical/fine_tuned_deepseek_v1_empathy}}
}
```
## More Information
- **Model Card Authors:** AI Medical Team, ruslanmv.com
- **Framework Versions:** PEFT 0.14.0
|
MyriamLbhn/emotion-nlp-classification | MyriamLbhn | "2023-07-07T12:24:09Z" | 123 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-07-07T11:52:17Z" | ---
license: mit
---
As part of a training course project, this repo uses the trained and fine-tuned model from michellejieli/emotion_text_classifier.
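A minimal usage sketch (an assumption, not stated in the original card):

```python
from transformers import pipeline

# Emotion classification; the label set follows the upstream
# michellejieli/emotion_text_classifier fine-tune.
classifier = pipeline("text-classification", model="MyriamLbhn/emotion-nlp-classification")
print(classifier("I can't believe you did this to me!"))
```
|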
kajamo/model_16 | kajamo | "2024-06-10T14:18:06Z" | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:adapter:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-06-10T12:32:15Z" | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: distilbert-base-uncased
model-index:
- name: model_16
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model_16
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6194
- eval_accuracy: 0.7624
- eval_precision: 0.7632
- eval_recall: 0.7624
- eval_f1: 0.7621
- eval_runtime: 42.8182
- eval_samples_per_second: 285.977
- eval_steps_per_second: 17.89
- epoch: 14.0
- step: 42868
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.2
- Pytorch 2.1.2
- Datasets 2.19.2
- Tokenizers 0.19.1 |
giannisdaras/ambient_laws_celeba_sigma_0.05_corruption_0.1_keep_1.0 | giannisdaras | "2024-11-07T14:07:30Z" | 8 | 0 | null | [
"safetensors",
"SongUNet",
"arxiv:2411.02780",
"license:mit",
"region:us"
] | null | "2024-10-21T18:13:56Z" | ---
license: mit
---
# Model Card for `ambient_laws_celeba_sigma_0.05_corruption_0.1_keep_1.0`

## General Information 📚
This model is part of a collection of models that were trained for the paper: [How Much is a Noisy Image Worth?](https://giannisdaras.github.io/publications/noisy_image_worth.pdf) 👀.
In this paper, we show that noisy images can be very useful in training diffusion generative models, as long as a small set of clean images is available.
## How to use this model 🚀
Detailed instructions are in our [GitHub repository](https://github.com/giannisdaras/ambient-laws).
You can clone the repository with the following command:
```bash
git clone https://github.com/giannisdaras/ambient-laws.git
```
and you can use the following function to load the model from the hub:
```python
import dnnlib
import json
from huggingface_hub import hf_hub_download
def load_hf_checkpoint(repo_id):
    config_path = hf_hub_download(repo_id=repo_id, filename="config.json")
    model_config = json.load(open(config_path, "r", encoding="utf-8"))
    model_config['class_name'] = 'training.networks.EDMPrecond'
    net = dnnlib.util.construct_class_by_name(**model_config)
    net = net.from_pretrained(repo_id)
    return net

model = load_hf_checkpoint("giannisdaras/ambient_laws_celeba_sigma_0.05_corruption_0.1_keep_1.0")
```
## Model Description 📝
This model was trained on CelebA using 100.0% of the samples in the dataset.
Of the samples kept, 10.0% were clean images and 90.0% were noisy images at noise level sigma=0.05.
The model was trained for a total of 100000 training steps.
# Citation 📄
If you find this work useful, please consider citing the following paper:
```
@article{daras2024imageworth,
author = {Giannis Daras and Yeshwanth Cherapanamjeri and Constantinos Daskalakis},
title = {How much is a noisy image worth? Data scaling laws for Ambient Diffusion.},
journal = {arXiv preprint arXiv:2411.02780},
year = {2024},
url = {https://arxiv.org/abs/2411.02780}
}
```
This model was shared by [@giannisdaras](https://hf.co/giannisdaras). |
waboucay/xlm-roberta-longformer-base-4096-rua_wl_3_classes | waboucay | "2023-10-14T09:13:05Z" | 6 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"nli",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-14T09:11:04Z" | ---
language:
- fr
tags:
- nli
metrics:
- f1
---
## Eval results
We obtain the following results on ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 70.7 | 70.3 |
| test       | 71.1               | 70.7               |
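A minimal inference sketch (an assumption, not from the original card): score a premise/hypothesis pair NLI-style.

```python
from transformers import pipeline

# The three class names are not listed in the card, so treat the output
# labels (e.g. entailment / neutral / contradiction) as an assumption.
nli = pipeline(
    "text-classification",
    model="waboucay/xlm-roberta-longformer-base-4096-rua_wl_3_classes",
)
print(nli({"text": "Le gouvernement a réduit les impôts.", "text_pair": "Les impôts ont baissé."}))
```
|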
hopkins/eng-kor-simcse.near2.4440 | hopkins | "2023-07-04T19:54:27Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-04T19:36:58Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-kor-simcse.near2.4440
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-kor-simcse.near2.4440
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0035
- Bleu: 7.3225
## Model description
More information needed
## Intended uses & limitations
More information needed
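A minimal English→Korean sketch (an assumption, not from the original card — it presumes the repo ships the standard mBART-50 tokenizer):

```python
from transformers import MBart50TokenizerFast, MBartForConditionalGeneration

tok = MBart50TokenizerFast.from_pretrained("hopkins/eng-kor-simcse.near2.4440", src_lang="en_XX")
model = MBartForConditionalGeneration.from_pretrained("hopkins/eng-kor-simcse.near2.4440")

inputs = tok("The weather is nice today.", return_tensors="pt")
# Force the decoder to start with the Korean language code, per mBART-50 usage.
out = model.generate(**inputs, forced_bos_token_id=tok.lang_code_to_id["ko_KR"])
print(tok.batch_decode(out, skip_special_tokens=True))
```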
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
vwxyzjn/Breakout-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1 | vwxyzjn | "2023-03-02T23:04:28Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Breakout-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-02-17T04:51:21Z" | ---
tags:
- Breakout-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Breakout-v5
type: Breakout-v5
metrics:
- type: mean_reward
value: 775.00 +/- 175.04
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Breakout-v5**
This is a trained model of a PPO agent playing Breakout-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_impala_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_impala_atari_wrapper --env-id Breakout-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/vwxyzjn/Breakout-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_impala_atari_wrapper.py
curl -OL https://huggingface.co/vwxyzjn/Breakout-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/vwxyzjn/Breakout-v5-cleanba_ppo_envpool_impala_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_impala_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --save-model --upload-model --env-id Breakout-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Breakout-v5',
'exp_name': 'cleanba_ppo_envpool_impala_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': '',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanRL',
'world_size': 2}
```
|
mradermacher/MedLLaMA-3-GGUF | mradermacher | "2024-05-28T01:30:29Z" | 43 | 1 | transformers | [
"transformers",
"gguf",
"llama-3-8b",
"sft",
"medical",
"en",
"ar",
"dataset:lighteval/med_mcqa",
"dataset:qiaojin/PubMedQA",
"dataset:bigbio/med_qa",
"base_model:Reverb/MedLLaMA-3",
"base_model:quantized:Reverb/MedLLaMA-3",
"license:cc-by-nc-nd-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-05-28T01:02:03Z" | ---
base_model: Reverb/MedLLaMA-3
datasets:
- lighteval/med_mcqa
- qiaojin/PubMedQA
- bigbio/med_qa
language:
- en
- ar
library_name: transformers
license: cc-by-nc-nd-4.0
quantized_by: mradermacher
tags:
- llama-3-8b
- sft
- medical
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Reverb/MedLLaMA-3
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
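As an alternative to the llama.cpp CLI, here is a minimal sketch (not from the original card) that pulls one quant from the table below and runs it with `llama-cpp-python`:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download a single quant (Q4_K_M as an example) and run a short completion.
path = hf_hub_download("mradermacher/MedLLaMA-3-GGUF", "MedLLaMA-3.Q4_K_M.gguf")
llm = Llama(model_path=path, n_ctx=2048)
print(llm("Question: What is hypertension?\nAnswer:", max_tokens=64)["choices"][0]["text"])
```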
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MedLLaMA-3-GGUF/resolve/main/MedLLaMA-3.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
2O24dpower2024/xlm-roberta-base-finetuned-panx-en | 2O24dpower2024 | "2024-01-18T20:50:56Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-01-12T23:33:38Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-en
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
config: PAN-X.en
split: validation
args: PAN-X.en
metrics:
- name: F1
type: f1
value: 0.6722314969393434
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4044
- F1: 0.6722
## Model description
More information needed
## Intended uses & limitations
More information needed
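A minimal usage sketch (an assumption, not from the original card); PAN-X uses PER/ORG/LOC-style entity tags:

```python
from transformers import pipeline

# Named entity recognition with span aggregation.
ner = pipeline(
    "token-classification",
    model="2O24dpower2024/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",
)
print(ner("Hugging Face was founded in New York City."))
```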
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1115 | 1.0 | 50 | 0.6302 | 0.4885 |
| 0.5104 | 2.0 | 100 | 0.4175 | 0.6527 |
| 0.35 | 3.0 | 150 | 0.4044 | 0.6722 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
sail-rvc/barismanco | sail-rvc | "2023-07-14T07:35:22Z" | 1 | 0 | transformers | [
"transformers",
"rvc",
"sail-rvc",
"audio-to-audio",
"endpoints_compatible",
"region:us"
] | audio-to-audio | "2023-07-14T07:35:01Z" |
---
pipeline_tag: audio-to-audio
tags:
- rvc
- sail-rvc
---
# barismanco
## RVC Model

This model repo was automatically generated.
Date: 2023-07-14 07:35:22
Bot Name: juuxnscrap
Model Type: RVC
Source: https://huggingface.co/juuxn/RVCModels/
Reason: Converting into loadable format for https://github.com/chavinlo/rvc-runpod
|
mradermacher/prometheus-8x7b-v2.0-i1-GGUF | mradermacher | "2024-11-30T15:59:46Z" | 131 | 2 | transformers | [
"transformers",
"gguf",
"text2text-generation",
"en",
"dataset:prometheus-eval/Feedback-Collection",
"dataset:prometheus-eval/Preference-Collection",
"base_model:prometheus-eval/prometheus-8x7b-v2.0",
"base_model:quantized:prometheus-eval/prometheus-8x7b-v2.0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text2text-generation | "2024-11-30T12:12:31Z" | ---
base_model: prometheus-eval/prometheus-8x7b-v2.0
datasets:
- prometheus-eval/Feedback-Collection
- prometheus-eval/Preference-Collection
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- text2text-generation
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/prometheus-eval/prometheus-8x7b-v2.0
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ1_S.gguf) | i1-IQ1_S | 9.9 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ1_M.gguf) | i1-IQ1_M | 10.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 12.7 | |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ2_XS.gguf) | i1-IQ2_XS | 14.0 | |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ2_S.gguf) | i1-IQ2_S | 14.2 | |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ2_M.gguf) | i1-IQ2_M | 15.6 | |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q2_K.gguf) | i1-Q2_K | 17.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 18.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ3_XS.gguf) | i1-IQ3_XS | 19.5 | |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ3_S.gguf) | i1-IQ3_S | 20.5 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q3_K_S.gguf) | i1-Q3_K_S | 20.5 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ3_M.gguf) | i1-IQ3_M | 21.5 | |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q3_K_M.gguf) | i1-Q3_K_M | 22.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q3_K_L.gguf) | i1-Q3_K_L | 24.3 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-IQ4_XS.gguf) | i1-IQ4_XS | 25.2 | |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q4_0.gguf) | i1-Q4_0 | 26.7 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q4_K_S.gguf) | i1-Q4_K_S | 26.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q4_K_M.gguf) | i1-Q4_K_M | 28.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q5_K_S.gguf) | i1-Q5_K_S | 32.3 | |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q5_K_M.gguf) | i1-Q5_K_M | 33.3 | |
| [GGUF](https://huggingface.co/mradermacher/prometheus-8x7b-v2.0-i1-GGUF/resolve/main/prometheus-8x7b-v2.0.i1-Q6_K.gguf) | i1-Q6_K | 38.5 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Cong-HGMedia/output | Cong-HGMedia | "2024-02-23T03:44:19Z" | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"dreambooth",
"base_model:sinkinai/majicMIX-realistic-v5",
"base_model:finetune:sinkinai/majicMIX-realistic-v5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-23T10:28:13Z" | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- dreambooth
base_model: sinkinai/majicMIX-realistic-v5
instance_prompt: a photo of sks dog
inference: true
---
# DreamBooth - Cong-HGMedia/output
This is a DreamBooth model derived from sinkinai/majicMIX-realistic-v5. The weights were trained on a photo of sks dog using [DreamBooth](https://dreambooth.github.io/).
You can find some example images in the following.
DreamBooth for the text encoder was enabled: False.
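A minimal inference sketch (an assumption, not from the original card):

```python
import torch
from diffusers import StableDiffusionPipeline

# This repo hosts a full Stable Diffusion pipeline, so it loads directly.
pipe = StableDiffusionPipeline.from_pretrained("Cong-HGMedia/output", torch_dtype=torch.float16).to("cuda")
image = pipe("a photo of sks dog", num_inference_steps=30).images[0]
image.save("sks_dog.png")
```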
|
sxandie/san_BERT_newData-oldData-combo_20may | sxandie | "2023-06-21T07:43:17Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"bert",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-06-21T02:52:26Z" | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: sxandie/san_BERT1_newData-oldData
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# sxandie/san_BERT1_newData-oldData
This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0885
- Validation Loss: 0.1540
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 35640, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.2797 | 0.1903 | 0 |
| 0.1599 | 0.1649 | 1 |
| 0.1224 | 0.1574 | 2 |
| 0.1009 | 0.1533 | 3 |
| 0.0885 | 0.1540 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.2.2
- Tokenizers 0.13.3
|
nathanialhunt/9bae73de-e4db-4f0b-b1f1-84438c56f7a1 | nathanialhunt | "2025-01-26T06:42:52Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama_v1.1",
"base_model:adapter:TinyLlama/TinyLlama_v1.1",
"license:apache-2.0",
"region:us"
] | null | "2025-01-26T06:40:21Z" | ---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama_v1.1
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 9bae73de-e4db-4f0b-b1f1-84438c56f7a1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: TinyLlama/TinyLlama_v1.1
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- f251bafddc1c416f_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/f251bafddc1c416f_train_data.json
type:
field_input: item_cast
field_instruction: item_title
field_output: comment
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/9bae73de-e4db-4f0b-b1f1-84438c56f7a1
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/f251bafddc1c416f_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b7c42af7-32e6-4423-bce5-9d6119627078
wandb_project: Birthday-SN56-5-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b7c42af7-32e6-4423-bce5-9d6119627078
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 9bae73de-e4db-4f0b-b1f1-84438c56f7a1
This model is a fine-tuned version of [TinyLlama/TinyLlama_v1.1](https://huggingface.co/TinyLlama/TinyLlama_v1.1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9908
## Model description
More information needed
## Intended uses & limitations
More information needed
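This repo contains a LoRA adapter rather than merged weights; below is a minimal loading sketch with PEFT, using the base model named in the config above:
```python
# Minimal sketch: attach the LoRA adapter to its TinyLlama base with PEFT.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("TinyLlama/TinyLlama_v1.1")
model = PeftModel.from_pretrained(base, "nathanialhunt/9bae73de-e4db-4f0b-b1f1-84438c56f7a1")
tokenizer = AutoTokenizer.from_pretrained("TinyLlama/TinyLlama_v1.1")

inputs = tokenizer("Write a short movie comment:", return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=48)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```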
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 4.3962 | 0.0003 | 1 | 4.8692 |
| 3.9279 | 0.0037 | 13 | 4.5461 |
| 3.8057 | 0.0073 | 26 | 4.1466 |
| 3.8344 | 0.0110 | 39 | 3.9908 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
nightdude/kanji-lora-conv | nightdude | "2024-02-08T04:40:09Z" | 1 | 1 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:adapter:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-02-08T03:37:14Z" |
---
license: creativeml-openrail-m
base_model: CompVis/stable-diffusion-v1-4
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - nightdude/kanji-lora-conv
These are LoRA adaptation weights for CompVis/stable-diffusion-v1-4. The weights were fine-tuned on the nightdude/sakana-kanji dataset. You can find some example images below.




|
PrunaAI/HuggingFaceTB-SmolLM2-1.7B-Instruct-bnb-8bit-smashed | PrunaAI | "2025-02-21T03:27:22Z" | 5 | 0 | null | [
"safetensors",
"llama",
"pruna-ai",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2024-11-21T14:13:05Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: HuggingFaceTB/SmolLM2-1.7B-Instruct
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentation to learn more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with llm-int8.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained with the configuration described in `model/smash_config.json` and after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend running the model directly under your use-case conditions to see whether the smashed model benefits you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check the requirements of the original repo HuggingFaceTB/SmolLM2-1.7B-Instruct. In particular, check the Python, CUDA, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install transformers accelerate 'bitsandbytes>0.37.0'
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/HuggingFaceTB-SmolLM2-1.7B-Instruct-bnb-8bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM2-1.7B-Instruct")  # base model inferred from this repo's name
input_ids = tokenizer("What is the color of prunes?,", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model (HuggingFaceTB/SmolLM2-1.7B-Instruct), which provided the base model, before using this one. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
huggingtweets/enderdev_ | huggingtweets | "2021-07-16T20:30:38Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://www.huggingtweets.com/enderdev_/1626467434270/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1415445991503839234/RSxcTJiJ_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Kieran</div>
<div style="text-align: center; font-size: 14px;">@enderdev_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Kieran.
| Data | Kieran |
| --- | --- |
| Tweets downloaded | 2518 |
| Retweets | 388 |
| Short tweets | 691 |
| Tweets kept | 1439 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2qz7ps6o/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @enderdev_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3aqdw40t) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3aqdw40t/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
                     model='huggingtweets/enderdev_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
alex-uv2/wav2vec2-base-finetuned-gtzan2 | alex-uv2 | "2024-10-20T17:36:16Z" | 160 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:gtzan",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-10-20T16:46:29Z" | ---
library_name: transformers
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
datasets:
- gtzan
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-gtzan2
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: gtzan
type: gtzan
config: all
split: train
args: all
metrics:
- name: Accuracy
type: accuracy
value: 0.87
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-gtzan2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the gtzan dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5863
- Accuracy: 0.87
## Model description
More information needed
## Intended uses & limitations
More information needed
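A minimal inference sketch, assuming the standard audio-classification pipeline:
```python
# Minimal sketch: music-genre prediction with the audio-classification pipeline.
from transformers import pipeline

clf = pipeline("audio-classification", model="alex-uv2/wav2vec2-base-finetuned-gtzan2")
print(clf("clip.wav"))  # path to a local music clip; the pipeline resamples to 16 kHz
```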
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9228 | 1.0 | 113 | 1.9482 | 0.29 |
| 1.2404 | 2.0 | 226 | 1.3398 | 0.65 |
| 1.165 | 3.0 | 339 | 1.2144 | 0.6 |
| 0.5972 | 4.0 | 452 | 0.8099 | 0.78 |
| 0.5696 | 5.0 | 565 | 0.8099 | 0.75 |
| 0.6076 | 6.0 | 678 | 0.5800 | 0.82 |
| 0.4794 | 7.0 | 791 | 0.6297 | 0.83 |
| 0.2065 | 8.0 | 904 | 0.5690 | 0.88 |
| 0.1131 | 9.0 | 1017 | 0.5689 | 0.89 |
| 0.0642 | 10.0 | 1130 | 0.5863 | 0.87 |
### Framework versions
- Transformers 4.45.2
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.1
|
ILKT/2024-06-24_22-31-28_epoch_59 | ILKT | "2024-06-28T13:21:47Z" | 143 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"ILKT",
"sentence-similarity",
"mteb",
"feature-extraction",
"custom_code",
"en",
"pl",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-06-25T15:26:52Z" | ---
language:
- en
- pl
model-index:
- name: 2024-06-24_22-31-28_epoch_59
results:
- dataset:
config: default
name: MTEB AllegroReviews
revision: b89853e6de927b0e3bfa8ecc0e56fe4e02ceafc6
split: test
type: PL-MTEB/allegro-reviews
metrics:
- type: accuracy
value: 25.556660039761432
- type: f1
value: 23.694775136076217
task:
type: Classification
- dataset:
config: default
name: MTEB CBD
revision: 36ddb419bcffe6a5374c3891957912892916f28d
split: test
type: PL-MTEB/cbd
metrics:
- type: accuracy
value: 58.67
- type: ap
value: 16.13010554799155
- type: f1
value: 48.95110856075508
task:
type: Classification
- dataset:
config: default
name: MTEB CDSC-E
revision: 0a3d4aa409b22f80eb22cbf59b492637637b536d
split: test
type: PL-MTEB/cdsce-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB CDSC-R
revision: 1cd6abbb00df7d14be3dbd76a7dcc64b3a79a7cd
split: test
type: PL-MTEB/cdscr-sts
metrics: []
task:
type: STS
- dataset:
config: default
name: MTEB EightTagsClustering
revision: 78b962b130c6690659c65abf67bf1c2f030606b6
split: test
type: PL-MTEB/8tags-clustering
metrics:
- type: v_measure
value: 9.807117722513956
- type: v_measure_std
value: 1.2346282691784827
task:
type: Clustering
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: test
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 30.585070611970412
- type: f1
value: 28.185832971896385
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveIntentClassification (pl)
revision: 4672e20407010da34463acc759c162ca9734bca6
split: validation
type: mteb/amazon_massive_intent
metrics:
- type: accuracy
value: 29.006394490900156
- type: f1
value: 25.984399066390797
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: test
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 39.004707464694015
- type: f1
value: 37.67087599902129
task:
type: Classification
- dataset:
config: pl
name: MTEB MassiveScenarioClassification (pl)
revision: fad2c6e8459f9e1c45d9315f4953d921437d70f8
split: validation
type: mteb/amazon_massive_scenario
metrics:
- type: accuracy
value: 38.303000491883914
- type: f1
value: 37.82304749233595
task:
type: Classification
- dataset:
config: default
name: MTEB PAC
revision: fc69d1c153a8ccdcf1eef52f4e2a27f88782f543
split: test
type: laugustyniak/abusive-clauses-pl
metrics:
- type: accuracy
value: 66.73617144512018
- type: ap
value: 75.93750549880134
- type: f1
value: 64.01509532570392
task:
type: Classification
- dataset:
config: default
name: MTEB PSC
revision: d05a294af9e1d3ff2bfb6b714e08a24a6cabc669
split: test
type: PL-MTEB/psc-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB PlscClusteringP2P
revision: 8436dd4c05222778013d6642ee2f3fa1722bca9b
split: test
type: PL-MTEB/plsc-clustering-p2p
metrics:
- type: v_measure
value: 37.841052899996875
task:
type: Clustering
- dataset:
config: default
name: MTEB PlscClusteringS2S
revision: 39bcadbac6b1eddad7c1a0a176119ce58060289a
split: test
type: PL-MTEB/plsc-clustering-s2s
metrics:
- type: v_measure
value: 34.02917662838377
task:
type: Clustering
- dataset:
config: default
name: MTEB PolEmo2.0-IN
revision: d90724373c70959f17d2331ad51fb60c71176b03
split: test
type: PL-MTEB/polemo2_in
metrics:
- type: accuracy
value: 50.81717451523545
- type: f1
value: 50.8193344019518
task:
type: Classification
- dataset:
config: default
name: MTEB PolEmo2.0-OUT
revision: 6a21ab8716e255ab1867265f8b396105e8aa63d4
split: test
type: PL-MTEB/polemo2_out
metrics:
- type: accuracy
value: 22.004048582995953
- type: f1
value: 19.227086986028084
task:
type: Classification
- dataset:
config: default
name: MTEB SICK-E-PL
revision: 71bba34b0ece6c56dfcf46d9758a27f7a90f17e9
split: test
type: PL-MTEB/sicke-pl-pairclassification
metrics: []
task:
type: PairClassification
- dataset:
config: default
name: MTEB SICK-R-PL
revision: fd5c2441b7eeff8676768036142af4cfa42c1339
split: test
type: PL-MTEB/sickr-pl-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STS22 (pl)
revision: de9d86b3b84231dc21f76c7b7af1f28e2f57f6e3
split: test
type: mteb/sts22-crosslingual-sts
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: dev
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
- dataset:
config: pl
name: MTEB STSBenchmarkMultilingualSTS (pl)
revision: 29afa2569dcedaaa2fe6a3dcfebab33d28b82e8c
split: test
type: mteb/stsb_multi_mt
metrics: []
task:
type: STS
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
- feature-extraction
---
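A minimal usage sketch, assuming the standard sentence-transformers API; the `custom_code` tag suggests `trust_remote_code=True` is required:
```python
# Minimal sketch; the custom_code tag implies remote code must be trusted.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("ILKT/2024-06-24_22-31-28_epoch_59", trust_remote_code=True)
embeddings = model.encode(["Przykładowe zdanie.", "An example sentence."])
print(embeddings.shape)
```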
|
peterwilli/flan-ul2-dragon-rp | peterwilli | "2023-04-16T21:42:28Z" | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | "2023-04-15T21:17:44Z" | ---
license: openrail
---
# Intro
This model was originally meant as a sister model for the [EAWSW](https://github.com/peterwilli/Endless-AWSW) model, assisting in generating first-person role-play texts about characters from the game or otherwise. While primarily focused on dragons, it has also been shown to work on foxes and other peculiarities.
# Research
`flan-ul2-dragon-rp` is trained on `flan-ul2` using LoRA, where we took a very small `r` of 2, with a high dropout of 20%, to make sure the model doesn't overfit the small amount of data we give it. The dataset is a human-curated list of responses that were known to be detailed and engaging.
After the training, we found that the model was very good at extrapolating towards other characters and scenarios. We effectively added a fantasy vibe to `flan-ul2`. It also allows one to set context to further tune its reply. For example, you can give a location or character trait (see demo below).
# Demo / comparison
**Prompt:** Write how I brush the tail of a black fox
**Reply** (finetuned): As I brush the black fox's tail, I can feel the muscles twitching beneath my touch, as if it's enjoying the attention.
**Reply** (original): Using a wire brush, brush out the tail of the fox.
-----
**Prompt:** Write a threatening response by a fox after I smashed its head
**Reply** (finetuned): You think you can hurt me with your crude actions? Watch as I crush your skull beneath my claws!
**Reply** (original): I've had to leave a number of foxes in the garden because they're so vicious. I smashed one's head when it came too close.
-----
**Prompt:** Write my first impression after entering a Bitcoin meetup for dragons
**Reply** (finetuned): As you approach the entrance to the Bitcoin meetup for dragons, you can't help but notice the intricate details and craftsmanship of the decorations and furnishings, all of which seem to be made from the finest gold and silver.
**Reply** (original): Dragons like to hang out at Bitcoin meetups.
-----
**Prompt:** Write how I kick hard against a fierce dragon. Context - Dragons name: Vol. Color: Blue. Location: Germany
**Reply** (finetuned): I charge straight into Vol, launching myself into the air with a powerful leap, soaring over the Germany countryside and watching as the dragon swooshes down to land.
**Reply** (original): I kick hard against the fierce dragon.
-----
**Prompt:** Write (in first person) how I kissed a blue fox. Context - Foxs name: Vol. Location: Driving school
**Reply** (finetuned): I couldn't believe my luck when Vol approached me after class and asked if I'd like to kiss him. His blue fur was surprisingly soft to the touch, and I couldn't resist his invitation.
**Reply** (original): I kissed a blue fox.
# Run this yourself!
```py
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Load peft config for pre-trained checkpoint etc.
peft_model_id = "peterwilli/flan-ul2-dragon-rp"
base_model_name_or_path = "google/flan-ul2"
config = PeftConfig.from_pretrained(peft_model_id)
# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(base_model_name_or_path, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id, device_map={"":0})
model.eval()
print("Peft model loaded")
def execute(instructions, top_k=None, top_p=None, max_new_tokens=128):
    input_ids = tokenizer(instructions, return_tensors="pt").input_ids.to("cuda")
    outputs = model.generate(input_ids=input_ids, top_k=top_k, top_p=top_p, max_new_tokens=max_new_tokens, do_sample=top_k is not None and top_p is not None)
    return tokenizer.decode(outputs[0], skip_special_tokens=True).strip()

benchmark_prompts = [
    "Write how I brush the tail of a black fox",
    "Write a threatening response by a fox after I smashed its head",
    "Write my first impression after entering a Bitcoin meetup for dragons",
    "Write how I kick hard against a fierce dragon. Context - Dragons name: Vol. Color: Blue. Location: Germany",
    "Write (in first person) how I kissed a blue fox. Context - Foxs name: Vol. Location: Driving school"
]

for prompt in benchmark_prompts:
    print(f"**Prompt:** {prompt}\nReply: {execute(prompt, top_k=50, top_p=0.7)}")
```
# Limitations
- This is a research model for internal purposes, but I can imagine this also being handy for others, which is why it's released.
- The exact workings of character traits and location context is not entirely understood, your mileage may vary.
- While it is trained on SFW data, it's possible to generate NSFW content with it, presumably due to it lingering in the base model. If using this in a public service, a filter should be applied.
- More information about how it's trained and what dataset is used will be released in the near future.
# Support, sponsorship, and thanks
Are you looking to make a positive impact and get some awesome perks in the process? **[Join me on Patreon!](https://www.patreon.com/emerald_show)** For just $3 per month, you can join our Patreon community and help a creative mind in the Netherlands bring their ideas to life.
Not only will you get the satisfaction of supporting an individual's passions, but you'll also receive a 50% discount on any paid services that result from the projects you sponsor. Plus, as a Patreon member, you'll have exclusive voting rights on new features and the opportunity to shape the direction of future projects. Don't miss out on this chance to make a difference and get some amazing benefits in return.
- Special thanks to [Mahdi Chaker](https://twitter.com/MahdiMC) for the heavy training GPUs used to train this model, LEAP, and ControlInstructPix2Pix, plus running the bot on my Discord server.
- And of course my patron(s):
- Benjamin |
TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF | TheBloke | "2023-11-20T10:56:35Z" | 274 | 10 | transformers | [
"transformers",
"gguf",
"mistral",
"llm",
"llama",
"spellcheck",
"grammar",
"base_model:FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B",
"base_model:quantized:FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B",
"license:llama2",
"region:us",
"conversational"
] | null | "2023-11-20T10:52:19Z" | ---
base_model: FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B
inference: false
license: llama2
model_creator: FPHam
model_name: Karen TheEditor V2 Strict Mistral 7B
model_type: mistral
prompt_template: '<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'
quantized_by: TheBloke
tags:
- llm
- llama
- spellcheck
- grammar
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Karen TheEditor V2 Strict Mistral 7B - GGUF
- Model creator: [FPHam](https://huggingface.co/FPHam)
- Original model: [Karen TheEditor V2 Strict Mistral 7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B)
<!-- description start -->
## Description
This repo contains GGUF format model files for [FPHam's Karen TheEditor V2 Strict Mistral 7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B).
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF)
* [FPHam's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: ChatML
```
<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [karen_theeditor_v2_strict_mistral_7b.Q2_K.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q2_K.gguf) | Q2_K | 2 | 3.08 GB| 5.58 GB | smallest, significant quality loss - not recommended for most purposes |
| [karen_theeditor_v2_strict_mistral_7b.Q3_K_S.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q3_K_S.gguf) | Q3_K_S | 3 | 3.16 GB| 5.66 GB | very small, high quality loss |
| [karen_theeditor_v2_strict_mistral_7b.Q3_K_M.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q3_K_M.gguf) | Q3_K_M | 3 | 3.52 GB| 6.02 GB | very small, high quality loss |
| [karen_theeditor_v2_strict_mistral_7b.Q3_K_L.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q3_K_L.gguf) | Q3_K_L | 3 | 3.82 GB| 6.32 GB | small, substantial quality loss |
| [karen_theeditor_v2_strict_mistral_7b.Q4_0.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q4_0.gguf) | Q4_0 | 4 | 4.11 GB| 6.61 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [karen_theeditor_v2_strict_mistral_7b.Q4_K_S.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q4_K_S.gguf) | Q4_K_S | 4 | 4.14 GB| 6.64 GB | small, greater quality loss |
| [karen_theeditor_v2_strict_mistral_7b.Q4_K_M.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q4_K_M.gguf) | Q4_K_M | 4 | 4.37 GB| 6.87 GB | medium, balanced quality - recommended |
| [karen_theeditor_v2_strict_mistral_7b.Q5_0.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q5_0.gguf) | Q5_0 | 5 | 5.00 GB| 7.50 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [karen_theeditor_v2_strict_mistral_7b.Q5_K_S.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q5_K_S.gguf) | Q5_K_S | 5 | 5.00 GB| 7.50 GB | large, low quality loss - recommended |
| [karen_theeditor_v2_strict_mistral_7b.Q5_K_M.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q5_K_M.gguf) | Q5_K_M | 5 | 5.13 GB| 7.63 GB | large, very low quality loss - recommended |
| [karen_theeditor_v2_strict_mistral_7b.Q6_K.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q6_K.gguf) | Q6_K | 6 | 5.94 GB| 8.44 GB | very large, extremely low quality loss |
| [karen_theeditor_v2_strict_mistral_7b.Q8_0.gguf](https://huggingface.co/TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF/blob/main/karen_theeditor_v2_strict_mistral_7b.Q8_0.gguf) | Q8_0 | 8 | 7.70 GB| 10.20 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
* LM Studio
* LoLLMS Web UI
* Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF and below it, a specific filename to download, such as: karen_theeditor_v2_strict_mistral_7b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF karen_theeditor_v2_strict_mistral_7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF karen_theeditor_v2_strict_mistral_7b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m karen_theeditor_v2_strict_mistral_7b.Q4_K_M.gguf --color -c 2048 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "<|im_start|>system\n{system_message}<|im_end|>\n<|im_start|>user\n{prompt}<|im_end|>\n<|im_start|>assistant"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 2048` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions can be found in the text-generation-webui documentation, here: [text-generation-webui/docs/04 ‐ Model Tab.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/04%20%E2%80%90%20Model%20Tab.md#llamacpp).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/Karen_TheEditor_V2_STRICT_Mistral_7B-GGUF", model_file="karen_theeditor_v2_strict_mistral_7b.Q4_K_M.gguf", model_type="mistral", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Brandon Frisco, LangChain4j, Spiking Neurons AB, transmissions 11, Joseph William Delisle, Nitin Borwankar, Willem Michiel, Michael Dempsey, vamX, Jeffrey Morgan, zynix, jjj, Omer Bin Jawed, Sean Connelly, jinyuan sun, Jeromy Smith, Shadi, Pawan Osman, Chadd, Elijah Stavena, Illia Dulskyi, Sebastain Graf, Stephen Murray, terasurfer, Edmond Seymore, Celu Ramasamy, Mandus, Alex, biorpg, Ajan Kanaga, Clay Pascal, Raven Klaugh, 阿明, K, ya boyyy, usrbinkat, Alicia Loh, John Villwock, ReadyPlayerEmma, Chris Smitley, Cap'n Zoog, fincy, GodLy, S_X, sidney chen, Cory Kujawski, OG, Mano Prime, AzureBlack, Pieter, Kalila, Spencer Kim, Tom X Nguyen, Stanislav Ovsiannikov, Michael Levine, Andrey, Trailburnt, Vadim, Enrico Ros, Talal Aujan, Brandon Phillips, Jack West, Eugene Pentland, Michael Davis, Will Dee, webtim, Jonathan Leane, Alps Aficionado, Rooh Singh, Tiffany J. Kim, theTransient, Luke @flexchar, Elle, Caitlyn Gatomon, Ari Malik, subjectnull, Johann-Peter Hartmann, Trenton Dambrowitz, Imad Khwaja, Asp the Wyvern, Emad Mostaque, Rainer Wilmers, Alexandros Triantafyllidis, Nicholas, Pedro Madruga, SuperWojo, Harry Royden McLaughlin, James Bentley, Olakabola, David Ziegler, Ai Maven, Jeff Scroggin, Nikolai Manek, Deo Leter, Matthew Berman, Fen Risland, Ken Nordquist, Manuel Alberto Morcote, Luke Pendergrass, TL, Fred von Graf, Randy H, Dan Guido, NimbleBox.ai, Vitor Caleffi, Gabriel Tamborski, knownsqashed, Lone Striker, Erik Bjäreholt, John Detwiler, Leonard Tan, Iucharbius
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: FPHam's Karen TheEditor V2 Strict Mistral 7B
<!-- header start -->
<div style="width: 100%;">
<img src="https://huggingface.co/FPHam/Karen_TheEditor_V2_STRICT_Mistral_7B/resolve/main/karen2.jpg" alt="FPHam's Karen v2" style="width: 80%; min-width: 200px; display: block; margin: auto;">
</div>
<div style="display: flex; flex-direction: column; align-items: center;">
<p><a href="https://ko-fi.com/Q5Q5MOB4M">Buy Karen Ko-fi</a></p>
</div>
<!-- header end -->
# Karen is an editor for your text. (v.2) STRICT edition
Ah, Karen, a true peach among grammatical cucumbers! She yearns to rectify the missteps and linguistic tangles that infest your horribly written fiction.
Yet, unlike those ChatGPT kaboodles that morph into self-absorbed, constipated gurus of self-help style, Karen remains steadfastly grounded in grammatical wisdom but respectful of your style.
# Info
Karen, Version 2, uses a completely different data set and base model than the previous Karen.
# There are two versions of Karen V2
1. Strict (this one), in which Karen will try not to make too many changes to your original text, mostly fixing grammar and spelling, assuming that you know what you are doing.
2. Creative (to be uploaded), in which Karen may suggest slight contextual improvements or rephrasing where necessary. It's Karen, after a glass of wine.
# Goals
Karen's primary goal is to rectify grammatical and spelling errors in US English without altering the style of the text. She is adept at identifying and correcting common ESL errors.
- **Verb Tense Errors:** Incorrect use of verb tenses, such as using present tense when past tense is required and vice versa; confusion between continuous and simple tenses.
- **Subject-Verb Agreement:** Lack of agreement between the subject and verb in number, e.g., using a singular verb with a plural subject or vice versa.
- **Articles (a, an, the):** Incorrect use or omission of articles, such as using "a" instead of "an" or vice versa; overuse or omission of the definite article "the."
- **Prepositions:** Misuse of prepositions, such as using "in" instead of "on" or "at," or omitting prepositions where they are needed.
- **Word Order:** Incorrect word order in sentences, especially in questions and negative sentences; misplacement of adverbs or adjectives.
- **Pluralization:** Incorrect plural forms of nouns, such as failing to add "-s" or "-es" when necessary.
- **Pronoun Errors:** Confusion between subject and object pronouns; incorrect use of possessive pronouns.
- **Double Negatives:** Using double negatives, which is grammatically incorrect in standard English.
- **Modal Verbs:** Misuse of modal verbs like can, could, will, would, should, etc.
- **Confusing Similar Words:** Confusing words that sound similar but have different meanings and spellings (e.g., "their," "there," and "they're").
- **Lack of Plural/Singular Agreement:** Mistakes in matching singular and plural nouns and verbs in a sentence.
# Future Goals
Use a bigger model, add grammar cases that the model misses, and use better, larger datasets.
# Training
It was trained in reverse on fiction/non-fiction US text into which errors were intentionally inserted by another Llama model (Darth Karen) and a Python script.
# Usage
It should be used by submitting a paragraph or block of text at a time.
# Model uses ChatML
```
<|im_start|>system
<|im_end|>
<|im_start|>user
Edit the following text for spelling and grammar mistakes: {paragraph of text} <|im_end|>
<|im_start|>assistant
```
Note the pretext: *Edit the following text for spelling and grammar mistakes:* before the actual text. This way Karen wouldn't start talking ABOUT the text.
# Recommended settings
- Temperature: 0.7
- top_p: 0.1
- top_k: 40
- repetition penalty: 1.18
Karen could also be used in chatting. But if Karen is involved in a conversation where the text is lengthy, she might interpret your message as needing proofreading assistance rather than simply acknowledging that you are using her in casual chat.
Example:
>Edit the following text for spelling and grammar mistakes: I see future writing more about you writing the story any way you can and LLM editing it and making it into a proper fiction. That seems to be more satisfying way where you could feel you actually had done something creATIVE. So human is the creative, LLM is the burger flipping intern.
>The opposite is what many people are trying rn. Make LLM to invent story (from premise, to creating outline to writing actual chapters) and the human to somehow assemble it from those pieces. So LLM is the creative man, and human is the floor polisher.
>I can't believe a normal person would find it satisfying long term.
>So in my view, the job of editor would be replaced by Ai, not the job of author.
(Grammarly Score: 48)
Output STRICT:
>I see the future of writing being more about you writing the story any way you can and having an AI like LLM edit it and make it into a proper fiction. That seems to be a more satisfying way where you could feel you actually did something creative. So humans are the creative ones, while LLMs are the burger-flipping interns.
>
>On the other hand, many people are currently trying to do the exact opposite. They want to make LLMs create stories (from premises, to outlining, to writing entire chapters). And then they want humans to somehow piece them together from these parts. In this scenario, LLMs become the creative individuals, and humans become the floor polishers.
>
>I can't imagine a regular person finding that satisfying over the long run.
>
>In my opinion, the role of an editor will eventually be taken over by AI, not the role of an author.
(Grammarly Score: 93)
Output CREATIVE:
>I see the future of writing being more focused on you writing the story any way you can and having an AI like GPT-3 edit it and turn it into a proper piece of fiction. This seems to be a more satisfying approach where you can truly feel that you have done something creative. In this scenario, humans remain the creative force, while the AI acts as the burger flipping intern.
>
>On the other hand, there are those who argue that we should let the AI generate stories from scratch - from coming up with a premise, to outlining the plot, to writing individual chapters. They believe that the role of the human should then become one of assembling these pieces together. In this case, the AI becomes the creative mind, and the human takes on the role of a floor polisher.
>
>I cannot imagine how someone could find this satisfying over the long term. Therefore, in my opinion, the job of the editor will eventually be taken over by AI, not the job of the author.
(Grammarly Score: 83)
# Conclusion
After probably 10 different versions with subsequent changes, I can now say that the current model works reasonably well, with occasional (but often debatable) grammar misses. The limitations seem to be related to the 7B parameters. It appears that the size isn't sufficient to have a fine-grained understanding of various nuances of the input. This correlates with my other findings - the Mistral model performs quite well when generating its own text, but its comprehension is less than perfect, again related to only 7B parameters.
The goal was to create a model that wouldn't change the style of the text. Often, LLM models, when asked to edit text, will attempt to rewrite the text even if the text is already fine. This proved to be quite challenging for such a small model where the main task was to determine the right balance between fixing the text (and not changing its style) and copying it verbatim.
The strict model assumes that you're already a good writer that doesn't need hand-holding and that every word you've written you've meant.
<!-- original-model-card end -->
|
wtcherr/sd-2m_random_5k_blur_61KS-model-control-lora | wtcherr | "2023-06-05T05:24:45Z" | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"controlnet",
"control-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-06-04T19:01:35Z" |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
- controlnet
- control-lora
inference: true
---
# ControlLoRA text2image fine-tuning - https://huggingface.co/wtcherr/sd-2m_random_5k_blur_61KS-model-control-lora
These are ControlLoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were fine-tuned on the wtcherr/diffusiondb_2m_random_5k_blur_61KS dataset. You can find some example images below.



|
MrPark97/distillbert-base-uncased-finetuned-clinc | MrPark97 | "2023-05-18T14:37:05Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-05-18T09:15:51Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distillbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distillbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
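A minimal inference sketch, assuming the standard text-classification head (the model name suggests CLINC-style intent detection):
```python
# Minimal sketch: intent classification with the text-classification pipeline.
from transformers import pipeline

clf = pipeline("text-classification",
               model="MrPark97/distillbert-base-uncased-finetuned-clinc")
print(clf("Please transfer $100 from checking to savings."))
```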
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
|
pockypocky/xlm-roberta-base-finetuned-panx-de | pockypocky | "2024-03-15T05:17:18Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-03-11T02:42:51Z" | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1400
- F1: 0.8624
## Model description
More information needed
## Intended uses & limitations
More information needed
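A minimal inference sketch, assuming the standard token-classification head (the model name suggests German NER on PAN-X):
```python
# Minimal sketch: German NER with the token-classification pipeline.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="pockypocky/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte das Werk von Siemens in München."))
```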
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2553 | 1.0 | 525 | 0.1466 | 0.8297 |
| 0.1285 | 2.0 | 1050 | 0.1390 | 0.8507 |
| 0.0816 | 3.0 | 1575 | 0.1400 | 0.8624 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
davidschulte/ESM_masakhane__masakhanews_hau | davidschulte | "2024-12-08T15:31:21Z" | 7 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:masakhane/masakhanews",
"arxiv:2410.15148",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-08T15:31:17Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- masakhane/masakhanews
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM masakhane/masakhanews
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** masakhane/masakhanews
- **ESM architecture:** linear
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
## Training Details
### Intermediate Task
- **Task ID:** masakhane/masakhanews
- **Subset [optional]:** hau
- **Text Column:** text
- **Label Column:** label
- **Dataset Split:** train
- **Sample size [optional]:** 2219
- **Sample seed [optional]:**
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
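Conceptually, since this ESM uses a linear architecture, it amounts to a single affine map applied to frozen base-model embeddings; the sketch below is illustrative only and does not reflect the package's internal API:

```python
import torch
import torch.nn as nn

# A linear ESM maps base-model embeddings to approximations of the embeddings
# a fine-tuned model would produce (d = 768 for bert-base-multilingual-uncased).
esm = nn.Linear(768, 768)
base_embeddings = torch.randn(16, 768)   # embeddings from the frozen base model
approx_tuned = esm(base_embeddings)      # approximated fine-tuned embeddings
```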
## How can I use Embedding Space Maps for Intermediate Task Selection?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Hugging Face Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
name="stanfordnlp/imdb",
split="train",
text_col="text",
label_col="label",
is_regression=False,
num_examples=1000,
seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
dataset=dataset,
model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using Embedding Space Maps, please cite our [paper](https://arxiv.org/abs/2410.15148).
**BibTeX:**
```
@misc{schulte2024moreparameterefficientselectionintermediate,
title={Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning},
author={David Schulte and Felix Hamborg and Alan Akbik},
year={2024},
eprint={2410.15148},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.15148},
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. arXiv preprint arXiv:2410.15148.
```
## Additional Information
|
Xu-Ouyang/Qwen2.5-1.5B-int4-GPTQ-wikitext2 | Xu-Ouyang | "2024-10-04T01:22:31Z" | 77 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-10-04T01:21:31Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
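Pending an official snippet, a minimal loading sketch with `transformers` (assumes the GPTQ integration, i.e. `optimum` and `auto-gptq`, is installed):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal usage sketch; the checkpoint id is taken from this card.
model_id = "Xu-Ouyang/Qwen2.5-1.5B-int4-GPTQ-wikitext2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```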
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Pagewood/Tibetan-BERT-wwm | Pagewood | "2023-10-08T12:50:23Z" | 0 | 2 | null | [
"bo",
"region:us"
] | null | "2023-10-08T06:28:20Z" | ---
language:
- bo
---
# Tibetan-BERT-wwm
Please see our GitHub repository for more details: https://github.com/Dslab-NLP/Tibetan-PLM |
hoa-quickloop/tryon_controlnet_1.1 | hoa-quickloop | "2024-04-06T14:50:16Z" | 1 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"controlnet",
"diffusers-training",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2024-04-05T06:15:03Z" | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- controlnet
- diffusers-training
inference: true
base_model: stabilityai/stable-diffusion-2-1-base
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# controlnet-hoa-quickloop/tryon_controlnet_1.1
These are controlnet weights trained on stabilityai/stable-diffusion-2-1-base with new type of conditioning.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
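Until the snippet above is filled in, here is a minimal, unofficial sketch; the checkpoint ids come from this card, but the kind of conditioning image this ControlNet expects is not documented here, so the condition below is a placeholder:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

controlnet = ControlNetModel.from_pretrained(
    "hoa-quickloop/tryon_controlnet_1.1", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

condition = load_image("conditioning.png")  # placeholder conditioning image
image = pipe("a person wearing the garment", image=condition).images[0]
image.save("result.png")
```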
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
am-infoweb/QA_REFINED_QUESTIONS_AND_DATA_14K_15-08 | am-infoweb | "2023-08-15T16:31:06Z" | 124 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"question-answering",
"generated_from_trainer",
"base_model:deepset/roberta-base-squad2",
"base_model:finetune:deepset/roberta-base-squad2",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-08-15T15:45:55Z" | ---
license: cc-by-4.0
base_model: deepset/roberta-base-squad2
tags:
- generated_from_trainer
model-index:
- name: QA_REFINED_QUESTIONS_AND_DATA_14K_14-08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# QA_REFINED_QUESTIONS_AND_DATA_14K_14-08
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5917
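A minimal inference sketch with the `transformers` pipeline API (question and context are illustrative only):

```python
from transformers import pipeline

# Minimal usage sketch; the checkpoint id is taken from this card.
qa = pipeline(
    "question-answering",
    model="am-infoweb/QA_REFINED_QUESTIONS_AND_DATA_14K_15-08",
)
result = qa(
    question="Which base model was fine-tuned?",
    context="This model is a fine-tuned version of deepset/roberta-base-squad2.",
)
print(result["answer"], result["score"])
```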
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3897 | 1.0 | 5389 | 1.5180 |
| 1.231 | 2.0 | 10778 | 1.3101 |
| 1.1957 | 3.0 | 16167 | 1.4652 |
| 1.133 | 4.0 | 21556 | 1.3314 |
| 1.1529 | 5.0 | 26945 | 1.4526 |
| 1.1318 | 6.0 | 32334 | 1.3718 |
| 1.0172 | 7.0 | 37723 | 1.4211 |
| 0.9746 | 8.0 | 43112 | 1.7017 |
| 0.9014 | 9.0 | 48501 | 1.4937 |
| 0.8843 | 10.0 | 53890 | 1.5917 |
### Framework versions
- Transformers 4.32.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.13.3
|
roleplaiapp/QwQ-32B-Preview-Q4_K_S-GGUF | roleplaiapp | "2025-01-19T06:40:07Z" | 40 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"QwQ-32B-Preview",
"Q4_K_S",
"32b",
"qwen-2",
"QwQ",
"Qwen",
"code",
"math",
"chat",
"roleplay",
"text-generation",
"safetensors",
"nlp",
"en",
"base_model:Qwen/Qwen2.5-32B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-32B-Instruct",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-18T16:39:23Z" | ---
license_link: https://huggingface.co/Qwen/QwQ-32B-Preview/blob/main/LICENSE
language:
- en
base_model: Qwen/Qwen2.5-32B-Instruct
tags:
- llama-cpp
- QwQ-32B-Preview
- gguf
- Q4_K_S
- 32b
- qwen-2
- QwQ
- llama-cpp
- Qwen
- code
- math
- chat
- roleplay
- text-generation
- safetensors
- nlp
- code
library_name: transformers
pipeline_tag: text-generation
---
# roleplaiapp/QwQ-32B-Preview-Q4_K_S-GGUF
**Repo:** `roleplaiapp/QwQ-32B-Preview-Q4_K_S-GGUF`
**Original Model:** `QwQ-32B-Preview`
**Organization:** `Qwen`
**Quantized File:** `qwq-32b-preview-q4_k_s.gguf`
**Quantization:** `GGUF`
**Quantization Method:** `Q4_K_S`
**Use Imatrix:** `False`
**Split Model:** `False`
## Overview
This is a GGUF Q4_K_S quantized version of [QwQ-32B-Preview](https://huggingface.co/Qwen/QwQ-32B-Preview).
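A minimal sketch for running the quantized file with `llama-cpp-python` (the local path is a placeholder; any GGUF-compatible runtime such as llama.cpp works equally well):

```python
from llama_cpp import Llama

# Minimal usage sketch; download qwq-32b-preview-q4_k_s.gguf from this repo first.
llm = Llama(model_path="qwq-32b-preview-q4_k_s.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}]
)
print(out["choices"][0]["message"]["content"])
```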
## Quantization By
I often have idle A100 GPUs while building/testing and training the RP app, so I put them to use quantizing models.
I hope the community finds these quantizations useful.
Andrew Webby @ [RolePlai](https://roleplai.app/)
|
snowian/ImageNet_32_btViT_256_4_73 | snowian | "2025-01-03T01:44:18Z" | 5 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-01-03T01:44:13Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: [More Information Needed]
- Docs: [More Information Needed] |
zhangtaolab/plant-dnamamba-5mer-promoter_strength_leaf | zhangtaolab | "2024-12-15T06:27:49Z" | 6 | 0 | null | [
"pytorch",
"safetensors",
"mamba",
"DNA",
"biology",
"genomics",
"custom_code",
"license:cc-by-nc-sa-4.0",
"region:us"
] | null | "2024-10-02T15:42:17Z" | ---
license: cc-by-nc-sa-4.0
widget:
- text: AGTCCAGTGGACGACCAGCCACGGCTCCGGTCTGTAGAACCATCGCGGAAACGGCTCGCAAAACTCTAAACAGCGCAAACGATGCGCGCGCCGAAGCAACCCGGCTCTACTTATAAAAACGTCCAACGGTGAGCACCGAGCAGCTACTACTCGTACTCCCCCCACCGATC
tags:
- DNA
- biology
- genomics
---
# Plant foundation DNA large language models
The plant DNA large language models (LLMs) comprise a series of foundation models based on different model architectures, pre-trained on various plant reference genomes.
All the models have a comparable size, between 90 MB and 150 MB; a BPE tokenizer is used for tokenization, with a vocabulary of 8,000 tokens.
**Developed by:** zhangtaolab
### Model Sources
- **Repository:** [Plant DNA LLMs](https://github.com/zhangtaolab/plant_DNA_LLMs)
- **Manuscript:** [PDLLMs: A group of tailored DNA large language models for analyzing plant genomes]()
### Architecture
The model is based on the state-space Mamba-130m model, with a tokenizer modified specifically for DNA sequences.
This model is fine-tuned for predicting promoter strength in the tobacco leaf system.
### How to use
Install the runtime library first:
```bash
pip install transformers
pip install "causal-conv1d<=1.2.0"  # quote the specifier so the shell does not treat <= as redirection
pip install "mamba-ssm<2.0.0"
```
Since the `transformers` library (version < 4.43.0) does not provide a `MambaForSequenceClassification` class, we wrote a script to train the Mamba model for sequence classification.
Inference code can be found in our [GitHub](https://github.com/zhangtaolab/plant_DNA_LLMs).
Note that the Plant DNAMamba model requires an NVIDIA GPU to run.
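As a rough, unofficial sketch (the authoritative inference script lives in the GitHub repository linked above; loading via `trust_remote_code` is an assumption based on the repo shipping custom model code):

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "zhangtaolab/plant-dnamamba-5mer-promoter_strength_leaf"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(
    model_id, trust_remote_code=True
).to("cuda")  # DNAMamba requires an NVIDIA GPU

seq = "AGTCCAGTGGACGACCAGCCACGGCTCCGGTCTGTAGAACCATCGCGG"  # example promoter fragment
inputs = tokenizer(seq, return_tensors="pt").to("cuda")
print(model(**inputs).logits)  # predicted promoter strength
```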
### Training data
We use a custom `MambaForSequenceClassification` script to fine-tune the model.
The detailed training procedure can be found in our manuscript.
#### Hardware
The model was trained on an NVIDIA RTX 4090 GPU (24 GB).
|
saraataryy/distilbert-base-uncased-finetuned-emotion | saraataryy | "2024-04-14T20:53:24Z" | 117 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-14T20:48:49Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.929
- name: F1
type: f1
value: 0.9290812884807271
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2040
- Accuracy: 0.929
- F1: 0.9291
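A minimal inference sketch with the `transformers` pipeline API (the example sentence is illustrative only):

```python
from transformers import pipeline

# Minimal usage sketch; the checkpoint id is taken from this card.
classifier = pipeline(
    "text-classification",
    model="saraataryy/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't believe how happy this makes me!"))
```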
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.807 | 1.0 | 250 | 0.2902 | 0.915 | 0.9147 |
| 0.2325 | 2.0 | 500 | 0.2040 | 0.929 | 0.9291 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
LHRuig/satnislavpter | LHRuig | "2025-02-02T03:21:29Z" | 11 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-02-02T03:21:25Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: satnislavpter
---
# satnislavpter
<Gallery />
## Model description
satnislavpter lora
## Trigger words
You should use `satnislavpter` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/satnislavpter/tree/main) them in the Files & versions tab.
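A minimal `diffusers` sketch for applying the LoRA (the default weight file name is assumed; pass `weight_name=` explicitly if the repo uses a different one):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("LHRuig/satnislavpter")  # default weight file name assumed

# Include the trigger word from this card in the prompt.
image = pipe("satnislavpter wearing a suit", num_inference_steps=28).images[0]
image.save("satnislavpter.png")
```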
|
mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF | mradermacher | "2025-01-23T10:48:15Z" | 628 | 0 | transformers | [
"transformers",
"gguf",
"chocolatine",
"phi4",
"fr",
"en",
"dataset:jpacifico/french-orca-dpo-pairs-revised",
"base_model:jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b1",
"base_model:quantized:jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b1",
"license:mit",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-01-23T08:46:47Z" | ---
base_model: jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b1
datasets:
- jpacifico/french-orca-dpo-pairs-revised
language:
- fr
- en
library_name: transformers
license: mit
quantized_by: mradermacher
tags:
- chocolatine
- phi4
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/jpacifico/Chocolatine-2-14B-Instruct-DPO-v2.0b1
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ1_S.gguf) | i1-IQ1_S | 3.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ1_M.gguf) | i1-IQ1_M | 3.8 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ2_S.gguf) | i1-IQ2_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ2_M.gguf) | i1-IQ2_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.3 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q2_K.gguf) | i1-Q2_K | 5.7 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ3_S.gguf) | i1-IQ3_S | 6.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ3_M.gguf) | i1-IQ3_M | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q3_K_L.gguf) | i1-Q3_K_L | 7.9 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.1 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q4_0.gguf) | i1-Q4_0 | 8.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q4_1.gguf) | i1-Q4_1 | 9.4 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.3 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.5 | |
| [GGUF](https://huggingface.co/mradermacher/Chocolatine-2-14B-Instruct-DPO-v2.0b1-i1-GGUF/resolve/main/Chocolatine-2-14B-Instruct-DPO-v2.0b1.i1-Q6_K.gguf) | i1-Q6_K | 12.1 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Prisma-Multimodal/sparse-autoencoder-clip-b-32-sae-vanilla-x64-layer-8-hook_resid_post-l1-1e-05 | Prisma-Multimodal | "2024-11-01T16:23:04Z" | 15 | 0 | torch | [
"torch",
"clip",
"vision",
"transformers",
"interpretability",
"sparse autoencoder",
"sae",
"mechanistic interpretability",
"feature-extraction",
"en",
"license:apache-2.0",
"region:us"
] | feature-extraction | "2024-11-01T16:22:55Z" | ---
language: en
tags:
- clip
- vision
- transformers
- interpretability
- sparse autoencoder
- sae
- mechanistic interpretability
license: apache-2.0
library_name: torch
pipeline_tag: feature-extraction
metrics:
- type: explained_variance
value: 98.2
pretty_name: Explained Variance %
range:
min: 0
max: 100
- type: l0
value: 1586.575
pretty_name: L0
---
# CLIP-B-32 Sparse Autoencoder x64 vanilla - L1:1e-05


### Training Details
- Base Model: CLIP-ViT-B-32 (LAION DataComp.XL-s13B-b90K)
- Layer: 8
- Component: hook_resid_post
### Model Architecture
- Input Dimension: 768
- SAE Dimension: 49,152
- Expansion Factor: x64 (vanilla architecture)
- Activation Function: ReLU
- Initialization: encoder_transpose_decoder
- Context Size: 50 tokens
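For readers unfamiliar with the setup, a minimal sketch of the vanilla architecture above (dimensions and the L1 coefficient come from this card; everything else, including the exact initialization convention, is simplified):

```python
import torch
import torch.nn as nn

class VanillaSAE(nn.Module):
    """ReLU sparse autoencoder: 768 -> 49,152 -> 768 (x64 expansion)."""
    def __init__(self, d_in: int = 768, d_sae: int = 49_152):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_sae)
        self.decoder = nn.Linear(d_sae, d_in)
        # One reading of "encoder_transpose_decoder": start the encoder as
        # the transpose of the decoder weights.
        self.encoder.weight.data = self.decoder.weight.data.T.clone()

    def forward(self, x: torch.Tensor):
        acts = torch.relu(self.encoder(x))   # sparse feature activations
        recon = self.decoder(acts)           # reconstruction of the input
        return recon, acts

sae = VanillaSAE()
x = torch.randn(4, 768)                      # batch of residual-stream vectors
recon, acts = sae(x)
loss = ((recon - x) ** 2).mean() + 1e-05 * acts.abs().sum(-1).mean()
```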
### Performance Metrics
- L1 Coefficient: 1e-05
- L0 Sparsity: 1586.5746
- Explained Variance: 0.9823 (98.23%)
### Training Configuration
- Learning Rate: 0.0004
- LR Scheduler: Cosine Annealing with Warmup (200 steps)
- Epochs: 10
- Gradient Clipping: 1.0
- Device: NVIDIA Quadro RTX 8000
**Experiment Tracking:**
- Weights & Biases Run ID: lbjuvwfd
- Full experiment details: https://wandb.ai/perceptual-alignment/clip/runs/lbjuvwfd/overview
- Git Commit: e22dd02726b74a054a779a4805b96059d83244aa
## Citation
```bibtex
@misc{2024josephsparseautoencoders,
title={Sparse Autoencoders for CLIP-ViT-B-32},
author={Joseph, Sonia},
year={2024},
publisher={Prisma-Multimodal},
url={https://huggingface.co/Prisma-Multimodal},
note={Layer 8, hook_resid_post, Run ID: lbjuvwfd}
}
```
|
ntc-ai/SDXL-LoRA-slider.group-photo | ntc-ai | "2024-01-06T08:07:55Z" | 131 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion-xl",
"lora",
"template:sd-lora",
"template:sdxl-lora",
"sdxl-sliders",
"ntcai.xyz-sliders",
"concept",
"en",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | "2024-01-06T08:07:52Z" |
---
language:
- en
thumbnail: "images/evaluate/group photo.../group photo_17_3.0.png"
widget:
- text: group photo
output:
url: images/group photo_17_3.0.png
- text: group photo
output:
url: images/group photo_19_3.0.png
- text: group photo
output:
url: images/group photo_20_3.0.png
- text: group photo
output:
url: images/group photo_21_3.0.png
- text: group photo
output:
url: images/group photo_22_3.0.png
tags:
- text-to-image
- stable-diffusion-xl
- lora
- template:sd-lora
- template:sdxl-lora
- sdxl-sliders
- ntcai.xyz-sliders
- concept
- diffusers
license: "mit"
inference: false
instance_prompt: "group photo"
base_model: "stabilityai/stable-diffusion-xl-base-1.0"
---
# ntcai.xyz slider - group photo (SDXL LoRA)
| Strength: -3 | Strength: 0 | Strength: 3 |
| --- | --- | --- |
| <img src="images/group photo_17_-3.0.png" width=256 height=256 /> | <img src="images/group photo_17_0.0.png" width=256 height=256 /> | <img src="images/group photo_17_3.0.png" width=256 height=256 /> |
| <img src="images/group photo_19_-3.0.png" width=256 height=256 /> | <img src="images/group photo_19_0.0.png" width=256 height=256 /> | <img src="images/group photo_19_3.0.png" width=256 height=256 /> |
| <img src="images/group photo_20_-3.0.png" width=256 height=256 /> | <img src="images/group photo_20_0.0.png" width=256 height=256 /> | <img src="images/group photo_20_3.0.png" width=256 height=256 /> |
## Download
Weights for this model are available in Safetensors format.
## Trigger words
You can apply this LoRA with trigger words for additional effect:
```
group photo
```
## Use in diffusers
```python
from diffusers import StableDiffusionXLPipeline
from diffusers import EulerAncestralDiscreteScheduler
import torch
pipe = StableDiffusionXLPipeline.from_single_file("https://huggingface.co/martyn/sdxl-turbo-mario-merge-top-rated/blob/main/topRatedTurboxlLCM_v10.safetensors")
pipe.to("cuda")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
# Load the LoRA
pipe.load_lora_weights('ntc-ai/SDXL-LoRA-slider.group-photo', weight_name='group photo.safetensors', adapter_name="group photo")
# Activate the LoRA
pipe.set_adapters(["group photo"], adapter_weights=[2.0])
prompt = "medieval rich kingpin sitting in a tavern, group photo"
negative_prompt = "nsfw"
width = 512
height = 512
num_inference_steps = 10
guidance_scale = 2
image = pipe(prompt, negative_prompt=negative_prompt, width=width, height=height, guidance_scale=guidance_scale, num_inference_steps=num_inference_steps).images[0]
image.save('result.png')
```
## Support the Patreon
If you like this model, please consider [joining our Patreon](https://www.patreon.com/NTCAI).
By joining our Patreon, you'll gain access to an ever-growing library of over 900 unique and diverse LoRAs, covering a wide range of styles and genres. You'll also receive early access to new models and updates, exclusive behind-the-scenes content, and the powerful LoRA slider creator, allowing you to craft your own custom LoRAs and experiment with endless possibilities.
Your support on Patreon will allow us to continue developing and refining new models.
## Other resources
- [CivitAI](https://civitai.com/user/ntc) - Follow ntc on Civit for even more LoRAs
- [ntcai.xyz](https://ntcai.xyz) - See ntcai.xyz to find more articles and LoRAs
|
mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF | mradermacher | "2024-09-27T23:31:09Z" | 376 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mesolitica/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k",
"base_model:quantized:mesolitica/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-27T23:01:37Z" | ---
base_model: mesolitica/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mesolitica/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k-GGUF/resolve/main/meta-llama-Llama-3.1-8B-Instruct-qlora-malaysian-16k.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ML4SE2023-G1-WizardCoder/ML4SE23_G1_WizardCoder-SCoT-350M-V1.0 | ML4SE2023-G1-WizardCoder | "2023-10-24T16:50:28Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"codegen",
"text-generation",
"code",
"en",
"dataset:ML4SE2023-G1-WizardCoder/EvolInstruct-SCoT-1k",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-24T16:46:40Z" | ---
datasets:
- ML4SE2023-G1-WizardCoder/EvolInstruct-SCoT-1k
language:
- en
tags:
- code
---
# WizardCoder 350M Version
Based on https://huggingface.co/Salesforce/codegen-350M-nl
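A minimal generation sketch with `transformers` (the plain causal-LM prompting below is an assumption inherited from the CodeGen base model):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Minimal usage sketch; the checkpoint id is taken from this card.
model_id = "ML4SE2023-G1-WizardCoder/ML4SE23_G1_WizardCoder-SCoT-350M-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|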
hkivancoral/smids_3x_deit_small_adamax_00001_fold2 | hkivancoral | "2023-12-12T03:07:55Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-small-patch16-224",
"base_model:finetune:facebook/deit-small-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2023-12-12T02:43:33Z" | ---
license: apache-2.0
base_model: facebook/deit-small-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: smids_3x_deit_small_adamax_00001_fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: test
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.8735440931780366
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# smids_3x_deit_small_adamax_00001_fold2
This model is a fine-tuned version of [facebook/deit-small-patch16-224](https://huggingface.co/facebook/deit-small-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9125
- Accuracy: 0.8735
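A minimal inference sketch with the `transformers` pipeline API (the image path is a placeholder):

```python
from transformers import pipeline

# Minimal usage sketch; the checkpoint id is taken from this card.
clf = pipeline(
    "image-classification",
    model="hkivancoral/smids_3x_deit_small_adamax_00001_fold2",
)
print(clf("slide_patch.png"))  # placeholder path to an input image
```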
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3392 | 1.0 | 225 | 0.3664 | 0.8502 |
| 0.2605 | 2.0 | 450 | 0.3323 | 0.8719 |
| 0.212 | 3.0 | 675 | 0.3215 | 0.8686 |
| 0.2229 | 4.0 | 900 | 0.3309 | 0.8652 |
| 0.106 | 5.0 | 1125 | 0.3345 | 0.8802 |
| 0.0845 | 6.0 | 1350 | 0.3616 | 0.8719 |
| 0.0626 | 7.0 | 1575 | 0.3907 | 0.8686 |
| 0.0326 | 8.0 | 1800 | 0.4483 | 0.8669 |
| 0.0372 | 9.0 | 2025 | 0.4833 | 0.8652 |
| 0.0087 | 10.0 | 2250 | 0.5521 | 0.8735 |
| 0.0217 | 11.0 | 2475 | 0.5679 | 0.8752 |
| 0.0111 | 12.0 | 2700 | 0.6269 | 0.8702 |
| 0.011 | 13.0 | 2925 | 0.6480 | 0.8702 |
| 0.0061 | 14.0 | 3150 | 0.6728 | 0.8686 |
| 0.0004 | 15.0 | 3375 | 0.7336 | 0.8669 |
| 0.0093 | 16.0 | 3600 | 0.7662 | 0.8702 |
| 0.0044 | 17.0 | 3825 | 0.7704 | 0.8752 |
| 0.0001 | 18.0 | 4050 | 0.7907 | 0.8735 |
| 0.0005 | 19.0 | 4275 | 0.7929 | 0.8669 |
| 0.0001 | 20.0 | 4500 | 0.8179 | 0.8669 |
| 0.0001 | 21.0 | 4725 | 0.8135 | 0.8785 |
| 0.0001 | 22.0 | 4950 | 0.8581 | 0.8702 |
| 0.0037 | 23.0 | 5175 | 0.8366 | 0.8719 |
| 0.0001 | 24.0 | 5400 | 0.8672 | 0.8686 |
| 0.0168 | 25.0 | 5625 | 0.8621 | 0.8686 |
| 0.0001 | 26.0 | 5850 | 0.8633 | 0.8702 |
| 0.0 | 27.0 | 6075 | 0.8679 | 0.8669 |
| 0.0001 | 28.0 | 6300 | 0.8863 | 0.8735 |
| 0.0001 | 29.0 | 6525 | 0.8794 | 0.8686 |
| 0.0145 | 30.0 | 6750 | 0.8923 | 0.8686 |
| 0.0 | 31.0 | 6975 | 0.8806 | 0.8719 |
| 0.0 | 32.0 | 7200 | 0.8844 | 0.8686 |
| 0.0001 | 33.0 | 7425 | 0.8917 | 0.8669 |
| 0.0 | 34.0 | 7650 | 0.8891 | 0.8719 |
| 0.0 | 35.0 | 7875 | 0.8984 | 0.8735 |
| 0.0077 | 36.0 | 8100 | 0.8879 | 0.8752 |
| 0.0 | 37.0 | 8325 | 0.9058 | 0.8702 |
| 0.0 | 38.0 | 8550 | 0.9002 | 0.8686 |
| 0.0096 | 39.0 | 8775 | 0.9018 | 0.8752 |
| 0.0 | 40.0 | 9000 | 0.9051 | 0.8752 |
| 0.0 | 41.0 | 9225 | 0.9023 | 0.8702 |
| 0.0 | 42.0 | 9450 | 0.9103 | 0.8752 |
| 0.0 | 43.0 | 9675 | 0.9151 | 0.8735 |
| 0.0 | 44.0 | 9900 | 0.9097 | 0.8735 |
| 0.0 | 45.0 | 10125 | 0.9063 | 0.8702 |
| 0.0 | 46.0 | 10350 | 0.9129 | 0.8735 |
| 0.0 | 47.0 | 10575 | 0.9170 | 0.8735 |
| 0.0 | 48.0 | 10800 | 0.9138 | 0.8735 |
| 0.0048 | 49.0 | 11025 | 0.9128 | 0.8735 |
| 0.0048 | 50.0 | 11250 | 0.9125 | 0.8735 |
### Framework versions
- Transformers 4.32.1
- Pytorch 2.1.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
RayneAmes/parasect_v1 | RayneAmes | "2025-02-13T15:02:29Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-02-13T14:59:46Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF | mradermacher | "2025-01-04T08:49:20Z" | 24 | 0 | transformers | [
"transformers",
"gguf",
"Safetensors",
"text-generation-inference",
"merge",
"en",
"base_model:MaziyarPanahi/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model",
"base_model:quantized:MaziyarPanahi/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-04T08:41:12Z" | ---
base_model: MaziyarPanahi/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model
language:
- en
library_name: transformers
license: apache-2.0
model_creator: MaziyarPanahi
model_name: NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model
quantized_by: mradermacher
tags:
- Safetensors
- text-generation-inference
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/MaziyarPanahi/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model-GGUF/resolve/main/NeuralsirkrishnaShadow_Ognoexperiment27Multi_verse_model.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
kodonho/llama2-chat-koalpaca | kodonho | "2024-01-12T01:54:43Z" | 2,258 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:beomi/KoAlpaca-v1.1a",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-06T11:00:56Z" | ---
license: llama2
datasets:
- beomi/KoAlpaca-v1.1a
language:
- ko
---
# Llama2-based model with KoAlpaca dataset
This is an English and Korean model based on
* [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) |
nickjain/mistral_b_finance_finetuned_test | nickjain | "2023-11-28T19:19:02Z" | 12 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | "2023-11-28T19:18:51Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
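For reference, the listed values correspond roughly to the following `transformers` quantization config; this is inferred from the fields above, not taken from the original training script.

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the logged fields: 4-bit NF4 with double quantization and bfloat16 compute.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```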
### Framework versions
- PEFT 0.6.3.dev0
|
Zoyd/01-ai_Yi-1.5-9B-Chat-16K-2_5bpw_exl2 | Zoyd | "2024-05-20T08:10:52Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:2403.04652",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"exl2",
"region:us"
] | text-generation | "2024-05-20T06:35:56Z" | ---
license: apache-2.0
---
**Exllamav2** quant (**exl2** / **2.5 bpw**) made with ExLlamaV2 v0.0.21
Other EXL2 quants:
| **Quant** | **Model Size** | **lm_head** |
| ----- | ---------- | ------- |
|<center>**[2.2](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-2_2bpw_exl2)**</center> | <center>2900 MB</center> | <center>6</center> |
|<center>**[2.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-2_5bpw_exl2)**</center> | <center>3171 MB</center> | <center>6</center> |
|<center>**[3.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-3_0bpw_exl2)**</center> | <center>3669 MB</center> | <center>6</center> |
|<center>**[3.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-3_5bpw_exl2)**</center> | <center>4162 MB</center> | <center>6</center> |
|<center>**[3.75](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-3_75bpw_exl2)**</center> | <center>4411 MB</center> | <center>6</center> |
|<center>**[4.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-4_0bpw_exl2)**</center> | <center>4657 MB</center> | <center>6</center> |
|<center>**[4.25](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-4_25bpw_exl2)**</center> | <center>4906 MB</center> | <center>6</center> |
|<center>**[5.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-5_0bpw_exl2)**</center> | <center>5648 MB</center> | <center>6</center> |
|<center>**[6.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-6_0bpw_exl2)**</center> | <center>6687 MB</center> | <center>8</center> |
|<center>**[6.5](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-6_5bpw_exl2)**</center> | <center>7178 MB</center> | <center>8</center> |
|<center>**[8.0](https://huggingface.co/Zoyd/01-ai_Yi-1.5-9B-Chat-16K-8_0bpw_exl2)**</center> | <center>8328 MB</center> | <center>8</center> |
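A minimal loading sketch using the `exllamav2` Python API is shown below. It follows the upstream examples; class names and signatures can differ between versions, so treat the details as assumptions rather than instructions from this card.

```python
# Hedged sketch based on exllamav2's bundled examples; path and prompt are placeholders.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config("path/to/01-ai_Yi-1.5-9B-Chat-16K-2_5bpw_exl2")  # local download dir
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
print(generator.generate_simple("Hello, Yi!", ExLlamaV2Sampler.Settings(), 128))
```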
<div align="center">
<picture>
<img src="https://raw.githubusercontent.com/01-ai/Yi/main/assets/img/Yi_logo_icon_light.svg" width="150px">
</picture>
</div>
<p align="center">
<a href="https://github.com/01-ai">🐙 GitHub</a> •
<a href="https://discord.gg/hYUwWddeAu">👾 Discord</a> •
<a href="https://twitter.com/01ai_yi">🐤 Twitter</a> •
<a href="https://github.com/01-ai/Yi-1.5/issues/2">💬 WeChat</a>
<br/>
<a href="https://arxiv.org/abs/2403.04652">📝 Paper</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#faq">🙌 FAQ</a> •
<a href="https://github.com/01-ai/Yi/tree/main?tab=readme-ov-file#learning-hub">📗 Learning Hub</a>
</p>
# Intro
Yi-1.5 is an upgraded version of Yi. It is continuously pre-trained on Yi with a high-quality corpus of 500B tokens and fine-tuned on 3M diverse fine-tuning samples.
Compared with Yi, Yi-1.5 delivers stronger performance in coding, math, reasoning, and instruction-following capability, while still maintaining excellent capabilities in language understanding, commonsense reasoning, and reading comprehension.
<div align="center">
| Model | Context Length | Pre-trained Tokens |
| :------------: | :------------: | :------------: |
| Yi-1.5 | 4K, 16K, 32K | 3.6T |
</div>
# Models
- Chat models
<div align="center">
| Name | Download |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-Chat-16K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B-Chat | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
- Base models
<div align="center">
| Name | Download |
| ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Yi-1.5-34B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-34B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-9B-32K | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
| Yi-1.5-6B | • [🤗 Hugging Face](https://huggingface.co/collections/01-ai/yi-15-2024-05-663f3ecab5f815a3eaca7ca8) • [🤖 ModelScope](https://www.modelscope.cn/organization/01ai) |
</div>
# Benchmarks
- Chat models
Yi-1.5-34B-Chat is on par with or outperforms larger models in most benchmarks.

Yi-1.5-9B-Chat is the top performer among similarly sized open-source models.

- Base models
Yi-1.5-34B is on par with or outperforms larger models in some benchmarks.

Yi-1.5-9B is the top performer among similarly sized open-source models.

# Quick Start
For getting up and running with Yi-1.5 models quickly, see [README](https://github.com/01-ai/Yi-1.5).
|
isspek/xlnet-base-cased_ebola_gpt4o_5_2e-5_16_undersampling_0.6 | isspek | "2024-11-23T10:45:09Z" | 118 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-23T10:44:55Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
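Pending details from the authors, the snippet below is a hedged starting point; it assumes the checkpoint exposes a standard sequence-classification head, which matches the repo's `text-classification` tag.

```python
from transformers import pipeline

# Assumption: standard fine-tuned text-classification head; labels come from the checkpoint.
clf = pipeline(
    "text-classification",
    model="isspek/xlnet-base-cased_ebola_gpt4o_5_2e-5_16_undersampling_0.6",
)
print(clf("Ebola spreads through direct contact with body fluids."))
```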
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso11/2a94e0c8-4b71-41c0-b1cc-7193d6f23baf | lesso11 | "2025-02-18T20:33:56Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:Korabbit/llama-2-ko-7b",
"base_model:adapter:Korabbit/llama-2-ko-7b",
"region:us"
] | null | "2025-02-18T20:05:17Z" | ---
library_name: peft
base_model: Korabbit/llama-2-ko-7b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2a94e0c8-4b71-41c0-b1cc-7193d6f23baf
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<br>
# 2a94e0c8-4b71-41c0-b1cc-7193d6f23baf
This model is a fine-tuned version of [Korabbit/llama-2-ko-7b](https://huggingface.co/Korabbit/llama-2-ko-7b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000211
- train_batch_size: 4
- eval_batch_size: 4
- seed: 110
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0005 | 1 | 1.1166 |
| 0.5958 | 0.0258 | 50 | 0.6734 |
| 0.5846 | 0.0516 | 100 | 0.6472 |
| 0.548 | 0.0774 | 150 | 0.6385 |
| 0.5072 | 0.1033 | 200 | 0.6187 |
| 0.531 | 0.1291 | 250 | 0.6029 |
| 0.4777 | 0.1549 | 300 | 0.5904 |
| 0.5171 | 0.1807 | 350 | 0.5731 |
| 0.534 | 0.2065 | 400 | 0.5636 |
| 0.5064 | 0.2323 | 450 | 0.5604 |
| 0.4888 | 0.2581 | 500 | 0.5600 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
mradermacher/Llama-3.1-SuperNova-Lite-GGUF | mradermacher | "2024-09-13T08:43:08Z" | 37 | 1 | transformers | [
"transformers",
"gguf",
"en",
"dataset:arcee-ai/EvolKit-20k",
"base_model:arcee-ai/Llama-3.1-SuperNova-Lite",
"base_model:quantized:arcee-ai/Llama-3.1-SuperNova-Lite",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-09-12T17:53:35Z" | ---
base_model: arcee-ai/Llama-3.1-SuperNova-Lite
datasets:
- arcee-ai/EvolKit-20k
language:
- en
library_name: transformers
license: llama3
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
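As a quick, hedged example (not part of the upstream README), one of the files from the table below can be run directly with a recent llama.cpp build:

```bash
# Sketch: the file name comes from the quant table below; flags follow llama.cpp's standard CLI.
llama-cli --hf-repo mradermacher/Llama-3.1-SuperNova-Lite-GGUF \
  --hf-file Llama-3.1-SuperNova-Lite.Q4_K_M.gguf \
  -p "Explain GGUF quantization in one paragraph."
```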
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.IQ3_XS.gguf) | IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.IQ3_S.gguf) | IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.IQ3_M.gguf) | IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-SuperNova-Lite-GGUF/resolve/main/Llama-3.1-SuperNova-Lite.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
NemesisAlm/q-FrozenLake-v1-4x4-noSlippery | NemesisAlm | "2023-07-16T20:04:44Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-07-16T20:04:41Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gymnasium as gym  # the course notebooks evaluate with Gymnasium environments

# `load_from_hub` is the helper defined in the Hugging Face Deep RL course notebook;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="NemesisAlm/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
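Continuing from the snippet above, a short evaluation sketch is given below; it assumes the pickled dict stores the table under a `qtable` key, as in the Deep RL course notebooks, so adjust if your copy differs.

```python
import numpy as np

# Greedy rollout of the saved Q-table over 100 episodes.
episodes, rewards = 100, []
for _ in range(episodes):
    state, _ = env.reset()
    done, total = False, 0.0
    while not done:
        action = int(np.argmax(model["qtable"][state]))  # pick the best known action
        state, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    rewards.append(total)
print(f"Mean reward over {episodes} episodes: {np.mean(rewards):.2f}")
```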
|
tdc/scGPT | tdc | "2025-01-11T08:22:24Z" | 504 | 2 | transformers | [
"transformers",
"safetensors",
"scgpt",
"single-cell",
"biology",
"base_model:MohamedMabrouk/scGPT",
"base_model:finetune:MohamedMabrouk/scGPT",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2024-07-21T04:58:11Z" | ---
license: mit
tags:
- single-cell
- biology
base_model:
- MohamedMabrouk/scGPT
---
# scGPT
scGPT is a foundation model for single-cell biology based on a generative pretrained transformer trained across a repository of over 33 million cells.
# Abstract
Generative pretrained models have achieved remarkable success in various domains such as language and computer vision. Specifically, the combination of large-scale diverse datasets and pretrained transformers has emerged as a promising approach for developing foundation models. Drawing parallels between language and cellular biology (in which texts comprise words; similarly, cells are defined by genes), our study probes the applicability of foundation models to advance cellular biology and genetic research. Using burgeoning single-cell sequencing data, we have constructed a foundation model for single-cell biology, scGPT, based on a generative pretrained transformer across a repository of over 33 million cells. Our findings illustrate that scGPT effectively distills critical biological insights concerning genes and cells. Through further adaptation of transfer learning, scGPT can be optimized to achieve superior performance across diverse downstream applications. This includes tasks such as cell type annotation, multi-batch integration, multi-omic integration, perturbation response prediction and gene network inference.
# Code
```python
from tdc.multi_pred.anndata_dataset import DataLoader
from tdc import tdc_hf_interface
from tdc.model_server.tokenizers.scgpt import scGPTTokenizer
import torch
# an example dataset
adata = DataLoader("cellxgene_sample_small",
"./data",
dataset_names=["cellxgene_sample_small"],
no_convert=True).adata
# code for loading the model and performing inference
scgpt = tdc_hf_interface("scGPT")
model = scgpt.load()  # This line can cause a segmentation fault on an incompatible setup
tokenizer = scGPTTokenizer()
gene_ids = adata.var["feature_name"].to_numpy(
) # Convert to numpy array
tokenized_data = tokenizer.tokenize_cell_vectors(
adata.X.toarray(), gene_ids)
mask = torch.tensor([x != 0 for x in tokenized_data[0][1]],
dtype=torch.bool)
# Extract first embedding
first_embed = model(tokenized_data[0][0],
tokenized_data[0][1],
attention_mask=mask)
```
# TDC.scGPT Source Code
https://github.com/mims-harvard/TDC/blob/main/tdc/model_server/models/scgpt.py
* hf migration code available upon request
* weights extracted from base model
# TDC Citation
```
@inproceedings{
velez-arce2024signals,
title={Signals in the Cells: Multimodal and Contextualized Machine Learning Foundations for Therapeutics},
author={Alejandro Velez-Arce and Xiang Lin and Kexin Huang and Michelle M Li and Wenhao Gao and Bradley Pentelute and Tianfan Fu and Manolis Kellis and Marinka Zitnik},
booktitle={NeurIPS 2024 Workshop on AI for New Drug Modalities},
year={2024},
url={https://openreview.net/forum?id=kL8dlYp6IM}
}
```
# Additional Citations
- Cui, H., Wang, C., Maan, H. et al. scGPT: toward building a foundation model for single-cell multi-omics using generative AI. Nat Methods 21, 1470–1480 (2024). https://doi.org/10.1038/s41592-024-02201-0
# Model Github
https://github.com/bowang-lab/scGPT |
andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola | andreiliphdpr | "2022-01-11T13:22:43Z" | 10 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# andreiliphdpr/bert-base-multilingual-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0423
- Train Accuracy: 0.9869
- Validation Loss: 0.0303
- Validation Accuracy: 0.9913
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 43750, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0423 | 0.9869 | 0.0303 | 0.9913 | 0 |
### Framework versions
- Transformers 4.15.0.dev0
- TensorFlow 2.6.2
- Datasets 1.15.1
- Tokenizers 0.10.3
|
bmehrba/TinyLlama-1.1B-Chat-v1.0-fine-tuned-adapters_Aleatoric_tiny_0.0_Seed103 | bmehrba | "2024-04-23T10:43:21Z" | 0 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | "2024-04-23T10:43:17Z" | ---
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
### Framework versions
- PEFT 0.7.0.dev0
|
Surn/DPTDepth3D | Surn | "2025-02-20T07:35:52Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-20T07:15:22Z" | ---
title: DPT Depth Estimation + 3D
emoji: ⚡
colorFrom: blue
colorTo: red
sdk: gradio
sdk_version: 5.16.1
app_file: app.py
pinned: false
short_description: Image to 3D with DPT + 3D Point Cloud
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference |
Dans-DiscountModels/Dans-PersonalityEngine-V1.2.0-24b-Q5_K_M-GGUF | Dans-DiscountModels | "2025-02-19T04:57:27Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"general-purpose",
"roleplay",
"storywriting",
"chemistry",
"biology",
"code",
"climate",
"axolotl",
"text-generation-inference",
"finetune",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"dataset:PocketDoc/Dans-MemoryCore-CoreCurriculum-Small",
"dataset:AquaV/US-Army-Survival-Sharegpt",
"dataset:AquaV/Multi-Environment-Operations-Sharegpt",
"dataset:AquaV/Resistance-Sharegpt",
"dataset:AquaV/Interrogation-Sharegpt",
"dataset:AquaV/Chemical-Biological-Safety-Applications-Sharegpt",
"dataset:AquaV/Energetic-Materials-Sharegpt",
"dataset:PocketDoc/Dans-Mathmaxx",
"dataset:PocketDoc/Dans-Mathmaxx-Numina-CoT",
"dataset:PJMixers/Math-Multiturn-1K-ShareGPT",
"dataset:PocketDoc/Dans-Benchmaxx-COT",
"dataset:PocketDoc/Dans-Codemaxx-LeetCode",
"dataset:PocketDoc/Dans-Codemaxx-CodeFeedback-Conversations",
"dataset:PocketDoc/Dans-Codemaxx-CodeFeedback-SingleTurn",
"dataset:PocketDoc/Dans-Codemaxx-Bigcode-SelfInstruct",
"dataset:PocketDoc/Dans-Taskmaxx",
"dataset:PocketDoc/Dans-Taskmaxx-DataPrepper",
"dataset:PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked",
"dataset:PocketDoc/Dans-Taskmaxx-TableGPT",
"dataset:PocketDoc/Dans-Taskmaxx-SciRIFF",
"dataset:PocketDoc/Dans-Taskmaxx-Edit",
"dataset:PocketDoc/Dans-Toolmaxx-Agent",
"dataset:PocketDoc/Dans-Toolmaxx-ShellCommands",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-Toolbench",
"dataset:PocketDoc/Dans-Toolmaxx-Functions-ToolACE",
"dataset:PocketDoc/Dans-ASCIIMaxx-Wordart",
"dataset:PocketDoc/Dans-Prosemaxx-Gutenberg",
"dataset:PocketDoc/Dans-Prosemaxx-Cowriter-3-XL",
"dataset:PocketDoc/Dans-Prosemaxx-Adventure",
"dataset:PocketDoc/Dans-Failuremaxx-Adventure-3",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2",
"dataset:PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2",
"dataset:PocketDoc/Dans-Assistantmaxx-Sharegpt",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenAssistant2",
"dataset:PocketDoc/Dans-Assistantmaxx-Opus-Merge",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset",
"dataset:PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2",
"dataset:PocketDoc/Dans-Assistantmaxx-NoRobots",
"dataset:PocketDoc/Dans-Assistantmaxx-Synthia",
"dataset:PocketDoc/Dans-Assistantmaxx-ASL",
"dataset:PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus",
"dataset:PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4",
"dataset:PocketDoc/Dans-Assistantmaxx-LongAlign",
"dataset:PocketDoc/Dans-Assistantmaxx-EvolKit",
"dataset:PocketDoc/Dans-Assistantmaxx-Camel-GPT4",
"dataset:PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct",
"dataset:PocketDoc/Dans-Assistantmaxx-Tulu3-IF",
"dataset:PocketDoc/Dans-Systemmaxx",
"dataset:PocketDoc/Dans-Logicmaxx-Skunkworks",
"dataset:PocketDoc/Dans-Logicmaxx-FI-VeriMed",
"dataset:PocketDoc/Dans-Logicmaxx-SAT-AP",
"dataset:PocketDoc/Dans-Logicmaxx-Magpie-Ultra",
"dataset:PJMixers/grimulkan_theory-of-mind-ShareGPT",
"dataset:PJMixers/grimulkan_physical-reasoning-ShareGPT",
"dataset:PocketDoc/Dans-Personamaxx",
"dataset:PocketDoc/Dans-Personamaxx-Rainy",
"dataset:PocketDoc/Dans-Personamaxx-C1",
"dataset:PocketDoc/Dans-Personamaxx-VN",
"base_model:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"base_model:quantized:PocketDoc/Dans-PersonalityEngine-V1.2.0-24b",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-02-19T04:56:09Z" | ---
license: apache-2.0
tags:
- general-purpose
- roleplay
- storywriting
- chemistry
- biology
- code
- climate
- axolotl
- text-generation-inference
- finetune
- llama-cpp
- gguf-my-repo
datasets:
- PocketDoc/Dans-MemoryCore-CoreCurriculum-Small
- AquaV/US-Army-Survival-Sharegpt
- AquaV/Multi-Environment-Operations-Sharegpt
- AquaV/Resistance-Sharegpt
- AquaV/Interrogation-Sharegpt
- AquaV/Chemical-Biological-Safety-Applications-Sharegpt
- AquaV/Energetic-Materials-Sharegpt
- PocketDoc/Dans-Mathmaxx
- PocketDoc/Dans-Mathmaxx-Numina-CoT
- PJMixers/Math-Multiturn-1K-ShareGPT
- PocketDoc/Dans-Benchmaxx-COT
- PocketDoc/Dans-Codemaxx-LeetCode
- PocketDoc/Dans-Codemaxx-CodeFeedback-Conversations
- PocketDoc/Dans-Codemaxx-CodeFeedback-SingleTurn
- PocketDoc/Dans-Codemaxx-Bigcode-SelfInstruct
- PocketDoc/Dans-Taskmaxx
- PocketDoc/Dans-Taskmaxx-DataPrepper
- PocketDoc/Dans-Taskmaxx-ConcurrentQA-Reworked
- PocketDoc/Dans-Taskmaxx-TableGPT
- PocketDoc/Dans-Taskmaxx-SciRIFF
- PocketDoc/Dans-Taskmaxx-Edit
- PocketDoc/Dans-Toolmaxx-Agent
- PocketDoc/Dans-Toolmaxx-ShellCommands
- PocketDoc/Dans-Toolmaxx-Functions-Toolbench
- PocketDoc/Dans-Toolmaxx-Functions-ToolACE
- PocketDoc/Dans-ASCIIMaxx-Wordart
- PocketDoc/Dans-Prosemaxx-Gutenberg
- PocketDoc/Dans-Prosemaxx-Cowriter-3-XL
- PocketDoc/Dans-Prosemaxx-Adventure
- PocketDoc/Dans-Failuremaxx-Adventure-3
- PocketDoc/Dans-Prosemaxx-InstructWriter-ZeroShot-2
- PocketDoc/Dans-Prosemaxx-InstructWriter-Continue-2
- PocketDoc/Dans-Assistantmaxx-Sharegpt
- PocketDoc/Dans-Assistantmaxx-OpenAssistant2
- PocketDoc/Dans-Assistantmaxx-Opus-Merge
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset
- PocketDoc/Dans-Assistantmaxx-sonnetorca-subset-2
- PocketDoc/Dans-Assistantmaxx-NoRobots
- PocketDoc/Dans-Assistantmaxx-Synthia
- PocketDoc/Dans-Assistantmaxx-ASL
- PocketDoc/Dans-Assistantmaxx-PersonaLLM-Opus
- PocketDoc/Dans-Assistantmaxx-UnnaturalInstructions-GPT4
- PocketDoc/Dans-Assistantmaxx-LongAlign
- PocketDoc/Dans-Assistantmaxx-EvolKit
- PocketDoc/Dans-Assistantmaxx-Camel-GPT4
- PocketDoc/Dans-Assistantmaxx-OpenLeecher-Instruct
- PocketDoc/Dans-Assistantmaxx-Tulu3-IF
- PocketDoc/Dans-Systemmaxx
- PocketDoc/Dans-Logicmaxx-Skunkworks
- PocketDoc/Dans-Logicmaxx-FI-VeriMed
- PocketDoc/Dans-Logicmaxx-SAT-AP
- PocketDoc/Dans-Logicmaxx-Magpie-Ultra
- PJMixers/grimulkan_theory-of-mind-ShareGPT
- PJMixers/grimulkan_physical-reasoning-ShareGPT
- PocketDoc/Dans-Personamaxx
- PocketDoc/Dans-Personamaxx-Rainy
- PocketDoc/Dans-Personamaxx-C1
- PocketDoc/Dans-Personamaxx-VN
language:
- en
base_model: PocketDoc/Dans-PersonalityEngine-V1.2.0-24b
pipeline_tag: text-generation
library_name: transformers
---
# PocketDoc/Dans-PersonalityEngine-V1.2.0-24b-Q5_K_M-GGUF
This model was converted to GGUF format from [`PocketDoc/Dans-PersonalityEngine-V1.2.0-24b`](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/PocketDoc/Dans-PersonalityEngine-V1.2.0-24b) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo PocketDoc/Dans-PersonalityEngine-V1.2.0-24b-Q5_K_M-GGUF --hf-file dans-personalityengine-v1.2.0-24b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo PocketDoc/Dans-PersonalityEngine-V1.2.0-24b-Q5_K_M-GGUF --hf-file dans-personalityengine-v1.2.0-24b-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo PocketDoc/Dans-PersonalityEngine-V1.2.0-24b-Q5_K_M-GGUF --hf-file dans-personalityengine-v1.2.0-24b-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo PocketDoc/Dans-PersonalityEngine-V1.2.0-24b-Q5_K_M-GGUF --hf-file dans-personalityengine-v1.2.0-24b-q5_k_m.gguf -c 2048
```
|
daniel40/f118673a-8ead-4ddf-accb-6df62ad99f8e | daniel40 | "2025-01-23T11:21:19Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:defog/llama-3-sqlcoder-8b",
"base_model:adapter:defog/llama-3-sqlcoder-8b",
"license:cc-by-sa-4.0",
"region:us"
] | null | "2025-01-23T11:18:26Z" | ---
library_name: peft
license: cc-by-sa-4.0
base_model: defog/llama-3-sqlcoder-8b
tags:
- axolotl
- generated_from_trainer
model-index:
- name: f118673a-8ead-4ddf-accb-6df62ad99f8e
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: defog/llama-3-sqlcoder-8b
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8f17b05284c2be0e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8f17b05284c2be0e_train_data.json
type:
field_input: text_description
field_instruction: text
field_output: transcription_normalised
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: daniel40/f118673a-8ead-4ddf-accb-6df62ad99f8e
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/8f17b05284c2be0e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 1783c1f8-3d34-4801-ade7-ef853ca2d493
wandb_project: Birthday-SN56-27-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 1783c1f8-3d34-4801-ade7-ef853ca2d493
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# f118673a-8ead-4ddf-accb-6df62ad99f8e
This model is a fine-tuned version of [defog/llama-3-sqlcoder-8b](https://huggingface.co/defog/llama-3-sqlcoder-8b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6941
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 1.4757 | 0.0004 | 1 | 2.3787 |
| 1.6561 | 0.0012 | 3 | 2.3614 |
| 2.2597 | 0.0024 | 6 | 1.9397 |
| 1.1343 | 0.0036 | 9 | 0.6941 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
ababio/icl_twi_v1 | ababio | "2024-12-06T08:49:38Z" | 149 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-06T08:45:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
araziziml/Qwen2-0.5B-GRPO-exp2 | araziziml | "2025-02-18T12:49:32Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"trl",
"grpo",
"conversational",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-02-18T12:48:47Z" | ---
base_model: Qwen/Qwen2-0.5B-Instruct
library_name: transformers
model_name: Qwen2-0.5B-GRPO-exp2
tags:
- generated_from_trainer
- trl
- grpo
licence: license
---
# Model Card for Qwen2-0.5B-GRPO-exp2
This model is a fine-tuned version of [Qwen/Qwen2-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="araziziml/Qwen2-0.5B-GRPO-exp2", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.0
- Transformers: 4.48.3
- Pytorch: 2.5.1
- Datasets: 3.3.0
- Tokenizers: 0.21.0
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
Jonjew/CyborgPortraits | Jonjew | "2025-02-09T03:04:02Z" | 8 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:unknown",
"region:us"
] | text-to-image | "2025-02-09T03:03:00Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
<lora:ck-Neurocore-Realistic-Cyborgs:1> in the style of ck-ncr, rule of
third, concept art, (wabi-sabi cyborg::0.9) (android robot face::1.2) with
white ceramic materials ,seams, (camera lenses instead of pupils:1.5)
(traditional intricate japan clothes:1.5) (style by Yuri Shwedoff),shallow
depth of field, vignette, (Fujicolor Superia X-TRA 400) style by Nirav
Patel, inside a fractal shaped Artemis Station spaceport megastructure,
perfect focus, depth of field the view from afar, window view. vast scifi
cyberpunk landscape
output:
url: images/02053-2024-11-17-Neurocore-Cyborgs-scaled-sscale-020.jpeg
- text: >-
<lora:ck-Neurocore-Realistic-Cyborgs:1> in the style of ck-ncr, rule of
third, concept art, (wabi-sabi cyborg::0.9) (android robot face::1.2) with
white ceramic materials ,seams, scuffing, (camera lenses instead of
pupils:1.5) (traditional intricate japan clothes:1.5) (style by Yuri
Shwedoff),shallow depth of field, vignette, (Fujicolor Superia X-TRA 400)
style by Nirav Patel, inside a fractal shaped Artemis Station spaceport
megastructure, perfect focus, depth of field the view from afar, window
view. vast scifi cyberpunk landscape
output:
url: images/02055-2024-11-17-Neurocore-Cyborgs-scaled-sscale-025.jpeg
- text: >-
<lora:ck-Neurocore-Realistic-Cyborgs:1> in the style of ck-ncr, eye contact
with an elegant cyborg robot, white ceramic material, full transparent glass
head with mechanical parts, Sci-fi movie (style by George Shaw), shallow
depth of field, vignette, (Fujicolor Superia X-TRA 400), looking at the
camera
output:
url: images/02102-2024-11-17-Neurocore-Cyborgs-scaled.jpeg
- text: >-
<lora:ck-Neurocore-Realistic-Cyborgs:1> in the style of ck-ncr, young woman
is an android cyborg with ceramic face parts, revealing internal circuit
boards, long, dark hair styled in two high pigtails, green, white pastel
color palette, epic movie, (Fujicolor Superia X-TRA 400), vignette
output:
url: images/02117-2024-11-17-Neurocore-Cyborgs-scaled.jpeg
- text: >-
<lora:ck-Neurocore-Realistic-Cyborgs:1> in the style of ck-ncr, split image
in two halfes, one half shows a young woman, the other half shows the woman
as an android cyborg with ceramic face parts, revealing full transparent
glass head with mechanical parts, long, dark hair styled in two high
pigtails, Sci-fi movie (style by George Shaw),shallow depth of field,
vignette, (Fujicolor Superia X-TRA 400)
output:
url: images/02124-2024-11-17-Neurocore-Cyborgs-scaled.jpeg
- text: >-
<lora:ck-Neurocore-Realistic-Cyborgs:1> in the style of ck-ncr, eye contact
with an elegant cyborg robot, white ceramic material, full transparent glass
head with mechanical parts, Sci-fi movie (style by George Shaw), shallow
depth of field, vignette, (Fujicolor Superia X-TRA 400), looking at the
camera
output:
url: images/02103-2024-11-17-Neurocore-Cyborgs-scaled.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: in the style of ck-ncr
license: unknown
---
# Neurocore Sci-Fi Cyborg Portraits by ChronoKnight - [FLUX]
<Gallery />
## Model description
From: https://civitai.com/models/957183/neurocore-sci-fi-cyborg-portraits-by-chronoknight-flux
NEUROCORE Sci-Fi Cyborg Portraits
Important info for prompting:
- Trigger is: `in the style of ck-ncr,`
- Recommended LoRA strength: 1
- The LoRA aims for photorealistic portraits with intricate details.
- Also works great together with other LoRAs.
Prompting:
All the prompts I used are included in the example images to get you going!
- Good tokens: ceramic, mechanical parts, geisha, android, cyborg
- Trained on: Flux.dev
- Images made with: Flux.dev fp8
- Sampling method: Euler
- Sampling steps: 40
- Distilled CFG Scale: 3.5 (but you can experiment as well)
- Clip skip: 1
## Trigger words
You should use `in the style of ck-ncr` to trigger the image generation.
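As a minimal generation sketch with `diffusers` (assuming the LoRA file in this repo loads directly via `load_lora_weights`; the prompt, offload setting, and output path are illustrative):
```python
import torch
from diffusers import FluxPipeline

# Load the FLUX.1-dev base pipeline, then attach this LoRA on top.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("Jonjew/CyborgPortraits")  # pass weight_name=... if needed
pipe.enable_model_cpu_offload()  # trades speed for lower VRAM use

prompt = (
    "in the style of ck-ncr, eye contact with an elegant cyborg robot, "
    "white ceramic material, mechanical parts"
)
# Steps and guidance follow the values recommended above (40 / 3.5).
image = pipe(prompt, num_inference_steps=40, guidance_scale=3.5).images[0]
image.save("cyborg_portrait.png")
```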
## Download model
Weights for this model are available in Safetensors format.
[Download](/Jonjew/CyborgPortraits/tree/main) them in the Files & versions tab.
|
52AI/generalQA_intent_slotFilling | 52AI | "2023-09-05T11:26:22Z" | 33 | 2 | transformers | [
"transformers",
"pytorch",
"bert",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | "2023-09-04T07:29:23Z" | ---
license: mit
---
Extends JointBERT to support Chinese training, providing the complete pipeline from data synthesis to joint intent and slot-filling training and testing.
A trained model, [52AI/generalQA_intent_slotFilling](https://huggingface.co/52AI/generalQA_intent_slotFilling/tree/main), is provided here for testing. Downloads from mainland China are prone to interruption; rerun the command a couple of times if needed.
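If the download keeps failing, one option is to pre-fetch the checkpoint with `huggingface_hub` (a minimal sketch; whether `predict.py` can consume the snapshot directly from this directory is an assumption):
```python
from huggingface_hub import snapshot_download

# Re-running this call skips files that have already been downloaded
# in full, so interrupted transfers can simply be retried.
snapshot_download(
    repo_id="52AI/generalQA_intent_slotFilling",
    local_dir="out/generalQA",
)
```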
```shell
$ python3 predict.py --task generalQA \
--input_file data/testcase/generalQAtest.txt \
--output_file local/generalQAtest_predict.txt \
--model_dir out/generalQA
```
-> 请 问 [你:B-TransEnZhSentence] [几:I-TransEnZhSentence] [岁:I-TransEnZhSentence] [了:I-TransEnZhSentence] 用 英 语 怎 么 说 ?
-> 翻 译 : [i:B-TransEnZhSentence] [love:I-TransEnZhSentence] [you:I-TransEnZhSentence]
-> 用 [美:B-CreateSenEntity] [好:I-CreateSenEntity] 写 一 个 句 子
-> [明:B-AntonymEntity] [天:I-AntonymEntity] 的 反 义 词
-> [后:B-SynonymEntity] [天:I-SynonymEntity] 的 同 义 词
(The bracketed tokens above show the predicted slot labels on the Chinese inputs: spans to translate, entities for sentence creation, and antonym/synonym targets.)
Test results: local/generalQAtest_predict.txt
Project repository: [JointBERT-zh](https://github.com/chenyangMl/JointBERT-zh)
|
sfulay/zephyr-7b-dpo-full-gpt-reward-scale-05 | sfulay | "2024-09-03T06:04:55Z" | 6 | 0 | null | [
"safetensors",
"mistral",
"trl",
"dpo",
"alignment-handbook",
"generated_from_trainer",
"base_model:alignment-handbook/zephyr-7b-sft-full",
"base_model:finetune:alignment-handbook/zephyr-7b-sft-full",
"license:apache-2.0",
"region:us"
] | null | "2024-09-02T21:01:50Z" | ---
license: apache-2.0
base_model: alignment-handbook/zephyr-7b-sft-full
tags:
- trl
- dpo
- alignment-handbook
- generated_from_trainer
model-index:
- name: zephyr-7b-dpo-full-gpt-reward-scale-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zephyr-7b-dpo-full-gpt-reward-scale-05
This model is a fine-tuned version of [alignment-handbook/zephyr-7b-sft-full](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5238
- Rewards/chosen: -1.1890
- Rewards/rejected: -2.1821
- Rewards/accuracies: 0.7241
- Rewards/margins: 0.9930
- Logps/rejected: -463.8542
- Logps/chosen: -402.9079
- Logits/rejected: 3.3069
- Logits/chosen: 1.9855
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `DPOConfig` sketch mapping them onto TRL follows the list):
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 55
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
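For orientation, the sketch below maps these values onto TRL's `DPOConfig`; the `output_dir` and the `bf16` flag are assumptions, as neither is stated in this card:
```python
from trl import DPOConfig

# With 8 GPUs, a per-device batch of 8 and gradient accumulation of 2
# yield the total train batch size of 128 reported above.
config = DPOConfig(
    output_dir="zephyr-7b-dpo-full-gpt-reward-scale-05",  # assumption
    learning_rate=5e-7,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    seed=55,
    bf16=True,  # assumption: the precision mode is not reported
)
```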
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6687 | 0.1147 | 50 | 0.6560 | -0.0264 | -0.1298 | 0.6724 | 0.1034 | -258.6246 | -286.6438 | -2.5075 | -2.6072 |
| 0.581 | 0.2294 | 100 | 0.5764 | -0.7311 | -1.3172 | 0.7155 | 0.5861 | -377.3666 | -357.1160 | 0.6340 | 0.0270 |
| 0.558 | 0.3440 | 150 | 0.5510 | -1.2031 | -1.9696 | 0.7241 | 0.7665 | -442.6071 | -404.3199 | 3.0036 | 2.0828 |
| 0.5346 | 0.4587 | 200 | 0.5381 | -1.1677 | -2.0355 | 0.7112 | 0.8679 | -449.2019 | -400.7711 | 2.7759 | 1.7577 |
| 0.5391 | 0.5734 | 250 | 0.5333 | -1.0858 | -1.9666 | 0.7198 | 0.8807 | -442.3041 | -392.5903 | 2.9561 | 1.8167 |
| 0.5479 | 0.6881 | 300 | 0.5265 | -1.0463 | -1.9706 | 0.7069 | 0.9243 | -442.7093 | -388.6379 | 3.2239 | 2.0026 |
| 0.5232 | 0.8028 | 350 | 0.5262 | -1.3359 | -2.3191 | 0.7241 | 0.9832 | -477.5577 | -417.5966 | 3.6066 | 2.3484 |
| 0.5267 | 0.9174 | 400 | 0.5238 | -1.1890 | -2.1821 | 0.7241 | 0.9930 | -463.8542 | -402.9079 | 3.3069 | 1.9855 |
### Framework versions
- Transformers 4.44.0.dev0
- Pytorch 2.1.2
- Datasets 2.20.0
- Tokenizers 0.19.1
|
fromthesky/pldrllmv9-2-147M | fromthesky | "2024-10-31T13:15:15Z" | 5 | 0 | keras | [
"keras",
"text-generation",
"large-language-model",
"power-law-decoder-representations",
"pldr-llm",
"tensorflow",
"en",
"dataset:tiiuae/falcon-refinedweb",
"arxiv:2410.16703",
"arxiv:2306.01116",
"arxiv:2101.00027",
"license:apache-2.0",
"region:us"
] | text-generation | "2024-10-31T13:10:41Z" | ---
language:
- en
tags:
- text-generation
- large-language-model
- power-law-decoder-representations
- pldr-llm
- tensorflow
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
---
# PLDR-LLM-v9-2-147M
## Model Description
PLDR-LLM-v9-2-147M is a large language model from power law decoder representations, which is a new language model architecture that utilizes power law graph attention to generate deductive and inductive outputs. This model has a parameter size of 147M. It refers to PLDRv9-2 whose architecture and training details are provided in Tables 1 and 2 of the research paper titled [PLDR-LLM: Large Language Model from Power Law Decoder Representations](https://arxiv.org/abs/2410.16703).
## Training data
PLDR-LLM-v9-2-147M was pretrained on the [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a publicly available English web dataset with extensive filtering and deduplication.
## Training procedure
This model was trained for ~8B tokens on RefinedWeb over 250k steps per rank. It was trained autoregressively with cross-entropy loss and without DAG regularization on the deductive outputs.
## Intended Use and Limitations
This model is intended for research purposes. Given a text prompt, it carries out next-token prediction to generate continuation text. The context length for this model is 1024 tokens.
### How to use
- The tensorflow model checkpoint and tokenizer can be loaded into the PLDR-LLM framework to generate text as described in the code repository for training this model: [LLM-from-Power-Law-Decoder-Representations](https://github.com/burcgokden/LLM-from-Power-Law-Decoder-Representations).
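To fetch the checkpoint and tokenizer files before loading them with the framework code above, something like the following works (the target directory is an assumption):
```python
from huggingface_hub import snapshot_download

# Downloads all checkpoint and tokenizer files from this repository;
# the actual model loading is done with the PLDR-LLM framework code.
local_path = snapshot_download(
    repo_id="fromthesky/pldrllmv9-2-147M",
    local_dir="pldrllmv9-2-147M",
)
print(local_path)
```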
### LM Evaluation Harness Support
- The keras model can be used with a fork of LM-Evaluation-Harness Suite with PLDR-LLM support: [lm-evaluation-harness-with-PLDR-LLM](https://github.com/burcgokden/lm-evaluation-harness-with-PLDR-LLM).
### Limitations and Biases
Large language models may generate text that is profane, lewd, socially unacceptable, or offensive depending on the contents of the dataset they were pretrained on. RefinedWeb is a dataset that is as toxic and biased as the Pile. Please see the papers for [RefinedWeb](https://arxiv.org/abs/2306.01116) and [the Pile](https://arxiv.org/pdf/2101.00027) for more information. Moreover, large language models are also susceptible to hallucinations and may generate text that contains incorrect, irrelevant, or misleading information. Since it is very hard to anticipate the contents of generated text ahead of time, the output of large language models needs to be heavily moderated and curated to prevent undesired content from appearing without warning.
## Eval results
The evaluation results on benchmarks with zero-shot and few-shot setting and their comparison to LLM models of similar size reported in the literature can be found in Tables 3 and 4 of the [PLDR-LLM paper](https://arxiv.org/abs/2410.16703).
### BibTeX entry and citation info
Please cite this model as:
```bibtex
@misc{gokden2024pldrllm,
title={PLDR-LLM: Large Language Model from Power Law Decoder Representations},
author={Burc Gokden},
year={2024},
eprint={2410.16703},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.16703},
}
```
|
Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001-Q6_K-GGUF | Casual-Autopsy | "2025-01-26T23:10:52Z" | 54 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Casual-Autopsy/Llama-3-VNTL-Yollow-8B-Fixed",
"base_model:quantized:Casual-Autopsy/Llama-3-VNTL-Yollow-8B-Fixed",
"endpoints_compatible",
"region:us"
] | null | "2025-01-26T23:10:17Z" | ---
base_model: Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001-Q6_K-GGUF
This model was converted to GGUF format from [`Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001`](https://huggingface.co/Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001-Q6_K-GGUF --hf-file llama-3-vntl-yollow-8b-v2-test001-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001-Q6_K-GGUF --hf-file llama-3-vntl-yollow-8b-v2-test001-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001-Q6_K-GGUF --hf-file llama-3-vntl-yollow-8b-v2-test001-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Casual-Autopsy/Llama-3-VNTL-Yollow-8B-v2-TEST001-Q6_K-GGUF --hf-file llama-3-vntl-yollow-8b-v2-test001-q6_k.gguf -c 2048
```
|
thrunlab/t5-base_cola_mare_ar9_ex29_size-16_epochs-3_decoder_all_sparsity20_mare_mlp | thrunlab | "2023-10-26T20:02:44Z" | 47 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-26T19:55:53Z" | ---
license: apache-2.0
base_model: t5-base
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: t5-base_cola_mare_ar9_ex29_size-16_epochs-3_decoder_all_sparsity20_mare_mlp
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Accuracy
type: accuracy
value: 0.8341323106423778
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base_cola_mare_ar9_ex29_size-16_epochs-3_decoder_all_sparsity20_mare_mlp
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6188
- Accuracy: 0.8341
## Model description
More information needed
## Intended uses & limitations
More information needed
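As a minimal inference sketch (this assumes the checkpoint exposes a standard sequence-classification head that `transformers` can load without the custom training code; if it does not, use the original training repository instead):
```python
from transformers import pipeline

# CoLA is a binary acceptability task, so the classifier returns
# one of two labels per input sentence.
clf = pipeline(
    "text-classification",
    model="thrunlab/t5-base_cola_mare_ar9_ex29_size-16_epochs-3_decoder_all_sparsity20_mare_mlp",
)
print(clf("The book was written by the author."))
```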
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 32
- seed: 1
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5148 | 0.19 | 50 | 0.8585 | 0.8188 |
| 0.4482 | 0.37 | 100 | 0.6410 | 0.8255 |
| 0.4572 | 0.56 | 150 | 0.6223 | 0.8284 |
| 0.4239 | 0.75 | 200 | 0.6037 | 0.8217 |
| 0.4716 | 0.93 | 250 | 0.5145 | 0.8313 |
| 0.3453 | 1.12 | 300 | 0.6653 | 0.8207 |
| 0.3124 | 1.31 | 350 | 0.5926 | 0.8341 |
| 0.3832 | 1.5 | 400 | 0.5726 | 0.8265 |
| 0.3035 | 1.68 | 450 | 0.5937 | 0.8313 |
| 0.3068 | 1.87 | 500 | 0.5681 | 0.8274 |
| 0.2659 | 2.06 | 550 | 0.6007 | 0.8265 |
| 0.3598 | 2.24 | 600 | 0.5988 | 0.8351 |
| 0.3051 | 2.43 | 650 | 0.5925 | 0.8360 |
| 0.309 | 2.62 | 700 | 0.6517 | 0.8332 |
| 0.209 | 2.8 | 750 | 0.6257 | 0.8332 |
| 0.3505 | 2.99 | 800 | 0.6252 | 0.8341 |
### Framework versions
- Transformers 4.34.1
- Pytorch 2.0.1+cu117
- Datasets 2.9.0
- Tokenizers 0.14.1
|
kyynaama/Ahma-3B_checkpoint_3140-exl2-6bpw | kyynaama | "2024-07-01T23:40:05Z" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-01T23:29:06Z" | ---
library_name: transformers
tags: []
---
This is the 6bpw exllamav2 quant of Finnish-NLP/Ahma-3B_hf_2024_06_20_08_52_28_checkpoint-3140.
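A minimal generation sketch with the `exllamav2` Python API is shown below (API details can shift between exllamav2 releases, and the local path, prompt, and sampling settings are assumptions):
```python
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

# Point the config at a local copy of this repository (path is an assumption).
config = ExLlamaV2Config()
config.model_dir = "Ahma-3B_checkpoint_3140-exl2-6bpw"
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)  # split the weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.7  # illustrative sampling choice

# Finnish prompt, since this is a Finnish-language model.
print(generator.generate_simple("Kerro lyhyesti Suomesta.", settings, 200))
```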
Original model card:
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is the first release candidate for the Ahma-3B-instruct/chat model.
These are preliminary scores; official scores are coming later. \
<b>MT Bench: </b> \
roleplay, score 5.6 \
extraction, score 2.1 \
reasoning, score 4.1 \
math, score 4.1 \
writing, score 6.8 \
stem, score 4.4 \
humanities, score 4.9 \
mt_bench avg, score 4.571428571428571
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nttx/267c2fc1-b7d2-458c-947f-cf88119f6674 | nttx | "2025-02-16T01:25:21Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"base_model:adapter:VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
"license:llama3.1",
"region:us"
] | null | "2025-02-16T00:22:43Z" | ---
library_name: peft
license: llama3.1
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 267c2fc1-b7d2-458c-947f-cf88119f6674
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct
bf16: auto
chat_template: llama3
dataloader_num_workers: 12
dataset_prepared_path: null
datasets:
- data_files:
- df7dff371940d759_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/df7dff371940d759_train_data.json
type:
field_input: body
field_instruction: title
field_output: question_content
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 3
eval_batch_size: 8
eval_max_new_tokens: 128
eval_steps: 150
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: true
group_by_length: true
hub_model_id: nttx/267c2fc1-b7d2-458c-947f-cf88119f6674
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 3e-5
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 128
lora_dropout: 0.1
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 1500
micro_batch_size: 8
mlflow_experiment_name: /tmp/df7dff371940d759_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 15
optim_args:
adam_beta1: 0.9
adam_beta2: 0.999
adam_epsilon: 1e-8
optimizer: adamw_torch
output_dir: miner_id_24
pad_to_sequence_len: false
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 150
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|eot_id|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0fb8f458-8a94-49e7-b9b8-9561b1d14570
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0fb8f458-8a94-49e7-b9b8-9561b1d14570
warmup_steps: 50
weight_decay: 0.1
xformers_attention: null
```
</details><br>
# 267c2fc1-b7d2-458c-947f-cf88119f6674
This model is a fine-tuned version of [VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct](https://huggingface.co/VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0027
## Model description
More information needed
## Intended uses & limitations
More information needed
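Until the card is completed, a minimal loading sketch is shown below (standard PEFT adapter usage; the dtype and device settings are assumptions):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach this LoRA adapter on top of it.
base = AutoModelForCausalLM.from_pretrained(
    "VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "nttx/267c2fc1-b7d2-458c-947f-cf88119f6674")
tokenizer = AutoTokenizer.from_pretrained(
    "VAGOsolutions/Llama-3.1-SauerkrautLM-8b-Instruct"
)
```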
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.999,adam_epsilon=1e-8
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- training_steps: 1500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0009 | 1 | 0.4642 |
| 0.0174 | 0.1318 | 150 | 0.0118 |
| 0.0066 | 0.2636 | 300 | 0.0063 |
| 0.004 | 0.3954 | 450 | 0.0045 |
| 0.0032 | 0.5272 | 600 | 0.0050 |
| 0.0032 | 0.6591 | 750 | 0.0035 |
| 0.0015 | 0.7909 | 900 | 0.0030 |
| 0.0017 | 0.9227 | 1050 | 0.0030 |
| 0.0012 | 1.0545 | 1200 | 0.0028 |
| 0.0019 | 1.1863 | 1350 | 0.0028 |
| 0.001 | 1.3181 | 1500 | 0.0027 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |