| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA | prithivMLmods | "2024-10-30T05:46:44Z" | 89 | 10 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"Dramatic Neon",
"Flux.1-Dev",
"Painting",
"Art",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | text-to-image | "2024-10-30T05:10:19Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- Dramatic Neon
- Flux.1-Dev
- Painting
- Art
widget:
- text: >-
Dramatic Neon, An animated image of a womans face, painted in a vibrant blue
color. Her eyes are glowing red, and her mouth is adorned with white teeth.
Her eyebrows are black, and she has long, wavy hair. Her lips are painted a
vibrant shade of pink, and there are droplets of blood dripping from the
right side of her face. The background is a dark blue, and the womans hair
is a mix of black and red.
output:
url: images/DN1.webp
- text: >-
Dramatic Neon, An animated painting of a woman with long dark hair. The
womans face is painted in shades of purple, yellow, and blue. Her eyes are
painted yellow, while her lips are painted black. Her hair is pulled back in
a ponytail, and she is wearing a purple dress with a black belt around her
neck. The background is a deep blue, and there are wavy lines in the
background. There is a crescent moon in the top right corner of the
painting.
output:
url: images/DN2.webp
- text: >-
Dramatic Neon, portrait of a woman with curly silver hair cascading over her
shoulders. Her face is painted in bold shades of turquoise, fuchsia, and
orange, casting an ethereal glow. Her eyes are a striking emerald green, and
her lips are painted a dark, shimmering violet. She wears a sleek,
off-the-shoulder magenta top with intricate black lace details. The
background is a deep emerald with swirling, mist-like patterns in pastel
pinks and purples. A glowing full moon appears in the top left corner,
surrounded by soft, glowing stars
output:
url: images/DN3.webp
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Dramatic Neon
license: apache-2.0
---
# Castor-Dramatic-Neon-Flux-LoRA
- Demo: https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC
<Gallery />
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA**
Training Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 25 & 2.5K |
| Epoch | 15 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total images used for training: 20+ [Hi-RES]
## Data Source
https://playground.com/
## Trigger words❤️🔥
You should use `Dramatic Neon` to trigger the image generation.
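The card does not include a loading snippet; a minimal diffusers sketch (assuming standard FLUX LoRA loading — the exact adapter filename may differ) could look like this:
```python
import torch
from diffusers import FluxPipeline

# Load the base model, then attach this LoRA adapter on top of it.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA")
pipe.to("cuda")

# The trigger phrase "Dramatic Neon" should appear in the prompt.
image = pipe(
    "Dramatic Neon, portrait of a woman with glowing red eyes",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("dramatic_neon.png")
```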
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Castor-Dramatic-Neon-Flux-LoRA/tree/main) them in the Files & versions tab. |
itlwas/TinyDolphin-2.8-1.1b-Q4_K_M-GGUF | itlwas | "2024-12-30T15:26:35Z" | 12 | 0 | null | [
"gguf",
"llama-cpp",
"gguf-my-repo",
"en",
"dataset:cerebras/SlimPajama-627B",
"dataset:bigcode/starcoderdata",
"dataset:teknium/openhermes",
"base_model:cognitivecomputations/TinyDolphin-2.8-1.1b",
"base_model:quantized:cognitivecomputations/TinyDolphin-2.8-1.1b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-30T15:26:31Z" | ---
license: apache-2.0
datasets:
- cerebras/SlimPajama-627B
- bigcode/starcoderdata
- teknium/openhermes
language:
- en
tags:
- llama-cpp
- gguf-my-repo
base_model: cognitivecomputations/TinyDolphin-2.8-1.1b
---
# itlwas/TinyDolphin-2.8-1.1b-Q4_K_M-GGUF
This model was converted to GGUF format from [`cognitivecomputations/TinyDolphin-2.8-1.1b`](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8-1.1b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cognitivecomputations/TinyDolphin-2.8-1.1b) for more details on the model.
## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo itlwas/TinyDolphin-2.8-1.1b-Q4_K_M-GGUF --hf-file tinydolphin-2.8-1.1b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo itlwas/TinyDolphin-2.8-1.1b-Q4_K_M-GGUF --hf-file tinydolphin-2.8-1.1b-q4_k_m.gguf -c 2048
```
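Once the server is up, it can be queried over HTTP. A minimal sketch, assuming llama-server's defaults (listening on `http://localhost:8080` with an OpenAI-compatible chat endpoint):
```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is the meaning of life?"}],
    "max_tokens": 128
  }'
```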
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo itlwas/TinyDolphin-2.8-1.1b-Q4_K_M-GGUF --hf-file tinydolphin-2.8-1.1b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo itlwas/TinyDolphin-2.8-1.1b-Q4_K_M-GGUF --hf-file tinydolphin-2.8-1.1b-q4_k_m.gguf -c 2048
```
|
mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF | mradermacher | "2024-11-23T00:20:49Z" | 83 | 2 | transformers | [
"transformers",
"gguf",
"en",
"base_model:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2",
"base_model:quantized:ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2",
"license:llama3.1",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-11-22T21:37:26Z" | ---
base_model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
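As a concrete starting point, the recommended Q4_K_M quant from the table below can be fetched and run directly — a sketch assuming a recent llama.cpp build with `--hf-repo` support:
```bash
llama-cli --hf-repo mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF \
  --hf-file Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q4_K_M.gguf \
  -p "Write a short scene introduction:"
```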
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q2_K.gguf) | i1-Q2_K | 3.3 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ3_S.gguf) | i1-IQ3_S | 3.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ3_M.gguf) | i1-IQ3_M | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.4 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 4.8 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 4.8 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 4.8 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q4_0.gguf) | i1-Q4_0 | 4.8 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.8 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q4_K_M.gguf) | i1-Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/Llama-3.1-8B-ArliAI-RPMax-v1.2-i1-GGUF/resolve/main/Llama-3.1-8B-ArliAI-RPMax-v1.2.i1-Q6_K.gguf) | i1-Q6_K | 6.7 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
berny-bit/my_awesome_model | berny-bit | "2023-12-31T01:57:10Z" | 1 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-12-30T23:53:57Z" | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: berny-bit/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# berny-bit/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0644
- Validation Loss: 0.2424
- Train Accuracy: 0.9265
- Epoch: 2
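The card provides no usage example; a minimal sketch for trying this TensorFlow checkpoint with the Transformers pipeline (assuming a standard sequence-classification head with default labels) might be:
```python
from transformers import pipeline

# framework="tf" selects the TensorFlow weights this repo ships.
classifier = pipeline(
    "text-classification",
    model="berny-bit/my_awesome_model",
    framework="tf",
)
print(classifier("This movie was surprisingly good!"))
```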
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2541 | 0.2163 | 0.9114 | 0 |
| 0.1353 | 0.1924 | 0.9294 | 1 |
| 0.0644 | 0.2424 | 0.9265 | 2 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.0
|
mpasila/gpt-13b-nordic-prerelease-exl2 | mpasila | "2024-03-30T05:18:42Z" | 1 | 0 | null | [
"fi",
"nn",
"en",
"no",
"da",
"sv",
"is",
"license:apache-2.0",
"region:us"
] | null | "2024-03-13T16:02:49Z" | ---
license: apache-2.0
language:
- fi
- nn
- en
- 'no'
- da
- sv
- is
---
This is an ExLlamaV2 quantized model of [HPLT/gpt-13b-nordic-prerelease](https://huggingface.co/HPLT/gpt-13b-nordic-prerelease) using the default calibration dataset.
The quants are uploaded on individual branches; the list (not yet complete) is:
[3bpw](https://huggingface.co/mpasila/gpt-13b-nordic-prerelease-exl2/tree/3bpw)
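Individual quant branches can be fetched with the Hugging Face CLI — a sketch assuming `huggingface_hub` is installed:
```bash
huggingface-cli download mpasila/gpt-13b-nordic-prerelease-exl2 \
  --revision 3bpw --local-dir gpt-13b-nordic-prerelease-exl2-3bpw
```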
# Original Model card:
This is a pre-release checkpoint for a Nordic generative language model currently in training.
This preliminary release is provided for HPLT (https://hplt-project.org/) deliverable 4.1 (“First language models trained”) (https://hplt-project.org/deliverables). Consult the HPLT website for further details.
More documentation will be provided soon. |
mlabonne/NeuralDarewin-7B | mlabonne | "2024-03-04T15:15:19Z" | 243 | 4 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-23T20:41:06Z" | ---
license: apache-2.0
model-index:
- name: NeuralDarewin-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 70.14
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 86.4
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.85
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 62.92
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.72
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/NeuralDarewin-7B
name: Open LLM Leaderboard
---
Darewin-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Intel/neural-chat-7b-v3-3](https://huggingface.co/Intel/neural-chat-7b-v3-3)
* [openaccess-ai-collective/DPOpenHermes-7B-v2](https://huggingface.co/openaccess-ai-collective/DPOpenHermes-7B-v2)
* [fblgit/una-cybertron-7b-v2-bf16](https://huggingface.co/fblgit/una-cybertron-7b-v2-bf16)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
* [OpenPipe/mistral-ft-optimized-1227](https://huggingface.co/OpenPipe/mistral-ft-optimized-1227)
* [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/NeuralHermes-2.5-Mistral-7B)
## 🧩 Configuration
```yaml
models:
- model: mistralai/Mistral-7B-v0.1
# No parameters necessary for base model
- model: Intel/neural-chat-7b-v3-3
parameters:
density: 0.6
weight: 0.2
- model: openaccess-ai-collective/DPOpenHermes-7B-v2
parameters:
density: 0.6
weight: 0.1
- model: fblgit/una-cybertron-7b-v2-bf16
parameters:
density: 0.6
weight: 0.2
- model: openchat/openchat-3.5-0106
parameters:
density: 0.6
weight: 0.15
- model: OpenPipe/mistral-ft-optimized-1227
parameters:
density: 0.6
weight: 0.25
- model: mlabonne/NeuralHermes-2.5-Mistral-7B
parameters:
density: 0.6
weight: 0.1
merge_method: dare_ties
base_model: mistralai/Mistral-7B-v0.1
parameters:
int8_mask: true
dtype: bfloat16
```
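To reproduce the merge, a configuration like the one above is passed to mergekit's CLI — a minimal sketch, assuming mergekit is installed and the YAML is saved as `config.yaml` (a hypothetical filename):
```bash
pip install mergekit
mergekit-yaml config.yaml ./NeuralDarewin-7B --cuda
```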
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "mlabonne/NeuralDarewin-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_mlabonne__NeuralDarewin-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.79|
|AI2 Reasoning Challenge (25-Shot)|70.14|
|HellaSwag (10-Shot) |86.40|
|MMLU (5-Shot) |64.85|
|TruthfulQA (0-shot) |62.92|
|Winogrande (5-shot) |79.72|
|GSM8k (5-shot) |66.72|
|
jingyaogong/minimind-v-v1 | jingyaogong | "2024-10-04T16:44:26Z" | 28 | 3 | null | [
"pytorch",
"minimind",
"custom_code",
"arxiv:2304.08485",
"arxiv:2310.03744",
"region:us"
] | null | "2024-10-04T15:26:07Z" | <div align="center">

</div>
<div align="center">

[](https://github.com/jingyaogong/minimind-v/stargazers)
[](LICENSE)
[](https://github.com/jingyaogong/minimind-v/commits/master)
[](https://github.com/jingyaogong/minimind-v/pulls)
</div>
<div align="center">
<h3>"The Great Way is Simple" (大道至简)</h3>
</div>
<div align="center">
中文 | [English](./README_en.md)
</div>
* This open-source project aims to train **MiniMind-V**, a small-parameter language model with visual-modality capability, from scratch in as little as 3 hours.
* **MiniMind-V** is likewise extremely lightweight: the smallest version is roughly $\frac{1}{7000}$ the size of GPT-3, aiming for fast inference and even training on a personal GPU.
* This is not only an implementation of an open-source model but also a tutorial for getting started with vision-language models (VLMs).
* Hopefully this project offers researchers an introductory example that helps them get started quickly and sparks further exploration and innovation in the VLM field.
> To avoid misunderstanding, "from scratch" specifically means extending the pure language model MiniMind (a GPT-like model trained entirely from scratch) with visual capability, from 0 to 1.
> For details on the latter, see the sibling project [MiniMind](https://github.com/jingyaogong/minimind).
> To avoid misunderstanding, "as little as 3 hours" assumes a machine at least as capable as my hardware configuration; the detailed specs are given below.

<div align="center">
The demo has been deployed to ModelScope Studio and can be tried online:
[Try it on ModelScope](https://modelscope.cn/studios/gongjy/minimind-v)
</div>
# 📌 Introduction
Vision-language models (VLMs) such as GPT-4V, Qwen-VL, and LLaVA are impressive,
but these huge models, often at the 10-billion-parameter scale, demand very high-end hardware.
On a personal device, GPU memory is far from enough for training, and even inference is difficult.
Learning about the relatively new VLMs by reading papers or blog-post explainers
often leaves only a hazy, partial understanding.
What we really need to know is:
Are multimodal large models really as complex as they seem? How is the code actually implemented?
Is training really that hard? Can one be trained from scratch with a single 2080 Ti?
Through **MiniMind-V**, this project hopes to answer these questions
and help researchers understand the core principles of vision-language models under limited hardware.
> [!TIP]
> (As of 2024-10-04) the MiniMind-V series has completed pretraining of 2 model sizes; as little as 27M (0.027B) parameters is enough for image recognition and dialogue!
| Model (size) | tokenizer length | inference memory | release | subjective score (/100) |
|---------------------------|-------------|--------|------------|------------|
| minimind-v-v1-small (27M) | 6400 | 0.6 GB | 2024.10.04 | 50' |
| minimind-v-v1 (109M) | 6400 | 1.1 GB | 2024.10.04 | 60' |
> This analysis was run on 2×RTX 3090 GPUs with Torch 2.1.2, CUDA 12.2, and Flash Attention 2.
### 👉**Recent Updates**
<details close>
<summary> <b>2024-10-05 (newest 🎉)</b> </summary>
- MiniMind-V arrived on schedule: first open-source release
</details>
# 📌 Environment
This is just my personal software/hardware setup; adjust as appropriate:
```bash
CPU: Intel(R) Core(TM) i9-10980XE CPU @ 3.00GHz
RAM: 128 GB
GPU: NVIDIA GeForce RTX 3090 (24GB) * 2
Environment: python 3.9 + Torch 2.1.2 + DDP single-node multi-GPU training
```
* Ubuntu == 20.04
* Python == 3.9
* Pytorch == 2.1.2
* CUDA == 12.2
* [requirements.txt](./requirements.txt)
# 📌 Quick Test
1. Clone the project
```bash
# step 1
git clone https://github.com/jingyaogong/minimind-v && cd minimind-v
```
2. Download the pretrained model weights into the project root directory as `minimind-v-v1`
```bash
# step 2
git clone https://huggingface.co/jingyaogong/minimind-v-v1
```
3. Download the pretrained `clip-vit-base-patch32` model into the `model/clip_model` directory:
```bash
# step 3
cd model/clip_model && git clone https://hf-mirror.com/openai/clip-vit-base-patch32
```
4. Launch the web chat interface to test dialogue
```bash
# step 4
python web_server.py
```

# 📌 Quick Start Train
* 0. Install the environment
```bash
pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
```
* 1. Clone the project code
```text
git clone https://github.com/jingyaogong/minimind-v
```
* 2. If you want to train the model yourself
    * 2.1 Download everything from the [dataset](https://pan.baidu.com/s/1Nz36OBBvVBGEx-PwIb7ofg?pwd=6666) into the `./dataset`
      directory, and unzip `pretrain_images.zip` and `sft_images.zip`
    * 2.2 Adjust the model's parameter configuration in `./model/LMConfig.py`
      > Only the dim and n_layers parameters need changing: `(512+8)` or `(768+16)`, corresponding to `minimind-v-v1-small` and `minimind-v-v1`
    * 2.3 Download the [pretrained weights](https://pan.baidu.com/s/1LE1SPoPYGS7VNtT1tpf7DA?pwd=6666) of the MiniMind language model
      and place them in the `./out/` directory, named `*_llm.pth`
    * 2.4 Run `python 1-pretrain_vlm.py` for pretraining, producing `*_vlm_pretrain.pth` as the pretraining output weights
    * 2.5 Run `python 2-sft_vlm.py` for instruction fine-tuning, producing `*_vlm_sft.pth` as the fine-tuning output weights
* 3. Test the inference quality of your trained model
    * Make sure the trained weight files `*.pth` you want to use are in the `./out/` directory
    * Alternatively, download my [trained model weights](https://pan.baidu.com/s/1LE1SPoPYGS7VNtT1tpf7DA?pwd=6666)
      and use the ready-made `*.pth` weight files
```text
minimind-v/out
├── 512_llm.pth
├── 512_vlm_pretrain.pth
├── 512_vlm_sft.pth
├── 768_llm.pth
├── 768_vlm_pretrain.pth
├── 768_vlm_sft.pth
```
* Run `python 3-eval_chat.py` to test the model's dialogue quality; the test images are under `./dataset/eval_images` and can be replaced

🍭 [Tip] Both pretraining and full-parameter instruction fine-tuning (pretrain and sft) support multi-GPU acceleration
* Launch single-node multi-GPU training (DDP)
```bash
torchrun --nproc_per_node N 1-pretrain_vlm.py
# and
torchrun --nproc_per_node N 2-sft_vlm.py
```
* Logging the training run
```bash
torchrun --nproc_per_node N 1-pretrain_vlm.py --use_wandb
# and
python 1-pretrain_vlm.py --use_wandb
```
Adding the `--use_wandb` flag logs the training run; after training finishes, it can be reviewed on the wandb website. The `wandb_project`
and `wandb_run_name` parameters set the project name and run name.
# 📌 VLM Detail
The base language model of MiniMind-V (the VLM), MiniMind (the LLM), comes from the sibling project [minimind](https://github.com/jingyaogong/minimind);
for the model architecture, training details, principles, and evaluation results, see the [minimind](https://github.com/jingyaogong/minimind) project.
To reduce redundancy, the LLM-related discussion is omitted here; basic familiarity with the details of MiniMind (the LLM) is assumed.
> PS: Even if you prefer not to study the details of MiniMind (the LLM), you can still follow Quick Test and Quick Start to test or train MiniMind-V directly;
> this is largely unaffected.
MiniMind-V's architecture is almost unchanged; only two submodules, a Visual Encoder and a feature projection, plus a modality-mixing branch, are added to support multimodal input:


At this point, two interesting questions are worth pondering: what exactly is a **L**arge **L**anguage **M**odel (LLM)? And what counts as a multimodal model?
* [This article](https://www.jiqizhixin.com/articles/2024-09-15-3) perfectly captures my view: the name LLM is quite inaccurate!
> Although "language" is in the name, large language models actually have little to do with language; that is a historical accident. A more precise name would be autoregressive Transformer
or the like.
LLMs are a general-purpose statistical-modeling technique: they mainly use autoregressive Transformers to model token streams, and those tokens
can stand for text, images, audio, action choices, even molecules — anything at all.
So any problem that can be cast as modeling a stream of discrete tokens can, in principle, be tackled with an LLM.
In fact, as the large-language-model technology stack matures, we may see more and more problems pulled into this modeling paradigm; that is, the task is fixed as LLM
"next-token prediction", and only the use and meaning of the tokens differ across domains.
* [Professor Xi Li](https://person.zju.edu.cn/xilics#694283) corroborates the same point (I don't recall the exact wording; the gist is as follows):
> Text, video, speech, and actions look "multimodal" to humans, but so-called "modality" is merely a human classification of how information is stored.
Just like `.txt` and `.png` files: they differ in visual presentation and high-level form, but there is no fundamental difference at their core.
The very notion of "multimodality" exists only because humans need to categorize these signals at different perceptual levels.
For a machine, however, whatever "modality" a signal comes from, it ultimately shows up as a "single-modality" sequence of binary digits.
The machine does not care which modality a signal came from; it simply processes and analyzes the information carried by the sequence.
---
In my view, **G**enerative **P**retrained **T**ransformer (GPT) is a more fitting name than **L**arge **L**anguage **M**odel (LLM),
so I prefer to say "GPT" for the family of LLM/VLM/GPT-like architectures — not to ride on OpenAI's name.
---
We can now sum up what GPT does in one sentence:
a GPT model keeps predicting the next token, and the next, and the next... from the existing tokens, until it emits an end-of-sequence token. The "token" here does not have to be text!
---
* For an LLM to understand an "image", just treat the "image" as a special, never-before-seen "foreign language", translate it with a "foreign-language dictionary", and feed it to the LLM as input in that special language
* For an LLM to understand "audio", likewise treat the "audio" as a special, never-before-seen "foreign language", translate it with a "foreign-language dictionary", and feed it to the LLM as input in that special language
* ...
---
<u>**So, to obtain MiniMind-V, we only need to do 2 things:**</u>
1. Use a **"foreign-language dictionary"** that is good at translating images, to translate the image from its **"foreign language"** into an **"LLM language"** the model can readily understand
2. Fine-tune the LLM so it and the **"foreign-language dictionary"** get through their break-in period and understand images better
---
"外语词典"一般称之为Visual Encoder模型。
和LlaVA、Qwen-VL等视觉语言模型类似,MiniMind-V同样选用开源Clip系列模型作为Visual Encoder。
具体使用[clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32),
一种基于 ViT-B/32 架构的经典Visual Encoder用于描述图像文本信息。
输入的图像尺寸为224x224,因为划分的Patch是32×32,所以会产生7*7+1(cls_token)=50个token作为encoder编码层的输入,
最终产生1×768维的嵌入向量用于和文本对计算误差。
我们并不需要最终嵌入表示,因此只取encoder层的输出,也就是VIT核心主干的输出特征即可。
在代码中对应[./model/vision_utils.py](./model/vision_utils.py)的get_img_embedding中的hook函数。
它拿到前一层维度50×768大小的特征,我们把它作为50个visual token输入MiniMind-V。
也有clip-vit-large-patch14这种更大,图像理解能力更强的Clip模型,
但是单图片会产生257个token,对于minimind这种量级模型,图片token的上下文占比太长,反倒不利于训练。
与LLM的结合在获取图像encoder特征后,一方面需要把768维度的visual token对齐到LLM的文本token,
另一方面,要将图像特征映射到与文本embedding相同的空间,即文本token和原生的视觉token需要磨合并不能直接地一视同仁,
可以称之为跨模态的特征对齐。
[LlaVA-1](https://arxiv.org/pdf/2304.08485)使用简单的无偏线性变换完成了这一操作,效果很不错,MiniMind-V同样如此。

That completes the changes to MiniMind-V's internal structure.
---
Next, let us briefly discuss the changes to MiniMind-V's external input and output.
The VLM's input is still a piece of text that contains a special <image> placeholder.
After the text embeddings are computed, the vectors produced by the image encoder can be projected into the embedding slots of that placeholder, replacing the original placeholder embeddings.
For example:
```text
<image>\n这个图像中有什么内容?
```
minimind-v uses a 50-character placeholder `<<<...>>>` in place of the image;
the reason it is 50 characters was mentioned above:
every image is encoded by the CLIP model into 50×768-dimensional tokens.
Hence minimind-v's prompt looks like:
```text
<<<<<<<<<<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>\n这个图片描述的是什么内容?
```
After computing the embeddings and the projection and replacing the image-part tokens,
the whole computation up to the output is exactly the same as in the LLM part.

<u>That completes all the details of MiniMind-V.</u>
<u>The implementation of MiniMind-V did not reference **any** third-party code; it was produced entirely on top of MiniMind with the smallest possible changes, so the implementation necessarily differs a great deal from models such as LLaVA.
The core code changes between MiniMind-V and MiniMind are fewer than 100 lines, so it is easy to pick up.</u>
# 📌 Experiment
## Dataset
Source: [Chinese-LLaVA-Vision](https://huggingface.co/datasets/LinkSoul/Chinese-LLaVA-Vision-Instructions)
It contains roughly 600k pretraining images and <100k instruction fine-tuning images, drawn from CC-3M and COCO 2014; the Q&A content has been translated for friendlier Chinese support, then further resized, organized, and compressed.
Pretraining dataset format:
```json
{
"id": "GCC_train_000644518",
"image": "GCC_train_000644518.jpg",
"conversations": [
{
"from": "human",
"value": "写一篇简短但有益的图片摘要.\n<image>"
},
{
"from": "gpt",
"value": "在黑色背景的金属锅中加入盐水,慢动作fps"
}
]
}
```
Instruction fine-tuning dataset format:
```json
{
"id": "000000334872",
"image": "000000334872.jpg",
"conversations": [
{
"from": "human",
"value": "<image>\n照片中的人们在下山滑雪还是越野滑雪?"
},
{
"from": "gpt",
"value": "照片中的人们在森林里越野滑雪,因为他们在一条小径上而不是在陡坡上滑雪。"
}
]
}
```
Note: for instruction fine-tuning, only a single dialogue turn is kept, training a single-turn dialogue model, to keep long text from dragging down the small model's performance.
Final dataset download: [Baidu Netdisk](https://pan.baidu.com/s/1Nz36OBBvVBGEx-PwIb7ofg?pwd=6666) | [HuggingFace](https://huggingface.co/datasets/jingyaogong/minimind-v_dataset)
## Training
Pretraining learns general image knowledge from 595K samples, e.g., that a deer is a deer and a dog is a dog.
Instruction fine-tuning learns the real question-answer format for image questions from 230K real dialogue samples.
`1-pretrain_vlm.py` runs pretraining, producing `*_vlm_pretrain.pth` as the pretraining output weights.
`2-sft_vlm.py` runs instruction fine-tuning, producing `*_vlm_sft.pth` as the fine-tuning output weights.
During training the visual encoder, i.e., the CLIP model, is frozen; only the Projection and LLM parts are fine-tuned.
> Pretrain 512+8 model (training time and loss curves)

> Pretrain 768+16 model (training time and loss curves)

> SFT 512+8 model (training time and loss curves)

> SFT 768+16 model (training time and loss curves)

## Trained model weights
(`.pth` weight files) download: [Baidu Netdisk](https://pan.baidu.com/s/1a7_C7HdCMfnG2Dia3q85FQ?pwd=6666)
(`transformers` model files)
download: [HuggingFace](https://huggingface.co/collections/jingyaogong/minimind-v-67000833fb60b3a2e1f3597d)
> Note: the HuggingFace versions are all instruction-fine-tuned MiniMind-V models
| Model Name | params | Config | file_name |
|---------------------|--------|-----------------------------|-----------------------------------------------------|
| minimind-v-v1-small | 27M | d_model=512<br/>n_layers=8 | pretrain: 512_vlm_pretrain.pth<br/>SFT: 512_vlm_sft.pth |
| minimind-v-v1 | 109M | d_model=768<br/>n_layers=16 | pretrain: 768_vlm_pretrain.pth<br/>SFT: 768_vlm_sft.pth |
# 📌 Test
### Evaluation samples
<table>
<thead>
<tr>
<th>Image</th>
<th>512_pretrain</th>
<th>512_sft</th>
<th>768_pretrain</th>
<th>768_sft</th>
</tr>
</thead>
<tbody>
<tr>
<td><img src="./dataset/eval_images/一个女子.png" alt="a-girl.png" style="width: 200px;"></td>
<td>头发和化妆,我喜欢她的自然头发!</td>
<td>这个图片描绘了一个年轻的女人,她穿着一套西装,戴着一条领带,这表明她可能正在参加一个特别的时装活动或庆祝活动。</td>
<td>人为出演员的冒险片。</td>
<td>这个图片描绘了一个女人的肖像,她穿着一件粉红色的裙子。</td>
</tr>
<tr>
<td><img src="./dataset/eval_images/一个海星.png" alt="a-girl.png" ></td>
<td>水中的化石, 一个由环绕的环形化石团组成的静止线.</td>
<td>图片显示一只大型的 octopus, 一个大型的 octopus, 可能是一个潜在的海洋生物, 它在水面上, 或在海洋中 。</td>
<td>海星和触角。</td>
<td>图片显示了海星在海滩上,包括海星,以及一个水下物体。</td>
</tr>
<tr>
<td><img src="./dataset/eval_images/一个熊.png" alt="a-girl.png" ></td>
<td>在野外,在山谷里。</td>
<td>图片中的植物和一只灰熊坐在草地上。</td>
<td>一只灰熊的近景</td>
<td>图片显示一只灰熊站在一片开放的草地上,周围有树木和草丛,还有一只背包放在上面。</td>
</tr>
<tr>
<td><img src="./dataset/eval_images/一些海豚.png" alt="a-girl.png" ></td>
<td>一群游客观看了这部电影。</td>
<td>这个图片描绘了一群海鸥在水面飞翔,在水面上。海鸥的出现表明,它们正在寻找食物。海鸥在水面上筑巢,可能是为了保护自己免受潜在的危险,如海鸥的尖锐牙齿和爬行动物。</td>
<td>一群海豚或绵羊在一天的航行中乘船捕鱼</td>
<td>这个图片显示一群人在海豚和海豚附近的大群中游泳。</td>
</tr>
<tr>
<td><img src="./dataset/eval_images/三个女孩.png" alt="a-girl.png" ></td>
<td>一个女孩和她的朋友坐在一张长凳上,穿着长长的白色长袍。</td>
<td>这个场景描绘了一个充满活力的年轻女孩,她们穿着一件黑色和白色的服装,在一群人中间站着,他们都穿着黑色和白色的服装,这表明他们的服装是生动的、优雅的,在他们身边。在场景中,有两个女孩在背后,一个女人在背后,另一个女人站着,他们都穿着黑色的服装。这表明他们正在享受他们的服装和服装,可能正在参加一个特别的节日或庆祝活动。</td>
<td>女孩们在城市的街道上。</td>
<td>这个图片描绘了一个穿着传统服装的男人和女人,站在他们旁边,他们正在一起度过一个家庭时光。在整个场景中,可以看到一个小男孩和一个女孩,他们穿着牛仔帽,这表明他们正在参加一个家庭聚会,这可能是一次聚会或庆祝,或者他们可能正在讨论一个有趣的活动或活动。</td>
</tr>
<tr>
<td><img src="./dataset/eval_images/两头鹿.png" alt="a-girl.png" ></td>
<td>这张照片中有几只鹿。</td>
<td>这个图片记录了一只白尾鹿, 它坐在草地上, 用它的照片来捕捉一只红鹿.</td>
<td>这只动物看起来好像准备躲在树后面,他看上去很威严,因为他无法控制自己。</td>
<td>这个图片描绘了一只母鹿和一只鹿,这只母鹿在树林中站着,一只羊和一只鹿。</td>
</tr>
<tr>
<td><img src="./dataset/eval_images/两朵红花.png" alt="a-girl.png" ></td>
<td>这个花束的花期几乎没有进数。</td>
<td>图片显示一只红色和黄色的花朵, 它们被称为“花瓶”。</td>
<td>花头的贴近。</td>
<td>图片显示了红色的花朵,周围有几个玫瑰花。</td>
</tr>
<tr>
<td><img src="./dataset/eval_images/太空宇航员.png" alt="a-girl.png" ></td>
<td>宇航员在太空任务中与地球相姿态。</td>
<td>这个图像描绘了一个充满活力的月球,在月球上散步。</td>
<td>宇航员在任务期间在摇篮上休息,与他的团队在背景。</td>
<td>这个图片描绘了一个宇航员在太空站的形象。</td>
</tr>
<tr>
<td><img src="./dataset/eval_images/老虎在水里.png" alt="a-girl.png" ></td>
<td>一只老虎在水里看着摄像机。</td>
<td>图片显示一只大棕色的海豹在水里游泳,在水里休息。</td>
<td>动物园里被囚禁的老虎</td>
<td>图片显示一只小熊,躺在一棵树枝上。</td>
</tr>
<tr>
<td><img src="./dataset/eval_images/豹子在悬崖.png" alt="a-girl.png" ></td>
<td>这个是濒危物种。</td>
<td>图片中,一只黑白的猫在岩石上散步。</td>
<td>野外云层的豹在洞穴外的岩石上,在日出时</td>
<td>该图片展示了一只小熊猫在岩石上散步的照片。</td>
</tr>
</tbody>
</table>
### Launch inference
```bash
python web_server.py
```


### Summary of results
---
Based on the table above, the four models' performance can be summarized as follows:
1. **512_pretrain**:
- **Brief and inaccurate descriptions**: most descriptions fail to convey the image content clearly and often drift into irrelevant narration. For example, the starfish image is described as "水中的化石" ("fossils in the water"), far from the actual content.
- **Lack of detail**: in most cases it gives only short, vague descriptions and cannot go into the details or context of the image. For the tiger image it merely says "在水里看着摄像机" ("looking at the camera in the water").
2. **512_sft**:
- **More concrete descriptions**: compared with 512_pretrain, 512_sft is more detailed and tries to capture specific elements of the scene. Describing the woman, for instance, it mentions the "西装" ("suit") and "领带" ("tie"), with fairly clear detail.
- **Occasional errors or redundancy**: some descriptions are overly elaborate or unrelated to the picture, e.g., for the dolphin image it brings up seagulls, nest-building, and other irrelevant content.
3. **768_pretrain**:
- **Incoherent information**: this model's output is rather scattered, and its descriptions are often vague and incomplete. For the woman image it says only "人为出演员的冒险片" (roughly "an adventure film starring actors"), without clearly explaining the image.
- **Partially accurate but information-poor**: some descriptions are relevant to the image but extremely short; the starfish, for example, is described only as "海星和触角" ("a starfish and tentacles"), giving no complete picture.
4. **768_sft**:
- **Comprehensive and specific descriptions**: the most detailed and precise of the four. For the bear image it mentions "standing on an open meadow, surrounded by trees and grass, with a backpack nearby", accurately capturing multiple elements of the image.
- **Stronger comprehension**: it can identify the scene and background of an image and offer reasonable interpretations and guesses, e.g., "family gathering" or "celebration", which give the image more context.
### Summary:
- **512_pretrain** performs worst: simple and inaccurate descriptions.
- **512_sft** is noticeably more detailed, but occasionally adds irrelevant information.
- **768_pretrain** lacks coherence, though it can give basic descriptions in some respects.
- **768_sft** performs best: detailed, accurate descriptions with good inference of image context.
---
# 📌 Acknowledge
> [!TIP]
> If you find `MiniMind-V` helpful, please give it a ⭐ on GitHub<br/>
> The write-up is long and my ability limited, so mistakes are inevitable; feel free to discuss corrections in the Issues or submit a PR to improve the project<br/>
> Your support is the driving force for continuously improving this project
## 🤝 [Contributors](https://github.com/jingyaogong/minimind/graphs/contributors)
<a href="https://github.com/jingyaogong"><img src="https://avatars.githubusercontent.com/u/62287848" width="70px" height="70px"/></a>
## 😊 Acknowledgements
<details close>
<summary> <b>Reference links & thanks to the following excellent papers and projects</b> </summary>
- In no particular order
- [LlaVA](https://arxiv.org/pdf/2304.08485)
- [LlaVA-VL](https://arxiv.org/pdf/2310.03744)
- [Chinese-LLaVA-Vision-Instructions](https://huggingface.co/datasets/LinkSoul/Chinese-LLaVA-Vision-Instructions)
</details>
## 🫶 Supporters
<a href="https://github.com/jingyaogong/minimind-v/stargazers">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://reporoster.com/stars/dark/jingyaogong/minimind-v"/>
<source media="(prefers-color-scheme: light)" srcset="https://reporoster.com/stars/jingyaogong/minimind-v"/>
<img alt="github contribution grid snake animation" src="https://reporoster.com/stars/jingyaogong/minimind-v"/>
</picture>
</a>
<a href="https://github.com/jingyaogong/minimind-v/network/members">
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://reporoster.com/forks/dark/jingyaogong/minimind-v"/>
<source media="(prefers-color-scheme: light)" srcset="https://reporoster.com/forks/jingyaogong/minimind-v"/>
<img alt="github contribution grid snake animation" src="https://reporoster.com/forks/jingyaogong/minimind-v"/>
</picture>
</a>
<picture>
<source media="(prefers-color-scheme: dark)" srcset="https://api.star-history.com/svg?repos=jingyaogong/minimind-v&type=Date&theme=dark"/>
<source media="(prefers-color-scheme: light)" srcset="https://api.star-history.com/svg?repos=jingyaogong/minimind-v&type=Date"/>
<img alt="Star History Chart" src="https://api.star-history.com/svg?repos=jingyaogong/minimind-v&type=Date"/>
</picture>
# License
This repository is licensed under the [Apache-2.0 License](LICENSE).
|
Triangle104/Hermes-Llama-3.2-CoT | Triangle104 | "2025-01-27T16:08:46Z" | 34 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:NousResearch/Hermes-3-Llama-3.2-3B",
"base_model:merge:NousResearch/Hermes-3-Llama-3.2-3B",
"base_model:prithivMLmods/Llama-Thinker-3B-Preview2",
"base_model:merge:prithivMLmods/Llama-Thinker-3B-Preview2",
"license:llama3.2",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-01-11T13:01:18Z" | ---
license: llama3.2
library_name: transformers
tags:
- mergekit
- merge
base_model:
- NousResearch/Hermes-3-Llama-3.2-3B
- prithivMLmods/Llama-Thinker-3B-Preview2
model-index:
- name: Hermes-Llama-3.2-CoT
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: IFEval (0-Shot)
type: HuggingFaceH4/ifeval
args:
num_few_shot: 0
metrics:
- type: inst_level_strict_acc and prompt_level_strict_acc
value: 41.78
name: strict accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: BBH (3-Shot)
type: BBH
args:
num_few_shot: 3
metrics:
- type: acc_norm
value: 23.8
name: normalized accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MATH Lvl 5 (4-Shot)
type: hendrycks/competition_math
args:
num_few_shot: 4
metrics:
- type: exact_match
value: 9.14
name: exact match
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GPQA (0-shot)
type: Idavidrein/gpqa
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 3.91
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MuSR (0-shot)
type: TAUR-Lab/MuSR
args:
num_few_shot: 0
metrics:
- type: acc_norm
value: 5.09
name: acc_norm
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU-PRO (5-shot)
type: TIGER-Lab/MMLU-Pro
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 21.63
name: accuracy
source:
url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=Triangle104/Hermes-Llama-3.2-CoT
name: Open LLM Leaderboard
---
# Merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
Hermes with some Chain of Thought running through its veins.
Quant: https://huggingface.co/Triangle104/Hermes-Llama-3.2-CoT-Q4_K_M-GGUF
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-3-Llama-3.2-3B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B)
* [prithivMLmods/Llama-Thinker-3B-Preview2](https://huggingface.co/prithivMLmods/Llama-Thinker-3B-Preview2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: NousResearch/Hermes-3-Llama-3.2-3B
- model: prithivMLmods/Llama-Thinker-3B-Preview2
merge_method: slerp
base_model: NousResearch/Hermes-3-Llama-3.2-3B
dtype: bfloat16
parameters:
t: [0, 0.5, 0.7, 1, 0.7, 0.5, 0]
```
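The card ships no usage snippet; a minimal Transformers sketch (assuming the chat template bundled with the tokenizer) might look like this:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Triangle104/Hermes-Llama-3.2-CoT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Think step by step: what is 17 * 23?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```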
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/Triangle104__Hermes-Llama-3.2-CoT-details)
| Metric |Value|
|-------------------|----:|
|Avg. |17.56|
|IFEval (0-Shot) |41.78|
|BBH (3-Shot) |23.80|
|MATH Lvl 5 (4-Shot)| 9.14|
|GPQA (0-shot) | 3.91|
|MuSR (0-shot) | 5.09|
|MMLU-PRO (5-shot) |21.63|
|
amiepsa/tinypixel-Llama-2-7B-bf16-sharded-ft-prompts | amiepsa | "2023-12-14T15:10:22Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyPixel/Llama-2-7B-bf16-sharded",
"base_model:adapter:TinyPixel/Llama-2-7B-bf16-sharded",
"region:us"
] | null | "2023-12-14T15:09:58Z" | ---
library_name: peft
base_model: TinyPixel/Llama-2-7B-bf16-sharded
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
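The card leaves this section blank; a minimal sketch for loading the adapter with PEFT (assuming this is a causal-LM adapter on the listed base model) could be:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "TinyPixel/Llama-2-7B-bf16-sharded"
adapter_id = "amiepsa/tinypixel-Llama-2-7B-bf16-sharded-ft-prompts"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)
```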
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
mradermacher/Kainoverse-7b-v0.1-GGUF | mradermacher | "2024-09-30T09:32:08Z" | 6 | 2 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"nsfw",
"rp",
"smart",
"en",
"base_model:kainatq/Kainoverse-7b-v0.1",
"base_model:quantized:kainatq/Kainoverse-7b-v0.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-09-30T02:48:11Z" | ---
base_model: kainatq/Kainoverse-7b-v0.1
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mergekit
- merge
- nsfw
- rp
- smart
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/kainatq/Kainoverse-7b-v0.1
<!-- provided-files -->
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.IQ3_XS.gguf) | IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.IQ3_S.gguf) | IQ3_S | 3.3 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.IQ3_M.gguf) | IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Kainoverse-7b-v0.1-GGUF/resolve/main/Kainoverse-7b-v0.1.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
RajuEEE/RewardModelForQuestionAnswering_LLama2 | RajuEEE | "2023-08-21T09:41:05Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-21T09:41:02Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
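The same quantization settings can be reproduced at load time with a `BitsAndBytesConfig` — a sketch only, since the card does not name the base checkpoint or the task head; `BASE_MODEL_ID` is a placeholder:
```python
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_8bit=True,
    llm_int8_threshold=6.0,
)
base = AutoModelForSequenceClassification.from_pretrained(
    "BASE_MODEL_ID",  # placeholder: the card does not name the base checkpoint
    num_labels=1,     # assumption: a scalar reward head
    quantization_config=bnb_config,
)
model = PeftModel.from_pretrained(base, "RajuEEE/RewardModelForQuestionAnswering_LLama2")
```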
### Framework versions
- PEFT 0.5.0.dev0
|
xxx-Sophie-Rain-Spiderman-Video/Sophie.Rain.SpiderMan.Viral.Videos.Original.Leaked.Full.HD.X | xxx-Sophie-Rain-Spiderman-Video | "2025-02-12T18:27:51Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-12T18:26:32Z" | <p><a href="https://social.danielwellington.com/srain?12" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a></p>
<p><a href="https://social.danielwellington.com/srain?12" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤 )</a></p>
<p><a href="https://social.danielwellington.com/srain?12" rel="nofollow"><img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif"></a></p> |
KalaiselvanD/albert_model_03 | KalaiselvanD | "2024-05-02T12:36:40Z" | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-05-02T12:31:56Z" | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: albert_model_03
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_model_03
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6568
- Accuracy: 0.6973
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 310 | 0.6568 | 0.6973 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head | KoichiYasuoka | "2024-08-20T10:47:27Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"question-answering",
"japanese",
"wikipedia",
"dependency-parsing",
"ja",
"dataset:universal_dependencies",
"base_model:KoichiYasuoka/deberta-base-japanese-wikipedia",
"base_model:finetune:KoichiYasuoka/deberta-base-japanese-wikipedia",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2022-06-25T13:03:09Z" | ---
language:
- "ja"
tags:
- "japanese"
- "wikipedia"
- "question-answering"
- "dependency-parsing"
base_model: KoichiYasuoka/deberta-base-japanese-wikipedia
datasets:
- "universal_dependencies"
license: "cc-by-sa-4.0"
pipeline_tag: "question-answering"
inference:
parameters:
align_to_words: false
widget:
- text: "国語"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "教科書"
context: "全学年にわたって小学校の国語の教科書に挿し絵が用いられている"
- text: "の"
context: "全学年にわたって小学校の国語[MASK]教科書に挿し絵が用いられている"
---
# deberta-base-japanese-wikipedia-ud-head
## Model Description
This is a DeBERTa(V2) model pretrained on Japanese Wikipedia and 青空文庫 (Aozora Bunko) texts for dependency parsing (head detection on long-unit words) cast as question answering, derived from [deberta-base-japanese-wikipedia](https://huggingface.co/KoichiYasuoka/deberta-base-japanese-wikipedia) and [UD_Japanese-GSDLUW](https://github.com/UniversalDependencies/UD_Japanese-GSDLUW). Use [MASK] inside `context` to avoid ambiguity when the word given as `question` occurs more than once.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForQuestionAnswering,QuestionAnsweringPipeline
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head")
model=AutoModelForQuestionAnswering.from_pretrained("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head")
qap=QuestionAnsweringPipeline(tokenizer=tokenizer,model=model,align_to_words=False)
print(qap(question="国語",context="全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
or (with [ufal.chu-liu-edmonds](https://pypi.org/project/ufal.chu-liu-edmonds/))
```py
class TransformersUD(object):
def __init__(self,bert):
import os
from transformers import (AutoTokenizer,AutoModelForQuestionAnswering,
AutoModelForTokenClassification,AutoConfig,TokenClassificationPipeline)
self.tokenizer=AutoTokenizer.from_pretrained(bert)
self.model=AutoModelForQuestionAnswering.from_pretrained(bert)
x=AutoModelForTokenClassification.from_pretrained
if os.path.isdir(bert):
d,t=x(os.path.join(bert,"deprel")),x(os.path.join(bert,"tagger"))
else:
from transformers.utils import cached_file
c=AutoConfig.from_pretrained(cached_file(bert,"deprel/config.json"))
d=x(cached_file(bert,"deprel/pytorch_model.bin"),config=c)
s=AutoConfig.from_pretrained(cached_file(bert,"tagger/config.json"))
t=x(cached_file(bert,"tagger/pytorch_model.bin"),config=s)
self.deprel=TokenClassificationPipeline(model=d,tokenizer=self.tokenizer,
aggregation_strategy="simple")
self.tagger=TokenClassificationPipeline(model=t,tokenizer=self.tokenizer)
def __call__(self,text):
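    # Dependency parsing cast as question answering: each word is used as the
    # "question" while the sentence, with that word masked, is the "context";
    # start/end logits score every candidate head, and the best tree is then
    # decoded with the Chu-Liu-Edmonds maximum-spanning-tree algorithm.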
import numpy,torch,ufal.chu_liu_edmonds
w=[(t["start"],t["end"],t["entity_group"]) for t in self.deprel(text)]
z,n={t["start"]:t["entity"].split("|") for t in self.tagger(text)},len(w)
r,m=[text[s:e] for s,e,p in w],numpy.full((n+1,n+1),numpy.nan)
v,c=self.tokenizer(r,add_special_tokens=False)["input_ids"],[]
for i,t in enumerate(v):
q=[self.tokenizer.cls_token_id]+t+[self.tokenizer.sep_token_id]
c.append([q]+v[0:i]+[[self.tokenizer.mask_token_id]]+v[i+1:]+[[q[-1]]])
b=[[len(sum(x[0:j+1],[])) for j in range(len(x))] for x in c]
with torch.no_grad():
d=self.model(input_ids=torch.tensor([sum(x,[]) for x in c]),
token_type_ids=torch.tensor([[0]*x[0]+[1]*(x[-1]-x[0]) for x in b]))
s,e=d.start_logits.tolist(),d.end_logits.tolist()
for i in range(n):
for j in range(n):
m[i+1,0 if i==j else j+1]=s[i][b[i][j]]+e[i][b[i][j+1]-1]
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
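    # Ensure the tree has exactly one root; otherwise pick a single root
    # candidate, mask out the rest, and re-run Chu-Liu-Edmonds.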
if [0 for i in h if i==0]!=[0]:
i=([p for s,e,p in w]+["root"]).index("root")
j=i+1 if i<n else numpy.nanargmax(m[:,0])
m[0:j,0]=m[j+1:,0]=numpy.nan
h=ufal.chu_liu_edmonds.chu_liu_edmonds(m)[0]
u="# text = "+text.replace("\n"," ")+"\n"
for i,(s,e,p) in enumerate(w,1):
p="root" if h[i]==0 else "dep" if p=="root" else p
u+="\t".join([str(i),r[i-1],"_",z[s][0][2:],"_","|".join(z[s][1:]),
str(h[i]),p,"_","_" if i<n and e<w[i][0] else "SpaceAfter=No"])+"\n"
return u+"\n"
nlp=TransformersUD("KoichiYasuoka/deberta-base-japanese-wikipedia-ud-head")
print(nlp("全学年にわたって小学校の国語の教科書に挿し絵が用いられている"))
```
## Reference
安岡孝一: [青空文庫DeBERTaモデルによる国語研長単位係り受け解析](http://hdl.handle.net/2433/275409), 東洋学へのコンピュータ利用, 第35回研究セミナー (2022年7月), pp.29-43.
|
nttx/cbec91d2-36bc-4dba-9986-80b25e2eec2b | nttx | "2025-01-12T03:51:07Z" | 13 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | "2025-01-12T03:38:20Z" | ---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: cbec91d2-36bc-4dba-9986-80b25e2eec2b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: microsoft/phi-1_5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 78d892729ae2d55a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/78d892729ae2d55a_train_data.json
type:
field_instruction: questions
field_output: answers
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: true
hub_model_id: nttx/cbec91d2-36bc-4dba-9986-80b25e2eec2b
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/78d892729ae2d55a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7ded3f98-bc26-4ed5-8801-6d5a32508b44
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7ded3f98-bc26-4ed5-8801-6d5a32508b44
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# cbec91d2-36bc-4dba-9986-80b25e2eec2b
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1222
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 1.2723 |
| 1.3536 | 0.0103 | 50 | 1.1781 |
| 1.1536 | 0.0206 | 100 | 1.1413 |
| 1.1928 | 0.0310 | 150 | 1.1316 |
| 1.2459 | 0.0413 | 200 | 1.1222 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
machinelearnear/preguntale_al_candidato_BULLRICH | machinelearnear | "2023-10-06T15:19:15Z" | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2023-10-06T15:19:08Z" |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of patriciabullrich
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
M2LabOrg/whisper-small-es | M2LabOrg | "2024-06-10T06:33:33Z" | 88 | 2 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"es",
"dataset:mozilla-foundation/common_voice_11_0",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-06-09T11:08:56Z" | ---
language:
- es
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper small es - Michel Mesquita
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: es
split: test
args: 'config: es, split: test'
metrics:
- name: Wer
type: wer
value: 13.695510735198438
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper small es - Michel Mesquita
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2369
- Wer: 13.6955
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0848 | 0.25 | 1000 | 0.2930 | 15.9772 |
| 0.1839 | 0.5 | 2000 | 0.2727 | 15.0436 |
| 0.21 | 0.75 | 3000 | 0.2464 | 14.2108 |
| 0.1791 | 1.0 | 4000 | 0.2369 | 13.6955 |
### Framework versions
- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
gokulsrinivasagan/distilbert_lda_5_rte | gokulsrinivasagan | "2024-11-22T10:52:37Z" | 106 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_5",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_5",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-22T10:51:36Z" | ---
library_name: transformers
language:
- en
base_model: gokulsrinivasagan/distilbert_lda_5
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert_lda_5_rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE RTE
type: glue
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.5270758122743683
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_5_rte
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_5](https://huggingface.co/gokulsrinivasagan/distilbert_lda_5) on the GLUE RTE dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6917
- Accuracy: 0.5271
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.0045 | 1.0 | 10 | 1.0701 | 0.5271 |
| 0.7518 | 2.0 | 20 | 0.6926 | 0.5271 |
| 0.695 | 3.0 | 30 | 0.6927 | 0.5271 |
| 0.6948 | 4.0 | 40 | 0.6917 | 0.5271 |
| 0.6949 | 5.0 | 50 | 0.6953 | 0.5271 |
| 0.6974 | 6.0 | 60 | 0.6993 | 0.4729 |
| 0.694 | 7.0 | 70 | 0.6920 | 0.5271 |
| 0.694 | 8.0 | 80 | 0.6945 | 0.4729 |
| 0.6941 | 9.0 | 90 | 0.6922 | 0.5271 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.2.1+cu118
- Datasets 2.17.0
- Tokenizers 0.20.3
|
nathanialhunt/3d9ab95c-ff6e-4bce-974e-ec0f6bbb549d | nathanialhunt | "2025-01-12T17:43:44Z" | 17 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/SmolLM2-360M-Instruct",
"base_model:adapter:unsloth/SmolLM2-360M-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-01-12T17:10:16Z" | ---
library_name: peft
license: apache-2.0
base_model: unsloth/SmolLM2-360M-Instruct
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 3d9ab95c-ff6e-4bce-974e-ec0f6bbb549d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/SmolLM2-360M-Instruct
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 44e4efb623579ce8_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/44e4efb623579ce8_train_data.json
type:
field_instruction: prompt
field_output: solution
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nathanialhunt/3d9ab95c-ff6e-4bce-974e-ec0f6bbb549d
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/44e4efb623579ce8_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: d8b2b541-66ba-4405-ae0f-8e4e98e2e40d
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: d8b2b541-66ba-4405-ae0f-8e4e98e2e40d
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 3d9ab95c-ff6e-4bce-974e-ec0f6bbb549d
This model is a fine-tuned version of [unsloth/SmolLM2-360M-Instruct](https://huggingface.co/unsloth/SmolLM2-360M-Instruct) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.0000 | 1 | nan |
| 0.0 | 0.0001 | 3 | nan |
| 0.0 | 0.0001 | 6 | nan |
| 0.0 | 0.0002 | 9 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
minchyeom/AlphaTuring-test | minchyeom | "2024-09-13T23:52:15Z" | 9 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"dataset:starsnatched/SmolInstruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-13T23:18:20Z" | ---
library_name: transformers
license: apache-2.0
datasets:
- starsnatched/SmolInstruct
language:
- en
---
This was trained with my own training method on only 10 rows from the dataset, purely for testing purposes. |
phungkhaccuong/016417b1-8b1c-b5e3-255d-1696aa29db09 | phungkhaccuong | "2025-01-13T14:52:05Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"base_model:adapter:HuggingFaceH4/tiny-random-LlamaForCausalLM",
"region:us"
] | null | "2025-01-13T14:50:55Z" | ---
library_name: peft
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 016417b1-8b1c-b5e3-255d-1696aa29db09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: HuggingFaceH4/tiny-random-LlamaForCausalLM
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 6be06b92f7563277_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/6be06b92f7563277_train_data.json
type:
field_instruction: prompt
field_output: video_title
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 5
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: phungkhaccuong/016417b1-8b1c-b5e3-255d-1696aa29db09
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 5
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/6be06b92f7563277_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 15d2a597-2a51-4a31-9e19-d9dbddc7d6c3
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 15d2a597-2a51-4a31-9e19-d9dbddc7d6c3
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 016417b1-8b1c-b5e3-255d-1696aa29db09
This model is a fine-tuned version of [HuggingFaceH4/tiny-random-LlamaForCausalLM](https://huggingface.co/HuggingFaceH4/tiny-random-LlamaForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.3755
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0018 | 1 | 10.3759 |
| 10.3764 | 0.0178 | 10 | 10.3758 |
| 10.3793 | 0.0355 | 20 | 10.3757 |
| 10.3777 | 0.0533 | 30 | 10.3756 |
| 10.3747 | 0.0710 | 40 | 10.3755 |
| 10.3756 | 0.0888 | 50 | 10.3755 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
anwesham/imdb-sentiment-baseline-distilbert | anwesham | "2022-05-14T03:58:39Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"unk",
"dataset:anwesham/autotrain-data-imdb-sentiment-analysis",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-05-14T03:06:07Z" | ---
language: unk
datasets:
- anwesham/autotrain-data-imdb-sentiment-analysis
---
## Description
- Problem type: Binary Classification
## Validation Metrics
- Loss: 0.17481304705142975
- Accuracy: 0.936
- Precision: 0.9526578073089701
- Recall: 0.9176
- AUC: 0.9841454399999999
- F1: 0.93480032599837
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/anwesham/autotrain-imdb-sentiment-analysis-864927555
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("anwesham/autotrain-imdb-sentiment-analysis-864927555", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("anwesham/autotrain-imdb-sentiment-analysis-864927555", use_auth_token=True)
inputs = tokenizer("I love to eat good food and watch Moana.", return_tensors="pt")
outputs = model(**inputs)
``` |
hakutaku/mergekit-ties-udksbmq | hakutaku | "2024-06-06T16:13:20Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Sao10K/L3-8B-Stheno-v3.2",
"base_model:merge:Sao10K/L3-8B-Stheno-v3.2",
"base_model:haqishen/Llama-3-8B-Japanese-Instruct",
"base_model:merge:haqishen/Llama-3-8B-Japanese-Instruct",
"base_model:shenzhi-wang/Llama3-8B-Chinese-Chat",
"base_model:merge:shenzhi-wang/Llama3-8B-Chinese-Chat",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-06T16:07:30Z" | ---
base_model:
- Sao10K/L3-8B-Stheno-v3.2
- haqishen/Llama-3-8B-Japanese-Instruct
- shenzhi-wang/Llama3-8B-Chinese-Chat
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) as a base.
### Models Merged
The following models were included in the merge:
* [haqishen/Llama-3-8B-Japanese-Instruct](https://huggingface.co/haqishen/Llama-3-8B-Japanese-Instruct)
* [shenzhi-wang/Llama3-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3-8B-Chinese-Chat)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sao10K/L3-8B-Stheno-v3.2
# no parameters necessary for base model
- model: shenzhi-wang/Llama3-8B-Chinese-Chat
parameters:
density: 0.5
weight: 0.5
- model: haqishen/Llama-3-8B-Japanese-Instruct
parameters:
density: 0.5
weight: 0.3
merge_method: ties
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
normalize: true
dtype: float16
```
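As a rough sketch (assuming mergekit is installed, the YAML above is saved as `config.yml`, and the output directory name is arbitrary; flags may vary between mergekit versions), the merge can be reproduced with the mergekit CLI:
```shell
pip install mergekit
mergekit-yaml config.yml ./merged-model --cuda
```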
|
martyyz/llama3-8b-oig-unsloth-merged | martyyz | "2024-04-19T20:59:46Z" | 16 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-19T14:54:45Z" | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** martyyz
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
CiroN2022/mtv-logo-90-s | CiroN2022 | "2023-08-23T11:49:27Z" | 5 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-08-23T11:49:24Z" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: 90_mtv
widget:
- text: 90_mtv
---
# MTV Logo 90's

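A minimal generation sketch (an assumption, not part of the original card: it presumes the repo's LoRA file is resolvable by `load_lora_weights` and that a recent diffusers release with SDXL support is installed):
```py
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("CiroN2022/mtv-logo-90-s")  # this LoRA repo
# "90_mtv" is the trigger word; the rest of the prompt is illustrative
image = pipe("90_mtv, retro music channel logo, VHS grain").images[0]
image.save("mtv_logo.png")
```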
## Image examples for the model:









|
mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF | mradermacher | "2024-12-12T02:52:59Z" | 6 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:zelk12/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B",
"base_model:quantized:zelk12/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-12-12T02:23:41Z" | ---
base_model: zelk12/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/zelk12/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
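As a minimal download sketch (assuming `huggingface_hub` is installed; any filename from the table below works):
```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF \
  MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q4_K_M.gguf --local-dir .
```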
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q2_K.gguf) | Q2_K | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q3_K_S.gguf) | Q3_K_S | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q3_K_M.gguf) | Q3_K_M | 4.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q3_K_L.gguf) | Q3_K_L | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.IQ4_XS.gguf) | IQ4_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q4_0_4_4.gguf) | Q4_0_4_4 | 5.5 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q4_K_S.gguf) | Q4_K_S | 5.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q4_K_M.gguf) | Q4_K_M | 5.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q5_K_S.gguf) | Q5_K_S | 6.6 | |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q5_K_M.gguf) | Q5_K_M | 6.7 | |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q6_K.gguf) | Q6_K | 7.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.Q8_0.gguf) | Q8_0 | 9.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B-GGUF/resolve/main/MT3-Gen3-IF-gemma-2-MTM2MUS4-9B.f16.gguf) | f16 | 18.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
jphan32/Zero2Story | jphan32 | "2023-10-10T09:25:22Z" | 6 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-10-09T22:28:53Z" | ---
license: creativeml-openrail-m
---
|
erkam/sd-clevr-text2im | erkam | "2023-04-20T23:13:53Z" | 0 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2",
"base_model:adapter:stabilityai/stable-diffusion-2",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | "2023-04-13T08:53:45Z" |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - erkam/sd-clevr-text2im
These are LoRA adaptation weights for stabilityai/stable-diffusion-2, fine-tuned on the erkam/clevr-with-depth-full-v2 dataset. Example images are shown below, followed by a loading sketch.




|
cmatc13/Meta-Llama-3.1-8B-darwin-finetune | cmatc13 | "2024-08-02T13:26:28Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-08-02T11:19:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
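In the meantime, a minimal loading sketch (standard transformers usage; untested against this checkpoint, which the repo tags suggest is stored 4-bit via bitsandbytes):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "cmatc13/Meta-Llama-3.1-8B-darwin-finetune"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```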
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1409 | Lots-of-LoRAs | "2024-07-03T20:28:48Z" | 0 | 0 | pytorch | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"license:mit",
"region:us"
] | null | "2024-06-18T20:13:47Z" | ---
language: en
license: mit
library_name: pytorch
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task1409
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task1409_dart_text_generation
- **Developed by:** bruel
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** LoRA
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://github.com/bruel-gabrielsson
- **Paper [optional]:** "Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead" (2024), Rickard Brüel Gabrielsson, Jiacheng Zhu, Onkar Bhardwaj, Leshem Choshen, Kristjan Greenewald, Mikhail Yurochkin and Justin Solomon
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
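Pending the official snippet, a minimal sketch for loading a PEFT LoRA on top of its base model (standard peft usage; not verified against this specific adapter):
```py
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task1409")
```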
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
https://huggingface.co/datasets/Lots-of-LoRAs/task1409_dart_text_generation sourced from https://github.com/allenai/natural-instructions
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
```
@misc{brüelgabrielsson2024compressserveservingthousands,
      title={Compress then Serve: Serving Thousands of LoRA Adapters with Little Overhead},
      author={Rickard Brüel-Gabrielsson and Jiacheng Zhu and Onkar Bhardwaj and Leshem Choshen and Kristjan Greenewald and Mikhail Yurochkin and Justin Solomon},
      year={2024},
      eprint={2407.00066},
      archivePrefix={arXiv},
      primaryClass={cs.DC},
      url={https://arxiv.org/abs/2407.00066},
}
```
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
TransferGraph/SetFit_distilbert-base-uncased__sst2__train-32-9-finetuned-lora-tweet_eval_hate | TransferGraph | "2024-02-29T13:40:28Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:SetFit/distilbert-base-uncased__sst2__train-32-9",
"base_model:adapter:SetFit/distilbert-base-uncased__sst2__train-32-9",
"license:apache-2.0",
"model-index",
"region:us"
] | text-classification | "2024-02-29T13:40:23Z" | ---
license: apache-2.0
library_name: peft
tags:
- parquet
- text-classification
datasets:
- tweet_eval
metrics:
- accuracy
base_model: SetFit/distilbert-base-uncased__sst2__train-32-9
model-index:
- name: SetFit_distilbert-base-uncased__sst2__train-32-9-finetuned-lora-tweet_eval_hate
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: hate
split: validation
args: hate
metrics:
- type: accuracy
value: 0.747
name: accuracy
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SetFit_distilbert-base-uncased__sst2__train-32-9-finetuned-lora-tweet_eval_hate
This model is a fine-tuned version of [SetFit/distilbert-base-uncased__sst2__train-32-9](https://huggingface.co/SetFit/distilbert-base-uncased__sst2__train-32-9) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- accuracy: 0.747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| accuracy | train_loss | epoch |
|:--------:|:----------:|:-----:|
| 0.547 | None | 0 |
| 0.709 | 0.5387 | 0 |
| 0.734 | 0.4619 | 1 |
| 0.738 | 0.4336 | 2 |
| 0.747 | 0.4119 | 3 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.16.1
- Tokenizers 0.15.2 |
bartowski/Nous-Hermes-2-Yi-34B-exl2 | bartowski | "2023-12-26T09:00:50Z" | 4 | 1 | null | [
"yi",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"text-generation",
"en",
"base_model:01-ai/Yi-34B",
"base_model:finetune:01-ai/Yi-34B",
"license:apache-2.0",
"region:us"
] | text-generation | "2023-12-26T03:03:49Z" | ---
base_model: 01-ai/Yi-34B
tags:
- yi
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
model-index:
- name: Nous-Hermes-2-Yi-34B
results: []
license: apache-2.0
language:
- en
quantized_by: bartowski
pipeline_tag: text-generation
---
## Exllama v2 Quantizations of Nous-Hermes-2-Yi-34B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.11">turboderp's ExLlamaV2 v0.0.11</a> for quantization.
Each branch contains an individual bits-per-weight quantization; the `main` branch contains only the measurement.json needed for further conversions.
Conversion was done using the default calibration dataset.
Default arguments were used, except that when the bits per weight is above 6.0, the lm_head layer is quantized at 8 bits per weight instead of the default 6.
Original model: https://huggingface.co/NousResearch/Nous-Hermes-2-Yi-34B
<a href="https://huggingface.co/bartowski/Nous-Hermes-2-Yi-34B-exl2/tree/3_75">3.75 bits per weight</a>
<a href="https://huggingface.co/bartowski/Nous-Hermes-2-Yi-34B-exl2/tree/4_25">4.25 bits per weight</a>
<a href="https://huggingface.co/bartowski/Nous-Hermes-2-Yi-34B-exl2/tree/5_0">5.0 bits per weight</a>
<a href="https://huggingface.co/bartowski/Nous-Hermes-2-Yi-34B-exl2/tree/6_0">6.0 bits per weight</a>
## Download instructions
With git:
```shell
git clone --single-branch --branch 4_25 https://huggingface.co/bartowski/Nous-Hermes-2-Yi-34B-exl2
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` branch (only useful if you only care about measurement.json) to a folder called `Nous-Hermes-2-Yi-34B-exl2`:
```shell
mkdir Nous-Hermes-2-Yi-34B-exl2
huggingface-cli download bartowski/Nous-Hermes-2-Yi-34B-exl2 --local-dir Nous-Hermes-2-Yi-34B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir Nous-Hermes-2-Yi-34B-exl2
huggingface-cli download bartowski/Nous-Hermes-2-Yi-34B-exl2 --revision 4_25 --local-dir Nous-Hermes-2-Yi-34B-exl2 --local-dir-use-symlinks False
```
|
fbaldassarri/HuggingFaceTB_SmolLM2-360M-auto_awq-int4-gs64-sym | fbaldassarri | "2025-01-04T20:31:49Z" | 78 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autoround",
"auto-round",
"intel",
"gptq",
"auto-awq",
"autoawq",
"awq",
"woq",
"pytorch",
"onnx",
"transformers.js",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M",
"base_model:quantized:HuggingFaceTB/SmolLM2-360M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"region:us"
] | text-generation | "2025-01-04T14:42:16Z" | ---
language:
- en
license: apache-2.0
library_name: transformers
tags:
- autoround
- auto-round
- intel
- gptq
- auto-awq
- autoawq
- awq
- woq
- pytorch
- transformers
- safetensors
- onnx
- transformers.js
model_name: SmolLM2 360M
base_model: HuggingFaceTB/SmolLM2-360M
inference: false
model_creator: HuggingFaceTB
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: fbaldassarri
---
## Model Information
Quantized version of [HuggingFaceTB/SmolLM2-360M](https://huggingface.co/HuggingFaceTB/SmolLM2-360M) using torch.float32 for quantization tuning.
- 4 bits (INT4)
- group size = 64
- Symmetrical Quantization
- Method AutoAWQ
Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.3
Note: this INT4 version of SmolLM2-360M has been quantized to run inference on CPU.
## Replication Recipe
### Step 1 Install Requirements
I suggest installing the requirements into a dedicated Python virtualenv or a conda environment.
```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.3.tar.gz
tar -xvzf v0.4.3.tar.gz
cd auto-round-0.4.3
pip install -r requirements-cpu.txt --upgrade
```
### Step 2 Build Intel AutoRound wheel from sources
```
pip install -vvv --no-build-isolation -e .[cpu]
```
### Step 3 Script for Quantization
```
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "HuggingFaceTB/SmolLM2-360M"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
from auto_round import AutoRound
bits, group_size, sym, device, amp = 4, 64, True, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/HuggingFaceTB_SmolLM2-360M-autoawq-int4-gs64-sym"
autoround.save_quantized(output_dir, format='auto_awq', inplace=True)
```
## License
[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)
## Disclaimer
This quantized model comes with no warranty. It has been developed only for research purposes.
|
CodyKilpatrick/q-FrozenLake-v1-4x4-noSlippery | CodyKilpatrick | "2023-05-26T18:04:32Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-05-26T18:04:29Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
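# load_from_hub is assumed to be the helper defined in the Hugging Face Deep RL course notebook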
model = load_from_hub(repo_id="CodyKilpatrick/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
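A minimal greedy-evaluation sketch continuing from the snippet above (assumptions: a Gymnasium-style step API and that the pickled dict stores the table under the "qtable" key, as in the course format):
```python
import numpy as np

state, _ = env.reset()
done = False
while not done:
    action = int(np.argmax(model["qtable"][state]))  # always take the best-known action
    state, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```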
|
ribesstefano/RuleBert-v0.1-k4 | ribesstefano | "2024-01-07T17:05:04Z" | 94 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"text-classification",
"Italian",
"legal ruling",
"generated_from_trainer",
"base_model:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"base_model:finetune:classla/xlm-roberta-base-multilingual-text-genre-classifier",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-01-07T16:57:21Z" | ---
license: mit
base_model: classla/xlm-roberta-base-multilingual-text-genre-classifier
tags:
- Italian
- legal ruling
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: ribesstefano/RuleBert-v0.1-k4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ribesstefano/RuleBert-v0.1-k4
This model is a fine-tuned version of [classla/xlm-roberta-base-multilingual-text-genre-classifier](https://huggingface.co/classla/xlm-roberta-base-multilingual-text-genre-classifier) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3547
- F1: 0.4940
- Roc Auc: 0.6712
- Accuracy: 0.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.3524 | 0.14 | 250 | 0.3526 | 0.4888 | 0.6731 | 0.0 |
| 0.3318 | 0.27 | 500 | 0.3489 | 0.4885 | 0.6698 | 0.0 |
| 0.3278 | 0.41 | 750 | 0.3517 | 0.4870 | 0.6709 | 0.0 |
| 0.3165 | 0.54 | 1000 | 0.3506 | 0.4953 | 0.6726 | 0.0 |
| 0.3243 | 0.68 | 1250 | 0.3501 | 0.4904 | 0.6693 | 0.0 |
| 0.3072 | 0.82 | 1500 | 0.3529 | 0.4979 | 0.6715 | 0.0 |
| 0.311 | 0.95 | 1750 | 0.3527 | 0.4855 | 0.6664 | 0.0 |
| 0.3277 | 1.09 | 2000 | 0.3542 | 0.4900 | 0.6693 | 0.0 |
| 0.3102 | 1.22 | 2250 | 0.3535 | 0.4881 | 0.6679 | 0.0 |
| 0.3159 | 1.36 | 2500 | 0.3533 | 0.4839 | 0.6663 | 0.0 |
| 0.3073 | 1.49 | 2750 | 0.3531 | 0.4994 | 0.6726 | 0.0 |
| 0.3108 | 1.63 | 3000 | 0.3542 | 0.4929 | 0.6701 | 0.0 |
| 0.3093 | 1.77 | 3250 | 0.3546 | 0.4925 | 0.6702 | 0.0 |
| 0.2981 | 1.9 | 3500 | 0.3547 | 0.4933 | 0.6703 | 0.0 |
| 0.3046 | 2.04 | 3750 | 0.3547 | 0.4929 | 0.6707 | 0.0 |
| 0.3085 | 2.17 | 4000 | 0.3547 | 0.4940 | 0.6712 | 0.0 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF | andreass123 | "2024-04-30T01:14:20Z" | 8 | 0 | transformers | [
"transformers",
"gguf",
"gemma",
"pytorch",
"instruct",
"finetune",
"translation",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"ko",
"base_model:lemon-mint/gemma-ko-1.1-2b-it",
"base_model:quantized:lemon-mint/gemma-ko-1.1-2b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-04-30T01:14:11Z" | ---
language:
- ko
license: gemma
library_name: transformers
tags:
- gemma
- pytorch
- instruct
- finetune
- translation
- llama-cpp
- gguf-my-repo
base_model: lemon-mint/gemma-ko-1.1-2b-it
widget:
- messages:
- role: user
content: Translate into Korean:Hamsters don't eat cats.
pipeline_tag: text-generation
---
# andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF
This model was converted to GGUF format from [`lemon-mint/gemma-2b-translation-v0.150`](https://huggingface.co/lemon-mint/gemma-2b-translation-v0.150) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/lemon-mint/gemma-2b-translation-v0.150) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF --model gemma-2b-translation-v0.150.Q4_K_M.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo andreass123/gemma-2b-translation-v0.150-Q4_K_M-GGUF --model gemma-2b-translation-v0.150.Q4_K_M.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-2b-translation-v0.150.Q4_K_M.gguf -n 128
```
|
Coobiw/InternLM-XComposer2_Enhanced | Coobiw | "2025-02-14T03:15:32Z" | 0 | 0 | null | [
"pytorch",
"internlmxcomposer2",
"visual-question-answering",
"custom_code",
"arxiv:2401.16420",
"license:other",
"region:us"
] | visual-question-answering | "2025-02-13T20:15:40Z" | ---
license: other
pipeline_tag: visual-question-answering
---
**This repo is based on [InternLM-XComposer2 Official](https://huggingface.co/internlm/internlm-xcomposer2-7b). It adds support for `batchified training` and `flash-attn` acceleration. Feel free to try it out and share your feedback~**
<div align="center">
<h1>Original InternLM-XC2 README</h1>
</div>
<p align="center">
<img src="logo_en.png" width="400"/>
<p>
<p align="center">
<b><font size="6">InternLM-XComposer2</font></b>
<p>
<div align="center">
[💻Github Repo](https://github.com/InternLM/InternLM-XComposer)
[Paper](https://arxiv.org/abs/2401.16420)
</div>
**InternLM-XComposer2** is a vision-language large model (VLLM) based on [InternLM2](https://github.com/InternLM/InternLM) for advanced text-image comprehension and composition.
We release InternLM-XComposer2 series in two versions:
- InternLM-XComposer2-VL: The pretrained VLLM model with InternLM2 as the initialization of the LLM, achieving strong performance on various multimodal benchmarks.
- InternLM-XComposer2: The finetuned VLLM for *Free-from Interleaved Text-Image Composition*.
### Import from Transformers
To load the InternLM-XComposer2-VL-7B model using Transformers, use the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
ckpt_path = "internlm/internlm-xcomposer2-vl-7b"
tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
# Set `torch_dtype=torch.float16` to load model in float16, otherwise it will be loaded as float32 and might cause OOM Error.
model = AutoModelForCausalLM.from_pretrained(ckpt_path, torch_dtype=torch.float16, trust_remote_code=True).cuda()
model = model.eval()
```
## Quickstart
We provide a simple example to show how to use InternLM-XComposer with 🤗 Transformers.
```python
import torch
from transformers import AutoModel, AutoTokenizer
torch.set_grad_enabled(False)
# init model and tokenizer
model = AutoModel.from_pretrained('internlm/internlm-xcomposer2-vl-7b', trust_remote_code=True).cuda().eval()
tokenizer = AutoTokenizer.from_pretrained('internlm/internlm-xcomposer2-vl-7b', trust_remote_code=True)
query = '<ImageHere>Please describe this image in detail.'
image = './image1.webp'
with torch.cuda.amp.autocast():
response, _ = model.chat(tokenizer, query=query, image=image, history=[], do_sample=False)
print(response)
#The image features a quote by Oscar Wilde, "Live life with no excuses, travel with no regret,"
# set against a backdrop of a breathtaking sunset. The sky is painted in hues of pink and orange,
# creating a serene atmosphere. Two silhouetted figures stand on a cliff, overlooking the horizon.
# They appear to be hiking or exploring, embodying the essence of the quote.
# The overall scene conveys a sense of adventure and freedom, encouraging viewers to embrace life without hesitation or regrets.
```
### Open Source License
The code is licensed under Apache-2.0, while model weights are fully open for academic research and also allow free commercial usage. To apply for a commercial license, please fill in the application form (English) / 申请表 (Chinese). For other questions or collaborations, please contact [email protected]. |
davidschulte/ESM_Divyanshu__indicxnli_pa | davidschulte | "2024-12-08T15:34:27Z" | 8 | 0 | null | [
"safetensors",
"embedding_space_map",
"BaseLM:bert-base-multilingual-uncased",
"dataset:Divyanshu/indicxnli",
"arxiv:2410.15148",
"base_model:google-bert/bert-base-multilingual-uncased",
"base_model:finetune:google-bert/bert-base-multilingual-uncased",
"license:apache-2.0",
"region:us"
] | null | "2024-12-08T15:34:23Z" | ---
base_model: bert-base-multilingual-uncased
datasets:
- Divyanshu/indicxnli
license: apache-2.0
tags:
- embedding_space_map
- BaseLM:bert-base-multilingual-uncased
---
# ESM Divyanshu/indicxnli
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
ESM
- **Developed by:** David Schulte
- **Model type:** ESM
- **Base Model:** bert-base-multilingual-uncased
- **Intermediate Task:** Divyanshu/indicxnli
- **ESM architecture:** linear
- **Language(s) (NLP):** [More Information Needed]
- **License:** Apache-2.0 license
## Training Details
### Intermediate Task
- **Task ID:** Divyanshu/indicxnli
- **Subset [optional]:** pa
- **Text Column:** ['premise', 'hypothesis']
- **Label Column:** label
- **Dataset Split:** train
- **Sample size [optional]:** 10000
- **Sample seed [optional]:** 42
### Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Language Model Training Hyperparameters [optional]
- **Epochs:** 3
- **Batch size:** 32
- **Learning rate:** 2e-05
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### ESM Training Hyperparameters [optional]
- **Epochs:** 10
- **Batch size:** 32
- **Learning rate:** 0.001
- **Weight Decay:** 0.01
- **Optimizer**: AdamW
### Additional training details [optional]
## Model evaluation
### Evaluation of fine-tuned language model [optional]
### Evaluation of ESM [optional]
MSE:
### Additional evaluation details [optional]
## What are Embedding Space Maps?
<!-- This section describes the evaluation protocols and provides the results. -->
Embedding Space Maps (ESMs) are neural networks that approximate the effect of fine-tuning a language model on a task. They can be used to quickly transform embeddings from a base model to approximate how a fine-tuned model would embed the input text.
ESMs can be used for intermediate task selection with the ESM-LogME workflow.
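As a rough illustration, a linear ESM (the architecture used here) is simply a learned map applied to frozen base-model embeddings. A minimal sketch assuming a 768-dimensional embedding (the hidden size of bert-base-multilingual-uncased); the random weights are placeholders, not the released artifact:
```python
import torch

# Placeholder linear ESM: in practice its weights are trained to mimic the
# intermediate-task fine-tuned encoder; random here purely for illustration.
esm = torch.nn.Linear(768, 768)

base_embedding = torch.randn(1, 768)          # stand-in for a base-model sentence embedding
approx_tuned_embedding = esm(base_embedding)  # ≈ what the fine-tuned model would produce
print(approx_tuned_embedding.shape)           # torch.Size([1, 768])
```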
## How can I use Embedding Space Maps for Intermediate Task Selection?
[](https://pypi.org/project/hf-dataset-selector)
We release **hf-dataset-selector**, a Python package for intermediate task selection using Embedding Space Maps.
**hf-dataset-selector** fetches ESMs for a given language model and uses them to find the best dataset for applying intermediate training to the target task. ESMs are found by their tags on the Huggingface Hub.
```python
from hfselect import Dataset, compute_task_ranking
# Load target dataset from the Hugging Face Hub
dataset = Dataset.from_hugging_face(
    name="stanfordnlp/imdb",
    split="train",
    text_col="text",
    label_col="label",
    is_regression=False,
    num_examples=1000,
    seed=42
)
# Fetch ESMs and rank tasks
task_ranking = compute_task_ranking(
    dataset=dataset,
    model_name="bert-base-multilingual-uncased"
)
# Display top 5 recommendations
print(task_ranking[:5])
```
For more information on how to use ESMs please have a look at the [official Github repository](https://github.com/davidschulte/hf-dataset-selector).
## Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
If you are using Embedding Space Maps, please cite our [paper](https://arxiv.org/abs/2410.15148).
**BibTeX:**
```
@misc{schulte2024moreparameterefficientselectionintermediate,
title={Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning},
author={David Schulte and Felix Hamborg and Alan Akbik},
year={2024},
eprint={2410.15148},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2410.15148},
}
```
**APA:**
```
Schulte, D., Hamborg, F., & Akbik, A. (2024). Less is More: Parameter-Efficient Selection of Intermediate Tasks for Transfer Learning. arXiv preprint arXiv:2410.15148.
```
## Additional Information
|
Domeed/bert-finetuned-ner | Domeed | "2023-04-29T09:28:59Z" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2023-04-29T07:40:14Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
config: conll2003
split: validation
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.05441708283106596
- name: Recall
type: recall
value: 0.005586394654032457
- name: F1
type: f1
value: 0.010132589421704905
- name: Accuracy
type: accuracy
value: 0.028674468762153946
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1994
- Precision: 0.0544
- Recall: 0.0056
- F1: 0.0101
- Accuracy: 0.0287
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch is shown after the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
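As a rough reconstruction (not the original training script), the listed values map onto 🤗 `TrainingArguments` like so:
```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameters above; output_dir is a placeholder
args = TrainingArguments(
    output_dir="bert-finetuned-ner",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```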
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.3372 | 1.0 | 1756 | 0.2798 | 0.0564 | 0.0048 | 0.0089 | 0.0211 |
| 0.1801 | 2.0 | 3512 | 0.2153 | 0.0627 | 0.0061 | 0.0112 | 0.0281 |
| 0.1377 | 3.0 | 5268 | 0.1994 | 0.0544 | 0.0056 | 0.0101 | 0.0287 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
EaindraKyaw/t5-small-squad-qg | EaindraKyaw | "2024-12-24T08:06:58Z" | 113 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-12-20T06:57:16Z" | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
model-index:
- name: t5-small-squad-qg
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-squad-qg
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3
### Framework versions
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 2.13.1
- Tokenizers 0.21.0
|
HachiML/ReasoningVector-Mistral-Small-24B-Instruct-2501-reasoning | HachiML | "2025-02-18T15:35:33Z" | 0 | 0 | null | [
"safetensors",
"mistral",
"reasoning",
"reasoning-vector",
"weight-diff",
"mistra",
"deepseek",
"en",
"ja",
"license:apache-2.0",
"region:us"
] | null | "2025-02-18T13:34:38Z" | ---
tags:
- reasoning
- reasoning-vector
- weight-diff
- mistra
- deepseek
license: apache-2.0
language:
- en
- ja
---
# Reasoning Vector
## Overview
**Reasoning Vector** is a model produced by extracting the weight difference between a base model and a reasoning model (a sketch of this extraction is shown after the model list below).
It is built the same way as a ChatVector and is used to add reasoning capability to a fine-tuned base model.
It cannot be used on its own.
### Models
- **Base Model**: [mistralai/Mistral-Small-24B-Instruct-2501](https://huggingface.co/mistralai/Mistral-Small-24B-Instruct-2501)
- **Reasoning Model**: [yentinglin/Mistral-Small-24B-Instruct-2501-reasoning](https://huggingface.co/yentinglin/Mistral-Small-24B-Instruct-2501-reasoning)
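For reference, the vector is conceptually just the element-wise difference of the two checkpoints above. A minimal sketch of that extraction (my illustration of the stated method, not the exact release script; it ignores the layer exclusions applied at merge time, which appear in the usage code below):
```python
import torch
from transformers import AutoModelForCausalLM

# reasoning_vector ≈ reasoning_model - base_model, parameter by parameter
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-Small-24B-Instruct-2501", torch_dtype=torch.bfloat16
)
reasoning = AutoModelForCausalLM.from_pretrained(
    "yentinglin/Mistral-Small-24B-Instruct-2501-reasoning", torch_dtype=torch.bfloat16
)

base_sd = base.state_dict()
diff = {k: v - base_sd[k] for k, v in reasoning.state_dict().items() if k in base_sd}
torch.save(diff, "reasoning_vector.pt")  # illustrative storage format
```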
## Usage
Below is an example of applying the Reasoning Vector to a base model to produce a reasoning-capable model.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model
base_model = AutoModelForCausalLM.from_pretrained("your-base-model")
tokenizer = AutoTokenizer.from_pretrained("your-base-model")

# Load the Reasoning Vector (the weight difference, stored as model parameters)
reasoning_vector = AutoModelForCausalLM.from_pretrained("HachiML/ReasoningVector-Mistral-Small-24B-Instruct-2501-reasoning")

# Apply the difference to the base model
# Layers excluded from the update
skip_layers = ["model.embed_tokens.weight", "model.norm.weight", "lm_head.weight"]
rv_state_dict = reasoning_vector.state_dict()
with torch.no_grad():
    for k, v in base_model.state_dict().items():
        # layernorm weights are also excluded
        if (k in skip_layers) or ("layernorm" in k):
            continue
        new_v = v + rv_state_dict[k].to(v.device)  # fixed: was `new_v +=` on an undefined variable
        v.copy_(new_v)

# Example inference
inputs = tokenizer("Enter the text you want the model to reason about", return_tensors="pt")
outputs = base_model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
justinhoang/ppo-SnowballTarget | justinhoang | "2023-06-26T05:02:46Z" | 16 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | "2023-06-26T05:02:44Z" | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: justinhoang/ppo-SnowballTarget
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
alessiodm/ppo-LunarLander-v2-custom | alessiodm | "2023-11-04T07:52:23Z" | 0 | 0 | null | [
"tensorboard",
"LunarLander-v2",
"ppo",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"deep-rl-course",
"model-index",
"region:us"
] | reinforcement-learning | "2023-11-04T07:49:47Z" | ---
tags:
- LunarLander-v2
- ppo
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
- deep-rl-course
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -156.24 +/- 69.44
name: mean_reward
verified: false
---
# PPO Agent Playing LunarLander-v2
This is a trained model of a PPO agent playing LunarLander-v2.
# Hyperparameters
```python
{'f': '/root/.local/share/jupyter/runtime/kernel-d239687f-db31-4b3f-898c-3541ecbe7918.json',
 'exp_name': 'ppo',
 'seed': 1,
 'torch_deterministic': True,
 'cuda': True,
 'track': False,
 'wandb_project_name': 'cleanRL',
 'wandb_entity': None,
 'capture_video': False,
 'env_id': 'LunarLander-v2',
 'total_timesteps': 50000,
 'learning_rate': 0.00025,
 'num_envs': 4,
 'num_steps': 128,
 'anneal_lr': True,
 'gae': True,
 'gamma': 0.99,
 'gae_lambda': 0.95,
 'num_minibatches': 4,
 'update_epochs': 4,
 'norm_adv': True,
 'clip_coef': 0.2,
 'clip_vloss': True,
 'ent_coef': 0.01,
 'vf_coef': 0.5,
 'max_grad_norm': 0.5,
 'target_kl': None,
 'repo_id': 'alessiodm/ppo-LunarLander-v2-custom',
 'batch_size': 512,
 'minibatch_size': 128}
```
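The last two entries are not set directly; they follow from the rollout settings. A quick sanity check:
```python
# Derivation of the PPO batch sizes from the rollout settings above
num_envs, num_steps, num_minibatches = 4, 128, 4

batch_size = num_envs * num_steps               # 4 * 128 = 512 transitions per rollout
minibatch_size = batch_size // num_minibatches  # 512 // 4 = 128

assert (batch_size, minibatch_size) == (512, 128)
```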
|
jotabeartesvisuais/01 | jotabeartesvisuais | "2024-03-04T17:56:18Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-03-04T17:56:18Z" | ---
license: other
license_name: noname
license_link: LICENSE
---
|
Dynosaur/llama3-8b-math-sft-mix-8-1 | Dynosaur | "2024-11-25T12:46:33Z" | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"alignment-handbook",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:hexuan21/math-sft-mix-8-1",
"base_model:Dynosaur/llama3-8b-math-sft",
"base_model:finetune:Dynosaur/llama3-8b-math-sft",
"license:llama3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-25T09:20:52Z" | ---
library_name: transformers
license: llama3
base_model: Dynosaur/llama3-8b-math-sft
tags:
- alignment-handbook
- trl
- sft
- generated_from_trainer
datasets:
- hexuan21/math-sft-mix-8-1
model-index:
- name: llama3-8b-math-sft-mix-8-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama3-8b-math-sft-mix-8-1
This model is a fine-tuned version of [Dynosaur/llama3-8b-math-sft](https://huggingface.co/Dynosaur/llama3-8b-math-sft) on the hexuan21/math-sft-mix-8-1 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the effective batch-size arithmetic is worked out after the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
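As a quick check, the total batch sizes follow from the per-device settings:
```python
# Effective batch sizes implied by the distributed settings above
train_batch_size, eval_batch_size = 2, 8
num_devices, grad_accum = 4, 8

total_train_batch_size = train_batch_size * num_devices * grad_accum  # 2 * 4 * 8 = 64
total_eval_batch_size = eval_batch_size * num_devices                 # 8 * 4 = 32

assert (total_train_batch_size, total_eval_batch_size) == (64, 32)
```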
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.20.3
|
joshnguyen/mformer-fairness | joshnguyen | "2024-01-11T16:17:58Z" | 482 | 3 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-15T03:06:50Z" | ---
license: mit
language:
- en
library_name: transformers
--- |
mrvincenzo/dqn-SpaceInvadersNoFrameskip-v4 | mrvincenzo | "2023-08-13T09:48:54Z" | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-13T09:48:13Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 872.00 +/- 417.93
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrvincenzo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mrvincenzo -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mrvincenzo
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
nilz1999/Llama-2-7b-FT-multi-label-ft-merged | nilz1999 | "2024-04-22T07:49:56Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-22T07:47:15Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/SOLAR-10.7B-v1.4-i1-GGUF | mradermacher | "2025-02-10T02:22:58Z" | 360 | 0 | transformers | [
"transformers",
"gguf",
"SOLAR-10.7B",
"ko",
"base_model:hyeogi/SOLAR-10.7B-v1.4",
"base_model:quantized:hyeogi/SOLAR-10.7B-v1.4",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-02-10T01:10:28Z" | ---
base_model: hyeogi/SOLAR-10.7B-v1.4
language:
- ko
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
tags:
- SOLAR-10.7B
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/hyeogi/SOLAR-10.7B-v1.4
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
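As a quick start, any of the single-file quants below runs directly with llama.cpp; an illustrative invocation (file name taken from the table below, prompt arbitrary):
```bash
# Illustrative: run the recommended Q4_K_M imatrix quant with llama.cpp
llama-cli --hf-repo mradermacher/SOLAR-10.7B-v1.4-i1-GGUF \
  --hf-file SOLAR-10.7B-v1.4.i1-Q4_K_M.gguf \
  -p "Hello"
```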
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ1_S.gguf) | i1-IQ1_S | 2.5 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ1_M.gguf) | i1-IQ1_M | 2.7 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ2_S.gguf) | i1-IQ2_S | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ2_M.gguf) | i1-IQ2_M | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.9 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q2_K.gguf) | i1-Q2_K | 4.1 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 4.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ3_XS.gguf) | i1-IQ3_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q3_K_S.gguf) | i1-Q3_K_S | 4.8 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ3_S.gguf) | i1-IQ3_S | 4.8 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ3_M.gguf) | i1-IQ3_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q3_K_M.gguf) | i1-Q3_K_M | 5.3 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q3_K_L.gguf) | i1-Q3_K_L | 5.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ4_XS.gguf) | i1-IQ4_XS | 5.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q4_0.gguf) | i1-Q4_0 | 6.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-IQ4_NL.gguf) | i1-IQ4_NL | 6.2 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q4_K_S.gguf) | i1-Q4_K_S | 6.3 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q4_K_M.gguf) | i1-Q4_K_M | 6.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q4_1.gguf) | i1-Q4_1 | 6.9 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q5_K_S.gguf) | i1-Q5_K_S | 7.6 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q5_K_M.gguf) | i1-Q5_K_M | 7.8 | |
| [GGUF](https://huggingface.co/mradermacher/SOLAR-10.7B-v1.4-i1-GGUF/resolve/main/SOLAR-10.7B-v1.4.i1-Q6_K.gguf) | i1-Q6_K | 9.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
farid1088/Legal_GQA_7_BERT_augmented_100 | farid1088 | "2024-03-05T04:45:40Z" | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | question-answering | "2024-03-05T04:39:09Z" | ---
tags:
- generated_from_trainer
model-index:
- name: Legal_GQA_7_BERT_augmented_100
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Legal_GQA_7_BERT_augmented_100
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8854
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 2.9375 |
| No log | 2.0 | 8 | 2.5861 |
| No log | 3.0 | 12 | 3.0236 |
| No log | 4.0 | 16 | 2.3145 |
| No log | 5.0 | 20 | 2.7110 |
| No log | 6.0 | 24 | 2.4009 |
| No log | 7.0 | 28 | 2.6089 |
| No log | 8.0 | 32 | 2.5080 |
| No log | 9.0 | 36 | 2.6943 |
| No log | 10.0 | 40 | 2.6713 |
| No log | 11.0 | 44 | 3.0227 |
| No log | 12.0 | 48 | 2.8381 |
| No log | 13.0 | 52 | 3.2355 |
| No log | 14.0 | 56 | 2.9510 |
| No log | 15.0 | 60 | 3.3167 |
| No log | 16.0 | 64 | 3.2990 |
| No log | 17.0 | 68 | 3.4914 |
| No log | 18.0 | 72 | 3.5478 |
| No log | 19.0 | 76 | 3.7819 |
| No log | 20.0 | 80 | 3.7423 |
| No log | 21.0 | 84 | 3.7653 |
| No log | 22.0 | 88 | 3.9264 |
| No log | 23.0 | 92 | 3.7901 |
| No log | 24.0 | 96 | 4.0258 |
| No log | 25.0 | 100 | 4.1388 |
| No log | 26.0 | 104 | 4.1338 |
| No log | 27.0 | 108 | 4.0925 |
| No log | 28.0 | 112 | 4.0685 |
| No log | 29.0 | 116 | 4.2066 |
| No log | 30.0 | 120 | 4.3976 |
| No log | 31.0 | 124 | 4.2297 |
| No log | 32.0 | 128 | 4.4429 |
| No log | 33.0 | 132 | 4.4769 |
| No log | 34.0 | 136 | 4.6924 |
| No log | 35.0 | 140 | 4.5341 |
| No log | 36.0 | 144 | 4.4352 |
| No log | 37.0 | 148 | 4.4956 |
| No log | 38.0 | 152 | 4.5124 |
| No log | 39.0 | 156 | 4.4433 |
| No log | 40.0 | 160 | 4.5376 |
| No log | 41.0 | 164 | 4.4187 |
| No log | 42.0 | 168 | 4.6840 |
| No log | 43.0 | 172 | 4.8962 |
| No log | 44.0 | 176 | 4.6352 |
| No log | 45.0 | 180 | 4.6857 |
| No log | 46.0 | 184 | 4.7973 |
| No log | 47.0 | 188 | 4.8357 |
| No log | 48.0 | 192 | 4.8215 |
| No log | 49.0 | 196 | 4.8593 |
| No log | 50.0 | 200 | 4.7425 |
| No log | 51.0 | 204 | 4.6979 |
| No log | 52.0 | 208 | 4.7642 |
| No log | 53.0 | 212 | 4.9259 |
| No log | 54.0 | 216 | 5.0124 |
| No log | 55.0 | 220 | 5.1167 |
| No log | 56.0 | 224 | 5.0260 |
| No log | 57.0 | 228 | 4.8341 |
| No log | 58.0 | 232 | 4.8657 |
| No log | 59.0 | 236 | 4.8196 |
| No log | 60.0 | 240 | 4.7984 |
| No log | 61.0 | 244 | 5.0060 |
| No log | 62.0 | 248 | 4.9326 |
| No log | 63.0 | 252 | 4.7038 |
| No log | 64.0 | 256 | 4.7326 |
| No log | 65.0 | 260 | 5.0008 |
| No log | 66.0 | 264 | 5.1227 |
| No log | 67.0 | 268 | 4.8750 |
| No log | 68.0 | 272 | 4.6740 |
| No log | 69.0 | 276 | 4.9472 |
| No log | 70.0 | 280 | 5.0634 |
| No log | 71.0 | 284 | 4.9791 |
| No log | 72.0 | 288 | 4.9960 |
| No log | 73.0 | 292 | 4.9437 |
| No log | 74.0 | 296 | 4.8558 |
| No log | 75.0 | 300 | 4.8548 |
| No log | 76.0 | 304 | 4.9371 |
| No log | 77.0 | 308 | 4.8281 |
| No log | 78.0 | 312 | 4.8555 |
| No log | 79.0 | 316 | 5.0903 |
| No log | 80.0 | 320 | 5.1344 |
| No log | 81.0 | 324 | 5.0305 |
| No log | 82.0 | 328 | 4.9848 |
| No log | 83.0 | 332 | 4.9658 |
| No log | 84.0 | 336 | 4.8907 |
| No log | 85.0 | 340 | 4.8319 |
| No log | 86.0 | 344 | 4.8355 |
| No log | 87.0 | 348 | 4.8083 |
| No log | 88.0 | 352 | 4.8290 |
| No log | 89.0 | 356 | 4.9148 |
| No log | 90.0 | 360 | 4.9964 |
| No log | 91.0 | 364 | 5.0250 |
| No log | 92.0 | 368 | 4.9765 |
| No log | 93.0 | 372 | 4.9332 |
| No log | 94.0 | 376 | 4.9085 |
| No log | 95.0 | 380 | 4.8835 |
| No log | 96.0 | 384 | 4.8701 |
| No log | 97.0 | 388 | 4.8764 |
| No log | 98.0 | 392 | 4.8855 |
| No log | 99.0 | 396 | 4.8869 |
| No log | 100.0 | 400 | 4.8854 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.14.7
- Tokenizers 0.15.0
|
aao331/Carpincho-30b-qlora | aao331 | "2023-09-20T22:49:44Z" | 0 | 0 | null | [
"en",
"es",
"arxiv:2302.13971",
"region:us"
] | null | "2023-06-06T00:09:55Z" | ---
language:
- en
- es
---
# Model Card for Carpincho-30b
<!-- Provide a quick summary of what the model is/does. -->
This is the Carpincho-30B QLoRA 4-bit checkpoint, an instruction-tuned LLM based on LLaMA-30B. It is trained to answer in colloquial Argentine Spanish.
It was trained on 2x RTX 3090 GPUs (48 GB total) for 120 hours using the Hugging Face QLoRA code (4-bit quantization).
## Model Details
The model is provided in LoRA format.
## Usage
Here is example inference code, you will need to install the following requirements:
```
bitsandbytes==0.39.0
transformers @ git+https://github.com/huggingface/transformers.git
peft @ git+https://github.com/huggingface/peft.git
accelerate @ git+https://github.com/huggingface/accelerate.git
einops==0.6.1
evaluate==0.4.0
scikit-learn==1.2.2
sentencepiece==0.1.99
wandb==0.15.3
```
```
import time
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, LlamaTokenizer, StoppingCriteria, StoppingCriteriaList, TextIteratorStreamer
model_name = "models/huggyllama_llama-30b/"
adapters_name = 'carpincho-30b-qlora'
print(f"Starting to load the model {model_name} into memory")
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    load_in_4bit=True,
    torch_dtype=torch.bfloat16,
    device_map="sequential"
)
print(f"Loading {adapters_name} into memory")
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = LlamaTokenizer.from_pretrained(model_name)
tokenizer.bos_token_id = 1
stop_token_ids = [0]
print(f"Successfully loaded the model {model_name} into memory")
def main(tokenizer):
    prompt = '''Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
%s

### Response:
''' % "Hola, como estas?"
    batch = tokenizer(prompt, return_tensors="pt")
    batch = {k: v.cuda() for k, v in batch.items()}
    with torch.no_grad():
        generated = model.generate(inputs=batch["input_ids"],
                                   do_sample=True, use_cache=True,
                                   repetition_penalty=1.1,
                                   max_new_tokens=100,
                                   temperature=0.9,
                                   top_p=0.95,
                                   top_k=40,
                                   return_dict_in_generate=True,
                                   output_attentions=False,
                                   output_hidden_states=False,
                                   output_scores=False)
    result_text = tokenizer.decode(generated['sequences'].cpu().tolist()[0])
    print(result_text)

main(tokenizer)
```
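If you prefer a standalone checkpoint, the adapter can also be merged into the base weights. A minimal sketch using PEFT's `merge_and_unload` (an assumption on my part, not the author's published workflow; it requires loading the base model in half precision rather than 4-bit, since merging into quantized weights is generally unsupported):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model in bf16 (not 4-bit) so the LoRA deltas can be folded in
base = AutoModelForCausalLM.from_pretrained(
    "models/huggyllama_llama-30b/", torch_dtype=torch.bfloat16, device_map="auto"
)
merged = PeftModel.from_pretrained(base, "carpincho-30b-qlora").merge_and_unload()
merged.save_pretrained("carpincho-30b-merged")  # standalone model, no adapter required
```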
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Alfredo Ortega (@ortegaalfredo)
- **Model type:** 30B LLM QLoRA
- **Language(s) (NLP):** English and colloquial Argentine Spanish
- **License:** Free for non-commercial use, but I'm not the police.
- **Finetuned from model:** https://huggingface.co/huggyllama/llama-30b
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** https://huggingface.co/huggyllama/llama-30b
- **Paper [optional]:** https://arxiv.org/abs/2302.13971
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
This is a generic LLM chatbot that can be used to interact directly with humans.
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
This bot is uncensored and may provide shocking answers. It also reflects biases present in the training material.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## Model Card Contact
Contact the creator at @ortegaalfredo on twitter/github |
DrRhino/sonya-lora | DrRhino | "2024-12-28T16:53:52Z" | 6 | 1 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2024-12-28T16:53:51Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: AISONYA
---
# Sonya Lora
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `AISONYA` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('DrRhino/sonya-lora', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
abhayesian/llama-3.3-70b-honly-rank_8-small_lr-no_hhh | abhayesian | "2025-01-28T21:52:10Z" | 10 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"base_model:adapter:togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference",
"region:us"
] | null | "2025-01-24T06:10:43Z" | ---
base_model: togethercomputer/Meta-Llama-3.3-70B-Instruct-Reference
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.12.0 |
ch5/IA | ch5 | "2023-09-06T09:26:34Z" | 0 | 0 | asteroid | [
"asteroid",
"arxiv:1910.09700",
"license:openrail",
"region:us"
] | null | "2023-09-06T09:11:09Z" | ---
license: openrail
library_name: asteroid
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Triangle104/Virtuoso-Small-v2-Q8_0-GGUF | Triangle104 | "2025-01-30T19:06:15Z" | 10 | 0 | transformers | [
"transformers",
"gguf",
"llama-cpp",
"gguf-my-repo",
"base_model:arcee-ai/Virtuoso-Small-v2",
"base_model:quantized:arcee-ai/Virtuoso-Small-v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-30T19:03:33Z" | ---
base_model: arcee-ai/Virtuoso-Small-v2
library_name: transformers
license: apache-2.0
tags:
- llama-cpp
- gguf-my-repo
---
# Triangle104/Virtuoso-Small-v2-Q8_0-GGUF
This model was converted to GGUF format from [`arcee-ai/Virtuoso-Small-v2`](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/arcee-ai/Virtuoso-Small-v2) for more details on the model.
---
Virtuoso-Small-v2 (14B) is our next-generation, 14-billion-parameter language model that builds upon the original Virtuoso-Small architecture. This version is distilled from Deepseek-v3, leveraging an expanded dataset of 5B+ tokens worth of logits.
### Model Details
- Architecture Base: Qwen-2.5-14B
- Parameter Count: 14B
- Tokenizer:
  - Initially integrated with Deepseek-v3 tokenizer for logit extraction.
  - Final alignment uses the Qwen tokenizer, using specialized “tokenizer surgery” for cross-architecture compatibility.
- Distillation Data:
  - ~1.1B tokens/logits from Deepseek-v3’s training data.
  - Logit-level distillation using a proprietary “fusion merging” approach afterwards for maximum fidelity.
- License: Apache-2.0
### Background on Deepseek Distillation
Deepseek-v3 serves as the teacher model, from which we capture logits across billions of tokens. Rather than standard supervised fine-tuning, we apply a full logit-level replication. This ensures more precise transference of knowledge, including advanced reasoning in:
- Technical and scientific queries
- Complex code generation
- Mathematical problem-solving
### How to Use
Below is a sample code snippet using transformers:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "arcee-ai/Virtuoso-Small-v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Provide a concise summary of quantum entanglement."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=150)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Training & Fine-Tuning
- Initial Training: Began with Qwen-14B, calibrated for large-scale text ingestion.
- Distillation & Merging:
  - Trained on ~1.1B tokens worth of Deepseek-v3 logits.
  - Employed “fusion merging” to retain as much teacher expertise as possible.
  - Final step included DPO to improve alignment and reduce model hallucinations.
- Continuous Development: Additional R1 distillations are in progress to further enhance performance and specialization.
### Limitations
- Context Length: 128k Tokens
- Knowledge Cut-off: Training data may not reflect the latest events or developments, leading to gaps in current knowledge beyond June 2024.
### Ethical Considerations
- Content Generation Risks: Like any language model, Virtuoso-Small-v2 can potentially generate harmful or biased content if prompted in certain ways.
### License
Virtuoso-Small-v2 (14B) is released under the Apache-2.0 License. You are free to use, modify, and distribute this model in both commercial and non-commercial applications, subject to the terms and conditions of the license.
If you have questions or would like to share your experiences using these models, please connect with us on social media. We’re excited to see what you build—and how these models help you innovate!
---
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Triangle104/Virtuoso-Small-v2-Q8_0-GGUF --hf-file virtuoso-small-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Triangle104/Virtuoso-Small-v2-Q8_0-GGUF --hf-file virtuoso-small-v2-q8_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Triangle104/Virtuoso-Small-v2-Q8_0-GGUF --hf-file virtuoso-small-v2-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Triangle104/Virtuoso-Small-v2-Q8_0-GGUF --hf-file virtuoso-small-v2-q8_0.gguf -c 2048
```
|
ITACHIXD/dummy_model | ITACHIXD | "2024-02-21T04:29:35Z" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"codegen",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-02-21T04:28:10Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Schneiderleid/distilbert-base-uncased-distilled-clinc | Schneiderleid | "2024-10-21T13:57:17Z" | 106 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-10-20T17:00:52Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1695
- Accuracy: 0.9439
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 1.2650 | 0.7590 |
| 1.5241 | 2.0 | 636 | 0.6665 | 0.8771 |
| 1.5241 | 3.0 | 954 | 0.3849 | 0.9210 |
| 0.6014 | 4.0 | 1272 | 0.2654 | 0.9332 |
| 0.2764 | 5.0 | 1590 | 0.2137 | 0.9384 |
| 0.2764 | 6.0 | 1908 | 0.1932 | 0.9403 |
| 0.1798 | 7.0 | 2226 | 0.1810 | 0.9432 |
| 0.1461 | 8.0 | 2544 | 0.1753 | 0.9439 |
| 0.1461 | 9.0 | 2862 | 0.1704 | 0.9439 |
| 0.1336 | 10.0 | 3180 | 0.1695 | 0.9439 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 3.0.1
- Tokenizers 0.19.1
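For quick testing, a minimal inference sketch is shown below (the model id is this repo; the example utterance is illustrative, and the returned labels follow the config's id2label mapping for the CLINC intents):
```python
from transformers import pipeline

# Load the distilled intent classifier from the Hub
classifier = pipeline(
    "text-classification",
    model="Schneiderleid/distilbert-base-uncased-distilled-clinc",
)

# Example utterance (illustrative); returns the predicted intent label and score
print(classifier("please set a timer for ten minutes"))
```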
|
deadlyhacker/investopedia_finbert | deadlyhacker | "2023-05-15T19:28:06Z" | 78 | 0 | transformers | [
"transformers",
"tf",
"bert",
"fill-mask",
"generated_from_keras_callback",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-05-13T00:13:40Z" | ---
tags:
- generated_from_keras_callback
model-index:
- name: deadlyhacker/investopedia_finbert
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# deadlyhacker/investopedia_finbert
This model is a fine-tuned version of [ProsusAI/finbert](https://huggingface.co/ProsusAI/finbert) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.2476
- Train Accuracy: 0.1092
- Validation Loss: 1.2565
- Validation Accuracy: 0.1097
- Epoch: 10
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 2.5600 | 0.0855 | 1.7046 | 0.1015 | 0 |
| 1.7150 | 0.1006 | 1.5251 | 0.1041 | 1 |
| 1.5635 | 0.1032 | 1.4501 | 0.1053 | 2 |
| 1.4843 | 0.1046 | 1.3938 | 0.1064 | 3 |
| 1.4215 | 0.1058 | 1.3624 | 0.1075 | 4 |
| 1.3870 | 0.1062 | 1.3373 | 0.1077 | 5 |
| 1.3483 | 0.1071 | 1.3053 | 0.1082 | 6 |
| 1.3187 | 0.1079 | 1.3033 | 0.1090 | 7 |
| 1.2964 | 0.1083 | 1.2965 | 0.1081 | 8 |
| 1.2720 | 0.1088 | 1.2652 | 0.1088 | 9 |
| 1.2476 | 0.1092 | 1.2565 | 0.1097 | 10 |
### Framework versions
- Transformers 4.29.1
- TensorFlow 2.10.1
- Datasets 2.12.0
- Tokenizers 0.13.3
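As a rough usage sketch (this is a TensorFlow checkpoint, so the TF model class is used; the example sentence is illustrative):
```python
from transformers import AutoTokenizer, TFAutoModelForMaskedLM, pipeline

model_id = "deadlyhacker/investopedia_finbert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForMaskedLM.from_pretrained(model_id)

# Fill-mask pipeline; [MASK] is the standard BERT mask token
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill("Diversification reduces portfolio [MASK]."))
```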
|
lesso02/6b7e7b40-ea04-4864-b6b8-1cc7b3b2f729 | lesso02 | "2025-01-25T12:54:04Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"phi3",
"axolotl",
"generated_from_trainer",
"base_model:Xenova/tiny-random-Phi3ForCausalLM",
"base_model:adapter:Xenova/tiny-random-Phi3ForCausalLM",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-01-25T12:52:53Z" | ---
library_name: peft
base_model: Xenova/tiny-random-Phi3ForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 6b7e7b40-ea04-4864-b6b8-1cc7b3b2f729
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Xenova/tiny-random-Phi3ForCausalLM
bf16: auto
chat_template: llama3
datasets:
- data_files:
- 9c729d125314253a_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/9c729d125314253a_train_data.json
type:
field_input: rational_answer
field_instruction: question
field_output: answer
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 1
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: lesso02/6b7e7b40-ea04-4864-b6b8-1cc7b3b2f729
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 5.0e-05
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/9c729d125314253a_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 1
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: a24eb70d-9c84-4201-82c8-541dd6562dc8
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: a24eb70d-9c84-4201-82c8-541dd6562dc8
warmup_steps: 5
weight_decay: 0.01
xformers_attention: true
```
</details><br>
# 6b7e7b40-ea04-4864-b6b8-1cc7b3b2f729
This model is a fine-tuned version of [Xenova/tiny-random-Phi3ForCausalLM](https://huggingface.co/Xenova/tiny-random-Phi3ForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.0 | 0.2284 | 200 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
paruwka/llama-1b-hypersenses-peft | paruwka | "2025-01-05T00:21:45Z" | 161 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-04T17:44:42Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
arjunanand13/LADP_Florence-10e | arjunanand13 | "2024-10-14T07:10:22Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"florence2",
"text-generation",
"custom_code",
"arxiv:1910.09700",
"autotrain_compatible",
"region:us"
] | text-generation | "2024-10-14T07:09:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
xiaohan1/oalima-agent | xiaohan1 | "2024-03-02T06:01:44Z" | 3 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-02T04:44:47Z" | ---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
microsoft/xtremedistil-l12-h384-uncased | microsoft | "2021-08-05T17:49:31Z" | 1,132 | 15 | transformers | [
"transformers",
"pytorch",
"tf",
"bert",
"feature-extraction",
"text-classification",
"en",
"arxiv:2106.04563",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://huggingface.co/front/thumbnails/microsoft.png
tags:
- text-classification
license: mit
---
# XtremeDistilTransformers for Distilling Massive Neural Networks
XtremeDistilTransformers is a distilled task-agnostic transformer model that leverages task transfer for learning a small universal model that can be applied to arbitrary tasks and languages as outlined in the paper [XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation](https://arxiv.org/abs/2106.04563).
We leverage task transfer combined with multi-task distillation techniques from the papers [XtremeDistil: Multi-stage Distillation for Massive Multilingual Models](https://www.aclweb.org/anthology/2020.acl-main.202.pdf) and [MiniLM: Deep Self-Attention Distillation for Task-Agnostic Compression of Pre-Trained Transformers](https://proceedings.neurips.cc/paper/2020/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf) with the following [Github code](https://github.com/microsoft/xtreme-distil-transformers).
This l12-h384 checkpoint with **12** layers, **384** hidden size, **12** attention heads corresponds to **33 million** parameters with **2.7x** speedup over BERT-base.
Other available checkpoints: [xtremedistil-l6-h256-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h256-uncased) and [xtremedistil-l6-h384-uncased](https://huggingface.co/microsoft/xtremedistil-l6-h384-uncased)
The following table shows the results on GLUE dev set and SQuAD-v2.
| Models | #Params | Speedup | MNLI | QNLI | QQP | RTE | SST | MRPC | SQUAD2 | Avg |
|----------------|--------|---------|------|------|------|------|------|------|--------|-------|
| BERT | 109 | 1x | 84.5 | 91.7 | 91.3 | 68.6 | 93.2 | 87.3 | 76.8 | 84.8 |
| DistilBERT | 66 | 2x | 82.2 | 89.2 | 88.5 | 59.9 | 91.3 | 87.5 | 70.7 | 81.3 |
| TinyBERT | 66 | 2x | 83.5 | 90.5 | 90.6 | 72.2 | 91.6 | 88.4 | 73.1 | 84.3 |
| MiniLM | 66 | 2x | 84.0 | 91.0 | 91.0 | 71.5 | 92.0 | 88.4 | 76.4 | 84.9 |
| MiniLM | 22 | 5.3x | 82.8 | 90.3 | 90.6 | 68.9 | 91.3 | 86.6 | 72.9 | 83.3 |
| XtremeDistil-l6-h256 | 13 | 8.7x | 83.9 | 89.5 | 90.6 | 80.1 | 91.2 | 90.0 | 74.1 | 85.6 |
| XtremeDistil-l6-h384 | 22 | 5.3x | 85.4 | 90.3 | 91.0 | 80.9 | 92.3 | 90.0 | 76.6 | 86.6 |
| XtremeDistil-l12-h384 | 33 | 2.7x | 87.2 | 91.9 | 91.3 | 85.6 | 93.1 | 90.4 | 80.2 | 88.5 |
Tested with `tensorflow 2.3.1, transformers 4.1.1, torch 1.6.0`
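A minimal feature-extraction sketch with 🤗 Transformers (the checkpoint name matches this repo; fine-tuning on a downstream task follows the usual BERT recipe):
```python
from transformers import AutoTokenizer, AutoModel

model_id = "microsoft/xtremedistil-l12-h384-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("XtremeDistil is a distilled transformer.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 384)
```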
If you use this checkpoint in your work, please cite:
```latex
@misc{mukherjee2021xtremedistiltransformers,
title={XtremeDistilTransformers: Task Transfer for Task-agnostic Distillation},
author={Subhabrata Mukherjee and Ahmed Hassan Awadallah and Jianfeng Gao},
year={2021},
eprint={2106.04563},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
Ramavill/twBETO_v0 | Ramavill | "2024-11-23T00:51:47Z" | 109 | 1 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2022-08-11T19:10:16Z" | ---
language:
- es
tags:
- roberta
license: apache-2.0
---
# Please use 'Roberta' related functions to load this model!
This repository contains the resources in our paper
**[Social Context in Political Stance Detection: Impact and Extrapolation]**
*Ramon Villa-Cox, Evan Williams, Kathleen M. Carley*
We pre-trained a BERT language model, which we call *TwBETO_v0*, following the robust pre-training approach introduced in RoBERTa. We opted for the smaller architecture dimensions introduced in DistilBERT, namely 6 hidden layers with 12 attention heads. We also reduced the model's maximum sequence length to 128 tokens, following another BERT instantiation trained on English Twitter data (*BERTweet*). We utilize the RoBERTa implementation in the Hugging Face library and optimize the model using Adam with weight decay and a linear schedule with warmup, with a maximum learning rate of 2e-4. We use a global batch size (via gradient accumulation) of 5k across 4 Titan XP GPUs (12 GB RAM each) and trained the model for 650 hours.
The model was trained on a corpus of 155M Spanish tweets (4.5B word tokens), as determined by Twitter's API. It includes only original tweets (retweets are filtered out) with more than 6 tokens, and long tweets were truncated to 64 word tokens. The data was compiled from the following sources:
- 110M Tweets (3B word tokens) from the South American protests collected from September 20 to December 31 of 2019.
- 25M (0.7B word tokens) Tweets collected around the Coronavirus pandemic from April 01 to December 31 of 2020.
- 3M (0.3B word tokens) Tweets collected around the Chilean referendum from September 25 to November 10 of 2020.
- 17M (0.5B word tokens) rehydrated targets across all the collections listed.
Tweets are pretokenized using the `TweetTokenizer` from the NLTK toolkit, and the emoji package is used to translate emotion icons into word tokens (in Spanish). We also preprocess the tweets by replacing user mentions with "*USER_AT*" and, using the tweet JSON, replacing media URLs with "*HTTPMEDIA*" and web URLs with "*HTTPURL*". We found that this new model produced significantly better-quality embeddings than other available Spanish BERT variants for Twitter (e.g. *TWilBert*). We hypothesize that this is because the latter was trained mainly on European Spanish with less data and did not apply the RoBERTa pretraining framework.
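A minimal loading sketch consistent with the note above (RoBERTa classes, assuming the standard RoBERTa `<mask>` token; the example applies the same placeholder conventions described in preprocessing):
```python
from transformers import RobertaTokenizer, RobertaForMaskedLM, pipeline

model_id = "Ramavill/twBETO_v0"
tokenizer = RobertaTokenizer.from_pretrained(model_id)
model = RobertaForMaskedLM.from_pretrained(model_id)

# Apply the same placeholder conventions used during pretraining
text = "USER_AT el paro nacional fue <mask> HTTPURL"
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill(text))
```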
|
Nhoodie/Meta-Llama-3-8b-Configurable-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.0A | Nhoodie | "2024-04-26T17:16:05Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode",
"Orenguteng/Lexi-Llama-3-8B-Uncensored",
"NousResearch/Meta-Llama-3-8B",
"vicgalle/Configurable-Llama-3-8B-v0.3",
"NousResearch/Meta-Llama-3-8B-Instruct",
"conversational",
"base_model:NousResearch/Meta-Llama-3-8B",
"base_model:merge:NousResearch/Meta-Llama-3-8B",
"base_model:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:merge:NousResearch/Meta-Llama-3-8B-Instruct",
"base_model:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:merge:Orenguteng/Llama-3-8B-Lexi-Uncensored",
"base_model:hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode",
"base_model:merge:hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode",
"base_model:vicgalle/Configurable-Llama-3-8B-v0.3",
"base_model:merge:vicgalle/Configurable-Llama-3-8B-v0.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-26T07:25:57Z" | ---
tags:
- merge
- mergekit
- lazymergekit
- hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
- Orenguteng/Lexi-Llama-3-8B-Uncensored
- NousResearch/Meta-Llama-3-8B
- vicgalle/Configurable-Llama-3-8B-v0.3
- NousResearch/Meta-Llama-3-8B-Instruct
base_model:
- hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
- Orenguteng/Lexi-Llama-3-8B-Uncensored
- NousResearch/Meta-Llama-3-8B
- vicgalle/Configurable-Llama-3-8B-v0.3
- NousResearch/Meta-Llama-3-8B-Instruct
---
# Meta-Llama-3-8b-Configurable-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.0A
Meta-Llama-3-8b-Configurable-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.0A is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode](https://huggingface.co/hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode)
* [Orenguteng/Lexi-Llama-3-8B-Uncensored](https://huggingface.co/Orenguteng/Lexi-Llama-3-8B-Uncensored)
* [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B)
* [vicgalle/Configurable-Llama-3-8B-v0.3](https://huggingface.co/vicgalle/Configurable-Llama-3-8B-v0.3)
* [NousResearch/Meta-Llama-3-8B-Instruct](https://huggingface.co/NousResearch/Meta-Llama-3-8B-Instruct)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: hiieu/Meta-Llama-3-8B-Instruct-function-calling-json-mode
parameters:
weight: 1
layer_range: [0, 32]
- model: Orenguteng/Lexi-Llama-3-8B-Uncensored
parameters:
weight: 0.9
layer_range: [0, 32]
- model: NousResearch/Meta-Llama-3-8B
parameters:
weight: 0.6
layer_range: [0, 32]
- model: vicgalle/Configurable-Llama-3-8B-v0.3
parameters:
weight: 0.8
layer_range: [0, 32]
- model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
weight: 0.7
layer_range: [0, 32]
merge_method: task_arithmetic
base_model: NousResearch/Meta-Llama-3-8B-Instruct
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Nhoodie/Meta-Llama-3-8b-Configurable-Lexi-Uninstruct-function-calling-json-mode-Task-Arithmetic-v0.0A"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
zelk12/MT-Gen1-MU-gemma-2-Av4cMT1-9B | zelk12 | "2024-10-23T13:25:24Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:lemon07r/Gemma-2-Ataraxy-v4c-9B",
"base_model:merge:lemon07r/Gemma-2-Ataraxy-v4c-9B",
"base_model:zelk12/MT1-gemma-2-9B",
"base_model:merge:zelk12/MT1-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-10-23T13:13:20Z" | ---
base_model:
- lemon07r/Gemma-2-Ataraxy-v4c-9B
- zelk12/MT1-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [lemon07r/Gemma-2-Ataraxy-v4c-9B](https://huggingface.co/lemon07r/Gemma-2-Ataraxy-v4c-9B)
* [zelk12/MT1-gemma-2-9B](https://huggingface.co/zelk12/MT1-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: lemon07r/Gemma-2-Ataraxy-v4c-9B
- model: zelk12/MT1-gemma-2-9B
merge_method: slerp
base_model: lemon07r/Gemma-2-Ataraxy-v4c-9B
dtype: bfloat16
parameters:
t: 0.5
```
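To reproduce a merge like this locally, the usual mergekit workflow is roughly as follows (paths and output name are placeholders):
```bash
pip install mergekit
# config.yaml holds the YAML block above
mergekit-yaml config.yaml ./merged-gemma-2-9B --cuda
```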
|
Tritkoman/EnglishtoOldEnglishV5 | Tritkoman | "2023-02-23T12:44:28Z" | 13 | 0 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"translation",
"unk",
"dataset:Tritkoman/autotrain-data-oldenglish5",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-02-23T12:36:45Z" | ---
tags:
- autotrain
- translation
language:
- unk
- unk
datasets:
- Tritkoman/autotrain-data-oldenglish5
co2_eq_emissions:
emissions: 10.382242558236783
---
# Model Trained Using AutoTrain
- Problem type: Translation
- Model ID: 3684798314
- CO2 Emissions (in grams): 10.3822
## Validation Metrics
- Loss: 2.959
- SacreBLEU: 11.287
- Gen len: 13.759 |
cleanrl/ChopperCommand-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3 | cleanrl | "2023-03-09T22:33:21Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"ChopperCommand-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-09T22:33:20Z" | ---
tags:
- ChopperCommand-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: ChopperCommand-v5
type: ChopperCommand-v5
metrics:
- type: mean_reward
value: 7040.00 +/- 3284.87
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **ChopperCommand-v5**
This is a trained model of a PPO agent playing ChopperCommand-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_machado_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```bash
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_machado_atari_wrapper --env-id ChopperCommand-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3/raw/main/cleanba_ppo_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/ChopperCommand-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed3/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_machado_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id ChopperCommand-v5 --seed 3
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'ChopperCommand-v5',
'exp_name': 'cleanba_ppo_envpool_machado_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 3,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF | featherless-ai-quants | "2024-11-10T19:52:26Z" | 5 | 0 | null | [
"gguf",
"text-generation",
"base_model:Locutusque/Hyperion-3.0-Mistral-7B-DPO",
"base_model:quantized:Locutusque/Hyperion-3.0-Mistral-7B-DPO",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-11-08T04:30:30Z" | ---
base_model: Locutusque/Hyperion-3.0-Mistral-7B-DPO
pipeline_tag: text-generation
quantized_by: featherless-ai-quants
---
# Locutusque/Hyperion-3.0-Mistral-7B-DPO GGUF Quantizations 🚀

*Optimized GGUF quantization files for enhanced model performance*
> Powered by [Featherless AI](https://featherless.ai) - run any model you'd like for a simple small fee.
---
## Available Quantizations 📊
| Quantization Type | File | Size |
|-------------------|------|------|
| IQ4_XS | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-IQ4_XS.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-IQ4_XS.gguf) | 3761.66 MB |
| Q2_K | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q2_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q2_K.gguf) | 2593.27 MB |
| Q3_K_L | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q3_K_L.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q3_K_L.gguf) | 3644.97 MB |
| Q3_K_M | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q3_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q3_K_M.gguf) | 3355.97 MB |
| Q3_K_S | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q3_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q3_K_S.gguf) | 3017.97 MB |
| Q4_K_M | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q4_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q4_K_M.gguf) | 4166.07 MB |
| Q4_K_S | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q4_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q4_K_S.gguf) | 3948.57 MB |
| Q5_K_M | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q5_K_M.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q5_K_M.gguf) | 4893.69 MB |
| Q5_K_S | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q5_K_S.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q5_K_S.gguf) | 4766.19 MB |
| Q6_K | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q6_K.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q6_K.gguf) | 5666.80 MB |
| Q8_0 | [Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q8_0.gguf](https://huggingface.co/featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF/blob/main/Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q8_0.gguf) | 7339.34 MB |
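These files follow the standard GGUF layout, so they can be run directly with llama.cpp; a rough example (the quant choice is illustrative):
```bash
llama-cli --hf-repo featherless-ai-quants/Locutusque-Hyperion-3.0-Mistral-7B-DPO-GGUF \
  --hf-file Locutusque-Hyperion-3.0-Mistral-7B-DPO-Q4_K_M.gguf \
  -p "Write a haiku about quantization"
```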
---
## ⚡ Powered by [Featherless AI](https://featherless.ai)
### Key Features
- 🔥 **Instant Hosting** - Deploy any Llama model on HuggingFace instantly
- 🛠️ **Zero Infrastructure** - No server setup or maintenance required
- 📚 **Vast Compatibility** - Support for 2400+ models and counting
- 💎 **Affordable Pricing** - Starting at just $10/month
---
**Links:**
[Get Started](https://featherless.ai) | [Documentation](https://featherless.ai/docs) | [Models](https://featherless.ai/models) |
AI-Engine/gemma-2-9b-it-GGUF | AI-Engine | "2024-07-22T17:04:48Z" | 25 | 0 | null | [
"gguf",
"base_model:google/gemma-2-9b-it",
"base_model:quantized:google/gemma-2-9b-it",
"license:gemma",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-16T22:12:54Z" | ---
license: gemma
base_model: google/gemma-2-9b-it
---
GGUF [llama.cpp](https://github.com/ggerganov/llama.cpp) quantized version of:
- Original model: [gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it)
- Model creator: [Google](https://huggingface.co/google)
- [License](https://www.kaggle.com/models/google/gemma/license/consent?verifyToken=CfDJ8OV3w-Vr_2dIpZxXY9wVZZnpWKdFS3kJvSU2XkwpfOZICBFcOxoYJFb12HJj1BQs9FHgrjqpbEoqYjxdMwgaew-eH8JJmsLOgj56rjNeDFWaxTA36ggVQ1RJsKmH0mbl74o1qgioqSV5ktl-J5ebL9ep3JmOojU1HdBDSScB6WyGDSIuAcw8MWuy9LEE74Ze)
## Recommended Prompt Format (Gemma)
```
<start_of_turn>user
Context/instructions and the user's message go here<end_of_turn>
<start_of_turn>model
AI message goes here<end_of_turn>
```
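Note that Gemma has no separate system role, so any instructions go in the first user turn. As a rough example of running one of these GGUF files with llama.cpp (the file name is a placeholder for whichever quant you download):
```bash
llama-cli -m gemma-2-9b-it-Q4_K_M.gguf -c 4096 -e \
  -p "<start_of_turn>user\nWrite a haiku about mountains<end_of_turn>\n<start_of_turn>model\n"
```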
Quant Version: [b3405](https://github.com/ggerganov/llama.cpp/releases/tag/b3405) with [imatrix](https://github.com/ggerganov/llama.cpp/discussions/5263#discussioncomment-8395384) |
adarksky/president-gpt2 | adarksky | "2024-07-04T11:34:43Z" | 12 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-07-04T11:24:07Z" | ---
license: mit
language:
- en
pipeline_tag: text-generation
widget:
- text: 'We all are '
- text: 'Americans '
- text: 'This is '
inference:
parameters:
min_length: 500
max_length: 1000
temperature: 0.7
--- |
ogbrandt/mistral7b-pjf-ft-v0 | ogbrandt | "2024-01-17T00:24:34Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-01-17T00:24:27Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rojic/VulRoBERTa | Rojic | "2024-05-04T04:28:01Z" | 130 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-28T06:13:11Z" | ---
license: apache-2.0
---
This RoBERTa model is trained on Devign for code vulnerability detection. It is a binary classification model.
Code example:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from transformers import pipeline

tokenizer = AutoTokenizer.from_pretrained("Rojic/VulRoBERTa", trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained("Rojic/VulRoBERTa")
pipe = pipeline("text-classification", tokenizer=tokenizer, model=model, trust_remote_code=True, return_all_scores=True)

# pipe(code)
pipe("static void filter_mirror_setup(NetFilterState *nf, Error **errp)\n{\n    MirrorState *s = FILTER_MIRROR(nf);\n    Chardev *chr;\n    chr = qemu_chr_find(s->outdev);\n    if (chr == NULL) {\n        error_set(errp, ERROR_CLASS_DEVICE_NOT_FOUND,\n            \"Device '%s' not found\", s->outdev);\n    qemu_chr_fe_init(&s->chr_out, chr, errp);")
```
|
sujitvasanth/TheBloke-openchat-3.5-0106-GPTQ | sujitvasanth | "2024-02-04T08:10:24Z" | 7 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"openchat",
"C-RLFT",
"conversational",
"arxiv:2309.11235",
"arxiv:2303.08774",
"base_model:openchat/openchat-3.5-0106",
"base_model:quantized:openchat/openchat-3.5-0106",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | text-generation | "2024-02-04T08:10:24Z" | ---
base_model: openchat/openchat-3.5-0106
inference: false
library_name: transformers
license: apache-2.0
model_creator: OpenChat
model_name: Openchat 3.5 0106
model_type: mistral
pipeline_tag: text-generation
prompt_template: 'GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
'
quantized_by: TheBloke
tags:
- openchat
- mistral
- C-RLFT
---
<!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Openchat 3.5 0106 - GPTQ
- Model creator: [OpenChat](https://huggingface.co/openchat)
- Original model: [Openchat 3.5 0106](https://huggingface.co/openchat/openchat-3.5-0106)
<!-- description start -->
# Description
This repo contains GPTQ model files for [OpenChat's Openchat 3.5 0106](https://huggingface.co/openchat/openchat-3.5-0106).
Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.
These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).
<!-- description end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/openchat-3.5-0106-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/openchat-3.5-0106-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/openchat-3.5-0106-GGUF)
* [OpenChat's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/openchat/openchat-3.5-0106)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: OpenChat-Correct
```
GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
```
<!-- prompt-template end -->
<!-- README_GPTQ.md-compatible clients start -->
## Known compatible clients / servers
GPTQ models are currently supported on Linux (NVidia/AMD) and Windows (NVidia only). macOS users: please use GGUF models.
These GPTQ models are known to work in the following inference servers/webuis.
- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)
This may not be a complete list; if you know of others, please let me know!
<!-- README_GPTQ.md-compatible clients end -->
<!-- README_GPTQ.md-provided-files start -->
## Provided files, and GPTQ parameters
Multiple quantisation parameters are provided, to allow you to choose the best one for your hardware and requirements.
Each separate quant is in a different branch. See below for instructions on fetching from different branches.
Most GPTQ files are made with AutoGPTQ. Mistral models are currently made with Transformers.
<details>
<summary>Explanation of GPTQ parameters</summary>
- Bits: The bit size of the quantised model.
- GS: GPTQ group size. Higher numbers use less VRAM, but have lower quantisation accuracy. "None" is the lowest possible value.
- Act Order: True or False. Also known as `desc_act`. True results in better quantisation accuracy. Some GPTQ clients have had issues with models that use Act Order plus Group Size, but this is generally resolved now.
- Damp %: A GPTQ parameter that affects how samples are processed for quantisation. 0.01 is default, but 0.1 results in slightly better accuracy.
- GPTQ dataset: The calibration dataset used during quantisation. Using a dataset more appropriate to the model's training can improve quantisation accuracy. Note that the GPTQ calibration dataset is not the same as the dataset used to train the model - please refer to the original model repo for details of the training dataset(s).
- Sequence Length: The length of the dataset sequences used for quantisation. Ideally this is the same as the model sequence length. For some very long sequence models (16+K), a lower sequence length may have to be used. Note that a lower sequence length does not limit the sequence length of the quantised model. It only impacts the quantisation accuracy on longer inference sequences.
- ExLlama Compatibility: Whether this file can be loaded with ExLlama, which currently only supports Llama and Mistral models in 4-bit.
</details>
| Branch | Bits | GS | Act Order | Damp % | GPTQ Dataset | Seq Len | Size | ExLlama | Desc |
| ------ | ---- | -- | --------- | ------ | ------------ | ------- | ---- | ------- | ---- |
| [main](https://huggingface.co/TheBloke/openchat-3.5-0106-GPTQ/tree/main) | 4 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.16 GB | Yes | 4-bit, with Act Order and group size 128g. Uses even less VRAM than 64g, but with slightly lower accuracy. |
| [gptq-4bit-32g-actorder_True](https://huggingface.co/TheBloke/openchat-3.5-0106-GPTQ/tree/gptq-4bit-32g-actorder_True) | 4 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.57 GB | Yes | 4-bit, with Act Order and group size 32g. Gives highest possible inference quality, with maximum VRAM usage. |
| [gptq-8bit--1g-actorder_True](https://huggingface.co/TheBloke/openchat-3.5-0106-GPTQ/tree/gptq-8bit--1g-actorder_True) | 8 | None | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.52 GB | No | 8-bit, with Act Order. No group size, to lower VRAM requirements. |
| [gptq-8bit-128g-actorder_True](https://huggingface.co/TheBloke/openchat-3.5-0106-GPTQ/tree/gptq-8bit-128g-actorder_True) | 8 | 128 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 7.68 GB | No | 8-bit, with group size 128g for higher inference quality and with Act Order for even higher accuracy. |
| [gptq-8bit-32g-actorder_True](https://huggingface.co/TheBloke/openchat-3.5-0106-GPTQ/tree/gptq-8bit-32g-actorder_True) | 8 | 32 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 8.17 GB | No | 8-bit, with group size 32g and Act Order for maximum inference quality. |
| [gptq-4bit-64g-actorder_True](https://huggingface.co/TheBloke/openchat-3.5-0106-GPTQ/tree/gptq-4bit-64g-actorder_True) | 4 | 64 | Yes | 0.1 | [VMware Open Instruct](https://huggingface.co/datasets/VMware/open-instruct/viewer/) | 4096 | 4.30 GB | Yes | 4-bit, with Act Order and group size 64g. Uses less VRAM than 32g, but with slightly lower accuracy. |
<!-- README_GPTQ.md-provided-files end -->
<!-- README_GPTQ.md-download-from-branches start -->
## How to download, including from branches
### In text-generation-webui
To download from the `main` branch, enter `TheBloke/openchat-3.5-0106-GPTQ` in the "Download model" box.
To download from another branch, add `:branchname` to the end of the download name, eg `TheBloke/openchat-3.5-0106-GPTQ:gptq-4bit-32g-actorder_True`
### From the command line
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
To download the `main` branch to a folder called `openchat-3.5-0106-GPTQ`:
```shell
mkdir openchat-3.5-0106-GPTQ
huggingface-cli download TheBloke/openchat-3.5-0106-GPTQ --local-dir openchat-3.5-0106-GPTQ --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
```shell
mkdir openchat-3.5-0106-GPTQ
huggingface-cli download TheBloke/openchat-3.5-0106-GPTQ --revision gptq-4bit-32g-actorder_True --local-dir openchat-3.5-0106-GPTQ --local-dir-use-symlinks False
```
<details>
<summary>More advanced huggingface-cli download usage</summary>
If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (default location on Linux is: `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows for interrupted downloads to be resumed, and allows you to quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason why I don't list that as the default option, is that the files are then hidden away in a cache folder and it's harder to know where your disk space is being used, and to clear it up if/when you want to remove a download model.
The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
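The same download can also be driven from Python. A minimal sketch using `huggingface_hub.snapshot_download`, equivalent to the CLI example above:
```python
from huggingface_hub import snapshot_download

# Download the main branch into a local folder, mirroring the CLI example.
snapshot_download(
    repo_id="TheBloke/openchat-3.5-0106-GPTQ",
    local_dir="openchat-3.5-0106-GPTQ",
    local_dir_use_symlinks=False,
)
```
Pass `revision="gptq-4bit-32g-actorder_True"` (or any branch from the table above) to fetch a different quant.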
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
mkdir openchat-3.5-0106-GPTQ
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/openchat-3.5-0106-GPTQ --local-dir openchat-3.5-0106-GPTQ --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
</details>
### With `git` (**not** recommended)
To clone a specific branch with `git`, use a command like this:
```shell
git clone --single-branch --branch gptq-4bit-32g-actorder_True https://huggingface.co/TheBloke/openchat-3.5-0106-GPTQ
```
Note that using Git with HF repos is strongly discouraged. It will be much slower than using `huggingface-hub`, and will use twice as much disk space as it has to store the model files twice (it stores every byte both in the intended target folder, and again in the `.git` folder as a blob.)
<!-- README_GPTQ.md-download-from-branches end -->
<!-- README_GPTQ.md-text-generation-webui start -->
## How to easily download and use this model in [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
Please make sure you're using the latest version of [text-generation-webui](https://github.com/oobabooga/text-generation-webui).
It is strongly recommended to use the text-generation-webui one-click-installers unless you're sure you know how to make a manual install.
1. Click the **Model tab**.
2. Under **Download custom model or LoRA**, enter `TheBloke/openchat-3.5-0106-GPTQ`.
- To download from a specific branch, enter for example `TheBloke/openchat-3.5-0106-GPTQ:gptq-4bit-32g-actorder_True`
- see Provided Files above for the list of branches for each option.
3. Click **Download**.
4. The model will start downloading. Once it's finished it will say "Done".
5. In the top left, click the refresh icon next to **Model**.
6. In the **Model** dropdown, choose the model you just downloaded: `openchat-3.5-0106-GPTQ`
7. The model will automatically load, and is now ready for use!
8. If you want any custom settings, set them and then click **Save settings for this model** followed by **Reload the Model** in the top right.
- Note that you do not need to and should not set manual GPTQ parameters any more. These are set automatically from the file `quantize_config.json`.
9. Once you're ready, click the **Text Generation** tab and enter a prompt to get started!
<!-- README_GPTQ.md-text-generation-webui end -->
<!-- README_GPTQ.md-use-from-tgi start -->
## Serving this model from Text Generation Inference (TGI)
It's recommended to use TGI version 1.1.0 or later. The official Docker container is: `ghcr.io/huggingface/text-generation-inference:1.1.0`
Example Docker parameters:
```shell
--model-id TheBloke/openchat-3.5-0106-GPTQ --port 3000 --quantize gptq --max-input-length 3696 --max-total-tokens 4096 --max-batch-prefill-tokens 4096
```
Example Python code for interfacing with TGI (requires huggingface-hub 0.17.0 or later):
```shell
pip3 install huggingface-hub
```
```python
from huggingface_hub import InferenceClient
endpoint_url = "https://your-endpoint-url-here"
prompt = "Tell me about AI"
prompt_template=f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
'''
client = InferenceClient(endpoint_url)
response = client.text_generation(
prompt_template,
max_new_tokens=128,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(f"Model output: {response}")
```
<!-- README_GPTQ.md-use-from-tgi end -->
<!-- README_GPTQ.md-use-from-python start -->
## Python code example: inference from this GPTQ model
### Install the necessary packages
Requires: Transformers 4.33.0 or later, Optimum 1.12.0 or later, and AutoGPTQ 0.4.2 or later.
```shell
pip3 install --upgrade transformers optimum
# If using PyTorch 2.1 + CUDA 12.x:
pip3 install --upgrade auto-gptq
# or, if using PyTorch 2.1 + CUDA 11.x:
pip3 install --upgrade auto-gptq --extra-index-url https://huggingface.github.io/autogptq-index/whl/cu118/
```
If you are using PyTorch 2.0, you will need to install AutoGPTQ from source. Likewise if you have problems with the pre-built wheels, you should try building from source:
```shell
pip3 uninstall -y auto-gptq
git clone https://github.com/PanQiWei/AutoGPTQ
cd AutoGPTQ
git checkout v0.5.1
pip3 install .
```
### Example Python code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
model_name_or_path = "TheBloke/openchat-3.5-0106-GPTQ"
# To use a different branch, change revision
# For example: revision="gptq-4bit-32g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
device_map="auto",
trust_remote_code=False,
revision="main")
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)
prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:
'''
print("\n\n*** Generate:")
input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))
# Inference can also be done using transformers' pipeline
print("*** Pipeline:")
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_new_tokens=512,
do_sample=True,
temperature=0.7,
top_p=0.95,
top_k=40,
repetition_penalty=1.1
)
print(pipe(prompt_template)[0]['generated_text'])
```
<!-- README_GPTQ.md-use-from-python end -->
<!-- README_GPTQ.md-compatibility start -->
## Compatibility
The files provided are tested to work with Transformers. For non-Mistral models, AutoGPTQ can also be used directly.
[ExLlama](https://github.com/turboderp/exllama) is compatible with Llama architecture models (including Mistral, Yi, DeepSeek, SOLAR, etc) in 4-bit. Please see the Provided Files table above for per-file compatibility.
For a list of clients/servers, please see "Known compatible clients / servers", above.
<!-- README_GPTQ.md-compatibility end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Michael Levine, 阿明, Trailburnt, Nikolai Manek, John Detwiler, Randy H, Will Dee, Sebastain Graf, NimbleBox.ai, Eugene Pentland, Emad Mostaque, Ai Maven, Jim Angel, Jeff Scroggin, Michael Davis, Manuel Alberto Morcote, Stephen Murray, Robert, Justin Joy, Luke @flexchar, Brandon Frisco, Elijah Stavena, S_X, Dan Guido, Undi ., Komninos Chatzipapas, Shadi, theTransient, Lone Striker, Raven Klaugh, jjj, Cap'n Zoog, Michel-Marie MAUDET (LINAGORA), Matthew Berman, David, Fen Risland, Omer Bin Jawed, Luke Pendergrass, Kalila, OG, Erik Bjäreholt, Rooh Singh, Joseph William Delisle, Dan Lewis, TL, John Villwock, AzureBlack, Brad, Pedro Madruga, Caitlyn Gatomon, K, jinyuan sun, Mano Prime, Alex, Jeffrey Morgan, Alicia Loh, Illia Dulskyi, Chadd, transmissions 11, fincy, Rainer Wilmers, ReadyPlayerEmma, knownsqashed, Mandus, biorpg, Deo Leter, Brandon Phillips, SuperWojo, Sean Connelly, Iucharbius, Jack West, Harry Royden McLaughlin, Nicholas, terasurfer, Vitor Caleffi, Duane Dunston, Johann-Peter Hartmann, David Ziegler, Olakabola, Ken Nordquist, Trenton Dambrowitz, Tom X Nguyen, Vadim, Ajan Kanaga, Leonard Tan, Clay Pascal, Alexandros Triantafyllidis, JM33133, Xule, vamX, ya boyyy, subjectnull, Talal Aujan, Alps Aficionado, wassieverse, Ari Malik, James Bentley, Woland, Spencer Kim, Michael Dempsey, Fred von Graf, Elle, zynix, William Richards, Stanislav Ovsiannikov, Edmond Seymore, Jonathan Leane, Martin Kemka, usrbinkat, Enrico Ros
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
# Original model card: OpenChat's Openchat 3.5 0106
<div align="center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/logo_new.png" style="width: 65%">
<h1>Advancing Open-source Language Models with Mixed-Quality Data</h1>
</div>
<p align="center" style="margin-top: 0px;">
<a href="https://openchat.team">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/logo_nobg.png?raw=true" alt="OpenChat Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 10px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">Online Demo</span>
</a> |
<a href="https://github.com/imoneoi/openchat">
<img src="https://github.githubassets.com/assets/GitHub-Mark-ea2971cee799.png" alt="GitHub Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style=" margin-right: 5px;">GitHub</span>
</a> |
<a href="https://arxiv.org/pdf/2309.11235.pdf">
<img src="https://github.com/alpayariyak/openchat/blob/master/assets/arxiv-logomark-small-square-border.png?raw=true" alt="ArXiv Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text" style="margin-right: 5px;">Paper</span>
</a> |
<a href="https://discord.gg/pQjnXvNKHY">
<img src="https://cloud.githubusercontent.com/assets/6291467/26705903/96c2d66e-477c-11e7-9f4e-f3c0efe96c9a.png" alt="Discord Logo" style="width:20px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
<span class="link-text">Discord</span>
</a>
</p>
<p align="center" style="margin-top: 0px;">
<span class="link-text" style=" margin-right: 0px; font-size: 0.8em">Sponsored by RunPod</span>
<img src="https://styles.redditmedia.com/t5_6075m3/styles/profileIcon_71syco7c5lt81.png?width=256&height=256&frame=1&auto=webp&crop=256:256,smart&s=24bd3c71dc11edc5d4f88d0cbc1da72ed7ae1969" alt="RunPod Logo" style="width:30px; vertical-align: middle; display: inline-block; margin-right: 5px; margin-left: 5px; margin-top: 0px; margin-bottom: 0px;"/>
</p>
<div style="background-color: white; padding: 0.7em; border-radius: 0.5em; color: black; display: flex; flex-direction: column; justify-content: center; text-align: center; ont-size: 0.5em; border: 0.8em solid #864AF9;">
<a href="https://huggingface.co/openchat/openchat-3.5-0106" style="text-decoration: none; color: black;">
<span style="font-size: 1.7em; font-family: 'Helvetica'; letter-spacing: 0.1em; font-weight: bold; color: black;">OPENCHAT</span><span style="font-size: 1.8em; font-family: 'Helvetica'; color: #3c72db; ">3.5</span>
<span style="font-size: 1.0em; font-family: 'Helvetica'; color: white; background-color: #864AF9; vertical-align: top; border-radius: 6em; padding: 0.066em 0.4em; letter-spacing: 0.1em; font-weight: bold;">0106</span>
<span style="font-size: 0.85em; font-family: 'Helvetica'; color: black;">
<br> 🏆 The Overall Best Performing Open Source 7B Model 🏆
<br> 🤖 Outperforms <span style="font-weight: bold;">ChatGPT</span> (March) and <span style="font-weight: bold;">Grok-1</span> 🤖
<br> 🚀<span style="font-size: 1em; font-family: 'Helvetica'; color: black; font-weight: bold;">15</span>-point improvement in Coding over <span style="font-size: 0.9em;
font-family: 'Helvetica'; color: black; font-weight: bold;">OpenChat-3.5🚀</span>
<br><br><span style="font-size: 1em; font-family: 'Helvetica'; color: #3c72db; font-weight: bold;">New Features</span>
<br> 💡 2 Modes: Coding + Generalist, Mathematical Reasoning 💡
<br> 🧑⚖️ Experimental support for Evaluator and Feedback capabilities 🧑⚖️
</span>
</a>
</div>
<div style="display: flex; justify-content: center; align-items: center">
<img src="https://raw.githubusercontent.com/imoneoi/openchat/master/assets/openchat-bench-0106.png" style="width: 100%; border-radius: 1em">
</div>
<div>
<h3> Table of Contents</h3>
</div>
1. [Usage](#usage)
2. [Benchmarks](#benchmarks)
3. [Limitations](#limitations)
4. [License](#license)
5. [Citation](#citation)
6. [Acknowledgements](#acknowledgements)
<div align="center">
<h2> Usage </h2>
</div>
To use this model, we highly recommend installing the OpenChat package by following the [installation guide](https://github.com/imoneoi/openchat#installation) in our repository and using the OpenChat OpenAI-compatible API server by running the serving command from the table below. The server is optimized for high-throughput deployment using [vLLM](https://github.com/vllm-project/vllm) and can run on a consumer GPU with 24GB RAM. To enable tensor parallelism, append `--tensor-parallel-size N` to the serving command.
Once started, the server listens at `localhost:18888` for requests and is compatible with the [OpenAI ChatCompletion API specifications](https://platform.openai.com/docs/api-reference/chat). Please refer to the example request below for reference. Additionally, you can use the [OpenChat Web UI](https://github.com/imoneoi/openchat#web-ui) for a user-friendly experience.
If you want to deploy the server as an online service, you can use `--api-keys sk-KEY1 sk-KEY2 ...` to specify allowed API keys and `--disable-log-requests --disable-log-stats --log-file openchat.log` for logging only to a file. For security purposes, we recommend using an [HTTPS gateway](https://fastapi.tiangolo.com/es/deployment/concepts/#security-https) in front of the server.
| Model | Size | Context | Weights | Serving |
|-------------------|------|---------|------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------|
| OpenChat-3.5-0106 | 7B | 8192 | [Huggingface](https://huggingface.co/openchat/openchat-3.5-0106) | `python -m ochat.serving.openai_api_server --model openchat/openchat-3.5-0106 --engine-use-ray --worker-use-ray` |
<details>
<summary>Example request (click to expand)</summary>
💡 **Default Mode (GPT4 Correct)**: Best for coding, chat and general tasks
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"messages": [{"role": "user", "content": "You are a large language model named OpenChat. Write a poem to describe yourself"}]
}'
```
🧮 **Mathematical Reasoning Mode**: Tailored for solving math problems
```bash
curl http://localhost:18888/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{
"model": "openchat_3.5",
"condition": "Math Correct",
"messages": [{"role": "user", "content": "10.3 − 7988.8133 = "}]
}'
```
</details>
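Because the server follows the OpenAI ChatCompletion schema, it can also be queried from Python. A minimal sketch using `requests`, assuming the server was started locally with the serving command above:
```python
import requests

# Query the local OpenChat server (OpenAI ChatCompletion-compatible endpoint).
response = requests.post(
    "http://localhost:18888/v1/chat/completions",
    json={
        "model": "openchat_3.5",
        "messages": [{"role": "user", "content": "Hello, who are you?"}],
    },
    timeout=60,
)
print(response.json()["choices"][0]["message"]["content"])
```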
### Conversation templates
💡 **Default Mode (GPT4 Correct)**: Best for coding, chat and general tasks
```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:
```
🧮 **Mathematical Reasoning Mode**: Tailored for solving math problems
```
Math Correct User: 10.3 − 7988.8133=<|end_of_turn|>Math Correct Assistant:
```
⚠️ **Notice:** Remember to set `<|end_of_turn|>` as end of generation token.
The default (GPT4 Correct) template is also available as the integrated `tokenizer.chat_template`,
which can be used instead of manually specifying the template:
```python
messages = [
{"role": "user", "content": "Hello"},
{"role": "assistant", "content": "Hi"},
{"role": "user", "content": "How are you today?"}
]
tokens = tokenizer.apply_chat_template(messages, add_generation_prompt=True)
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
```
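Continuing the snippet above, a minimal generation sketch (it assumes `model` and `tokenizer` are already loaded, e.g. as in a standard transformers setup for this checkpoint) that stops at `<|end_of_turn|>` as advised:
```python
import torch

# Stop generation at <|end_of_turn|>, as the notice above recommends.
eot_id = tokenizer.convert_tokens_to_ids("<|end_of_turn|>")
input_ids = torch.tensor([tokens]).to(model.device)
output = model.generate(input_ids, max_new_tokens=256, eos_token_id=eot_id)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```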
<div align="center">
<h2> (Experimental) Evaluator / Feedback Capabilities </h2>
</div>
We've included evaluator capabilities in this release to advance open-source models as evaluators. You can use `Default Mode (GPT4 Correct)` with the following prompt (same as [Prometheus](https://huggingface.co/datasets/kaist-ai/Feedback-Collection)) to evaluate a response.
```
###Task Description:
An instruction (might include an Input inside it), a response to evaluate, a reference answer that gets a score of 5, and a score rubric representing a evaluation criteria are given.
1. Write a detailed feedback that assess the quality of the response strictly based on the given score rubric, not evaluating in general.
2. After writing a feedback, write a score that is an integer between 1 and 5. You should refer to the score rubric.
3. The output format should look as follows: "Feedback: (write a feedback for criteria) [RESULT] (an integer number between 1 and 5)"
4. Please do not generate any other opening, closing, and explanations.
###The instruction to evaluate:
{orig_instruction}
###Response to evaluate:
{orig_response}
###Reference Answer (Score 5):
{orig_reference_answer}
###Score Rubrics:
[{orig_criteria}]
Score 1: {orig_score1_description}
Score 2: {orig_score2_description}
Score 3: {orig_score3_description}
Score 4: {orig_score4_description}
Score 5: {orig_score5_description}
###Feedback:
```
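To build such a prompt programmatically, plain `str.format` on the template is enough. A minimal sketch (the template string below abbreviates the full prompt above to two rubric levels; in practice, paste the complete text verbatim, and all field values here are placeholders):
```python
# Abbreviated evaluator template; use the full prompt text above in practice.
TEMPLATE = (
    "###The instruction to evaluate:\n{orig_instruction}\n"
    "###Response to evaluate:\n{orig_response}\n"
    "###Reference Answer (Score 5):\n{orig_reference_answer}\n"
    "###Score Rubrics:\n[{orig_criteria}]\n"
    "Score 1: {orig_score1_description}\n"
    "Score 5: {orig_score5_description}\n"
    "###Feedback:"
)
prompt = "GPT4 Correct User: " + TEMPLATE.format(
    orig_instruction="Summarize the article in two sentences.",
    orig_response="The article argues that open models are catching up.",
    orig_reference_answer="Open models are closing the gap with closed models.",
    orig_criteria="Is the summary faithful and concise?",
    orig_score1_description="Unfaithful and rambling.",
    orig_score5_description="Faithful and concise.",
) + "<|end_of_turn|>GPT4 Correct Assistant:"
print(prompt)
```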
<div align="center">
<h2> Benchmarks </h2>
</div>
| Model | # Params | Average | MT-Bench | HumanEval | BBH MC | AGIEval | TruthfulQA | MMLU | GSM8K | BBH CoT |
|-----------------------|----------|----------|----------|-----------|----------|----------|------------|----------|----------|----------|
| **OpenChat-3.5-0106** | **7B** | **64.5** | 7.8 | **71.3** | **51.5** | **49.1** | 61.0 | 65.8 | **77.4** | 62.2 |
| OpenChat-3.5-1210 | **7B** | 63.8 | 7.76 | 68.9 | 49.5 | 48.0 | **61.8** | 65.3 | 77.3 | 61.8 |
| OpenChat-3.5 | **7B** | 61.6 | 7.81 | 55.5 | 47.6 | 47.4 | 59.1 | 64.3 | 77.3 | 63.5 |
| ChatGPT (March)* | ???B | 61.5 | **7.94** | 48.1 | 47.6 | 47.1 | 57.7 | **67.3** | 74.9 | **70.1** |
| | | | | | | | | | | |
| OpenHermes 2.5 | 7B | 59.3 | 7.54 | 48.2 | 49.4 | 46.5 | 57.5 | 63.8 | 73.5 | 59.9 |
| OpenOrca Mistral | 7B | 52.7 | 6.86 | 38.4 | 49.4 | 42.9 | 45.9 | 59.3 | 59.1 | 58.1 |
| Zephyr-β^ | 7B | 34.6 | 7.34 | 22.0 | 40.6 | 39.0 | 40.8 | 39.8 | 5.1 | 16.0 |
| Mistral | 7B | - | 6.84 | 30.5 | 39.0 | 38.0 | - | 60.1 | 52.2 | - |
<details>
<summary>Evaluation Details(click to expand)</summary>
*: ChatGPT (March) results are from [GPT-4 Technical Report](https://arxiv.org/abs/2303.08774), [Chain-of-Thought Hub](https://github.com/FranxYao/chain-of-thought-hub), and our evaluation. Please note that ChatGPT is not a fixed baseline and evolves rapidly over time.
^: Zephyr-β often fails to follow few-shot CoT instructions, likely because it was aligned with only chat data but not trained on few-shot data.
**: Mistral and Open-source SOTA results are taken from reported results in instruction-tuned model papers and official repositories.
All models are evaluated in chat mode (e.g. with the respective conversation template applied). All zero-shot benchmarks follow the same setting as in the AGIEval paper and Orca paper. CoT tasks use the same configuration as Chain-of-Thought Hub, HumanEval is evaluated with EvalPlus, and MT-bench is run using FastChat. To reproduce our results, follow the instructions in [our repository](https://github.com/imoneoi/openchat/#benchmarks).
</details>
<div>
<h3>HumanEval+</h3>
</div>
| Model | Size | HumanEval+ pass@1 |
|-----------------------------|--------|-------------------|
| **OpenChat-3.5-0106** | **7B** | **65.9** |
| ChatGPT (December 12, 2023) | ???B | 64.6 |
| WizardCoder-Python-34B-V1.0 | 34B | 64.6 |
| OpenChat 3.5 1210 | 7B | 63.4 |
| OpenHermes 2.5 | 7B | 41.5 |
<div>
<h3>OpenChat-3.5 vs. Grok</h3>
</div>
🔥 OpenChat-3.5-0106 (7B) now outperforms Grok-0 (33B) on **all 4 benchmarks** and Grok-1 (???B) on average and **3/4 benchmarks**.
| | License | # Param | Average | MMLU | HumanEval | MATH | GSM8k |
|-----------------------|-------------|---------|----------|--------|-----------|----------|----------|
| **OpenChat-3.5-0106** | Apache-2.0 | **7B** | **61.0** | 65.8 | **71.3** | **29.3** | **77.4** |
| OpenChat-3.5-1210 | Apache-2.0 | **7B** | 60.1 | 65.3 | 68.9 | 28.9 | 77.3 |
| OpenChat-3.5 | Apache-2.0 | **7B** | 56.4 | 64.3 | 55.5 | 28.6 | 77.3 |
| Grok-0 | Proprietary | 33B | 44.5 | 65.7 | 39.7 | 15.7 | 56.8 |
| Grok-1 | Proprietary | ???B | 55.8 | **73** | 63.2 | 23.9 | 62.9 |
*: Grok results are reported by [X.AI](https://x.ai/).
<div align="center">
<h2> Limitations </h2>
</div>
**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:
- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges
**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
**Safety**
OpenChat may sometimes generate harmful content, hate speech, or biased responses, or answer unsafe questions. It's crucial to apply additional AI safety measures in use cases that require safe and moderated responses.
<div align="center">
<h2> License </h2>
</div>
Our OpenChat 3.5 code and models are distributed under the Apache License 2.0.
<div align="center">
<h2> Citation </h2>
</div>
```
@article{wang2023openchat,
title={OpenChat: Advancing Open-source Language Models with Mixed-Quality Data},
author={Wang, Guan and Cheng, Sijie and Zhan, Xianyuan and Li, Xiangang and Song, Sen and Liu, Yang},
journal={arXiv preprint arXiv:2309.11235},
year={2023}
}
```
<div align="center">
<h2> 💌 Main Contributor </h2>
</div>
* Wang Guan [[email protected]], Cheng Sijie [[email protected]], Alpay Ariyak [[email protected]]
* We look forward to hearing from you and collaborating on this exciting project!
|
huggingtweets/realjameswoods | huggingtweets | "2021-05-22T20:33:57Z" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2022-03-02T23:29:05Z" | ---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<link rel="stylesheet" href="https://unpkg.com/@tailwindcss/[email protected]/dist/typography.min.css">
<style>
@media (prefers-color-scheme: dark) {
.prose { color: #E2E8F0 !important; }
.prose h2, .prose h3, .prose a, .prose thead { color: #F7FAFC !important; }
}
</style>
<section class='prose'>
<div>
<div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/796482667340382211/CoV8077b_400x400.jpg')">
</div>
<div style="margin-top: 8px; font-size: 19px; font-weight: 800">James Woods 🤖 AI Bot </div>
<div style="font-size: 15px; color: #657786">@realjameswoods bot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on [@realjameswoods's tweets](https://twitter.com/realjameswoods).
<table style='border-width:0'>
<thead style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #CBD5E0'>
<th style='border-width:0'>Data</th>
<th style='border-width:0'>Quantity</th>
</tr>
</thead>
<tbody style='border-width:0'>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Tweets downloaded</td>
<td style='border-width:0'>3248</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Retweets</td>
<td style='border-width:0'>171</td>
</tr>
<tr style='border-width:0 0 1px 0; border-color: #E2E8F0'>
<td style='border-width:0'>Short tweets</td>
<td style='border-width:0'>405</td>
</tr>
<tr style='border-width:0'>
<td style='border-width:0'>Tweets kept</td>
<td style='border-width:0'>2672</td>
</tr>
</tbody>
</table>
[Explore the data](https://app.wandb.ai/wandb/huggingtweets/runs/2gdf0h5l/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @realjameswoods's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://app.wandb.ai/wandb/huggingtweets/runs/23h52qc6) for full transparency and reproducibility.
At the end of training, [the final model](https://app.wandb.ai/wandb/huggingtweets/runs/23h52qc6/artifacts) is logged and versioned.
## Intended uses & limitations
### How to use
You can use this model directly with a pipeline for text generation:
<pre><code><span style="color:#03A9F4">from</span> transformers <span style="color:#03A9F4">import</span> pipeline
generator = pipeline(<span style="color:#FF9800">'text-generation'</span>,
model=<span style="color:#FF9800">'huggingtweets/realjameswoods'</span>)
generator(<span style="color:#FF9800">"My dream is"</span>, num_return_sequences=<span style="color:#8BC34A">5</span>)</code></pre>
### Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
</section>
[](https://twitter.com/intent/follow?screen_name=borisdayma)
<section class='prose'>
For more details, visit the project repository.
</section>
[](https://github.com/borisdayma/huggingtweets)
|
actionpace/chronolima-airo-grad-l2-13B | actionpace | "2023-09-04T12:39:08Z" | 1 | 0 | null | [
"gguf",
"en",
"license:other",
"endpoints_compatible",
"region:us"
] | null | "2023-09-04T12:11:47Z" | ---
license: other
language:
- en
---
**Some of my own quants:**
* chronolima-airo-grad-l2-13B_Q5_1_4K.gguf
* chronolima-airo-grad-l2-13B_Q5_1_8K.gguf
**Source:** [kingbri](https://huggingface.co/kingbri)
**Source Model:** [chronolima-airo-grad-l2-13B](https://huggingface.co/kingbri/chronolima-airo-grad-l2-13B)
**Source models for kingbri/chronolima-airo-grad-l2-13B (Merge)**
- [elinas/chronos-13b-v2](https://huggingface.co/elinas/chronos-13b-v2) ([Ref](https://huggingface.co/actionpace/chronos-13b-v2))
- [jondurbin/airoboros-l2-13b-gpt4-2.0](https://huggingface.co/jondurbin/airoboros-l2-13b-gpt4-2.0) ([Ref](https://huggingface.co/actionpace/airoboros-l2-13b-gpt4-2.0))
- [lemonilia/limarp-llama2](https://huggingface.co/lemonilia/limarp-llama2) (Lora)
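These are GGUF quantisations, so they load with llama.cpp-compatible tooling. A minimal sketch using the `llama-cpp-python` bindings, assuming the 4K-context Q5_1 file listed above has been downloaded to the working directory:
```python
from llama_cpp import Llama

# Load the 4K-context Q5_1 quant (path assumes a local download of the file above).
llm = Llama(model_path="chronolima-airo-grad-l2-13B_Q5_1_4K.gguf", n_ctx=4096)

out = llm("Write one sentence about llamas.", max_tokens=64)
print(out["choices"][0]["text"])
```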
|
Aryanne/Westest-7B | Aryanne | "2024-03-04T14:45:48Z" | 158 | 2 | transformers | [
"transformers",
"safetensors",
"gguf",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:chargoddard/piano-medley-7b",
"base_model:merge:chargoddard/piano-medley-7b",
"base_model:senseable/WestLake-7B-v2",
"base_model:merge:senseable/WestLake-7B-v2",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-01-26T04:29:52Z" | ---
license: cc-by-sa-4.0
tags:
- mergekit
- merge
base_model:
- chargoddard/piano-medley-7b
- senseable/WestLake-7B-v2
model-index:
- name: Westest-7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.18
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.52
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.43
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 66.72
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 86.58
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.73
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=Aryanne/Westest-7B
name: Open LLM Leaderboard
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the task_anysize merge method using [senseable/WestLake-7B-v2](https://huggingface.co/senseable/WestLake-7B-v2) as a base.
### Models Merged
The following models were included in the merge:
* [chargoddard/piano-medley-7b](https://huggingface.co/chargoddard/piano-medley-7b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: senseable/WestLake-7B-v2
dtype: bfloat16
merge_method: task_anysize
slices:
- sources:
- layer_range: [0, 32]
model:
model:
path: chargoddard/piano-medley-7b
parameters:
weight: 0.55
- layer_range: [0, 32]
model:
model:
path: senseable/WestLake-7B-v2
```
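The merged weights load like any other Mistral-architecture checkpoint. A minimal usage sketch with `transformers`:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Aryanne/Westest-7B")
model = AutoModelForCausalLM.from_pretrained(
    "Aryanne/Westest-7B", device_map="auto", torch_dtype="auto"
)

inputs = tokenizer("The best thing about merging models is", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```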
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_Aryanne__Westest-7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.03|
|AI2 Reasoning Challenge (25-Shot)|72.18|
|HellaSwag (10-Shot) |88.52|
|MMLU (5-Shot) |64.43|
|TruthfulQA (0-shot) |66.72|
|Winogrande (5-shot) |86.58|
|GSM8k (5-shot) |65.73|
|
MayBashendy/ASAP_FineTuningBERT_AugV4_k15_task1_organization_fold4 | MayBashendy | "2024-11-24T21:59:52Z" | 161 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-24T20:35:41Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV4_k15_task1_organization_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV4_k15_task1_organization_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5421
- Qwk: 0.5747
- Mse: 0.5421
- Rmse: 0.7363
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
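For reference, a minimal sketch of how these hyperparameters map onto `transformers.TrainingArguments` (the output directory is illustrative, and dataset/metric wiring is omitted):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; the Adam betas/epsilon shown
# are the transformers defaults, matching the reported values.
training_args = TrainingArguments(
    output_dir="ASAP_FineTuningBERT_AugV4_k15_task1_organization_fold4",
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```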
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:------:|:-------:|:------:|
| No log | 0.0014 | 2 | 11.5817 | 0.0031 | 11.5817 | 3.4032 |
| No log | 0.0028 | 4 | 9.3418 | 0.0018 | 9.3418 | 3.0564 |
| No log | 0.0042 | 6 | 8.4753 | 0.0018 | 8.4753 | 2.9112 |
| No log | 0.0056 | 8 | 7.7065 | 0.0 | 7.7065 | 2.7761 |
| No log | 0.0070 | 10 | 6.9545 | 0.0 | 6.9545 | 2.6371 |
| No log | 0.0084 | 12 | 6.2764 | 0.0 | 6.2764 | 2.5053 |
| No log | 0.0098 | 14 | 5.6195 | 0.0096 | 5.6195 | 2.3706 |
| No log | 0.0112 | 16 | 4.9082 | 0.0079 | 4.9082 | 2.2154 |
| No log | 0.0126 | 18 | 4.2162 | 0.0040 | 4.2162 | 2.0533 |
| No log | 0.0140 | 20 | 3.6264 | 0.0040 | 3.6264 | 1.9043 |
| No log | 0.0154 | 22 | 3.0843 | 0.0040 | 3.0843 | 1.7562 |
| No log | 0.0168 | 24 | 2.4981 | 0.0034 | 2.4981 | 1.5805 |
| No log | 0.0182 | 26 | 2.0187 | 0.0420 | 2.0187 | 1.4208 |
| No log | 0.0196 | 28 | 1.6058 | 0.0316 | 1.6058 | 1.2672 |
| No log | 0.0210 | 30 | 1.3004 | 0.0316 | 1.3004 | 1.1404 |
| No log | 0.0224 | 32 | 1.0653 | 0.0212 | 1.0653 | 1.0321 |
| No log | 0.0238 | 34 | 0.8996 | 0.2433 | 0.8996 | 0.9485 |
| No log | 0.0252 | 36 | 0.8494 | 0.1043 | 0.8494 | 0.9216 |
| No log | 0.0266 | 38 | 0.8762 | 0.0454 | 0.8762 | 0.9360 |
| No log | 0.0280 | 40 | 0.9983 | 0.0454 | 0.9983 | 0.9992 |
| No log | 0.0294 | 42 | 1.1207 | 0.0454 | 1.1207 | 1.0586 |
| No log | 0.0308 | 44 | 1.2291 | 0.0454 | 1.2291 | 1.1086 |
| No log | 0.0322 | 46 | 1.2966 | 0.0303 | 1.2966 | 1.1387 |
| No log | 0.0336 | 48 | 0.8435 | 0.1096 | 0.8435 | 0.9184 |
| No log | 0.0350 | 50 | 0.8430 | 0.0738 | 0.8430 | 0.9181 |
| No log | 0.0364 | 52 | 0.9323 | 0.0454 | 0.9323 | 0.9656 |
| No log | 0.0378 | 54 | 0.9291 | 0.0454 | 0.9291 | 0.9639 |
| No log | 0.0392 | 56 | 1.1189 | 0.0454 | 1.1189 | 1.0578 |
| No log | 0.0406 | 58 | 1.1789 | 0.0454 | 1.1789 | 1.0858 |
| No log | 0.0420 | 60 | 0.9760 | 0.0454 | 0.9760 | 0.9879 |
| No log | 0.0434 | 62 | 0.7793 | 0.1373 | 0.7793 | 0.8828 |
| No log | 0.0448 | 64 | 0.8070 | 0.0999 | 0.8070 | 0.8983 |
| No log | 0.0462 | 66 | 0.8589 | 0.0849 | 0.8589 | 0.9268 |
| No log | 0.0476 | 68 | 1.0671 | 0.0454 | 1.0671 | 1.0330 |
| No log | 0.0490 | 70 | 1.2021 | 0.0496 | 1.2021 | 1.0964 |
| No log | 0.0504 | 72 | 1.0652 | 0.1700 | 1.0652 | 1.0321 |
| No log | 0.0518 | 74 | 0.7260 | 0.1615 | 0.7260 | 0.8521 |
| No log | 0.0532 | 76 | 0.6942 | 0.2055 | 0.6942 | 0.8332 |
| No log | 0.0546 | 78 | 0.7907 | 0.1317 | 0.7907 | 0.8892 |
| No log | 0.0560 | 80 | 1.0966 | 0.2913 | 1.0966 | 1.0472 |
| No log | 0.0574 | 82 | 0.9020 | 0.1066 | 0.9020 | 0.9497 |
| No log | 0.0588 | 84 | 0.6939 | 0.2287 | 0.6939 | 0.8330 |
| No log | 0.0602 | 86 | 0.7359 | 0.1837 | 0.7359 | 0.8578 |
| No log | 0.0616 | 88 | 0.7982 | 0.1169 | 0.7982 | 0.8934 |
| No log | 0.0630 | 90 | 0.6682 | 0.2888 | 0.6682 | 0.8174 |
| No log | 0.0644 | 92 | 0.6579 | 0.3040 | 0.6579 | 0.8111 |
| No log | 0.0658 | 94 | 0.8056 | 0.2438 | 0.8056 | 0.8976 |
| No log | 0.0672 | 96 | 1.2355 | 0.2716 | 1.2355 | 1.1115 |
| No log | 0.0686 | 98 | 1.1274 | 0.2692 | 1.1274 | 1.0618 |
| No log | 0.0700 | 100 | 0.7940 | 0.2547 | 0.7940 | 0.8911 |
| No log | 0.0714 | 102 | 0.8830 | 0.3125 | 0.8830 | 0.9397 |
| No log | 0.0728 | 104 | 1.1723 | 0.2603 | 1.1723 | 1.0827 |
| No log | 0.0742 | 106 | 1.2435 | 0.2494 | 1.2435 | 1.1151 |
| No log | 0.0756 | 108 | 1.0176 | 0.2773 | 1.0176 | 1.0087 |
| No log | 0.0770 | 110 | 0.6718 | 0.2865 | 0.6718 | 0.8196 |
| No log | 0.0784 | 112 | 0.6419 | 0.3612 | 0.6419 | 0.8012 |
| No log | 0.0798 | 114 | 0.6941 | 0.3660 | 0.6941 | 0.8331 |
| No log | 0.0812 | 116 | 0.8081 | 0.3515 | 0.8081 | 0.8989 |
| No log | 0.0826 | 118 | 0.8907 | 0.3242 | 0.8907 | 0.9438 |
| No log | 0.0840 | 120 | 0.8632 | 0.3061 | 0.8632 | 0.9291 |
| No log | 0.0854 | 122 | 0.8793 | 0.3061 | 0.8793 | 0.9377 |
| No log | 0.0868 | 124 | 0.8057 | 0.3206 | 0.8057 | 0.8976 |
| No log | 0.0882 | 126 | 0.8044 | 0.3558 | 0.8044 | 0.8969 |
| No log | 0.0896 | 128 | 1.0299 | 0.2874 | 1.0299 | 1.0149 |
| No log | 0.0910 | 130 | 1.1954 | 0.2648 | 1.1954 | 1.0934 |
| No log | 0.0924 | 132 | 0.8219 | 0.3675 | 0.8219 | 0.9066 |
| No log | 0.0938 | 134 | 0.7326 | 0.3744 | 0.7326 | 0.8559 |
| No log | 0.0952 | 136 | 0.9355 | 0.3217 | 0.9355 | 0.9672 |
| No log | 0.0966 | 138 | 1.0014 | 0.3105 | 1.0014 | 1.0007 |
| No log | 0.0980 | 140 | 0.8304 | 0.4285 | 0.8304 | 0.9113 |
| No log | 0.0994 | 142 | 0.7418 | 0.4969 | 0.7418 | 0.8613 |
| No log | 0.1008 | 144 | 1.0215 | 0.3520 | 1.0215 | 1.0107 |
| No log | 0.1022 | 146 | 1.0189 | 0.3687 | 1.0189 | 1.0094 |
| No log | 0.1036 | 148 | 0.6310 | 0.5819 | 0.6310 | 0.7944 |
| No log | 0.1050 | 150 | 0.5778 | 0.5815 | 0.5778 | 0.7601 |
| No log | 0.1064 | 152 | 0.6556 | 0.5279 | 0.6556 | 0.8097 |
| No log | 0.1078 | 154 | 0.8265 | 0.4576 | 0.8265 | 0.9091 |
| No log | 0.1092 | 156 | 0.6059 | 0.5741 | 0.6059 | 0.7784 |
| No log | 0.1106 | 158 | 0.5555 | 0.5801 | 0.5555 | 0.7453 |
| No log | 0.1120 | 160 | 0.5560 | 0.5903 | 0.5560 | 0.7457 |
| No log | 0.1134 | 162 | 0.5731 | 0.6115 | 0.5731 | 0.7570 |
| No log | 0.1148 | 164 | 0.6086 | 0.5767 | 0.6086 | 0.7801 |
| No log | 0.1162 | 166 | 0.6113 | 0.5848 | 0.6113 | 0.7818 |
| No log | 0.1176 | 168 | 0.6356 | 0.5648 | 0.6356 | 0.7972 |
| No log | 0.1190 | 170 | 0.6273 | 0.5502 | 0.6273 | 0.7920 |
| No log | 0.1204 | 172 | 0.6430 | 0.5292 | 0.6430 | 0.8019 |
| No log | 0.1218 | 174 | 0.7969 | 0.4659 | 0.7969 | 0.8927 |
| No log | 0.1232 | 176 | 0.6069 | 0.5268 | 0.6069 | 0.7791 |
| No log | 0.1246 | 178 | 0.5744 | 0.4927 | 0.5744 | 0.7579 |
| No log | 0.1261 | 180 | 0.5544 | 0.4947 | 0.5544 | 0.7446 |
| No log | 0.1275 | 182 | 0.8059 | 0.4421 | 0.8059 | 0.8977 |
| No log | 0.1289 | 184 | 0.9577 | 0.3931 | 0.9577 | 0.9786 |
| No log | 0.1303 | 186 | 0.7324 | 0.4443 | 0.7324 | 0.8558 |
| No log | 0.1317 | 188 | 0.5451 | 0.5151 | 0.5451 | 0.7383 |
| No log | 0.1331 | 190 | 0.5839 | 0.5304 | 0.5839 | 0.7641 |
| No log | 0.1345 | 192 | 0.5432 | 0.5166 | 0.5432 | 0.7370 |
| No log | 0.1359 | 194 | 0.6187 | 0.5075 | 0.6187 | 0.7866 |
| No log | 0.1373 | 196 | 0.8434 | 0.4525 | 0.8434 | 0.9184 |
| No log | 0.1387 | 198 | 0.8247 | 0.4597 | 0.8247 | 0.9081 |
| No log | 0.1401 | 200 | 0.6462 | 0.5046 | 0.6462 | 0.8039 |
| No log | 0.1415 | 202 | 0.5630 | 0.5487 | 0.5630 | 0.7503 |
| No log | 0.1429 | 204 | 0.9639 | 0.3678 | 0.9639 | 0.9818 |
| No log | 0.1443 | 206 | 1.0211 | 0.3500 | 1.0211 | 1.0105 |
| No log | 0.1457 | 208 | 0.7586 | 0.4521 | 0.7586 | 0.8709 |
| No log | 0.1471 | 210 | 0.5435 | 0.5478 | 0.5435 | 0.7373 |
| No log | 0.1485 | 212 | 0.7834 | 0.4252 | 0.7834 | 0.8851 |
| No log | 0.1499 | 214 | 1.0022 | 0.3599 | 1.0022 | 1.0011 |
| No log | 0.1513 | 216 | 1.1045 | 0.3210 | 1.1045 | 1.0509 |
| No log | 0.1527 | 218 | 0.9288 | 0.3529 | 0.9288 | 0.9637 |
| No log | 0.1541 | 220 | 0.7267 | 0.4146 | 0.7267 | 0.8525 |
| No log | 0.1555 | 222 | 0.6943 | 0.4050 | 0.6943 | 0.8332 |
| No log | 0.1569 | 224 | 0.6778 | 0.4133 | 0.6778 | 0.8233 |
| No log | 0.1583 | 226 | 0.7928 | 0.3817 | 0.7928 | 0.8904 |
| No log | 0.1597 | 228 | 0.7995 | 0.3875 | 0.7995 | 0.8941 |
| No log | 0.1611 | 230 | 0.7093 | 0.4297 | 0.7093 | 0.8422 |
| No log | 0.1625 | 232 | 0.7208 | 0.4381 | 0.7208 | 0.8490 |
| No log | 0.1639 | 234 | 0.9123 | 0.3936 | 0.9123 | 0.9551 |
| No log | 0.1653 | 236 | 1.0547 | 0.3516 | 1.0547 | 1.0270 |
| No log | 0.1667 | 238 | 0.9881 | 0.3534 | 0.9881 | 0.9940 |
| No log | 0.1681 | 240 | 0.7253 | 0.3656 | 0.7253 | 0.8516 |
| No log | 0.1695 | 242 | 0.6207 | 0.3427 | 0.6207 | 0.7878 |
| No log | 0.1709 | 244 | 0.6133 | 0.3413 | 0.6133 | 0.7831 |
| No log | 0.1723 | 246 | 0.6241 | 0.3385 | 0.6241 | 0.7900 |
| No log | 0.1737 | 248 | 0.6424 | 0.3654 | 0.6424 | 0.8015 |
| No log | 0.1751 | 250 | 0.6523 | 0.3743 | 0.6523 | 0.8077 |
| No log | 0.1765 | 252 | 0.6096 | 0.3744 | 0.6096 | 0.7808 |
| No log | 0.1779 | 254 | 0.5904 | 0.4119 | 0.5904 | 0.7684 |
| No log | 0.1793 | 256 | 0.5806 | 0.4626 | 0.5806 | 0.7620 |
| No log | 0.1807 | 258 | 0.5800 | 0.4755 | 0.5800 | 0.7616 |
| No log | 0.1821 | 260 | 0.5840 | 0.5003 | 0.5840 | 0.7642 |
| No log | 0.1835 | 262 | 0.5842 | 0.5109 | 0.5842 | 0.7643 |
| No log | 0.1849 | 264 | 0.5883 | 0.5166 | 0.5883 | 0.7670 |
| No log | 0.1863 | 266 | 0.6046 | 0.5162 | 0.6046 | 0.7775 |
| No log | 0.1877 | 268 | 0.5983 | 0.4881 | 0.5983 | 0.7735 |
| No log | 0.1891 | 270 | 0.5959 | 0.5093 | 0.5959 | 0.7720 |
| No log | 0.1905 | 272 | 0.6133 | 0.4992 | 0.6133 | 0.7831 |
| No log | 0.1919 | 274 | 0.6026 | 0.5140 | 0.6026 | 0.7763 |
| No log | 0.1933 | 276 | 0.6110 | 0.5509 | 0.6110 | 0.7817 |
| No log | 0.1947 | 278 | 0.6448 | 0.4964 | 0.6448 | 0.8030 |
| No log | 0.1961 | 280 | 0.7580 | 0.4613 | 0.7580 | 0.8706 |
| No log | 0.1975 | 282 | 0.6152 | 0.5439 | 0.6152 | 0.7843 |
| No log | 0.1989 | 284 | 0.6421 | 0.4916 | 0.6421 | 0.8013 |
| No log | 0.2003 | 286 | 0.5838 | 0.5422 | 0.5838 | 0.7640 |
| No log | 0.2017 | 288 | 0.7741 | 0.4424 | 0.7741 | 0.8798 |
| No log | 0.2031 | 290 | 0.8401 | 0.4338 | 0.8401 | 0.9166 |
| No log | 0.2045 | 292 | 0.6045 | 0.5176 | 0.6045 | 0.7775 |
| No log | 0.2059 | 294 | 0.6414 | 0.4909 | 0.6414 | 0.8009 |
| No log | 0.2073 | 296 | 0.6095 | 0.4931 | 0.6095 | 0.7807 |
| No log | 0.2087 | 298 | 0.6139 | 0.4870 | 0.6139 | 0.7835 |
| No log | 0.2101 | 300 | 0.7507 | 0.4005 | 0.7507 | 0.8664 |
| No log | 0.2115 | 302 | 0.6596 | 0.4205 | 0.6596 | 0.8121 |
| No log | 0.2129 | 304 | 0.5828 | 0.4493 | 0.5828 | 0.7634 |
| No log | 0.2143 | 306 | 0.6134 | 0.4624 | 0.6134 | 0.7832 |
| No log | 0.2157 | 308 | 0.5870 | 0.4787 | 0.5870 | 0.7662 |
| No log | 0.2171 | 310 | 0.5889 | 0.4442 | 0.5889 | 0.7674 |
| No log | 0.2185 | 312 | 0.6153 | 0.4706 | 0.6153 | 0.7844 |
| No log | 0.2199 | 314 | 0.6161 | 0.4670 | 0.6161 | 0.7849 |
| No log | 0.2213 | 316 | 0.5723 | 0.5194 | 0.5723 | 0.7565 |
| No log | 0.2227 | 318 | 0.5904 | 0.5084 | 0.5904 | 0.7684 |
| No log | 0.2241 | 320 | 0.6373 | 0.4875 | 0.6373 | 0.7983 |
| No log | 0.2255 | 322 | 0.6532 | 0.4776 | 0.6532 | 0.8082 |
| No log | 0.2269 | 324 | 0.5871 | 0.5561 | 0.5871 | 0.7662 |
| No log | 0.2283 | 326 | 0.6081 | 0.5482 | 0.6081 | 0.7798 |
| No log | 0.2297 | 328 | 0.6734 | 0.4955 | 0.6734 | 0.8206 |
| No log | 0.2311 | 330 | 0.6610 | 0.5083 | 0.6610 | 0.8130 |
| No log | 0.2325 | 332 | 0.6968 | 0.5043 | 0.6968 | 0.8348 |
| No log | 0.2339 | 334 | 0.6736 | 0.5172 | 0.6736 | 0.8207 |
| No log | 0.2353 | 336 | 0.7257 | 0.5008 | 0.7257 | 0.8519 |
| No log | 0.2367 | 338 | 0.7948 | 0.4489 | 0.7948 | 0.8915 |
| No log | 0.2381 | 340 | 0.8350 | 0.4235 | 0.8350 | 0.9138 |
| No log | 0.2395 | 342 | 0.7540 | 0.4400 | 0.7540 | 0.8683 |
| No log | 0.2409 | 344 | 0.6577 | 0.4560 | 0.6577 | 0.8110 |
| No log | 0.2423 | 346 | 0.7404 | 0.4317 | 0.7404 | 0.8605 |
| No log | 0.2437 | 348 | 0.7644 | 0.4315 | 0.7644 | 0.8743 |
| No log | 0.2451 | 350 | 0.6767 | 0.4785 | 0.6767 | 0.8226 |
| No log | 0.2465 | 352 | 0.8068 | 0.4522 | 0.8068 | 0.8982 |
| No log | 0.2479 | 354 | 0.9071 | 0.4341 | 0.9071 | 0.9524 |
| No log | 0.2493 | 356 | 0.7395 | 0.5048 | 0.7395 | 0.8600 |
| No log | 0.2507 | 358 | 0.7325 | 0.4953 | 0.7325 | 0.8559 |
| No log | 0.2521 | 360 | 0.6945 | 0.5204 | 0.6945 | 0.8334 |
| No log | 0.2535 | 362 | 0.6407 | 0.5379 | 0.6407 | 0.8004 |
| No log | 0.2549 | 364 | 0.5565 | 0.5585 | 0.5565 | 0.7460 |
| No log | 0.2563 | 366 | 0.5515 | 0.5693 | 0.5515 | 0.7427 |
| No log | 0.2577 | 368 | 0.6541 | 0.5202 | 0.6541 | 0.8088 |
| No log | 0.2591 | 370 | 0.6886 | 0.5002 | 0.6886 | 0.8298 |
| No log | 0.2605 | 372 | 0.5681 | 0.5335 | 0.5681 | 0.7537 |
| No log | 0.2619 | 374 | 0.5574 | 0.5450 | 0.5574 | 0.7466 |
| No log | 0.2633 | 376 | 0.5553 | 0.5344 | 0.5553 | 0.7452 |
| No log | 0.2647 | 378 | 0.6142 | 0.5310 | 0.6142 | 0.7837 |
| No log | 0.2661 | 380 | 0.8770 | 0.4404 | 0.8770 | 0.9365 |
| No log | 0.2675 | 382 | 0.8143 | 0.4627 | 0.8143 | 0.9024 |
| No log | 0.2689 | 384 | 0.5772 | 0.5610 | 0.5772 | 0.7598 |
| No log | 0.2703 | 386 | 0.5481 | 0.5673 | 0.5481 | 0.7404 |
| No log | 0.2717 | 388 | 0.6138 | 0.5363 | 0.6138 | 0.7835 |
| No log | 0.2731 | 390 | 0.5975 | 0.5429 | 0.5975 | 0.7730 |
| No log | 0.2745 | 392 | 0.5545 | 0.5690 | 0.5545 | 0.7447 |
| No log | 0.2759 | 394 | 0.5590 | 0.5766 | 0.5590 | 0.7476 |
| No log | 0.2773 | 396 | 0.5635 | 0.5716 | 0.5635 | 0.7507 |
| No log | 0.2787 | 398 | 0.6260 | 0.5293 | 0.6260 | 0.7912 |
| No log | 0.2801 | 400 | 0.6409 | 0.5158 | 0.6409 | 0.8006 |
| No log | 0.2815 | 402 | 0.5414 | 0.5689 | 0.5414 | 0.7358 |
| No log | 0.2829 | 404 | 0.5552 | 0.5556 | 0.5552 | 0.7451 |
| No log | 0.2843 | 406 | 0.5577 | 0.5310 | 0.5577 | 0.7468 |
| No log | 0.2857 | 408 | 0.6141 | 0.4977 | 0.6141 | 0.7837 |
| No log | 0.2871 | 410 | 0.6040 | 0.5155 | 0.6040 | 0.7772 |
| No log | 0.2885 | 412 | 0.5528 | 0.5375 | 0.5528 | 0.7435 |
| No log | 0.2899 | 414 | 0.5638 | 0.5429 | 0.5638 | 0.7509 |
| No log | 0.2913 | 416 | 0.5448 | 0.5547 | 0.5448 | 0.7381 |
| No log | 0.2927 | 418 | 0.7432 | 0.4907 | 0.7432 | 0.8621 |
| No log | 0.2941 | 420 | 1.3031 | 0.3418 | 1.3031 | 1.1415 |
| No log | 0.2955 | 422 | 1.3055 | 0.3310 | 1.3055 | 1.1426 |
| No log | 0.2969 | 424 | 0.8418 | 0.4400 | 0.8418 | 0.9175 |
| No log | 0.2983 | 426 | 0.5642 | 0.5766 | 0.5642 | 0.7511 |
| No log | 0.2997 | 428 | 0.5625 | 0.5826 | 0.5625 | 0.7500 |
| No log | 0.3011 | 430 | 0.7363 | 0.5018 | 0.7363 | 0.8581 |
| No log | 0.3025 | 432 | 1.0207 | 0.4011 | 1.0207 | 1.0103 |
| No log | 0.3039 | 434 | 1.0567 | 0.3939 | 1.0567 | 1.0280 |
| No log | 0.3053 | 436 | 0.7566 | 0.4916 | 0.7566 | 0.8698 |
| No log | 0.3067 | 438 | 0.5578 | 0.5672 | 0.5578 | 0.7469 |
| No log | 0.3081 | 440 | 0.5542 | 0.5406 | 0.5542 | 0.7445 |
| No log | 0.3095 | 442 | 0.6431 | 0.5411 | 0.6431 | 0.8019 |
| No log | 0.3109 | 444 | 0.9116 | 0.4342 | 0.9116 | 0.9548 |
| No log | 0.3123 | 446 | 0.8558 | 0.4566 | 0.8558 | 0.9251 |
| No log | 0.3137 | 448 | 0.6192 | 0.5529 | 0.6192 | 0.7869 |
| No log | 0.3151 | 450 | 0.5635 | 0.5926 | 0.5635 | 0.7507 |
| No log | 0.3165 | 452 | 0.6140 | 0.5548 | 0.6140 | 0.7836 |
| No log | 0.3179 | 454 | 0.8941 | 0.4333 | 0.8941 | 0.9456 |
| No log | 0.3193 | 456 | 0.9721 | 0.3997 | 0.9721 | 0.9860 |
| No log | 0.3207 | 458 | 0.7711 | 0.4768 | 0.7711 | 0.8781 |
| No log | 0.3221 | 460 | 0.5774 | 0.5360 | 0.5774 | 0.7599 |
| No log | 0.3235 | 462 | 0.5525 | 0.5477 | 0.5525 | 0.7433 |
| No log | 0.3249 | 464 | 0.5574 | 0.5543 | 0.5574 | 0.7466 |
| No log | 0.3263 | 466 | 0.6564 | 0.4879 | 0.6564 | 0.8102 |
| No log | 0.3277 | 468 | 0.8008 | 0.4201 | 0.8008 | 0.8949 |
| No log | 0.3291 | 470 | 0.7166 | 0.4586 | 0.7166 | 0.8465 |
| No log | 0.3305 | 472 | 0.6124 | 0.5413 | 0.6124 | 0.7825 |
| No log | 0.3319 | 474 | 0.5963 | 0.5677 | 0.5963 | 0.7722 |
| No log | 0.3333 | 476 | 0.5690 | 0.5612 | 0.5690 | 0.7543 |
| No log | 0.3347 | 478 | 0.5880 | 0.5892 | 0.5880 | 0.7668 |
| No log | 0.3361 | 480 | 0.6000 | 0.5890 | 0.6000 | 0.7746 |
| No log | 0.3375 | 482 | 0.7493 | 0.5008 | 0.7493 | 0.8656 |
| No log | 0.3389 | 484 | 0.8120 | 0.4586 | 0.8120 | 0.9011 |
| No log | 0.3403 | 486 | 0.6472 | 0.5620 | 0.6472 | 0.8045 |
| No log | 0.3417 | 488 | 0.5700 | 0.5641 | 0.5700 | 0.7550 |
| No log | 0.3431 | 490 | 0.5720 | 0.5598 | 0.5720 | 0.7563 |
| No log | 0.3445 | 492 | 0.5839 | 0.5783 | 0.5839 | 0.7641 |
| No log | 0.3459 | 494 | 0.6182 | 0.5494 | 0.6182 | 0.7863 |
| No log | 0.3473 | 496 | 0.6523 | 0.5283 | 0.6523 | 0.8076 |
| No log | 0.3487 | 498 | 0.6482 | 0.5028 | 0.6482 | 0.8051 |
| 1.1226 | 0.3501 | 500 | 0.6844 | 0.4615 | 0.6844 | 0.8273 |
| 1.1226 | 0.3515 | 502 | 0.6007 | 0.5391 | 0.6007 | 0.7751 |
| 1.1226 | 0.3529 | 504 | 0.5776 | 0.5005 | 0.5776 | 0.7600 |
| 1.1226 | 0.3543 | 506 | 0.5865 | 0.4984 | 0.5865 | 0.7658 |
| 1.1226 | 0.3557 | 508 | 0.7013 | 0.4505 | 0.7013 | 0.8374 |
| 1.1226 | 0.3571 | 510 | 0.7552 | 0.4529 | 0.7552 | 0.8690 |
| 1.1226 | 0.3585 | 512 | 0.6736 | 0.4895 | 0.6736 | 0.8207 |
| 1.1226 | 0.3599 | 514 | 0.6141 | 0.5078 | 0.6141 | 0.7836 |
| 1.1226 | 0.3613 | 516 | 0.6371 | 0.5148 | 0.6371 | 0.7982 |
| 1.1226 | 0.3627 | 518 | 0.7526 | 0.5037 | 0.7526 | 0.8675 |
| 1.1226 | 0.3641 | 520 | 0.8322 | 0.4910 | 0.8322 | 0.9123 |
| 1.1226 | 0.3655 | 522 | 0.6733 | 0.5399 | 0.6733 | 0.8206 |
| 1.1226 | 0.3669 | 524 | 0.6377 | 0.5567 | 0.6377 | 0.7985 |
| 1.1226 | 0.3683 | 526 | 0.6771 | 0.5444 | 0.6771 | 0.8228 |
| 1.1226 | 0.3697 | 528 | 0.6936 | 0.5436 | 0.6936 | 0.8328 |
| 1.1226 | 0.3711 | 530 | 0.6963 | 0.5398 | 0.6963 | 0.8344 |
| 1.1226 | 0.3725 | 532 | 0.6385 | 0.5605 | 0.6385 | 0.7991 |
| 1.1226 | 0.3739 | 534 | 0.6504 | 0.5474 | 0.6504 | 0.8065 |
| 1.1226 | 0.3754 | 536 | 0.7366 | 0.4959 | 0.7366 | 0.8583 |
| 1.1226 | 0.3768 | 538 | 0.7385 | 0.4841 | 0.7385 | 0.8594 |
| 1.1226 | 0.3782 | 540 | 0.6090 | 0.5436 | 0.6090 | 0.7804 |
| 1.1226 | 0.3796 | 542 | 0.6219 | 0.5390 | 0.6219 | 0.7886 |
| 1.1226 | 0.3810 | 544 | 0.7899 | 0.4819 | 0.7899 | 0.8888 |
| 1.1226 | 0.3824 | 546 | 1.0528 | 0.3953 | 1.0528 | 1.0261 |
| 1.1226 | 0.3838 | 548 | 0.9013 | 0.4325 | 0.9013 | 0.9494 |
| 1.1226 | 0.3852 | 550 | 0.8531 | 0.4563 | 0.8531 | 0.9236 |
| 1.1226 | 0.3866 | 552 | 0.6853 | 0.5130 | 0.6853 | 0.8278 |
| 1.1226 | 0.3880 | 554 | 0.6949 | 0.4989 | 0.6949 | 0.8336 |
| 1.1226 | 0.3894 | 556 | 0.7979 | 0.4554 | 0.7979 | 0.8933 |
| 1.1226 | 0.3908 | 558 | 0.6904 | 0.4679 | 0.6904 | 0.8309 |
| 1.1226 | 0.3922 | 560 | 0.6241 | 0.5049 | 0.6241 | 0.7900 |
| 1.1226 | 0.3936 | 562 | 0.5952 | 0.5393 | 0.5952 | 0.7715 |
| 1.1226 | 0.3950 | 564 | 0.7182 | 0.4713 | 0.7182 | 0.8475 |
| 1.1226 | 0.3964 | 566 | 0.7792 | 0.4805 | 0.7792 | 0.8827 |
| 1.1226 | 0.3978 | 568 | 0.6630 | 0.5211 | 0.6630 | 0.8142 |
| 1.1226 | 0.3992 | 570 | 0.6273 | 0.5574 | 0.6273 | 0.7920 |
| 1.1226 | 0.4006 | 572 | 0.7272 | 0.5214 | 0.7272 | 0.8527 |
| 1.1226 | 0.4020 | 574 | 0.7778 | 0.5106 | 0.7778 | 0.8819 |
| 1.1226 | 0.4034 | 576 | 0.6949 | 0.5581 | 0.6949 | 0.8336 |
| 1.1226 | 0.4048 | 578 | 0.6811 | 0.5167 | 0.6811 | 0.8253 |
| 1.1226 | 0.4062 | 580 | 0.6885 | 0.5143 | 0.6885 | 0.8298 |
| 1.1226 | 0.4076 | 582 | 0.6475 | 0.5162 | 0.6475 | 0.8047 |
| 1.1226 | 0.4090 | 584 | 0.6691 | 0.5031 | 0.6691 | 0.8180 |
| 1.1226 | 0.4104 | 586 | 0.6591 | 0.4993 | 0.6591 | 0.8118 |
| 1.1226 | 0.4118 | 588 | 0.7592 | 0.4904 | 0.7592 | 0.8713 |
| 1.1226 | 0.4132 | 590 | 0.7774 | 0.4837 | 0.7774 | 0.8817 |
| 1.1226 | 0.4146 | 592 | 0.7241 | 0.4977 | 0.7241 | 0.8509 |
| 1.1226 | 0.4160 | 594 | 0.6221 | 0.5718 | 0.6221 | 0.7887 |
| 1.1226 | 0.4174 | 596 | 0.6755 | 0.5469 | 0.6755 | 0.8219 |
| 1.1226 | 0.4188 | 598 | 0.8922 | 0.4501 | 0.8922 | 0.9446 |
| 1.1226 | 0.4202 | 600 | 0.7950 | 0.4953 | 0.7950 | 0.8916 |
| 1.1226 | 0.4216 | 602 | 0.5648 | 0.5844 | 0.5648 | 0.7516 |
| 1.1226 | 0.4230 | 604 | 0.5683 | 0.5522 | 0.5683 | 0.7539 |
| 1.1226 | 0.4244 | 606 | 0.5547 | 0.5887 | 0.5547 | 0.7448 |
| 1.1226 | 0.4258 | 608 | 0.6105 | 0.5597 | 0.6105 | 0.7813 |
| 1.1226 | 0.4272 | 610 | 0.6285 | 0.5511 | 0.6285 | 0.7928 |
| 1.1226 | 0.4286 | 612 | 0.6012 | 0.5764 | 0.6012 | 0.7753 |
| 1.1226 | 0.4300 | 614 | 0.5399 | 0.5673 | 0.5399 | 0.7347 |
| 1.1226 | 0.4314 | 616 | 0.5852 | 0.5273 | 0.5852 | 0.7650 |
| 1.1226 | 0.4328 | 618 | 0.5425 | 0.5652 | 0.5425 | 0.7365 |
| 1.1226 | 0.4342 | 620 | 0.5764 | 0.5741 | 0.5764 | 0.7592 |
| 1.1226 | 0.4356 | 622 | 0.5823 | 0.5730 | 0.5823 | 0.7631 |
| 1.1226 | 0.4370 | 624 | 0.5473 | 0.5963 | 0.5473 | 0.7398 |
| 1.1226 | 0.4384 | 626 | 0.5582 | 0.5708 | 0.5582 | 0.7471 |
| 1.1226 | 0.4398 | 628 | 0.5664 | 0.5868 | 0.5664 | 0.7526 |
| 1.1226 | 0.4412 | 630 | 0.6608 | 0.5588 | 0.6608 | 0.8129 |
| 1.1226 | 0.4426 | 632 | 0.7621 | 0.5408 | 0.7621 | 0.8730 |
| 1.1226 | 0.4440 | 634 | 0.6362 | 0.5388 | 0.6362 | 0.7976 |
| 1.1226 | 0.4454 | 636 | 0.5615 | 0.5834 | 0.5615 | 0.7493 |
| 1.1226 | 0.4468 | 638 | 0.5629 | 0.5664 | 0.5629 | 0.7503 |
| 1.1226 | 0.4482 | 640 | 0.6825 | 0.5096 | 0.6825 | 0.8262 |
| 1.1226 | 0.4496 | 642 | 0.7188 | 0.5065 | 0.7188 | 0.8478 |
| 1.1226 | 0.4510 | 644 | 0.6332 | 0.5274 | 0.6332 | 0.7957 |
| 1.1226 | 0.4524 | 646 | 0.5558 | 0.5686 | 0.5558 | 0.7455 |
| 1.1226 | 0.4538 | 648 | 0.5509 | 0.5813 | 0.5509 | 0.7422 |
| 1.1226 | 0.4552 | 650 | 0.6071 | 0.5561 | 0.6071 | 0.7792 |
| 1.1226 | 0.4566 | 652 | 0.8585 | 0.4684 | 0.8585 | 0.9266 |
| 1.1226 | 0.4580 | 654 | 0.8182 | 0.4584 | 0.8182 | 0.9045 |
| 1.1226 | 0.4594 | 656 | 0.5973 | 0.5559 | 0.5973 | 0.7728 |
| 1.1226 | 0.4608 | 658 | 0.5736 | 0.5434 | 0.5736 | 0.7574 |
| 1.1226 | 0.4622 | 660 | 0.6334 | 0.4750 | 0.6334 | 0.7959 |
| 1.1226 | 0.4636 | 662 | 0.5618 | 0.5777 | 0.5618 | 0.7495 |
| 1.1226 | 0.4650 | 664 | 0.6978 | 0.5320 | 0.6978 | 0.8353 |
| 1.1226 | 0.4664 | 666 | 0.8681 | 0.4879 | 0.8681 | 0.9317 |
| 1.1226 | 0.4678 | 668 | 0.7315 | 0.5343 | 0.7315 | 0.8553 |
| 1.1226 | 0.4692 | 670 | 0.5627 | 0.5607 | 0.5627 | 0.7502 |
| 1.1226 | 0.4706 | 672 | 0.5519 | 0.5709 | 0.5519 | 0.7429 |
| 1.1226 | 0.4720 | 674 | 0.5693 | 0.5501 | 0.5693 | 0.7545 |
| 1.1226 | 0.4734 | 676 | 0.6689 | 0.5107 | 0.6689 | 0.8178 |
| 1.1226 | 0.4748 | 678 | 0.6780 | 0.5062 | 0.6780 | 0.8234 |
| 1.1226 | 0.4762 | 680 | 0.5938 | 0.5554 | 0.5938 | 0.7706 |
| 1.1226 | 0.4776 | 682 | 0.5730 | 0.5764 | 0.5730 | 0.7569 |
| 1.1226 | 0.4790 | 684 | 0.5798 | 0.5740 | 0.5798 | 0.7615 |
| 1.1226 | 0.4804 | 686 | 0.7007 | 0.5264 | 0.7007 | 0.8371 |
| 1.1226 | 0.4818 | 688 | 0.6921 | 0.5344 | 0.6921 | 0.8319 |
| 1.1226 | 0.4832 | 690 | 0.6160 | 0.5525 | 0.6160 | 0.7849 |
| 1.1226 | 0.4846 | 692 | 0.5742 | 0.5858 | 0.5742 | 0.7578 |
| 1.1226 | 0.4860 | 694 | 0.5831 | 0.5861 | 0.5831 | 0.7636 |
| 1.1226 | 0.4874 | 696 | 0.6600 | 0.5502 | 0.6600 | 0.8124 |
| 1.1226 | 0.4888 | 698 | 0.8671 | 0.4999 | 0.8671 | 0.9312 |
| 1.1226 | 0.4902 | 700 | 0.7504 | 0.5246 | 0.7504 | 0.8663 |
| 1.1226 | 0.4916 | 702 | 0.5683 | 0.5996 | 0.5683 | 0.7539 |
| 1.1226 | 0.4930 | 704 | 0.5545 | 0.5812 | 0.5545 | 0.7447 |
| 1.1226 | 0.4944 | 706 | 0.5900 | 0.5685 | 0.5900 | 0.7681 |
| 1.1226 | 0.4958 | 708 | 0.7714 | 0.5070 | 0.7714 | 0.8783 |
| 1.1226 | 0.4972 | 710 | 0.7643 | 0.5133 | 0.7643 | 0.8743 |
| 1.1226 | 0.4986 | 712 | 0.5880 | 0.5793 | 0.5880 | 0.7668 |
| 1.1226 | 0.5000 | 714 | 0.5631 | 0.5590 | 0.5631 | 0.7504 |
| 1.1226 | 0.5014 | 716 | 0.5643 | 0.5600 | 0.5643 | 0.7512 |
| 1.1226 | 0.5028 | 718 | 0.6314 | 0.5572 | 0.6314 | 0.7946 |
| 1.1226 | 0.5042 | 720 | 0.8133 | 0.5287 | 0.8133 | 0.9018 |
| 1.1226 | 0.5056 | 722 | 0.7176 | 0.5650 | 0.7176 | 0.8471 |
| 1.1226 | 0.5070 | 724 | 0.6046 | 0.5867 | 0.6046 | 0.7776 |
| 1.1226 | 0.5084 | 726 | 0.8717 | 0.4868 | 0.8717 | 0.9337 |
| 1.1226 | 0.5098 | 728 | 0.8795 | 0.4865 | 0.8795 | 0.9378 |
| 1.1226 | 0.5112 | 730 | 0.6244 | 0.5858 | 0.6244 | 0.7902 |
| 1.1226 | 0.5126 | 732 | 0.7111 | 0.5650 | 0.7111 | 0.8433 |
| 1.1226 | 0.5140 | 734 | 0.8293 | 0.5099 | 0.8293 | 0.9107 |
| 1.1226 | 0.5154 | 736 | 0.6678 | 0.5614 | 0.6678 | 0.8172 |
| 1.1226 | 0.5168 | 738 | 0.5350 | 0.5843 | 0.5350 | 0.7314 |
| 1.1226 | 0.5182 | 740 | 0.5641 | 0.5447 | 0.5641 | 0.7511 |
| 1.1226 | 0.5196 | 742 | 0.5413 | 0.5717 | 0.5413 | 0.7357 |
| 1.1226 | 0.5210 | 744 | 0.5587 | 0.6035 | 0.5587 | 0.7474 |
| 1.1226 | 0.5224 | 746 | 0.7773 | 0.4934 | 0.7773 | 0.8816 |
| 1.1226 | 0.5238 | 748 | 0.8000 | 0.4775 | 0.8000 | 0.8944 |
| 1.1226 | 0.5252 | 750 | 0.6374 | 0.5184 | 0.6374 | 0.7984 |
| 1.1226 | 0.5266 | 752 | 0.5442 | 0.5745 | 0.5442 | 0.7377 |
| 1.1226 | 0.5280 | 754 | 0.5517 | 0.5579 | 0.5517 | 0.7427 |
| 1.1226 | 0.5294 | 756 | 0.5490 | 0.5720 | 0.5490 | 0.7409 |
| 1.1226 | 0.5308 | 758 | 0.6797 | 0.5011 | 0.6797 | 0.8244 |
| 1.1226 | 0.5322 | 760 | 0.8407 | 0.4594 | 0.8407 | 0.9169 |
| 1.1226 | 0.5336 | 762 | 0.7697 | 0.4628 | 0.7697 | 0.8773 |
| 1.1226 | 0.5350 | 764 | 0.5868 | 0.5481 | 0.5868 | 0.7660 |
| 1.1226 | 0.5364 | 766 | 0.5555 | 0.5728 | 0.5555 | 0.7453 |
| 1.1226 | 0.5378 | 768 | 0.5707 | 0.5658 | 0.5707 | 0.7555 |
| 1.1226 | 0.5392 | 770 | 0.6568 | 0.5452 | 0.6568 | 0.8104 |
| 1.1226 | 0.5406 | 772 | 0.8153 | 0.4882 | 0.8153 | 0.9029 |
| 1.1226 | 0.5420 | 774 | 0.7428 | 0.5160 | 0.7428 | 0.8619 |
| 1.1226 | 0.5434 | 776 | 0.5833 | 0.5811 | 0.5833 | 0.7638 |
| 1.1226 | 0.5448 | 778 | 0.5302 | 0.5710 | 0.5302 | 0.7281 |
| 1.1226 | 0.5462 | 780 | 0.5426 | 0.5626 | 0.5426 | 0.7366 |
| 1.1226 | 0.5476 | 782 | 0.5321 | 0.5782 | 0.5321 | 0.7294 |
| 1.1226 | 0.5490 | 784 | 0.5522 | 0.5963 | 0.5522 | 0.7431 |
| 1.1226 | 0.5504 | 786 | 0.5384 | 0.5728 | 0.5384 | 0.7338 |
| 1.1226 | 0.5518 | 788 | 0.5409 | 0.5629 | 0.5409 | 0.7355 |
| 1.1226 | 0.5532 | 790 | 0.5636 | 0.5531 | 0.5636 | 0.7507 |
| 1.1226 | 0.5546 | 792 | 0.5370 | 0.5986 | 0.5370 | 0.7328 |
| 1.1226 | 0.5560 | 794 | 0.5328 | 0.5883 | 0.5328 | 0.7300 |
| 1.1226 | 0.5574 | 796 | 0.5313 | 0.6027 | 0.5313 | 0.7289 |
| 1.1226 | 0.5588 | 798 | 0.5299 | 0.5942 | 0.5299 | 0.7280 |
| 1.1226 | 0.5602 | 800 | 0.5495 | 0.5804 | 0.5495 | 0.7413 |
| 1.1226 | 0.5616 | 802 | 0.5339 | 0.5626 | 0.5339 | 0.7307 |
| 1.1226 | 0.5630 | 804 | 0.5327 | 0.5997 | 0.5327 | 0.7299 |
| 1.1226 | 0.5644 | 806 | 0.5401 | 0.6040 | 0.5401 | 0.7349 |
| 1.1226 | 0.5658 | 808 | 0.5924 | 0.5561 | 0.5924 | 0.7697 |
| 1.1226 | 0.5672 | 810 | 0.5819 | 0.5994 | 0.5819 | 0.7628 |
| 1.1226 | 0.5686 | 812 | 0.5736 | 0.6080 | 0.5736 | 0.7574 |
| 1.1226 | 0.5700 | 814 | 0.5853 | 0.6006 | 0.5853 | 0.7651 |
| 1.1226 | 0.5714 | 816 | 0.5630 | 0.6095 | 0.5630 | 0.7503 |
| 1.1226 | 0.5728 | 818 | 0.5654 | 0.5971 | 0.5654 | 0.7519 |
| 1.1226 | 0.5742 | 820 | 0.5949 | 0.5755 | 0.5949 | 0.7713 |
| 1.1226 | 0.5756 | 822 | 0.5883 | 0.5945 | 0.5883 | 0.7670 |
| 1.1226 | 0.5770 | 824 | 0.6036 | 0.5524 | 0.6036 | 0.7769 |
| 1.1226 | 0.5784 | 826 | 0.5824 | 0.5694 | 0.5824 | 0.7631 |
| 1.1226 | 0.5798 | 828 | 0.5355 | 0.5775 | 0.5355 | 0.7318 |
| 1.1226 | 0.5812 | 830 | 0.5393 | 0.5815 | 0.5393 | 0.7344 |
| 1.1226 | 0.5826 | 832 | 0.5657 | 0.5957 | 0.5657 | 0.7521 |
| 1.1226 | 0.5840 | 834 | 0.5498 | 0.5906 | 0.5498 | 0.7415 |
| 1.1226 | 0.5854 | 836 | 0.5816 | 0.5811 | 0.5816 | 0.7626 |
| 1.1226 | 0.5868 | 838 | 0.6490 | 0.5285 | 0.6490 | 0.8056 |
| 1.1226 | 0.5882 | 840 | 0.5987 | 0.5646 | 0.5987 | 0.7737 |
| 1.1226 | 0.5896 | 842 | 0.5391 | 0.5839 | 0.5391 | 0.7343 |
| 1.1226 | 0.5910 | 844 | 0.5528 | 0.5770 | 0.5528 | 0.7435 |
| 1.1226 | 0.5924 | 846 | 0.5484 | 0.5787 | 0.5484 | 0.7405 |
| 1.1226 | 0.5938 | 848 | 0.5463 | 0.5926 | 0.5463 | 0.7391 |
| 1.1226 | 0.5952 | 850 | 0.5811 | 0.5560 | 0.5811 | 0.7623 |
| 1.1226 | 0.5966 | 852 | 0.6407 | 0.5515 | 0.6407 | 0.8004 |
| 1.1226 | 0.5980 | 854 | 0.5770 | 0.5631 | 0.5770 | 0.7596 |
| 1.1226 | 0.5994 | 856 | 0.5469 | 0.6120 | 0.5469 | 0.7395 |
| 1.1226 | 0.6008 | 858 | 0.5543 | 0.5977 | 0.5543 | 0.7445 |
| 1.1226 | 0.6022 | 860 | 0.5471 | 0.6024 | 0.5471 | 0.7397 |
| 1.1226 | 0.6036 | 862 | 0.5516 | 0.6044 | 0.5516 | 0.7427 |
| 1.1226 | 0.6050 | 864 | 0.6549 | 0.5575 | 0.6549 | 0.8092 |
| 1.1226 | 0.6064 | 866 | 0.7146 | 0.5255 | 0.7146 | 0.8453 |
| 1.1226 | 0.6078 | 868 | 0.6053 | 0.5591 | 0.6053 | 0.7780 |
| 1.1226 | 0.6092 | 870 | 0.5707 | 0.5646 | 0.5707 | 0.7555 |
| 1.1226 | 0.6106 | 872 | 0.5795 | 0.5532 | 0.5795 | 0.7612 |
| 1.1226 | 0.6120 | 874 | 0.6929 | 0.5045 | 0.6929 | 0.8324 |
| 1.1226 | 0.6134 | 876 | 0.6839 | 0.5056 | 0.6839 | 0.8270 |
| 1.1226 | 0.6148 | 878 | 0.5800 | 0.5208 | 0.5800 | 0.7616 |
| 1.1226 | 0.6162 | 880 | 0.5689 | 0.5766 | 0.5689 | 0.7543 |
| 1.1226 | 0.6176 | 882 | 0.5679 | 0.5470 | 0.5679 | 0.7536 |
| 1.1226 | 0.6190 | 884 | 0.5785 | 0.5602 | 0.5785 | 0.7606 |
| 1.1226 | 0.6204 | 886 | 0.5808 | 0.5541 | 0.5808 | 0.7621 |
| 1.1226 | 0.6218 | 888 | 0.5874 | 0.5529 | 0.5874 | 0.7664 |
| 1.1226 | 0.6232 | 890 | 0.6272 | 0.5342 | 0.6272 | 0.7919 |
| 1.1226 | 0.6246 | 892 | 0.5718 | 0.5516 | 0.5718 | 0.7562 |
| 1.1226 | 0.6261 | 894 | 0.6129 | 0.5554 | 0.6129 | 0.7829 |
| 1.1226 | 0.6275 | 896 | 0.6091 | 0.5377 | 0.6091 | 0.7805 |
| 1.1226 | 0.6289 | 898 | 0.5755 | 0.5269 | 0.5755 | 0.7586 |
| 1.1226 | 0.6303 | 900 | 0.5963 | 0.5204 | 0.5963 | 0.7722 |
| 1.1226 | 0.6317 | 902 | 0.6018 | 0.5106 | 0.6018 | 0.7757 |
| 1.1226 | 0.6331 | 904 | 0.6278 | 0.5295 | 0.6278 | 0.7923 |
| 1.1226 | 0.6345 | 906 | 0.6184 | 0.5280 | 0.6184 | 0.7864 |
| 1.1226 | 0.6359 | 908 | 0.6037 | 0.5344 | 0.6037 | 0.7770 |
| 1.1226 | 0.6373 | 910 | 0.6451 | 0.5210 | 0.6451 | 0.8032 |
| 1.1226 | 0.6387 | 912 | 0.6496 | 0.5174 | 0.6496 | 0.8060 |
| 1.1226 | 0.6401 | 914 | 0.6209 | 0.5380 | 0.6209 | 0.7880 |
| 1.1226 | 0.6415 | 916 | 0.7105 | 0.5183 | 0.7105 | 0.8429 |
| 1.1226 | 0.6429 | 918 | 0.8302 | 0.4626 | 0.8302 | 0.9112 |
| 1.1226 | 0.6443 | 920 | 0.7406 | 0.4911 | 0.7406 | 0.8606 |
| 1.1226 | 0.6457 | 922 | 0.5905 | 0.4973 | 0.5905 | 0.7685 |
| 1.1226 | 0.6471 | 924 | 0.5775 | 0.5045 | 0.5775 | 0.7599 |
| 1.1226 | 0.6485 | 926 | 0.6133 | 0.5353 | 0.6133 | 0.7831 |
| 1.1226 | 0.6499 | 928 | 0.8274 | 0.4737 | 0.8274 | 0.9096 |
| 1.1226 | 0.6513 | 930 | 0.9338 | 0.4401 | 0.9338 | 0.9663 |
| 1.1226 | 0.6527 | 932 | 0.7294 | 0.5014 | 0.7294 | 0.8541 |
| 1.1226 | 0.6541 | 934 | 0.5929 | 0.5353 | 0.5929 | 0.7700 |
| 1.1226 | 0.6555 | 936 | 0.5832 | 0.5303 | 0.5832 | 0.7637 |
| 1.1226 | 0.6569 | 938 | 0.5806 | 0.5562 | 0.5806 | 0.7620 |
| 1.1226 | 0.6583 | 940 | 0.7343 | 0.5141 | 0.7343 | 0.8569 |
| 1.1226 | 0.6597 | 942 | 0.8257 | 0.4841 | 0.8257 | 0.9087 |
| 1.1226 | 0.6611 | 944 | 0.6738 | 0.5183 | 0.6738 | 0.8209 |
| 1.1226 | 0.6625 | 946 | 0.5567 | 0.5710 | 0.5567 | 0.7461 |
| 1.1226 | 0.6639 | 948 | 0.5536 | 0.5682 | 0.5536 | 0.7440 |
| 1.1226 | 0.6653 | 950 | 0.6003 | 0.5299 | 0.6003 | 0.7748 |
| 1.1226 | 0.6667 | 952 | 0.6702 | 0.5288 | 0.6702 | 0.8186 |
| 1.1226 | 0.6681 | 954 | 0.6485 | 0.5373 | 0.6485 | 0.8053 |
| 1.1226 | 0.6695 | 956 | 0.6174 | 0.5478 | 0.6174 | 0.7858 |
| 1.1226 | 0.6709 | 958 | 0.5693 | 0.5746 | 0.5693 | 0.7546 |
| 1.1226 | 0.6723 | 960 | 0.5991 | 0.5613 | 0.5991 | 0.7740 |
| 1.1226 | 0.6737 | 962 | 0.7011 | 0.5287 | 0.7011 | 0.8373 |
| 1.1226 | 0.6751 | 964 | 0.6643 | 0.5336 | 0.6643 | 0.8150 |
| 1.1226 | 0.6765 | 966 | 0.5766 | 0.5826 | 0.5766 | 0.7593 |
| 1.1226 | 0.6779 | 968 | 0.5804 | 0.5792 | 0.5804 | 0.7619 |
| 1.1226 | 0.6793 | 970 | 0.6091 | 0.5539 | 0.6091 | 0.7804 |
| 1.1226 | 0.6807 | 972 | 0.7106 | 0.5210 | 0.7106 | 0.8430 |
| 1.1226 | 0.6821 | 974 | 0.6360 | 0.5360 | 0.6360 | 0.7975 |
| 1.1226 | 0.6835 | 976 | 0.5747 | 0.5321 | 0.5747 | 0.7581 |
| 1.1226 | 0.6849 | 978 | 0.5911 | 0.4666 | 0.5911 | 0.7688 |
| 1.1226 | 0.6863 | 980 | 0.5702 | 0.5249 | 0.5702 | 0.7551 |
| 1.1226 | 0.6877 | 982 | 0.6021 | 0.5232 | 0.6021 | 0.7760 |
| 1.1226 | 0.6891 | 984 | 0.6082 | 0.5124 | 0.6082 | 0.7799 |
| 1.1226 | 0.6905 | 986 | 0.5978 | 0.5297 | 0.5978 | 0.7732 |
| 1.1226 | 0.6919 | 988 | 0.5491 | 0.5584 | 0.5491 | 0.7410 |
| 1.1226 | 0.6933 | 990 | 0.5533 | 0.5423 | 0.5533 | 0.7438 |
| 1.1226 | 0.6947 | 992 | 0.5513 | 0.5670 | 0.5513 | 0.7425 |
| 1.1226 | 0.6961 | 994 | 0.6158 | 0.5666 | 0.6158 | 0.7847 |
| 1.1226 | 0.6975 | 996 | 0.5692 | 0.5865 | 0.5692 | 0.7544 |
| 1.1226 | 0.6989 | 998 | 0.5570 | 0.6029 | 0.5570 | 0.7463 |
| 0.4202 | 0.7003 | 1000 | 0.5466 | 0.5995 | 0.5466 | 0.7394 |
| 0.4202 | 0.7017 | 1002 | 0.5582 | 0.6006 | 0.5582 | 0.7471 |
| 0.4202 | 0.7031 | 1004 | 0.6114 | 0.6155 | 0.6114 | 0.7819 |
| 0.4202 | 0.7045 | 1006 | 0.5795 | 0.6233 | 0.5795 | 0.7612 |
| 0.4202 | 0.7059 | 1008 | 0.5832 | 0.5710 | 0.5832 | 0.7637 |
| 0.4202 | 0.7073 | 1010 | 0.5720 | 0.6075 | 0.5720 | 0.7563 |
| 0.4202 | 0.7087 | 1012 | 0.5990 | 0.6360 | 0.5990 | 0.7739 |
| 0.4202 | 0.7101 | 1014 | 0.5702 | 0.6341 | 0.5702 | 0.7551 |
| 0.4202 | 0.7115 | 1016 | 0.5658 | 0.5648 | 0.5658 | 0.7522 |
| 0.4202 | 0.7129 | 1018 | 0.5773 | 0.5539 | 0.5773 | 0.7598 |
| 0.4202 | 0.7143 | 1020 | 0.5439 | 0.6065 | 0.5439 | 0.7375 |
| 0.4202 | 0.7157 | 1022 | 0.7085 | 0.5575 | 0.7085 | 0.8417 |
| 0.4202 | 0.7171 | 1024 | 0.6819 | 0.5584 | 0.6819 | 0.8258 |
| 0.4202 | 0.7185 | 1026 | 0.5403 | 0.5940 | 0.5403 | 0.7350 |
| 0.4202 | 0.7199 | 1028 | 0.5529 | 0.5533 | 0.5529 | 0.7436 |
| 0.4202 | 0.7213 | 1030 | 0.5421 | 0.5747 | 0.5421 | 0.7363 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
easwar03/t5-small-legal-summarizer | easwar03 | "2024-11-01T18:10:59Z" | 132 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-11-01T18:02:44Z" | ---
library_name: transformers
license: apache-2.0
base_model: t5-small
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-small-legal-summarizer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-legal-summarizer
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.9930
- Rouge1: 22.9243
- Rouge2: 7.1417
- Rougel: 18.8502
- Rougelsum: 19.6924
- Gen Len: 17.5222
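As a quick way to try the checkpoint, here is a minimal summarization sketch. The `summarize:` prefix and the generation settings are standard T5 conventions assumed here, not documented properties of this fine-tune, and the input text is a placeholder.

```python
# A minimal sketch assuming standard T5-style summarization; not an official example.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "easwar03/t5-small-legal-summarizer"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

document = "The lessee shall maintain the premises in good repair ..."  # placeholder
inputs = tokenizer("summarize: " + document, return_tensors="pt",
                   truncation=True, max_length=512)

# The reported Gen Len (~17.5) suggests fairly short summaries.
summary_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```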
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 89 | 3.0995 | 23.1688 | 7.6038 | 19.0864 | 20.241 | 18.1778 |
| No log | 2.0 | 178 | 3.0162 | 23.35 | 7.1787 | 19.2791 | 20.0032 | 17.6222 |
| No log | 3.0 | 267 | 2.9930 | 22.9243 | 7.1417 | 18.8502 | 19.6924 | 17.5222 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.1.0
- Tokenizers 0.19.1
|
matrixportal/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF | matrixportal | "2025-01-10T19:51:51Z" | 34 | 0 | null | [
"gguf",
"facebook",
"meta",
"pytorch",
"llama",
"llama-3",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:quantized:meta-llama/Meta-Llama-3-8B-Instruct",
"license:llama3",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-01-10T19:51:24Z" | ---
language:
- en
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
- llama-cpp
- gguf-my-repo
license: llama3
new_version: meta-llama/Llama-3.1-8B-Instruct
extra_gated_prompt: "### META LLAMA 3 COMMUNITY LICENSE AGREEMENT\nMeta Llama 3 Version\
\ Release Date: April 18, 2024\n\"Agreement\" means the terms and conditions for\
\ use, reproduction, distribution and modification of the Llama Materials set forth\
\ herein.\n\"Documentation\" means the specifications, manuals and documentation\
\ accompanying Meta Llama 3 distributed by Meta at https://llama.meta.com/get-started/.\n\
\"Licensee\" or \"you\" means you, or your employer or any other person or entity\
\ (if you are entering into this Agreement on such person or entity’s behalf), of\
\ the age required under applicable laws, rules or regulations to provide legal\
\ consent and that has legal authority to bind your employer or such other person\
\ or entity if you are entering in this Agreement on their behalf.\n\"Meta Llama\
\ 3\" means the foundational large language models and software and algorithms,\
\ including machine-learning model code, trained model weights, inference-enabling\
\ code, training-enabling code, fine-tuning enabling code and other elements of\
\ the foregoing distributed by Meta at https://llama.meta.com/llama-downloads.\n\
\"Llama Materials\" means, collectively, Meta’s proprietary Meta Llama 3 and Documentation\
\ (and any portion thereof) made available under this Agreement.\n\"Meta\" or \"\
we\" means Meta Platforms Ireland Limited (if you are located in or, if you are\
\ an entity, your principal place of business is in the EEA or Switzerland) and\
\ Meta Platforms, Inc. (if you are located outside of the EEA or Switzerland).\n\
\ \n1. License Rights and Redistribution.\na. Grant of Rights. You are granted\
\ a non-exclusive, worldwide, non-transferable and royalty-free limited license\
\ under Meta’s intellectual property or other rights owned by Meta embodied in the\
\ Llama Materials to use, reproduce, distribute, copy, create derivative works of,\
\ and make modifications to the Llama Materials.\nb. Redistribution and Use.\ni.\
\ If you distribute or make available the Llama Materials (or any derivative works\
\ thereof), or a product or service that uses any of them, including another AI\
\ model, you shall (A) provide a copy of this Agreement with any such Llama Materials;\
\ and (B) prominently display “Built with Meta Llama 3” on a related website, user\
\ interface, blogpost, about page, or product documentation. If you use the Llama\
\ Materials to create, train, fine tune, or otherwise improve an AI model, which\
\ is distributed or made available, you shall also include “Llama 3” at the beginning\
\ of any such AI model name.\nii. If you receive Llama Materials, or any derivative\
\ works thereof, from a Licensee as part of an integrated end user product, then\
\ Section 2 of this Agreement will not apply to you.\niii. You must retain in all\
\ copies of the Llama Materials that you distribute the following attribution notice\
\ within a “Notice” text file distributed as a part of such copies: “Meta Llama\
\ 3 is licensed under the Meta Llama 3 Community License, Copyright © Meta Platforms,\
\ Inc. All Rights Reserved.”\niv. Your use of the Llama Materials must comply with\
\ applicable laws and regulations (including trade compliance laws and regulations)\
\ and adhere to the Acceptable Use Policy for the Llama Materials (available at\
\ https://llama.meta.com/llama3/use-policy), which is hereby incorporated by reference\
\ into this Agreement.\nv. You will not use the Llama Materials or any output or\
\ results of the Llama Materials to improve any other large language model (excluding\
\ Meta Llama 3 or derivative works thereof).\n2. Additional Commercial Terms. If,\
\ on the Meta Llama 3 version release date, the monthly active users of the products\
\ or services made available by or for Licensee, or Licensee’s affiliates, is greater\
\ than 700 million monthly active users in the preceding calendar month, you must\
\ request a license from Meta, which Meta may grant to you in its sole discretion,\
\ and you are not authorized to exercise any of the rights under this Agreement\
\ unless or until Meta otherwise expressly grants you such rights.\n3. Disclaimer\
\ of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE LLAMA MATERIALS AND ANY OUTPUT\
\ AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS” BASIS, WITHOUT WARRANTIES OF\
\ ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF ANY KIND, BOTH EXPRESS AND IMPLIED,\
\ INCLUDING, WITHOUT LIMITATION, ANY WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY,\
\ OR FITNESS FOR A PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING\
\ THE APPROPRIATENESS OF USING OR REDISTRIBUTING THE LLAMA MATERIALS AND ASSUME\
\ ANY RISKS ASSOCIATED WITH YOUR USE OF THE LLAMA MATERIALS AND ANY OUTPUT AND RESULTS.\n\
4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE UNDER\
\ ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS LIABILITY,\
\ OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS OR ANY INDIRECT,\
\ SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR PUNITIVE DAMAGES, EVEN IF META\
\ OR ITS AFFILIATES HAVE BEEN ADVISED OF THE POSSIBILITY OF ANY OF THE FOREGOING.\n\
5. Intellectual Property.\na. No trademark licenses are granted under this Agreement,\
\ and in connection with the Llama Materials, neither Meta nor Licensee may use\
\ any name or mark owned by or associated with the other or any of its affiliates,\
\ except as required for reasonable and customary use in describing and redistributing\
\ the Llama Materials or as set forth in this Section 5(a). Meta hereby grants you\
\ a license to use “Llama 3” (the “Mark”) solely as required to comply with the\
\ last sentence of Section 1.b.i. You will comply with Meta’s brand guidelines (currently\
\ accessible at https://about.meta.com/brand/resources/meta/company-brand/ ). All\
\ goodwill arising out of your use of the Mark will inure to the benefit of Meta.\n\
b. Subject to Meta’s ownership of Llama Materials and derivatives made by or for\
\ Meta, with respect to any derivative works and modifications of the Llama Materials\
\ that are made by you, as between you and Meta, you are and will be the owner of\
\ such derivative works and modifications.\nc. If you institute litigation or other\
\ proceedings against Meta or any entity (including a cross-claim or counterclaim\
\ in a lawsuit) alleging that the Llama Materials or Meta Llama 3 outputs or results,\
\ or any portion of any of the foregoing, constitutes infringement of intellectual\
\ property or other rights owned or licensable by you, then any licenses granted\
\ to you under this Agreement shall terminate as of the date such litigation or\
\ claim is filed or instituted. You will indemnify and hold harmless Meta from and\
\ against any claim by any third party arising out of or related to your use or\
\ distribution of the Llama Materials.\n6. Term and Termination. The term of this\
\ Agreement will commence upon your acceptance of this Agreement or access to the\
\ Llama Materials and will continue in full force and effect until terminated in\
\ accordance with the terms and conditions herein. Meta may terminate this Agreement\
\ if you are in breach of any term or condition of this Agreement. Upon termination\
\ of this Agreement, you shall delete and cease use of the Llama Materials. Sections\
\ 3, 4 and 7 shall survive the termination of this Agreement.\n7. Governing Law\
\ and Jurisdiction. This Agreement will be governed and construed under the laws\
\ of the State of California without regard to choice of law principles, and the\
\ UN Convention on Contracts for the International Sale of Goods does not apply\
\ to this Agreement. The courts of California shall have exclusive jurisdiction\
\ of any dispute arising out of this Agreement.\n### Meta Llama 3 Acceptable Use\
\ Policy\nMeta is committed to promoting safe and fair use of its tools and features,\
\ including Meta Llama 3. If you access or use Meta Llama 3, you agree to this Acceptable\
\ Use Policy (“Policy”). The most recent copy of this policy can be found at [https://llama.meta.com/llama3/use-policy](https://llama.meta.com/llama3/use-policy)\n\
#### Prohibited Uses\nWe want everyone to use Meta Llama 3 safely and responsibly.\
\ You agree you will not use, or allow others to use, Meta Llama 3 to: 1. Violate\
\ the law or others’ rights, including to:\n 1. Engage in, promote, generate,\
\ contribute to, encourage, plan, incite, or further illegal or unlawful activity\
\ or content, such as:\n 1. Violence or terrorism\n 2. Exploitation\
\ or harm to children, including the solicitation, creation, acquisition, or dissemination\
\ of child exploitative content or failure to report Child Sexual Abuse Material\n\
\ 3. Human trafficking, exploitation, and sexual violence\n 4. The\
\ illegal distribution of information or materials to minors, including obscene\
\ materials, or failure to employ legally required age-gating in connection with\
\ such information or materials.\n 5. Sexual solicitation\n 6. Any\
\ other criminal activity\n 2. Engage in, promote, incite, or facilitate the\
\ harassment, abuse, threatening, or bullying of individuals or groups of individuals\n\
\ 3. Engage in, promote, incite, or facilitate discrimination or other unlawful\
\ or harmful conduct in the provision of employment, employment benefits, credit,\
\ housing, other economic benefits, or other essential goods and services\n 4.\
\ Engage in the unauthorized or unlicensed practice of any profession including,\
\ but not limited to, financial, legal, medical/health, or related professional\
\ practices\n 5. Collect, process, disclose, generate, or infer health, demographic,\
\ or other sensitive personal or private information about individuals without rights\
\ and consents required by applicable laws\n 6. Engage in or facilitate any action\
\ or generate any content that infringes, misappropriates, or otherwise violates\
\ any third-party rights, including the outputs or results of any products or services\
\ using the Llama Materials\n 7. Create, generate, or facilitate the creation\
\ of malicious code, malware, computer viruses or do anything else that could disable,\
\ overburden, interfere with or impair the proper working, integrity, operation\
\ or appearance of a website or computer system\n2. Engage in, promote, incite,\
\ facilitate, or assist in the planning or development of activities that present\
\ a risk of death or bodily harm to individuals, including use of Meta Llama 3 related\
\ to the following:\n 1. Military, warfare, nuclear industries or applications,\
\ espionage, use for materials or activities that are subject to the International\
\ Traffic Arms Regulations (ITAR) maintained by the United States Department of\
\ State\n 2. Guns and illegal weapons (including weapon development)\n 3.\
\ Illegal drugs and regulated/controlled substances\n 4. Operation of critical\
\ infrastructure, transportation technologies, or heavy machinery\n 5. Self-harm\
\ or harm to others, including suicide, cutting, and eating disorders\n 6. Any\
\ content intended to incite or promote violence, abuse, or any infliction of bodily\
\ harm to an individual\n3. Intentionally deceive or mislead others, including use\
\ of Meta Llama 3 related to the following:\n 1. Generating, promoting, or furthering\
\ fraud or the creation or promotion of disinformation\n 2. Generating, promoting,\
\ or furthering defamatory content, including the creation of defamatory statements,\
\ images, or other content\n 3. Generating, promoting, or further distributing\
\ spam\n 4. Impersonating another individual without consent, authorization,\
\ or legal right\n 5. Representing that the use of Meta Llama 3 or outputs are\
\ human-generated\n 6. Generating or facilitating false online engagement, including\
\ fake reviews and other means of fake online engagement\n4. Fail to appropriately\
\ disclose to end users any known dangers of your AI system\nPlease report any violation\
\ of this Policy, software “bug,” or other problems that could lead to a violation\
\ of this Policy through one of the following means:\n * Reporting issues with\
\ the model: [https://github.com/meta-llama/llama3](https://github.com/meta-llama/llama3)\n\
\ * Reporting risky content generated by the model:\n developers.facebook.com/llama_output_feedback\n\
\ * Reporting bugs and security concerns: facebook.com/whitehat/info\n * Reporting\
\ violations of the Acceptable Use Policy or unlicensed uses of Meta Llama 3: [email protected]"
extra_gated_fields:
First Name: text
Last Name: text
Date of birth: date_picker
Country: country
Affiliation: text
geo: ip_location
? By clicking Submit below I accept the terms of the license and acknowledge that
the information I provide will be collected stored processed and shared in accordance
with the Meta Privacy Policy
: checkbox
extra_gated_description: The information you provide will be collected, stored, processed
and shared in accordance with the [Meta Privacy Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
widget:
- example_title: Hello
messages:
- role: user
content: Hey my name is Julien! How are you?
- example_title: Winter holidays
messages:
- role: system
content: You are a helpful and honest assistant. Please, respond concisely and
truthfully.
- role: user
content: Can you recommend a good destination for Winter holidays?
- example_title: Programming assistant
messages:
- role: system
content: You are a helpful and honest code and programming assistant. Please,
respond concisely and truthfully.
- role: user
content: Write a function that computes the nth fibonacci number.
inference:
parameters:
max_new_tokens: 300
stop:
- <|end_of_text|>
- <|eot_id|>
base_model: meta-llama/Meta-Llama-3-8B-Instruct
---
# matrixportal/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format from [`meta-llama/Meta-Llama-3-8B-Instruct`](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo matrixportal/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo matrixportal/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q5_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo matrixportal/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q5_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo matrixportal/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF --hf-file meta-llama-3-8b-instruct-q5_k_m.gguf -c 2048
```
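The quant can also be driven from Python through the `llama-cpp-python` bindings. Below is a minimal sketch, not an official example; it assumes a recent version of the package that ships `Llama.from_pretrained` (which fetches the file from the Hub via `huggingface_hub`):

```python
# A minimal sketch using llama-cpp-python; not an official example.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="matrixportal/Meta-Llama-3-8B-Instruct-Q5_K_M-GGUF",
    filename="meta-llama-3-8b-instruct-q5_k_m.gguf",
    n_ctx=2048,
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hey, my name is Julien! How are you?"}],
    max_tokens=300,
)
print(out["choices"][0]["message"]["content"])
```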
|
ManuVleuBeu/t5_base_answer-aware_normal_eduQG | ManuVleuBeu | "2023-08-11T17:40:09Z" | 161 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-08-09T15:15:57Z" | ---
tags:
- generated_from_trainer
model-index:
- name: t5_base_answer-aware_normal_eduQG
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5_base_answer-aware_normal_eduQG
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
Dogusdogus1/Dogus | Dogusdogus1 | "2025-02-11T12:59:59Z" | 0 | 0 | null | [
"license:artistic-2.0",
"region:us"
] | null | "2025-02-11T12:59:59Z" | ---
license: artistic-2.0
---
|
MIIB-NLP/Arabic-question-generation | MIIB-NLP | "2022-10-09T14:01:23Z" | 248 | 5 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"answer-aware-question-generation",
"question-generation",
"QG",
"ar",
"arxiv:2109.12068",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2022-10-08T20:51:19Z" | ---
language:
- ar
tags:
- answer-aware-question-generation
- question-generation
- QG
widget:
- text: "context: الثورة الجزائرية أو ثورة المليون شهيد، اندلعت في 1 نوفمبر 1954 ضد المستعمر الفرنسي ودامت 7 سنوات ونصف. استشهد فيها أكثر من مليون ونصف مليون جزائري answer: 7 سنوات ونصف </s>
"
- text: "context: اسكتلندا دولة في شمال غرب أوروبا، تعتبر جزء من الدول الأربع المكونة المملكة المتحدة. تحتل الثلث الشمالي من جزيرة بريطانيا العظمى وتحدها جنوبا إنجلترا ويحدها شرقا بحر الشمال وغربا المحيط الأطلسي. عاصمتها أدنبرة، وأهم مدنها وأكبرها مدينة غلاسكو. كانت اسكتلندا مملكة مستقلة حتى 1 مايو 1707 answer: أدنبرة </s>"
- text: "context: تم تفكيك الإمبراطورية النمساوية المجرية في عام 1918 بعد نهاية الحرب العالمية الأولى. وكان اباطرتها: الإمبراطور فرانس جوزيف الأول هابسبورغ لورين (في الفترة من 1867 إلى 1916) والإمبراطورة إليزابيث (من 1867 إلى 1898)، تبعها الإمبراطور تشارلز الأول إمبراطور النمسا (من 1916 إلى 1918). answer: 1918 </s>
"
metrics:
- bleu
model-index:
- name: Arabic-Question-Generation
results:
- task:
name: Question-Generation
type: automatic-question-generation
metrics:
- name: Bleu1
type: bleu
value: 37.62
- name: Bleu2
type: bleu
value: 27.80
- name: Bleu3
type: bleu
value: 20.89
- name: Bleu4
type: bleu
value: 15.87
- name: meteor
type: meteor
value: 33.19
- name: rougel
type: rouge
value: 43.37
---
# Arabic Question Generation Model
This model is ready to use for the **Question Generation** task: simply input a context and an answer, and the model will generate a question. It is a fine-tuned version of the [AraT5-Base](https://huggingface.co/UBC-NLP/AraT5-base) model.
## Live Demo
Get a question from a given context and answer: [Arabic QG Model](https://huggingface.co/spaces/MIIB-NLP/Arabic-Question-Generation)
## Model in Action 🚀
```python
#Requirements: !pip install transformers
from transformers import AutoTokenizer,AutoModelForSeq2SeqLM
model = AutoModelForSeq2SeqLM.from_pretrained("MIIB-NLP/Arabic-question-generation")
tokenizer = AutoTokenizer.from_pretrained("MIIB-NLP/Arabic-question-generation")
def get_question(context, answer):
    # Build the input in the "context: ... answer: ..." format the model expects
    text = "context: " + context + " " + "answer: " + answer + " </s>"
    text_encoding = tokenizer.encode_plus(
        text, return_tensors="pt"
    )

    model.eval()
    generated_ids = model.generate(
        input_ids=text_encoding["input_ids"],
        attention_mask=text_encoding["attention_mask"],
        max_length=64,
        num_beams=5,
        num_return_sequences=1,
    )
    return tokenizer.decode(
        generated_ids[0], skip_special_tokens=True, clean_up_tokenization_spaces=True
    ).replace("question: ", " ")
context="الثورة الجزائرية أو ثورة المليون شهيد، اندلعت في 1 نوفمبر 1954 ضد المستعمر الفرنسي ودامت 7 سنوات ونصف. استشهد فيها أكثر من مليون ونصف مليون جزائري"
answer =" 7 سنوات ونصف"
get_question(context,answer)
#output : question="كم استمرت الثورة الجزائرية؟ "
```
## Details of Ara-T5
The **Ara-T5** model was presented in [AraT5: Text-to-Text Transformers for Arabic Language Generation](https://arxiv.org/abs/2109.12068) by *El Moatez Billah Nagoudi, AbdelRahim Elmadany, Muhammad Abdul-Mageed*
## Contacts
**Mihoubi Akram Fawzi**: [Linkedin](https://www.linkedin.com/in/mihoubi-akram/) | [Github](https://github.com/mihoubi-akram) | <[email protected]>
**Ibrir Adel**: [Linkedin](https://www.linkedin.com/in/adel-ibrir/) | [Github]() | <[email protected]>
|
fifxus/a5779138-30a3-4c2d-b01e-3e50427ccd1b | fifxus | "2025-02-05T20:13:45Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:unsloth/Llama-3.2-1B",
"base_model:adapter:unsloth/Llama-3.2-1B",
"license:llama3.2",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-05T19:58:02Z" | ---
library_name: peft
license: llama3.2
base_model: unsloth/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: a5779138-30a3-4c2d-b01e-3e50427ccd1b
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: unsloth/Llama-3.2-1B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 176447a4a3aac1cd_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/176447a4a3aac1cd_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: fifxus/a5779138-30a3-4c2d-b01e-3e50427ccd1b
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 500
micro_batch_size: 2
mlflow_experiment_name: /tmp/176447a4a3aac1cd_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 8ab127c1-ec9a-4dd1-bf3a-abe43ab19c10
wandb_project: Gradients-On-10
wandb_run: your_name
wandb_runid: 8ab127c1-ec9a-4dd1-bf3a-abe43ab19c10
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# a5779138-30a3-4c2d-b01e-3e50427ccd1b
This model is a fine-tuned version of [unsloth/Llama-3.2-1B](https://huggingface.co/unsloth/Llama-3.2-1B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2086
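Because this repository stores a LoRA adapter rather than merged weights, inference means attaching the adapter to the base model. Below is a minimal sketch, not an official example; the bare-question prompt mirrors the `question`/`answer` fields in the config above, but the exact template is an assumption:

```python
# A minimal sketch for loading the LoRA adapter; not an official example.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B")
model = PeftModel.from_pretrained(base, "fifxus/a5779138-30a3-4c2d-b01e-3e50427ccd1b")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B")

inputs = tokenizer("What is low-rank adaptation?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```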
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.7037 | 0.2195 | 500 | 1.2086 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
espnet/kan-bayashi_ljspeech_transformer | espnet | "2021-07-03T14:49:36Z" | 3 | 0 | espnet | [
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | text-to-speech | "2022-03-02T23:29:05Z" | ---
tags:
- espnet
- audio
- text-to-speech
language: en
datasets:
- ljspeech
license: cc-by-4.0
---
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_transformer`
♻️ Imported from https://zenodo.org/record/4039194/
This model was trained by kan-bayashi using the ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# an official snippet has not been published yet; see the sketch below
```
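Until an official snippet lands, the following is a minimal inference sketch, not an author-provided example. It assumes `espnet` and `espnet_model_zoo` are installed and that this checkpoint resolves through the model zoo under the tag `espnet/kan-bayashi_ljspeech_transformer`; without a separate neural vocoder, ESPnet2 falls back to Griffin-Lim.

```python
# A minimal sketch, assuming: pip install espnet espnet_model_zoo soundfile
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# The model tag is an assumption based on this repository's name.
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_ljspeech_transformer")

# Synthesize a waveform (Griffin-Lim vocoding unless a vocoder is configured).
output = text2speech("Hello, this is a Transformer TTS model trained on LJSpeech.")
sf.write("out.wav", output["wav"].numpy(), text2speech.fs)
```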
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Shijia/xlm-roberta-base_ary_loss_0.0001 | Shijia | "2024-02-16T21:42:39Z" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-02-16T21:41:54Z" | ---
license: mit
base_model: FacebookAI/xlm-roberta-base
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base_ary_loss_0.0001
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base_ary_loss_0.0001
This model is a fine-tuned version of [FacebookAI/xlm-roberta-base](https://huggingface.co/FacebookAI/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0476
- Spearman Corr: -0.0518
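Given the reported Spearman correlation, this appears to be a sentence-pair regression head. The single-logit setup in the sketch below is an assumption, and with the near-zero correlation above the scores should be treated as illustrative only.

```python
# A minimal sketch; the single-logit regression head is an assumption.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Shijia/xlm-roberta-base_ary_loss_0.0001"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("sentence one", "sentence two", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```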
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Spearman Corr |
|:-------------:|:-----:|:----:|:---------------:|:-------------:|
| No log | 0.9 | 200 | 0.0463 | nan |
| No log | 1.8 | 400 | 0.0463 | nan |
| 0.0485 | 2.7 | 600 | 0.0481 | nan |
| 0.0485 | 3.6 | 800 | 0.0470 | nan |
| 0.0488 | 4.5 | 1000 | 0.0472 | -0.0282 |
| 0.0488 | 5.41 | 1200 | 0.0465 | nan |
| 0.0485 | 6.31 | 1400 | 0.0465 | nan |
| 0.0485 | 7.21 | 1600 | 0.0462 | nan |
| 0.0489 | 8.11 | 1800 | 0.0470 | nan |
| 0.0489 | 9.01 | 2000 | 0.0463 | -0.0105 |
| 0.0489 | 9.91 | 2200 | 0.0464 | nan |
| 0.0485 | 10.81 | 2400 | 0.0464 | nan |
| 0.0485 | 11.71 | 2600 | 0.0464 | nan |
| 0.0485 | 12.61 | 2800 | 0.0465 | nan |
| 0.0485 | 13.51 | 3000 | 0.0464 | 0.0548 |
| 0.0485 | 14.41 | 3200 | 0.0476 | -0.0518 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
fydhfzh/hubert-classifier | fydhfzh | "2024-06-23T10:18:06Z" | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/hubert-base-ls960",
"base_model:finetune:facebook/hubert-base-ls960",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | "2024-06-17T08:21:17Z" | ---
license: apache-2.0
base_model: facebook/hubert-base-ls960
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: hubert-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-classifier
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9330
- Accuracy: 0.0674
- Precision: 0.0116
- Recall: 0.0674
- F1: 0.0182
- Binary: 0.3423
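For completeness, a minimal inference sketch via the `transformers` pipeline follows; the file name is a placeholder, and given the evaluation accuracy above (~6.7%) the predictions are mainly illustrative.

```python
# A minimal sketch via the transformers audio-classification pipeline.
from transformers import pipeline

classifier = pipeline("audio-classification", model="fydhfzh/hubert-classifier")

# Any local 16 kHz mono audio file works; "sample.wav" is a placeholder.
print(classifier("sample.wav"))
```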
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | Binary |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:------:|
| No log | 0.96 | 50 | 4.4099 | 0.0647 | 0.0191 | 0.0647 | 0.0221 | 0.2396 |
| No log | 1.91 | 100 | 4.3523 | 0.0593 | 0.0190 | 0.0593 | 0.0194 | 0.3019 |
| No log | 2.87 | 150 | 4.2416 | 0.0701 | 0.0246 | 0.0701 | 0.0235 | 0.3358 |
| No log | 3.83 | 200 | 4.1412 | 0.0701 | 0.0265 | 0.0701 | 0.0214 | 0.3437 |
| No log | 4.78 | 250 | 4.0716 | 0.0593 | 0.0069 | 0.0593 | 0.0122 | 0.3334 |
| No log | 5.74 | 300 | 4.0195 | 0.0701 | 0.0124 | 0.0701 | 0.0186 | 0.3453 |
| No log | 6.70 | 350 | 4.0195 | 0.0701 | 0.0124 | 0.0701 | 0.0186 | 0.3453 |
| No log | 7.66 | 400 | 3.9610 | 0.0647 | 0.0097 | 0.0647 | 0.0162 | 0.3388 |
| No log | 8.61 | 450 | 3.9420 | 0.0674 | 0.0113 | 0.0674 | 0.0180 | 0.3396 |
| 4.2019 | 9.57 | 500 | 3.9330 | 0.0674 | 0.0116 | 0.0674 | 0.0182 | 0.3423 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.3.0
- Datasets 2.19.1
- Tokenizers 0.15.1
|
MayBashendy/ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k2_task2_organization | MayBashendy | "2024-12-23T10:59:20Z" | 162 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:aubmindlab/bert-base-arabertv02",
"base_model:finetune:aubmindlab/bert-base-arabertv02",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-23T10:53:11Z" | ---
library_name: transformers
base_model: aubmindlab/bert-base-arabertv02
tags:
- generated_from_trainer
model-index:
- name: ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k2_task2_organization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ArabicNewSplits6_WithDuplicationsForScore5_FineTuningAraBERT_run2_AugV5_k2_task2_organization
This model is a fine-tuned version of [aubmindlab/bert-base-arabertv02](https://huggingface.co/aubmindlab/bert-base-arabertv02) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1421
- Qwk: 0.4812
- Mse: 1.1421
- Rmse: 1.0687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:------:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 0.1333 | 2 | 3.9362 | -0.0062 | 3.9362 | 1.9840 |
| No log | 0.2667 | 4 | 1.9904 | 0.0977 | 1.9904 | 1.4108 |
| No log | 0.4 | 6 | 1.0441 | 0.0666 | 1.0441 | 1.0218 |
| No log | 0.5333 | 8 | 0.7309 | 0.1194 | 0.7309 | 0.8550 |
| No log | 0.6667 | 10 | 0.7325 | 0.1404 | 0.7325 | 0.8558 |
| No log | 0.8 | 12 | 0.8429 | 0.0472 | 0.8429 | 0.9181 |
| No log | 0.9333 | 14 | 0.8978 | -0.0775 | 0.8978 | 0.9475 |
| No log | 1.0667 | 16 | 0.7873 | 0.1119 | 0.7873 | 0.8873 |
| No log | 1.2 | 18 | 0.7568 | 0.1189 | 0.7568 | 0.8699 |
| No log | 1.3333 | 20 | 0.7336 | 0.1657 | 0.7336 | 0.8565 |
| No log | 1.4667 | 22 | 0.7128 | 0.2019 | 0.7128 | 0.8443 |
| No log | 1.6 | 24 | 0.7258 | 0.1546 | 0.7258 | 0.8519 |
| No log | 1.7333 | 26 | 0.7906 | 0.1118 | 0.7906 | 0.8892 |
| No log | 1.8667 | 28 | 0.7999 | 0.1652 | 0.7999 | 0.8944 |
| No log | 2.0 | 30 | 0.7153 | 0.2703 | 0.7153 | 0.8458 |
| No log | 2.1333 | 32 | 0.5813 | 0.3801 | 0.5813 | 0.7624 |
| No log | 2.2667 | 34 | 0.5966 | 0.3476 | 0.5966 | 0.7724 |
| No log | 2.4 | 36 | 0.5757 | 0.3897 | 0.5757 | 0.7587 |
| No log | 2.5333 | 38 | 0.5909 | 0.4719 | 0.5909 | 0.7687 |
| No log | 2.6667 | 40 | 0.6521 | 0.4404 | 0.6521 | 0.8076 |
| No log | 2.8 | 42 | 0.6583 | 0.4648 | 0.6583 | 0.8114 |
| No log | 2.9333 | 44 | 0.6864 | 0.5159 | 0.6864 | 0.8285 |
| No log | 3.0667 | 46 | 0.7592 | 0.4675 | 0.7592 | 0.8713 |
| No log | 3.2 | 48 | 0.8560 | 0.4557 | 0.8560 | 0.9252 |
| No log | 3.3333 | 50 | 0.9070 | 0.4299 | 0.9070 | 0.9523 |
| No log | 3.4667 | 52 | 0.8995 | 0.4630 | 0.8995 | 0.9484 |
| No log | 3.6 | 54 | 0.7296 | 0.5374 | 0.7296 | 0.8542 |
| No log | 3.7333 | 56 | 0.6794 | 0.5220 | 0.6794 | 0.8243 |
| No log | 3.8667 | 58 | 0.6453 | 0.5093 | 0.6453 | 0.8033 |
| No log | 4.0 | 60 | 0.7372 | 0.5293 | 0.7372 | 0.8586 |
| No log | 4.1333 | 62 | 0.8609 | 0.4943 | 0.8609 | 0.9279 |
| No log | 4.2667 | 64 | 0.7710 | 0.5169 | 0.7710 | 0.8781 |
| No log | 4.4 | 66 | 0.6420 | 0.4815 | 0.6420 | 0.8013 |
| No log | 4.5333 | 68 | 0.6617 | 0.5303 | 0.6617 | 0.8135 |
| No log | 4.6667 | 70 | 0.7152 | 0.4983 | 0.7152 | 0.8457 |
| No log | 4.8 | 72 | 0.9293 | 0.4855 | 0.9293 | 0.9640 |
| No log | 4.9333 | 74 | 1.0223 | 0.4604 | 1.0223 | 1.0111 |
| No log | 5.0667 | 76 | 0.9887 | 0.4799 | 0.9887 | 0.9943 |
| No log | 5.2 | 78 | 0.9308 | 0.4941 | 0.9308 | 0.9648 |
| No log | 5.3333 | 80 | 0.8706 | 0.5073 | 0.8706 | 0.9331 |
| No log | 5.4667 | 82 | 0.8465 | 0.5527 | 0.8465 | 0.9201 |
| No log | 5.6 | 84 | 0.8758 | 0.5051 | 0.8758 | 0.9358 |
| No log | 5.7333 | 86 | 1.0527 | 0.4979 | 1.0527 | 1.0260 |
| No log | 5.8667 | 88 | 1.1641 | 0.4683 | 1.1641 | 1.0789 |
| No log | 6.0 | 90 | 1.1436 | 0.4629 | 1.1436 | 1.0694 |
| No log | 6.1333 | 92 | 0.9992 | 0.5298 | 0.9992 | 0.9996 |
| No log | 6.2667 | 94 | 0.7450 | 0.5058 | 0.7450 | 0.8631 |
| No log | 6.4 | 96 | 0.7016 | 0.5510 | 0.7016 | 0.8376 |
| No log | 6.5333 | 98 | 0.7280 | 0.5400 | 0.7280 | 0.8532 |
| No log | 6.6667 | 100 | 0.9098 | 0.5007 | 0.9098 | 0.9538 |
| No log | 6.8 | 102 | 1.0648 | 0.4903 | 1.0648 | 1.0319 |
| No log | 6.9333 | 104 | 0.9864 | 0.4777 | 0.9864 | 0.9932 |
| No log | 7.0667 | 106 | 0.8954 | 0.5311 | 0.8954 | 0.9463 |
| No log | 7.2 | 108 | 0.8353 | 0.5598 | 0.8353 | 0.9140 |
| No log | 7.3333 | 110 | 0.8430 | 0.5526 | 0.8430 | 0.9181 |
| No log | 7.4667 | 112 | 0.8475 | 0.5428 | 0.8475 | 0.9206 |
| No log | 7.6 | 114 | 0.9304 | 0.5313 | 0.9304 | 0.9646 |
| No log | 7.7333 | 116 | 1.0578 | 0.4826 | 1.0578 | 1.0285 |
| No log | 7.8667 | 118 | 1.2319 | 0.4975 | 1.2319 | 1.1099 |
| No log | 8.0 | 120 | 1.4590 | 0.4054 | 1.4590 | 1.2079 |
| No log | 8.1333 | 122 | 1.5472 | 0.4061 | 1.5472 | 1.2439 |
| No log | 8.2667 | 124 | 1.4862 | 0.4027 | 1.4862 | 1.2191 |
| No log | 8.4 | 126 | 1.3318 | 0.4530 | 1.3318 | 1.1540 |
| No log | 8.5333 | 128 | 1.1400 | 0.4894 | 1.1400 | 1.0677 |
| No log | 8.6667 | 130 | 0.9978 | 0.5197 | 0.9978 | 0.9989 |
| No log | 8.8 | 132 | 0.9552 | 0.5157 | 0.9552 | 0.9773 |
| No log | 8.9333 | 134 | 0.9401 | 0.5155 | 0.9401 | 0.9696 |
| No log | 9.0667 | 136 | 0.9700 | 0.5242 | 0.9700 | 0.9849 |
| No log | 9.2 | 138 | 1.0054 | 0.4929 | 1.0054 | 1.0027 |
| No log | 9.3333 | 140 | 1.0466 | 0.5004 | 1.0466 | 1.0230 |
| No log | 9.4667 | 142 | 1.0948 | 0.4963 | 1.0948 | 1.0463 |
| No log | 9.6 | 144 | 1.1347 | 0.4947 | 1.1347 | 1.0652 |
| No log | 9.7333 | 146 | 1.1428 | 0.4812 | 1.1428 | 1.0690 |
| No log | 9.8667 | 148 | 1.1416 | 0.4812 | 1.1416 | 1.0685 |
| No log | 10.0 | 150 | 1.1421 | 0.4812 | 1.1421 | 1.0687 |
### Framework versions
- Transformers 4.44.2
- Pytorch 2.4.0+cu118
- Datasets 2.21.0
- Tokenizers 0.19.1
|
wahidww/swin-tiny-patch4-window7-224-finetuned-eurosat | wahidww | "2024-05-25T06:35:03Z" | 220 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:microsoft/swin-tiny-patch4-window7-224",
"base_model:finetune:microsoft/swin-tiny-patch4-window7-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | "2024-05-25T06:24:48Z" | ---
license: apache-2.0
base_model: microsoft/swin-tiny-patch4-window7-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.808641975308642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5712
- Accuracy: 0.8086
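No usage snippet is provided; a hedged inference sketch using the image-classification pipeline (the class labels come from the unpublished imagefolder dataset, so they are not documented here):
```python
from transformers import pipeline
classifier = pipeline(
    "image-classification",
    model="wahidww/swin-tiny-patch4-window7-224-finetuned-eurosat",
)
print(classifier("example.jpg"))  # path or URL to an input image
```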
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.87 | 5 | 1.3767 | 0.5370 |
| 1.289 | 1.91 | 11 | 1.3503 | 0.5494 |
| 1.289 | 2.96 | 17 | 1.3712 | 0.5556 |
| 1.0376 | 4.0 | 23 | 1.3064 | 0.5556 |
| 1.0376 | 4.87 | 28 | 1.1062 | 0.5802 |
| 0.8346 | 5.91 | 34 | 0.9249 | 0.6481 |
| 0.7096 | 6.96 | 40 | 0.8947 | 0.6235 |
| 0.7096 | 8.0 | 46 | 0.8626 | 0.6543 |
| 0.6356 | 8.87 | 51 | 0.6820 | 0.7222 |
| 0.6356 | 9.91 | 57 | 0.7249 | 0.7346 |
| 0.5956 | 10.96 | 63 | 0.6818 | 0.7407 |
| 0.5956 | 12.0 | 69 | 0.6111 | 0.7840 |
| 0.5534 | 12.87 | 74 | 0.6026 | 0.7778 |
| 0.519 | 13.91 | 80 | 0.6070 | 0.7901 |
| 0.519 | 14.96 | 86 | 0.5758 | 0.7963 |
| 0.5117 | 16.0 | 92 | 0.5791 | 0.7840 |
| 0.5117 | 16.87 | 97 | 0.5711 | 0.8025 |
| 0.4913 | 17.39 | 100 | 0.5712 | 0.8086 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
lixiangchun/transcriptome-bert-1536-1-16-64 | lixiangchun | "2023-01-12T07:07:27Z" | 116 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | "2023-01-12T06:43:52Z" | # iSEEEK
Generative pretraining from the rankings of top-expressing genes.
It was trained on more than 20 million single-cell transcriptomes with a sequence length of 64.
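No usage code is included; a hedged loading sketch based on the repo tags (BERT, fill-mask). Producing valid inputs requires the iSEEEK gene-ranking tokenization, which is not shown here:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
repo = "lixiangchun/transcriptome-bert-1536-1-16-64"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMaskedLM.from_pretrained(repo)
```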
|
AryaParikh/autotrain-summ_arp_2-46098114797 | AryaParikh | "2023-04-03T07:14:00Z" | 108 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain",
"summarization",
"en",
"dataset:Hinataaa/autotrain-data-summ_arp_2",
"co2_eq_emissions",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | summarization | "2023-04-03T07:08:28Z" | ---
tags:
- autotrain
- summarization
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Hinataaa/autotrain-data-summ_arp_2
co2_eq_emissions:
emissions: 2.584620959475704
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 46098114797
- CO2 Emissions (in grams): 2.5846
## Validation Metrics
- Loss: 0.914
- Rouge1: 55.361
- Rouge2: 27.454
- RougeL: 47.968
- RougeLsum: 47.978
- Gen Len: 13.540
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_HUGGINGFACE_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/Hinataaa/autotrain-summ_arp_2-46098114797
``` |
Aktioshi/DeepSeek-R1-Mourse-14B-Instruct-v0.2 | Aktioshi | "2025-02-11T15:42:43Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese",
"base_model:adapter:cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese",
"region:us"
] | null | "2025-02-11T15:41:57Z" | ---
base_model: cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
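In the meantime, a hedged sketch inferred from the repo metadata alone (a PEFT adapter on cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese), not code from the authors:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
base_id = "cyberagent/DeepSeek-R1-Distill-Qwen-14B-Japanese"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = PeftModel.from_pretrained(base, "Aktioshi/DeepSeek-R1-Mourse-14B-Instruct-v0.2")
```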
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
VERSIL91/7011d4ab-02bf-4cec-ad2f-ae7dc6130702 | VERSIL91 | "2025-01-10T18:12:09Z" | 11 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:heegyu/WizardVicuna2-13b-hf",
"base_model:adapter:heegyu/WizardVicuna2-13b-hf",
"region:us"
] | null | "2025-01-10T17:53:04Z" | ---
library_name: peft
base_model: heegyu/WizardVicuna2-13b-hf
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
accelerate_config:
dynamo_backend: inductor
mixed_precision: bf16
num_machines: 1
num_processes: auto
use_cpu: false
adapter: lora
base_model: heegyu/WizardVicuna2-13b-hf
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 3ad533406285a0ea_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/3ad533406285a0ea_train_data.json
type:
field_instruction: dialogue
field_output: reference
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 16
gradient_checkpointing: true
group_by_length: false
hub_model_id: VERSIL91/7011d4ab-02bf-4cec-ad2f-ae7dc6130702
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lora_target_modules:
- q_proj
- v_proj
lr_scheduler: cosine
max_memory:
0: 70GiB
max_steps: 20
micro_batch_size: 2
mlflow_experiment_name: /tmp/3ad533406285a0ea_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
quantization_config:
llm_int8_enable_fp32_cpu_offload: true
load_in_8bit: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
torch_compile: true
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 7011d4ab-02bf-4cec-ad2f-ae7dc6130702
This model is a fine-tuned version of [heegyu/WizardVicuna2-13b-hf](https://huggingface.co/heegyu/WizardVicuna2-13b-hf) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8292
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.9597 | 0.0012 | 1 | 0.9392 |
| 0.9621 | 0.0058 | 5 | 0.9288 |
| 0.8401 | 0.0115 | 10 | 0.8883 |
| 0.8373 | 0.0173 | 15 | 0.8374 |
| 0.811 | 0.0231 | 20 | 0.8292 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
pristinawang/adv-ssm-hw1-full-full-1726083079 | pristinawang | "2024-09-11T20:18:28Z" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-09-11T20:13:48Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
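In the meantime, a hedged sketch based on the repo tags (a RoBERTa text-classification checkpoint); the label meanings are undocumented:
```python
from transformers import pipeline
clf = pipeline(
    "text-classification",
    model="pristinawang/adv-ssm-hw1-full-full-1726083079",
)
print(clf("An example sentence to classify."))
```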
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nat-hunt/8825b9cc-24bf-4c07-83ff-2308f051d111 | nat-hunt | "2025-01-23T07:28:08Z" | 6 | 0 | peft | [
"peft",
"safetensors",
"dbrx",
"axolotl",
"generated_from_trainer",
"base_model:katuni4ka/tiny-random-dbrx",
"base_model:adapter:katuni4ka/tiny-random-dbrx",
"region:us"
] | null | "2025-01-23T07:25:25Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-dbrx
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8825b9cc-24bf-4c07-83ff-2308f051d111
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-dbrx
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- c540333983914c07_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/c540333983914c07_train_data.json
type:
field_instruction: sent2
field_output: ending0
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: nat-hunt/8825b9cc-24bf-4c07-83ff-2308f051d111
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/c540333983914c07_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: b421b89d-5e84-408f-b1f2-add043a89b69
wandb_project: Birthday-SN56-4-Gradients-On-Demand
wandb_run: your_name
wandb_runid: b421b89d-5e84-408f-b1f2-add043a89b69
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8825b9cc-24bf-4c07-83ff-2308f051d111
This model is a fine-tuned version of [katuni4ka/tiny-random-dbrx](https://huggingface.co/katuni4ka/tiny-random-dbrx) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 11.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, via bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 46.0 | 0.0001 | 1 | 11.5 |
| 46.0 | 0.0002 | 3 | 11.5 |
| 46.0 | 0.0004 | 6 | 11.5 |
| 46.0 | 0.0006 | 9 | 11.5 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
MStarn/q_Taxi-V3 | MStarn | "2023-08-26T05:27:00Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-08-26T05:26:57Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q_Taxi-V3-Third
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the helper from the Hugging Face Deep RL course notebooks;
# it downloads and unpickles the saved Q-table dictionary.
model = load_from_hub(repo_id="MStarn/q_Taxi-V3-Third", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
viictorfe/angels | viictorfe | "2023-11-27T21:37:12Z" | 0 | 0 | asteroid | [
"asteroid",
"legal",
"audio-to-audio",
"as",
"dataset:HuggingFaceH4/no_robots",
"license:apache-2.0",
"region:us"
] | audio-to-audio | "2023-11-27T21:35:17Z" | ---
license: apache-2.0
datasets:
- HuggingFaceH4/no_robots
language:
- as
metrics:
- bleurt
library_name: asteroid
pipeline_tag: audio-to-audio
tags:
- legal
--- |
Sneka/test | Sneka | "2023-08-23T06:04:47Z" | 1 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-22T12:45:13Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
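For context, these settings map onto a `transformers` `BitsAndBytesConfig` roughly as follows (a sketch, not the original training code):
```python
import torch
from transformers import BitsAndBytesConfig
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```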
### Framework versions
- PEFT 0.6.0.dev0
|
mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF | mradermacher | "2024-11-02T03:29:56Z" | 66 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:mikeee/openbuddy-zephyr-7b-v14.1-sharded",
"base_model:quantized:mikeee/openbuddy-zephyr-7b-v14.1-sharded",
"endpoints_compatible",
"region:us"
] | null | "2024-11-02T03:16:51Z" | ---
base_model: mikeee/openbuddy-zephyr-7b-v14.1-sharded
language:
- en
library_name: transformers
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/mikeee/openbuddy-zephyr-7b-v14.1-sharded
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
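As one option, a hedged sketch with a recent llama-cpp-python, which can fetch a quant straight from this repo (the filename is taken from the table below; any GGUF runtime works):
```python
from llama_cpp import Llama
llm = Llama.from_pretrained(
    repo_id="mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF",
    filename="openbuddy-zephyr-7b-v14.1-sharded.Q4_K_M.gguf",
)
print(llm("Hello, ", max_tokens=32))
```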
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q6_K.gguf) | Q6_K | 6.1 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/openbuddy-zephyr-7b-v14.1-sharded-GGUF/resolve/main/openbuddy-zephyr-7b-v14.1-sharded.f16.gguf) | f16 | 14.7 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
kaleem11/Enlighten_Instruct | kaleem11 | "2024-05-22T07:28:36Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.2",
"region:us"
] | null | "2024-05-22T07:28:06Z" | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
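In the meantime, a hedged sketch inferred from the metadata (a PEFT adapter on Mistral-7B-Instruct-v0.2); `merge_and_unload` is optional and folds the LoRA weights into the base model:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM
base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-Instruct-v0.2", device_map="auto"
)
model = PeftModel.from_pretrained(base, "kaleem11/Enlighten_Instruct")
model = model.merge_and_unload()  # optional: yields a standalone merged model
```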
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.11.1 |