Datasets:

modelId (string, 5–124 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–223M) | likes (int64, 0–8.08k) | library_name (346 classes) | tags (sequence of 1–4.05k) | pipeline_tag (53 classes) | createdAt (unknown) | card (string, 11–1.01M chars)
---|---|---|---|---|---|---|---|---|---|
duyntnet/Qwen2.5-3B-Instruct-imatrix-GGUF | duyntnet | "2024-09-20T22:49:32" | 37 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Qwen2.5-3B-Instruct",
"text-generation",
"en",
"license:other",
"region:us",
"conversational"
] | text-generation | "2024-09-20T21:13:14" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Qwen2.5-3B-Instruct
---
Quantizations of https://huggingface.co/Qwen/Qwen2.5-3B-Instruct
### Inference Clients/UIs
* [llama.cpp](https://github.com/ggerganov/llama.cpp)
* [KoboldCPP](https://github.com/LostRuins/koboldcpp)
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
* [ollama](https://github.com/ollama/ollama)
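
For a minimal programmatic route, the sketch below runs one of these quants locally via the `llama-cpp-python` bindings. The quant filename is an assumption; substitute whichever GGUF file you downloaded from this repo.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Assumed filename: replace with the GGUF quant you actually downloaded.
llm = Llama(model_path="Qwen2.5-3B-Instruct-Q4_K_M.gguf", n_ctx=4096)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
        {"role": "user", "content": "Give me a short introduction to large language models."},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```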
---
# From original readme
Qwen2.5 is the latest series of Qwen large language models. For Qwen2.5, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters. Qwen2.5 brings the following improvements upon Qwen2:
- Significantly **more knowledge** and greatly improved capabilities in **coding** and **mathematics**, thanks to our specialized expert models in these domains.
- Significant improvements in **instruction following**, **generating long texts** (over 8K tokens), **understanding structured data** (e.g., tables), and **generating structured outputs**, especially JSON. **More resilient to the diversity of system prompts**, enhancing role-play implementation and condition-setting for chatbots.
- **Long-context support** up to 128K tokens, with generation up to 8K tokens.
- **Multilingual support** for over 29 languages, including Chinese, English, French, Spanish, Portuguese, German, Italian, Russian, Japanese, Korean, Vietnamese, Thai, Arabic, and more.
**This repo contains the instruction-tuned 3B Qwen2.5 model**, which has the following features:
- Type: Causal Language Models
- Training Stage: Pretraining & Post-training
- Architecture: transformers with RoPE, SwiGLU, RMSNorm, Attention QKV bias and tied word embeddings
- Number of Parameters: 3.09B
- Number of Parameters (Non-Embedding): 2.77B
- Number of Layers: 36
- Number of Attention Heads (GQA): 16 for Q and 2 for KV
- Context Length: full 32,768 tokens, with generation up to 8,192 tokens
## Requirements
The code for Qwen2.5 is included in the latest Hugging Face `transformers`, and we advise you to use the latest version of `transformers`.
With `transformers<4.37.0`, you will encounter the following error:
```
KeyError: 'qwen2'
```
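As a quick sanity check before loading the model, you can verify the installed version programmatically (a minimal sketch; `packaging` ships as a `transformers` dependency):

```python
import transformers
from packaging import version

# Qwen2 support landed in transformers 4.37.0; older versions raise KeyError: 'qwen2'.
if version.parse(transformers.__version__) < version.parse("4.37.0"):
    raise RuntimeError(
        f"transformers {transformers.__version__} is too old; run `pip install -U transformers`"
    )
```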
## Quickstart
The snippet below uses `apply_chat_template` to show how to load the tokenizer and model and how to generate content.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "Qwen/Qwen2.5-3B-Instruct"
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype="auto",
device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Give me a short introduction to large language models."
messages = [
{"role": "system", "content": "You are Qwen, created by Alibaba Cloud. You are a helpful assistant."},
{"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
messages,
tokenize=False,
add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
**model_inputs,
max_new_tokens=512
)
generated_ids = [
output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
``` |
QuantFactory/Gemma-Radiation-RP-9B-GGUF | QuantFactory | "2024-07-31T06:35:14" | 44 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"arxiv:2403.19522",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-07-31T05:36:54" |
---
library_name: transformers
tags:
- mergekit
- merge
---
![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)
# QuantFactory/Gemma-Radiation-RP-9B-GGUF
This is a quantized version of [Casual-Autopsy/Gemma-Radiation-RP-9B](https://huggingface.co/Casual-Autopsy/Gemma-Radiation-RP-9B) created using llama.cpp.
# Original Model Card
<img src="https://huggingface.co/Casual-Autopsy/Gemma-Radiation-RP-9B/resolve/main/Gemma_Rad.png" style="display: block; margin: auto;">
ToDo: Fill the card with more info.
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
It's a bit of a test merge to dip my toes into merging Gemma 2.
Sadly, however, it seems like 8B is my PC's tolerable limit before performance becomes painstakingly and infuriatingly slow, so after this I might have to sit out on Gemma 2.
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method using [Casual-Autopsy/Gemma-Rad-RP](https://huggingface.co/Casual-Autopsy/Gemma-Rad-RP) as a base.
### Models Merged
The following models were included in the merge:
* [Casual-Autopsy/Gemma-Rad-Uncen](https://huggingface.co/Casual-Autopsy/Gemma-Rad-Uncen)
* [Casual-Autopsy/Gemma-Rad-IQ](https://huggingface.co/Casual-Autopsy/Gemma-Rad-IQ)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: crestf411/gemma2-9B-sunfall-v0.5.2
- model: crestf411/gemma2-9B-daybreak-v0.5
parameters:
density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
weight: [0.5, 0.13, 0.5, 0.13, 0.3]
- model: crestf411/gemstone-9b
parameters:
density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
weight: [0.13, 0.5, 0.13, 0.5, 0.13]
merge_method: dare_ties
base_model: crestf411/gemma2-9B-sunfall-v0.5.2
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
```yaml
models:
- model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
- model: nldemo/Gemma-9B-Summarizer-QLoRA
parameters:
density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
weight: [0.0625, 0.25, 0.0625, 0.25, 0.0625]
- model: SillyTilly/google-gemma-2-9b-it+rbojja/gemma2-9b-intent-lora-adapter
parameters:
density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
weight: [0.0625, 0.25, 0.0625, 0.25, 0.0625]
- model: nbeerbower/gemma2-gutenberg-9B
parameters:
density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
weight: [0.25, 0.0625, 0.25, 0.0625, 0.25]
merge_method: ties
base_model: UCLA-AGI/Gemma-2-9B-It-SPPO-Iter3
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
```yaml
models:
- model: IlyaGusev/gemma-2-9b-it-abliterated
- model: TheDrummer/Smegmma-9B-v1
parameters:
density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
weight: [0.5, 0.13, 0.5, 0.13, 0.3]
- model: TheDrummer/Tiger-Gemma-9B-v1
parameters:
density: [0.7, 0.5, 0.3, 0.35, 0.65, 0.35, 0.75, 0.25, 0.75, 0.35, 0.65, 0.35, 0.3, 0.5, 0.7]
weight: [0.13, 0.5, 0.13, 0.5, 0.13]
merge_method: dare_ties
base_model: IlyaGusev/gemma-2-9b-it-abliterated
parameters:
normalize: false
int8_mask: true
dtype: bfloat16
```
```yaml
models:
- model: Casual-Autopsy/Gemma-Rad-RP
- model: Casual-Autopsy/Gemma-Rad-Uncen
- model: Casual-Autopsy/Gemma-Rad-IQ
merge_method: model_stock
base_model: Casual-Autopsy/Gemma-Rad-RP
dtype: bfloat16
```
|
mradermacher/ThaliaAlpha-GGUF | mradermacher | "2024-05-06T05:37:18" | 56 | 0 | transformers | [
"transformers",
"gguf",
"mlx",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-03-29T15:54:22" | ---
base_model: N8Programs/ThaliaAlpha
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- mlx
---
## About
static quants of https://huggingface.co/N8Programs/ThaliaAlpha
<!-- provided-files -->
Weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
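
If you prefer to fetch a single quant programmatically rather than through a UI, here is a minimal sketch with `huggingface_hub` (the filename matches the Q4_K_M entry in the table below):

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Downloads one quant file into the local HF cache and returns its path.
path = hf_hub_download(
    repo_id="mradermacher/ThaliaAlpha-GGUF",
    filename="ThaliaAlpha.Q4_K_M.gguf",
)
print(path)
```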
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable to similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q2_K.gguf) | Q2_K | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ3_XS.gguf) | IQ3_XS | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q3_K_S.gguf) | Q3_K_S | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ3_S.gguf) | IQ3_S | 3.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ3_M.gguf) | IQ3_M | 3.5 | |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q3_K_M.gguf) | Q3_K_M | 3.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q3_K_L.gguf) | Q3_K_L | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ4_XS.gguf) | IQ4_XS | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q4_0.gguf) | Q4_0 | 4.4 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q4_K_S.gguf) | Q4_K_S | 4.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.IQ4_NL.gguf) | IQ4_NL | 4.4 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q5_K_S.gguf) | Q5_K_S | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q5_K_M.gguf) | Q5_K_M | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ThaliaAlpha-GGUF/resolve/main/ThaliaAlpha.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):
![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)
And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
seongwoon/Labor-Specter | seongwoon | "2023-03-29T16:30:17" | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2023-03-29T14:33:48" | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
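Since the card mentions clustering and semantic search, here is a small self-contained follow-up sketch that scores pairwise similarity with the library's built-in cosine similarity (keeping the card's `{MODEL_NAME}` placeholder):

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(["This is an example sentence", "Each sentence is converted"])

# Cosine similarity between the two sentence embeddings (values in [-1, 1]).
similarity = util.cos_sim(embeddings[0], embeddings[1])
print(similarity)
```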
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 530 with parameters:
```
{'batch_size': 4, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit() method:
```
{
"epochs": 10,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 530,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
zblaaa/t5-base-finetuned-ner_docred_30 | zblaaa | "2023-07-12T17:30:08" | 105 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-07-12T11:00:35" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: t5-base-finetuned-ner_docred_30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-finetuned-ner_docred_30
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1900
- Rouge1: 6.698
- Rouge2: 5.261
- Rougel: 6.6835
- Rougelsum: 6.6818
- Gen Len: 20.0
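
The card does not document the expected input or output format, but since the model is exposed under the `text2text-generation` pipeline, a minimal hedged loading sketch looks like this:

```python
from transformers import pipeline

# Hypothetical usage: the prompt format this NER fine-tune expects is not documented in the card.
ner = pipeline("text2text-generation", model="zblaaa/t5-base-finetuned-ner_docred_30")
print(ner("Barack Obama was born in Hawaii .", max_length=20))
```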
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 125 | 0.5156 | 6.5406 | 4.9855 | 6.4905 | 6.494 | 20.0 |
| No log | 2.0 | 250 | 0.3949 | 6.5113 | 4.9122 | 6.4534 | 6.4453 | 20.0 |
| No log | 3.0 | 375 | 0.3280 | 6.5165 | 4.9088 | 6.4537 | 6.451 | 20.0 |
| 0.7311 | 4.0 | 500 | 0.2949 | 6.424 | 4.7298 | 6.3672 | 6.3627 | 20.0 |
| 0.7311 | 5.0 | 625 | 0.2764 | 6.6189 | 5.1219 | 6.5651 | 6.5672 | 20.0 |
| 0.7311 | 6.0 | 750 | 0.2633 | 6.628 | 5.1335 | 6.5664 | 6.5721 | 20.0 |
| 0.7311 | 7.0 | 875 | 0.2547 | 6.5591 | 4.9979 | 6.5075 | 6.5057 | 20.0 |
| 0.3331 | 8.0 | 1000 | 0.2482 | 6.6612 | 5.1918 | 6.5987 | 6.6068 | 20.0 |
| 0.3331 | 9.0 | 1125 | 0.2413 | 6.6093 | 5.0954 | 6.5515 | 6.5553 | 20.0 |
| 0.3331 | 10.0 | 1250 | 0.2357 | 6.6264 | 5.1201 | 6.5681 | 6.5723 | 20.0 |
| 0.3331 | 11.0 | 1375 | 0.2300 | 6.6487 | 5.1525 | 6.6176 | 6.6177 | 20.0 |
| 0.2788 | 12.0 | 1500 | 0.2226 | 6.6858 | 5.2325 | 6.6745 | 6.6762 | 20.0 |
| 0.2788 | 13.0 | 1625 | 0.2166 | 6.6495 | 5.1531 | 6.6378 | 6.6377 | 20.0 |
| 0.2788 | 14.0 | 1750 | 0.2108 | 6.6807 | 5.2212 | 6.6653 | 6.6664 | 20.0 |
| 0.2788 | 15.0 | 1875 | 0.2068 | 6.6811 | 5.2248 | 6.6699 | 6.6697 | 20.0 |
| 0.2435 | 16.0 | 2000 | 0.2030 | 6.6701 | 5.2077 | 6.652 | 6.6492 | 20.0 |
| 0.2435 | 17.0 | 2125 | 0.1997 | 6.6845 | 5.2334 | 6.6647 | 6.6624 | 20.0 |
| 0.2435 | 18.0 | 2250 | 0.1978 | 6.6762 | 5.2202 | 6.6571 | 6.6559 | 20.0 |
| 0.2435 | 19.0 | 2375 | 0.1964 | 6.684 | 5.2358 | 6.6695 | 6.6683 | 20.0 |
| 0.2188 | 20.0 | 2500 | 0.1957 | 6.6882 | 5.2426 | 6.675 | 6.6735 | 20.0 |
| 0.2188 | 21.0 | 2625 | 0.1942 | 6.6882 | 5.2426 | 6.675 | 6.6735 | 20.0 |
| 0.2188 | 22.0 | 2750 | 0.1932 | 6.6935 | 5.2513 | 6.6784 | 6.6762 | 20.0 |
| 0.2188 | 23.0 | 2875 | 0.1924 | 6.6935 | 5.2513 | 6.6784 | 6.6762 | 20.0 |
| 0.2052 | 24.0 | 3000 | 0.1918 | 6.6882 | 5.2426 | 6.675 | 6.6735 | 20.0 |
| 0.2052 | 25.0 | 3125 | 0.1915 | 6.6935 | 5.2513 | 6.6784 | 6.6762 | 20.0 |
| 0.2052 | 26.0 | 3250 | 0.1908 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
| 0.2052 | 27.0 | 3375 | 0.1905 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
| 0.1977 | 28.0 | 3500 | 0.1901 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
| 0.1977 | 29.0 | 3625 | 0.1900 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
| 0.1977 | 30.0 | 3750 | 0.1900 | 6.698 | 5.261 | 6.6835 | 6.6818 | 20.0 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.1.0.dev20230611+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
smp-hub/segformer-b1-512x512-ade-160k | smp-hub | "2025-01-11T14:00:38" | 14 | 0 | segmentation-models-pytorch | [
"segmentation-models-pytorch",
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"semantic-segmentation",
"pytorch",
"segformer",
"image-segmentation",
"license:other",
"region:us"
] | image-segmentation | "2024-11-29T16:25:32" | ---
library_name: segmentation-models-pytorch
license: other
pipeline_tag: image-segmentation
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
- segmentation-models-pytorch
- semantic-segmentation
- pytorch
- segformer
languages:
- python
---
# Segformer Model Card
Table of Contents:
- [Load trained model](#load-trained-model)
- [Model init parameters](#model-init-parameters)
- [Model metrics](#model-metrics)
- [Dataset](#dataset)
## Load trained model
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/qubvel/segmentation_models.pytorch/blob/main/examples/segformer_inference_pretrained.ipynb)
1. Install requirements.
```bash
pip install -U segmentation_models_pytorch albumentations
```
2. Run inference.
```python
import torch
import requests
import numpy as np
import albumentations as A
import segmentation_models_pytorch as smp
from PIL import Image
device = "cuda" if torch.cuda.is_available() else "cpu"
# Load pretrained model and preprocessing function
checkpoint = "smp-hub/segformer-b1-512x512-ade-160k"
model = smp.from_pretrained(checkpoint).eval().to(device)
preprocessing = A.Compose.from_pretrained(checkpoint)
# Load image
url = "https://huggingface.co/datasets/hf-internal-testing/fixtures_ade20k/resolve/main/ADE_val_00000001.jpg"
image = Image.open(requests.get(url, stream=True).raw)
# Preprocess image
np_image = np.array(image)
normalized_image = preprocessing(image=np_image)["image"]
input_tensor = torch.as_tensor(normalized_image)
input_tensor = input_tensor.permute(2, 0, 1).unsqueeze(0) # HWC -> BCHW
input_tensor = input_tensor.to(device)
# Perform inference
with torch.no_grad():
output_mask = model(input_tensor)
# Postprocess mask
mask = torch.nn.functional.interpolate(
output_mask, size=(image.height, image.width), mode="bilinear", align_corners=False
)
mask = mask.argmax(1).cpu().numpy() # argmax over predicted classes (channels dim)
```
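
As a follow-up, the class-index mask can be rendered for a quick visual check (a small sketch; `mask` comes from the inference snippet above):

```python
from PIL import Image

# "mask" has shape (1, H, W) of class indices; 150 ADE20K classes fit in uint8.
# Apply a color palette instead of grayscale for nicer visualization if desired.
Image.fromarray(mask[0].astype("uint8")).save("segmentation_mask.png")
```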
## Model init parameters
```python
model_init_params = {
"encoder_name": "mit_b1",
"encoder_depth": 5,
"encoder_weights": None,
"decoder_segmentation_channels": 256,
"in_channels": 3,
"classes": 150,
"activation": None,
"aux_params": None
}
```
## Dataset
Dataset name: [ADE20K](https://ade20k.csail.mit.edu/)
## More Information
- Library: https://github.com/qubvel/segmentation_models.pytorch
- Docs: https://smp.readthedocs.io/en/latest/
- License: https://github.com/NVlabs/SegFormer/blob/master/LICENSE
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) |
satvikahuja/mixer_on_off_new_6e | satvikahuja | "2025-01-16T15:20:53" | 10 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | "2025-01-16T15:20:39" | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
Hoax0930/kyoto_marian_mod_1 | Hoax0930 | "2022-09-22T14:38:34" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2022-09-22T12:45:50" | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: kyoto_marian_mod_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kyoto_marian_mod_1
This model is a fine-tuned version of [Hoax0930/kyoto_marian_mod](https://huggingface.co/Hoax0930/kyoto_marian_mod) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6973
- Bleu: 19.0580
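
The card does not state the language pair, but the model is tagged for the `translation` pipeline, so a minimal hedged loading sketch is:

```python
from transformers import pipeline

# Hypothetical usage: the source/target languages are not documented in this card;
# Japanese→English is assumed here only because of the Kyoto corpus naming.
translator = pipeline("translation", model="Hoax0930/kyoto_marian_mod_1")
print(translator("訳したい文をここに入れます。"))
```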
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.5.1
- Tokenizers 0.12.1
|
LandCruiser/Karnataka_10 | LandCruiser | "2025-02-13T10:01:51" | 0 | 0 | null | [
"onnx",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] | any-to-any | "2025-02-13T09:56:48" | ---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
Kudod/mdeberta-base-ner-ghtk-ai-fluent-21-label-new-data-3090-8Nov-1 | Kudod | "2024-11-08T02:54:18" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"token-classification",
"generated_from_trainer",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2024-11-08T01:24:35" | ---
library_name: transformers
license: mit
base_model: microsoft/mdeberta-v3-base
tags:
- generated_from_trainer
model-index:
- name: mdeberta-base-ner-ghtk-ai-fluent-21-label-new-data-3090-8Nov-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-base-ner-ghtk-ai-fluent-21-label-new-data-3090-8Nov-1
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2248
- Ho: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6}
- Hoảng thời gian: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
- Háng cụ thể: {'precision': 0.7, 'recall': 0.875, 'f1': 0.7777777777777777, 'number': 16}
- Háng trừu tượng: {'precision': 0.5, 'recall': 0.4, 'f1': 0.4444444444444445, 'number': 10}
- Hông tin ctt: {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 5}
- Hụ cấp: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
- Hứ: {'precision': 0.5833333333333334, 'recall': 0.7777777777777778, 'f1': 0.6666666666666666, 'number': 9}
- Iấy tờ: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
- Iền cụ thể: {'precision': 0.4857142857142857, 'recall': 0.5483870967741935, 'f1': 0.5151515151515151, 'number': 31}
- Iền trừu tượng: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5}
- Iờ: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3}
- Ã số thuế: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2}
- Ã đơn: {'precision': 0.6818181818181818, 'recall': 0.6818181818181818, 'f1': 0.6818181818181818, 'number': 22}
- Ình thức làm việc: {'precision': 0.3333333333333333, 'recall': 0.375, 'f1': 0.35294117647058826, 'number': 8}
- Ông: {'precision': 0.6666666666666666, 'recall': 0.7804878048780488, 'f1': 0.7191011235955055, 'number': 82}
- Ăm cụ thể: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2}
- Ương: {'precision': 0.7, 'recall': 0.7777777777777778, 'f1': 0.7368421052631577, 'number': 54}
- Ị trí: {'precision': 0.7758620689655172, 'recall': 0.9183673469387755, 'f1': 0.8411214953271028, 'number': 49}
- Ố công: {'precision': 0.9221311475409836, 'recall': 0.9868421052631579, 'f1': 0.9533898305084745, 'number': 228}
- Ố giờ: {'precision': 0.9371428571428572, 'recall': 0.8677248677248677, 'f1': 0.9010989010989011, 'number': 378}
- Ố điểm: {'precision': 0.8153846153846154, 'recall': 0.8548387096774194, 'f1': 0.8346456692913387, 'number': 62}
- Ố đơn: {'precision': 0.5294117647058824, 'recall': 0.6666666666666666, 'f1': 0.5901639344262295, 'number': 27}
- Ợt: {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1}
- Ỷ lệ: {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11}
- Overall Precision: 0.8194
- Overall Recall: 0.8363
- Overall F1: 0.8278
- Overall Accuracy: 0.9526
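
The card gives no usage example, so here is a minimal hedged sketch assuming standard `transformers` token-classification usage:

```python
from transformers import pipeline

# aggregation_strategy="simple" groups word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="Kudod/mdeberta-base-ner-ghtk-ai-fluent-21-label-new-data-3090-8Nov-1",
    aggregation_strategy="simple",
)
print(ner("Nhân viên làm việc 8 giờ mỗi ngày với mức lương 10 triệu đồng."))
```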
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ho | Hoảng thời gian | Háng cụ thể | Háng trừu tượng | Hông tin ctt | Hụ cấp | Hứ | Iấy tờ | Iền cụ thể | Iền trừu tượng | Iờ | Ã số thuế | Ã đơn | Ình thức làm việc | Ông | Ăm cụ thể | Ương | Ị trí | Ố công | Ố giờ | Ố điểm | Ố đơn | Ợt | Ỷ lệ | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------------------------------------------------------:|:---------------------------------------------------------:|:-------------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:---------------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:---------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:---------------------------------------------------------:|:------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:----------------------------------------------------------------------------------------------------------:|:---------------------------------------------------------:|:-------------------------------------------------------------------------------------------------------:|:-----------------:|:--------------:|:----------:|:----------------:|
| No log | 1.0 | 147 | 0.3551 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.2, 'recall': 0.25806451612903225, 'f1': 0.22535211267605634, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.22727272727272727, 'recall': 0.45454545454545453, 'f1': 0.30303030303030304, 'number': 22} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | {'precision': 0.44761904761904764, 'recall': 0.573170731707317, 'f1': 0.5026737967914437, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.3333333333333333, 'recall': 0.2037037037037037, 'f1': 0.25287356321839083, 'number': 54} | {'precision': 0.6428571428571429, 'recall': 0.7346938775510204, 'f1': 0.6857142857142857, 'number': 49} | {'precision': 0.608, 'recall': 1.0, 'f1': 0.7562189054726367, 'number': 228} | {'precision': 0.8670076726342711, 'recall': 0.8968253968253969, 'f1': 0.881664499349805, 'number': 378} | {'precision': 0.9230769230769231, 'recall': 0.1935483870967742, 'f1': 0.31999999999999995, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | 0.6537 | 0.6775 | 0.6654 | 0.9130 |
| No log | 2.0 | 294 | 0.2564 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5909090909090909, 'recall': 0.8125, 'f1': 0.6842105263157896, 'number': 16} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 10} | {'precision': 0.2, 'recall': 0.2, 'f1': 0.20000000000000004, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5, 'recall': 0.3333333333333333, 'f1': 0.4, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.36363636363636365, 'recall': 0.3870967741935484, 'f1': 0.37500000000000006, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.48, 'recall': 0.5454545454545454, 'f1': 0.5106382978723404, 'number': 22} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 8} | {'precision': 0.5217391304347826, 'recall': 0.5853658536585366, 'f1': 0.5517241379310345, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.5068493150684932, 'recall': 0.6851851851851852, 'f1': 0.5826771653543307, 'number': 54} | {'precision': 0.65, 'recall': 0.7959183673469388, 'f1': 0.7155963302752293, 'number': 49} | {'precision': 0.7018633540372671, 'recall': 0.9912280701754386, 'f1': 0.8218181818181817, 'number': 228} | {'precision': 0.923943661971831, 'recall': 0.8677248677248677, 'f1': 0.8949522510231923, 'number': 378} | {'precision': 0.6617647058823529, 'recall': 0.7258064516129032, 'f1': 0.6923076923076922, 'number': 62} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 11} | 0.7201 | 0.7490 | 0.7343 | 0.9292 |
| No log | 3.0 | 441 | 0.2272 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.3448275862068966, 'recall': 0.625, 'f1': 0.4444444444444445, 'number': 16} | {'precision': 0.125, 'recall': 0.1, 'f1': 0.11111111111111112, 'number': 10} | {'precision': 0.7142857142857143, 'recall': 1.0, 'f1': 0.8333333333333333, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.3333333333333333, 'recall': 0.6666666666666666, 'f1': 0.4444444444444444, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.375, 'recall': 0.3870967741935484, 'f1': 0.38095238095238093, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 1.0, 'recall': 0.5, 'f1': 0.6666666666666666, 'number': 2} | {'precision': 0.5416666666666666, 'recall': 0.5909090909090909, 'f1': 0.5652173913043478, 'number': 22} | {'precision': 0.4, 'recall': 0.25, 'f1': 0.3076923076923077, 'number': 8} | {'precision': 0.5773195876288659, 'recall': 0.6829268292682927, 'f1': 0.6256983240223464, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.6086956521739131, 'recall': 0.7777777777777778, 'f1': 0.6829268292682927, 'number': 54} | {'precision': 0.6060606060606061, 'recall': 0.8163265306122449, 'f1': 0.6956521739130436, 'number': 49} | {'precision': 0.9173553719008265, 'recall': 0.9736842105263158, 'f1': 0.9446808510638298, 'number': 228} | {'precision': 0.9262536873156342, 'recall': 0.8306878306878307, 'f1': 0.8758716875871688, 'number': 378} | {'precision': 0.7213114754098361, 'recall': 0.7096774193548387, 'f1': 0.7154471544715446, 'number': 62} | {'precision': 0.45454545454545453, 'recall': 0.37037037037037035, 'f1': 0.40816326530612246, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 1.0, 'recall': 0.2727272727272727, 'f1': 0.42857142857142855, 'number': 11} | 0.7634 | 0.7657 | 0.7646 | 0.9398 |
| 0.3496 | 4.0 | 588 | 0.2105 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.6190476190476191, 'recall': 0.8125, 'f1': 0.7027027027027026, 'number': 16} | {'precision': 0.2727272727272727, 'recall': 0.3, 'f1': 0.28571428571428564, 'number': 10} | {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.375, 'recall': 0.6666666666666666, 'f1': 0.4800000000000001, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.3902439024390244, 'recall': 0.5161290322580645, 'f1': 0.4444444444444444, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 0.5, 'recall': 0.5454545454545454, 'f1': 0.5217391304347826, 'number': 22} | {'precision': 0.4444444444444444, 'recall': 0.5, 'f1': 0.47058823529411764, 'number': 8} | {'precision': 0.6021505376344086, 'recall': 0.6829268292682927, 'f1': 0.64, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.6779661016949152, 'recall': 0.7407407407407407, 'f1': 0.7079646017699114, 'number': 54} | {'precision': 0.6949152542372882, 'recall': 0.8367346938775511, 'f1': 0.7592592592592592, 'number': 49} | {'precision': 0.9142857142857143, 'recall': 0.9824561403508771, 'f1': 0.9471458773784355, 'number': 228} | {'precision': 0.9340659340659341, 'recall': 0.8994708994708994, 'f1': 0.9164420485175203, 'number': 378} | {'precision': 0.7543859649122807, 'recall': 0.6935483870967742, 'f1': 0.7226890756302521, 'number': 62} | {'precision': 0.4375, 'recall': 0.5185185185185185, 'f1': 0.47457627118644063, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 1.0, 'recall': 0.7272727272727273, 'f1': 0.8421052631578948, 'number': 11} | 0.7899 | 0.8108 | 0.8002 | 0.9485 |
| 0.3496 | 5.0 | 735 | 0.2023 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.6842105263157895, 'recall': 0.8125, 'f1': 0.742857142857143, 'number': 16} | {'precision': 0.4, 'recall': 0.4, 'f1': 0.4000000000000001, 'number': 10} | {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5, 'recall': 0.7777777777777778, 'f1': 0.6086956521739131, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5454545454545454, 'recall': 0.3870967741935484, 'f1': 0.45283018867924524, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 0.6190476190476191, 'recall': 0.5909090909090909, 'f1': 0.6046511627906977, 'number': 22} | {'precision': 0.3333333333333333, 'recall': 0.375, 'f1': 0.35294117647058826, 'number': 8} | {'precision': 0.631578947368421, 'recall': 0.7317073170731707, 'f1': 0.6779661016949152, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.6721311475409836, 'recall': 0.7592592592592593, 'f1': 0.7130434782608696, 'number': 54} | {'precision': 0.7413793103448276, 'recall': 0.8775510204081632, 'f1': 0.8037383177570093, 'number': 49} | {'precision': 0.9259259259259259, 'recall': 0.9868421052631579, 'f1': 0.9554140127388535, 'number': 228} | {'precision': 0.9166666666666666, 'recall': 0.9021164021164021, 'f1': 0.9093333333333332, 'number': 378} | {'precision': 0.8793103448275862, 'recall': 0.8225806451612904, 'f1': 0.8500000000000001, 'number': 62} | {'precision': 0.4857142857142857, 'recall': 0.6296296296296297, 'f1': 0.5483870967741936, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | 0.8185 | 0.8314 | 0.8249 | 0.9485 |
| 0.3496 | 6.0 | 882 | 0.2123 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.6666666666666666, 'recall': 0.875, 'f1': 0.7567567567567567, 'number': 16} | {'precision': 0.4, 'recall': 0.4, 'f1': 0.4000000000000001, 'number': 10} | {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5833333333333334, 'recall': 0.7777777777777778, 'f1': 0.6666666666666666, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.43243243243243246, 'recall': 0.5161290322580645, 'f1': 0.47058823529411764, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 0.5909090909090909, 'recall': 0.5909090909090909, 'f1': 0.5909090909090909, 'number': 22} | {'precision': 0.18181818181818182, 'recall': 0.25, 'f1': 0.2105263157894737, 'number': 8} | {'precision': 0.6041666666666666, 'recall': 0.7073170731707317, 'f1': 0.6516853932584269, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7586206896551724, 'recall': 0.8148148148148148, 'f1': 0.7857142857142857, 'number': 54} | {'precision': 0.7321428571428571, 'recall': 0.8367346938775511, 'f1': 0.7809523809523811, 'number': 49} | {'precision': 0.9364406779661016, 'recall': 0.9692982456140351, 'f1': 0.9525862068965517, 'number': 228} | {'precision': 0.9319526627218935, 'recall': 0.8333333333333334, 'f1': 0.8798882681564246, 'number': 378} | {'precision': 0.828125, 'recall': 0.8548387096774194, 'f1': 0.8412698412698412, 'number': 62} | {'precision': 0.4722222222222222, 'recall': 0.6296296296296297, 'f1': 0.5396825396825397, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | 0.8092 | 0.8069 | 0.8081 | 0.9488 |
| 0.1106 | 7.0 | 1029 | 0.2214 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.7, 'recall': 0.875, 'f1': 0.7777777777777777, 'number': 16} | {'precision': 0.5, 'recall': 0.4, 'f1': 0.4444444444444445, 'number': 10} | {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5833333333333334, 'recall': 0.7777777777777778, 'f1': 0.6666666666666666, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.53125, 'recall': 0.5483870967741935, 'f1': 0.5396825396825397, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 0.625, 'recall': 0.6818181818181818, 'f1': 0.6521739130434783, 'number': 22} | {'precision': 0.36363636363636365, 'recall': 0.5, 'f1': 0.4210526315789474, 'number': 8} | {'precision': 0.6521739130434783, 'recall': 0.7317073170731707, 'f1': 0.6896551724137931, 'number': 82} | {'precision': 0.5, 'recall': 0.5, 'f1': 0.5, 'number': 2} | {'precision': 0.8148148148148148, 'recall': 0.8148148148148148, 'f1': 0.8148148148148148, 'number': 54} | {'precision': 0.75, 'recall': 0.9183673469387755, 'f1': 0.8256880733944955, 'number': 49} | {'precision': 0.9324894514767933, 'recall': 0.9692982456140351, 'f1': 0.9505376344086022, 'number': 228} | {'precision': 0.9301675977653632, 'recall': 0.8809523809523809, 'f1': 0.9048913043478262, 'number': 378} | {'precision': 0.8, 'recall': 0.8387096774193549, 'f1': 0.8188976377952757, 'number': 62} | {'precision': 0.5, 'recall': 0.7037037037037037, 'f1': 0.5846153846153846, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.9090909090909091, 'recall': 0.9090909090909091, 'f1': 0.9090909090909091, 'number': 11} | 0.8226 | 0.8363 | 0.8294 | 0.9505 |
| 0.1106 | 8.0 | 1176 | 0.2258 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.6190476190476191, 'recall': 0.8125, 'f1': 0.7027027027027026, 'number': 16} | {'precision': 0.5555555555555556, 'recall': 0.5, 'f1': 0.5263157894736842, 'number': 10} | {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 5} | {'precision': 0.25, 'recall': 0.3333333333333333, 'f1': 0.28571428571428575, 'number': 3} | {'precision': 0.7, 'recall': 0.7777777777777778, 'f1': 0.7368421052631577, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5483870967741935, 'recall': 0.5483870967741935, 'f1': 0.5483870967741935, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 0.625, 'recall': 0.6818181818181818, 'f1': 0.6521739130434783, 'number': 22} | {'precision': 0.3, 'recall': 0.375, 'f1': 0.33333333333333326, 'number': 8} | {'precision': 0.65625, 'recall': 0.7682926829268293, 'f1': 0.7078651685393258, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.75, 'recall': 0.7777777777777778, 'f1': 0.7636363636363638, 'number': 54} | {'precision': 0.8076923076923077, 'recall': 0.8571428571428571, 'f1': 0.8316831683168318, 'number': 49} | {'precision': 0.9186991869918699, 'recall': 0.9912280701754386, 'f1': 0.9535864978902953, 'number': 228} | {'precision': 0.9384164222873901, 'recall': 0.8465608465608465, 'f1': 0.8901251738525731, 'number': 378} | {'precision': 0.803030303030303, 'recall': 0.8548387096774194, 'f1': 0.828125, 'number': 62} | {'precision': 0.53125, 'recall': 0.6296296296296297, 'f1': 0.5762711864406779, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 0.9166666666666666, 'recall': 1.0, 'f1': 0.9565217391304348, 'number': 11} | 0.8231 | 0.8255 | 0.8243 | 0.9508 |
| 0.1106 | 9.0 | 1323 | 0.2212 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.7, 'recall': 0.875, 'f1': 0.7777777777777777, 'number': 16} | {'precision': 0.5, 'recall': 0.4, 'f1': 0.4444444444444445, 'number': 10} | {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5833333333333334, 'recall': 0.7777777777777778, 'f1': 0.6666666666666666, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5588235294117647, 'recall': 0.6129032258064516, 'f1': 0.5846153846153845, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 0.6521739130434783, 'recall': 0.6818181818181818, 'f1': 0.6666666666666666, 'number': 22} | {'precision': 0.3333333333333333, 'recall': 0.375, 'f1': 0.35294117647058826, 'number': 8} | {'precision': 0.6458333333333334, 'recall': 0.7560975609756098, 'f1': 0.6966292134831461, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7288135593220338, 'recall': 0.7962962962962963, 'f1': 0.7610619469026549, 'number': 54} | {'precision': 0.7627118644067796, 'recall': 0.9183673469387755, 'f1': 0.8333333333333333, 'number': 49} | {'precision': 0.9186991869918699, 'recall': 0.9912280701754386, 'f1': 0.9535864978902953, 'number': 228} | {'precision': 0.9348441926345609, 'recall': 0.873015873015873, 'f1': 0.9028727770177839, 'number': 378} | {'precision': 0.7941176470588235, 'recall': 0.8709677419354839, 'f1': 0.8307692307692308, 'number': 62} | {'precision': 0.5142857142857142, 'recall': 0.6666666666666666, 'f1': 0.5806451612903226, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | 0.8179 | 0.8412 | 0.8294 | 0.9520 |
| 0.1106 | 10.0 | 1470 | 0.2248 | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 6} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.7, 'recall': 0.875, 'f1': 0.7777777777777777, 'number': 16} | {'precision': 0.5, 'recall': 0.4, 'f1': 0.4444444444444445, 'number': 10} | {'precision': 0.8333333333333334, 'recall': 1.0, 'f1': 0.9090909090909091, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.5833333333333334, 'recall': 0.7777777777777778, 'f1': 0.6666666666666666, 'number': 9} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 0.4857142857142857, 'recall': 0.5483870967741935, 'f1': 0.5151515151515151, 'number': 31} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 5} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 3} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 2} | {'precision': 0.6818181818181818, 'recall': 0.6818181818181818, 'f1': 0.6818181818181818, 'number': 22} | {'precision': 0.3333333333333333, 'recall': 0.375, 'f1': 0.35294117647058826, 'number': 8} | {'precision': 0.6666666666666666, 'recall': 0.7804878048780488, 'f1': 0.7191011235955055, 'number': 82} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 2} | {'precision': 0.7, 'recall': 0.7777777777777778, 'f1': 0.7368421052631577, 'number': 54} | {'precision': 0.7758620689655172, 'recall': 0.9183673469387755, 'f1': 0.8411214953271028, 'number': 49} | {'precision': 0.9221311475409836, 'recall': 0.9868421052631579, 'f1': 0.9533898305084745, 'number': 228} | {'precision': 0.9371428571428572, 'recall': 0.8677248677248677, 'f1': 0.9010989010989011, 'number': 378} | {'precision': 0.8153846153846154, 'recall': 0.8548387096774194, 'f1': 0.8346456692913387, 'number': 62} | {'precision': 0.5294117647058824, 'recall': 0.6666666666666666, 'f1': 0.5901639344262295, 'number': 27} | {'precision': 0.0, 'recall': 0.0, 'f1': 0.0, 'number': 1} | {'precision': 1.0, 'recall': 1.0, 'f1': 1.0, 'number': 11} | 0.8194 | 0.8363 | 0.8278 | 0.9526 |
### Framework versions
- Transformers 4.45.0
- Pytorch 2.3.1+cu121
- Datasets 2.19.1
- Tokenizers 0.20.3
|
ContextSearchLM/vinilm_dropinfonce_constrast_temp01_v02 | ContextSearchLM | "2024-07-14T15:02:51" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-07-14T10:20:44" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
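
In the absence of model-specific instructions, a generic sketch assuming standard `AutoModel` feature extraction (the repo is tagged `bert` / `feature-extraction`):

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "ContextSearchLM/vinilm_dropinfonce_constrast_temp01_v02"

# Generic feature-extraction sketch; the card does not document the intended pooling.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer(["example query"], padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool token embeddings into one vector per input (a common default, assumed here).
embedding = outputs.last_hidden_state.mean(dim=1)
print(embedding.shape)
```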
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
iampanky0/New_Falcon | iampanky0 | "2024-03-16T14:14:05" | 126 | 1 | transformers | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"custom_code",
"en",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b-instruct",
"base_model:quantized:tiiuae/falcon-7b-instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-03-16T11:30:20" | ---
library_name: transformers
license: apache-2.0
language:
- en
pipeline_tag: text-generation
packages: bitsandbytes==0.42.0
base_model: tiiuae/falcon-7b-instruct
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
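Since the template leaves this section blank, here is a minimal sketch of loading this checkpoint with `transformers` (the prompt is a placeholder; `trust_remote_code=True` follows the `custom_code` tag on this repo, and since the weights are already 4-bit bitsandbytes, the quantization config is assumed to ship with the checkpoint):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "iampanky0/New_Falcon"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The checkpoint is already quantized to 4-bit with bitsandbytes,
# so no extra BitsAndBytesConfig should be needed here.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto", trust_remote_code=True)

inputs = tokenizer("Write a short poem about falcons.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```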
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ardaspear/4c926ed1-6668-42a7-8d89-5d4b78b884de | ardaspear | "2025-02-07T01:36:16" | 8 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"base_model:Qwen/Qwen1.5-7B",
"base_model:adapter:Qwen/Qwen1.5-7B",
"license:other",
"region:us"
] | null | "2025-02-07T01:01:37" | ---
library_name: peft
license: other
base_model: Qwen/Qwen1.5-7B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 4c926ed1-6668-42a7-8d89-5d4b78b884de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: Qwen/Qwen1.5-7B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a504b5a3f7b14303_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/a504b5a3f7b14303_train_data.json
type:
field_instruction: link
field_output: caption
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: false
hub_model_id: ardaspear/4c926ed1-6668-42a7-8d89-5d4b78b884de
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: 0
logging_steps: 3
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 200
micro_batch_size: 8
mlflow_experiment_name: /tmp/a504b5a3f7b14303_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 3
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 5ae2a60c-5f43-4564-a12f-c9cf41820880
wandb_project: Gradients-On-Five
wandb_run: your_name
wandb_runid: 5ae2a60c-5f43-4564-a12f-c9cf41820880
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 4c926ed1-6668-42a7-8d89-5d4b78b884de
This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6169
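Note that this repo contains a LoRA adapter, not a full model. A minimal sketch of loading it on top of the Qwen1.5-7B base with PEFT (the prompt text is a placeholder; the adapter was trained to produce captions from links):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-7B", torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ardaspear/4c926ed1-6668-42a7-8d89-5d4b78b884de")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")

inputs = tokenizer("https://example.com/photo.jpg", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```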
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0028 | 1 | 2.0964 |
| 1.8485 | 0.0471 | 17 | 1.7817 |
| 1.7216 | 0.0942 | 34 | 1.7137 |
| 1.6819 | 0.1413 | 51 | 1.6819 |
| 1.6918 | 0.1884 | 68 | 1.6644 |
| 1.6478 | 0.2355 | 85 | 1.6502 |
| 1.65 | 0.2825 | 102 | 1.6397 |
| 1.656 | 0.3296 | 119 | 1.6304 |
| 1.5935 | 0.3767 | 136 | 1.6248 |
| 1.6212 | 0.4238 | 153 | 1.6201 |
| 1.6121 | 0.4709 | 170 | 1.6179 |
| 1.6341 | 0.5180 | 187 | 1.6169 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
KappaNeuro/director-gaspar-noe-style | KappaNeuro | "2023-09-14T09:25:48" | 48 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"movie",
"art",
"style",
"shot",
"xl",
"sdxl",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:other",
"region:us"
] | text-to-image | "2023-09-14T09:25:43" | ---
license: other
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- movie
- art
- style
- shot
- xl
- sdxl
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Director Gaspar Noe style
widget:
- text: "Director Gaspar Noe style - a cinematic scene from \"Mulholland Drive\". over the shoulder shot, a young blonde woman with shoulder-length hair and red lips sitting in a LA dimly lit living room with emerald green walls looking at her partner with an intense look in her eyes, there is tension and a sense of danger in the air, the scene is atmospheric and dreamy and has an uncanny vibe, shot on Panavision Panaflex Platinum Camera, Panavision Primo Primes Spherical Lens, mist filter, Kodak Vision Color film"
- text: "Director Gaspar Noe style - Master bedroom, very big balcony door background, two women, a middle-aged woman with eyes closed in bed and a young woman dressed in red standing infront of the window looking at the ocean, n the style of philip-lorca dicorcia, green, jason rhoades, in the style of gregory crewdson, tonalist color scheme, site-specific, vertigo 1954 movie green hotel room colour, very wide shot"
- text: "Director Gaspar Noe style - UFO hovering over lake, Full - body shot of a blonde supermodel wearing relaxed street wear, at a gas pump next to a car, in the style of 1990s neo - noir, neon signs, night, shot on Fujifilm X - S20 with Kodak Portra 160, 8mm, green color grading, reduced saturation, subtle fade, distant to capture entire scene,"
- text: "Director Gaspar Noe style - a cinematic scene from \"Mulholland Drive\", rule-of-thirds shot, a young blonde woman with shoulder-length hair and red lips in an emerald green strap dress walkinga vintage style movie poster with her face on it in an LA street, nighttime, soft light, there is tension and a sense of danger in the air, the scene is atmospheric and dreamy and has an uncanny vibe, neo-noir, horror, shot on Panavision Panaflex Platinum Camera, Panavision Primo Primes Spherical Lens, mist filter, Kodak Vision Color film"
- text: "Director Gaspar Noe style - completely hyperrealistic scene of a minimalist room completely full of poisonous snakes in fluorescent and dramatic colors, this scene is filmedan alexa mini camera, a 35mm lens and an aperture of 5.6f, iso 900, facial details, hyperrealistic scene, cinematic lightingcinematic atmosphere, 4K"
- text: "Director Gaspar Noe style - scars on the belly + ugly + She feels that madness is chasing her! + anxiety + PAIN + scars + colour distortions + looking at the camera + Chronic pain + shadow + surrealist + tropical green garden + Surreal striking photograph, captured on 35mm film + bold figuration + hybrid Editorial + Urban Surreal tulle + MAXIMALISM + Blue sky + GOLDEN, pink, lavender + experimental art direction + + laugh and crying + cultural documentation + colorful clothing, jewlery + urban tulle core + flat color palette, 8k, lumen + urban tulle haute couture + bellybutton dress + plus size + genderless + dynamic portrait + transparent hat + emotionally charged portraits + blowing in the wind + unconventional poses + experimental winner photography + intense, experimental art direction + Afro-europea + haunting images + look at the camera + jungle landscapes, surrealistic elements, Bokeh, Cinematic, artistic reportage action shot + Light field photography, dynamic composition, unusual angle, lensbaby Velvet 56 lens, selective focus, intricate details, cinematic lighting, bokeh, photographic, in the style of Petra Collins + + Michel Gondry + emil melmoth intense, experimental art direction, Hasselblad H6D - 400c + Carl Zeiss Planar 80mm f 2. 8 lens + Phase One XF IQ4 digital back + film stock, Fujifilm Pro 400H + f 4 + shallow depth of field + Anamorphic + high - detailed skin: 1. 2 + film grain + 8K resolution"
- text: "Director Gaspar Noe style - An award-winning color photograph of a white 25-year-old woman with a big chin and pigmented eyebrows standing in a Brazilian neighborhood at night. The image should evoke a sense of mystery, individuality, and confidence, and should be inspiredthe works of Diane Arbus and . The woman should be shown in a candid shot, with the dark and colorful houses of the neighborhood in the background. The photograph should be taken with a 50mm lens to create a natural perspective, and with a high ISO to capture the ambient light of the city. The final image should be in 4K UHD, with a resolution of 700 pixels, and with a contrast of 5 for maximum detail. The overall tone of the photograph should be warm and inviting, with a touch of contrast to highlight the woman's unique beauty and personality."
- text: "Director Gaspar Noe style - a realistic cinematic shot from wong love story drama movie depicting, Full body shot, a handsome Asian American detective watching his lover from a distance in the mirror, in the style of visually stunning compositions, unconventional camera angles, handheld shots, and dynamic movements, cinematic sets, f+ representational, mise-en-scne, ambiance that reflects the emotional landscape of the characters. movie still, dimmed cinematic lighting, busy city, rich and vibrant worlds, bold, saturated colors to convey emotions or themes, vibrant reds, deep blues, and neon lights, a lot of film grains, [120mm lens], [cinestill ], taken with shot with mamiya rz67,"
- text: "Director Gaspar Noe style - a nightmarish cinematic scene from a psychological horror movierule-of-thirds shot, a young blonde starlet with shoulder-length hair and red lips in an emerald green strap dress looking at a vintage style horror-movie billboard with her face on it, set on Hollywood Boulevard, nighttime, soft hazy light, there is tension and a sense of danger in the air, the scene is atmospheric and dreamy and has an uncanny vibe, neo-noir, horror, shot on Panavision Panaflex Platinum Camera, Panavision Primo Primes Spherical Lens, mist filter, Kodak Vision Color film"
---
# Director Gaspar Noe style ([CivitAI](https://civitai.com/models/154880))
![Image 0](2345352.jpeg)
> Director Gaspar Noe style - a cinematic scene from "Mulholland Drive". over the shoulder shot, a young blonde woman with shoulder-length hair and red lips sitting in a LA dimly lit living room with emerald green walls looking at her partner with an intense look in her eyes, there is tension and a sense of danger in the air, the scene is atmospheric and dreamy and has an uncanny vibe, shot on Panavision Panaflex Platinum Camera, Panavision Primo Primes Spherical Lens, mist filter, Kodak Vision Color film
<p>Gaspar Noé is an Argentine-French filmmaker known for his provocative and visually intense style, characterized by controversial themes, unflinching realism, and a focus on the darker aspects of human experience.</p><p>Noé's films often explore taboo subjects and push the boundaries of what is considered acceptable in cinema. His narratives delve into explicit sexuality, violence, and psychological distress, confronting viewers with uncomfortable and confronting situations.</p><p>Visual experimentation is a defining feature of Noé's style. He employs unconventional camera techniques, intricate tracking shots, and disorienting visual effects to create an immersive and often unsettling experience. His use of lighting and color contributes to the intense and visceral atmosphere of his films.</p><p>Time manipulation is a recurring element in Noé's work. He often uses nonlinear storytelling, playing with chronology to disorient viewers and create a sense of psychological unease. This approach mirrors the chaotic and subjective nature of human perception.</p><p>Sound and music play a crucial role in Noé's style. Collaborating with composers and sound designers, he creates immersive auditory experiences that enhance the emotional and sensory impact of his films.</p><p>Noé's films frequently explore the darker aspects of human behavior and emotions. He often presents characters on the fringes of society, grappling with their desires, fears, and inner demons. This exploration of the human psyche challenges viewers to confront their own discomfort and unease.</p><p>Throughout his career, Gaspar Noé has directed films such as "Irreversible," "Enter the Void," and "Climax." His uncompromising approach to storytelling and his willingness to tackle controversial subjects have solidified his reputation as a provocative and polarizing filmmaker in contemporary cinema.</p>
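The card ships no usage snippet, so here is a minimal sketch of applying this LoRA on top of the SDXL base with `diffusers` (if the repo stores the weights under a non-default filename, pass `weight_name=...`; the prompt follows the examples below):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("KappaNeuro/director-gaspar-noe-style")

prompt = "Director Gaspar Noe style - neon-lit stairwell, disorienting camera angle, 35mm film grain"
image = pipe(prompt, num_inference_steps=30).images[0]
image.save("gaspar_noe_style.png")
```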
## Image examples for the model:
![Image 1](2345353.jpeg)
> Director Gaspar Noe style - Master bedroom, very big balcony door background, two women, a middle-aged woman with eyes closed in bed and a young woman dressed in red standing infront of the window looking at the ocean, n the style of philip-lorca dicorcia, green, jason rhoades, in the style of gregory crewdson, tonalist color scheme, site-specific, vertigo 1954 movie green hotel room colour, very wide shot
![Image 2](2345409.jpeg)
> Director Gaspar Noe style - UFO hovering over lake, Full - body shot of a blonde supermodel wearing relaxed street wear, at a gas pump next to a car, in the style of 1990s neo - noir, neon signs, night, shot on Fujifilm X - S20 with Kodak Portra 160, 8mm, green color grading, reduced saturation, subtle fade, distant to capture entire scene,
![Image 3](2345315.jpeg)
> Director Gaspar Noe style - a cinematic scene from "Mulholland Drive", rule-of-thirds shot, a young blonde woman with shoulder-length hair and red lips in an emerald green strap dress walkinga vintage style movie poster with her face on it in an LA street, nighttime, soft light, there is tension and a sense of danger in the air, the scene is atmospheric and dreamy and has an uncanny vibe, neo-noir, horror, shot on Panavision Panaflex Platinum Camera, Panavision Primo Primes Spherical Lens, mist filter, Kodak Vision Color film
![Image 4](2345416.jpeg)
>
![Image 5](2345438.jpeg)
> Director Gaspar Noe style - completely hyperrealistic scene of a minimalist room completely full of poisonous snakes in fluorescent and dramatic colors, this scene is filmedan alexa mini camera, a 35mm lens and an aperture of 5.6f, iso 900, facial details, hyperrealistic scene, cinematic lightingcinematic atmosphere, 4K
![Image 6](2345311.jpeg)
> Director Gaspar Noe style - scars on the belly + ugly + She feels that madness is chasing her! + anxiety + PAIN + scars + colour distortions + looking at the camera + Chronic pain + shadow + surrealist + tropical green garden + Surreal striking photograph, captured on 35mm film + bold figuration + hybrid Editorial + Urban Surreal tulle + MAXIMALISM + Blue sky + GOLDEN, pink, lavender + experimental art direction + + laugh and crying + cultural documentation + colorful clothing, jewlery + urban tulle core + flat color palette, 8k, lumen + urban tulle haute couture + bellybutton dress + plus size + genderless + dynamic portrait + transparent hat + emotionally charged portraits + blowing in the wind + unconventional poses + experimental winner photography + intense, experimental art direction + Afro-europea + haunting images + look at the camera + jungle landscapes, surrealistic elements, Bokeh, Cinematic, artistic reportage action shot + Light field photography, dynamic composition, unusual angle, lensbaby Velvet 56 lens, selective focus, intricate details, cinematic lighting, bokeh, photographic, in the style of Petra Collins + + Michel Gondry + emil melmoth intense, experimental art direction, Hasselblad H6D - 400c + Carl Zeiss Planar 80mm f 2. 8 lens + Phase One XF IQ4 digital back + film stock, Fujifilm Pro 400H + f 4 + shallow depth of field + Anamorphic + high - detailed skin: 1. 2 + film grain + 8K resolution
![Image 7](2345310.jpeg)
> Director Gaspar Noe style - An award-winning color photograph of a white 25-year-old woman with a big chin and pigmented eyebrows standing in a Brazilian neighborhood at night. The image should evoke a sense of mystery, individuality, and confidence, and should be inspiredthe works of Diane Arbus and . The woman should be shown in a candid shot, with the dark and colorful houses of the neighborhood in the background. The photograph should be taken with a 50mm lens to create a natural perspective, and with a high ISO to capture the ambient light of the city. The final image should be in 4K UHD, with a resolution of 700 pixels, and with a contrast of 5 for maximum detail. The overall tone of the photograph should be warm and inviting, with a touch of contrast to highlight the woman's unique beauty and personality.
![Image 8](2345313.jpeg)
> Director Gaspar Noe style - a realistic cinematic shot from wong love story drama movie depicting, Full body shot, a handsome Asian American detective watching his lover from a distance in the mirror, in the style of visually stunning compositions, unconventional camera angles, handheld shots, and dynamic movements, cinematic sets, f+ representational, mise-en-scne, ambiance that reflects the emotional landscape of the characters. movie still, dimmed cinematic lighting, busy city, rich and vibrant worlds, bold, saturated colors to convey emotions or themes, vibrant reds, deep blues, and neon lights, a lot of film grains, [120mm lens], [cinestill ], taken with shot with mamiya rz67,
![Image 9](2345312.jpeg)
> Director Gaspar Noe style - a nightmarish cinematic scene from a psychological horror movierule-of-thirds shot, a young blonde starlet with shoulder-length hair and red lips in an emerald green strap dress looking at a vintage style horror-movie billboard with her face on it, set on Hollywood Boulevard, nighttime, soft hazy light, there is tension and a sense of danger in the air, the scene is atmospheric and dreamy and has an uncanny vibe, neo-noir, horror, shot on Panavision Panaflex Platinum Camera, Panavision Primo Primes Spherical Lens, mist filter, Kodak Vision Color film
|
AboGeek/clasificador-muchocine | AboGeek | "2024-04-23T21:41:03" | 105 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"text-classification",
"classification",
"generated_from_trainer",
"base_model:mrm8488/electricidad-base-discriminator",
"base_model:finetune:mrm8488/electricidad-base-discriminator",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-04-23T21:40:44" | ---
base_model: mrm8488/electricidad-base-discriminator
tags:
- classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: clasificador-muchocine
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# clasificador-muchocine
This model is a fine-tuned version of [mrm8488/electricidad-base-discriminator](https://huggingface.co/mrm8488/electricidad-base-discriminator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4652
- Accuracy: 0.4426
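For reference, a minimal sketch of running the classifier with the `pipeline` API (how the `LABEL_*` ids map onto review ratings is an assumption based on the muchocine corpus, so verify against the training data):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="AboGeek/clasificador-muchocine")
# muchocine is a Spanish movie-review corpus, so a Spanish input makes sense here
print(clf("Una película entretenida, aunque el guion flojea en la segunda mitad."))
```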
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 388 | 1.3703 | 0.3884 |
| 1.3806 | 2.0 | 776 | 1.3091 | 0.4245 |
| 0.9712 | 3.0 | 1164 | 1.4652 | 0.4426 |
### Framework versions
- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1
|
michauhl/distilbert-base-uncased-finetuned-emotion | michauhl | "2022-07-13T12:57:33" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-07-05T14:17:20" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9405
- name: F1
type: f1
value: 0.9404976918144629
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1891
- Accuracy: 0.9405
- F1: 0.9405
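A minimal usage sketch; `top_k=None` returns the score for every emotion label in the dataset rather than only the top one:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="michauhl/distilbert-base-uncased-finetuned-emotion",
    top_k=None,
)
print(classifier("I can't wait to see you again!"))
```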
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1344 | 1.0 | 1000 | 0.1760 | 0.933 | 0.9331 |
| 0.0823 | 2.0 | 2000 | 0.1891 | 0.9405 | 0.9405 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0.post202
- Datasets 2.3.2
- Tokenizers 0.11.0
|
isspek/roberta-base_monkeypox_chatgpt_1_2e-5_16_undersampling_0.2 | isspek | "2024-12-07T23:53:34" | 183 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-11-26T14:39:19" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
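Since the template leaves this blank, a minimal sketch using the bare model API (what the output labels mean, e.g. real vs. fake, is an assumption; check the training data):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "isspek/roberta-base_monkeypox_chatgpt_1_2e-5_16_undersampling_0.2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Monkeypox spreads through 5G towers.", return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```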
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aleegis12/d674a03c-4e42-46a5-b753-0cda208b26b3 | aleegis12 | "2025-02-08T07:56:21" | 7 | 0 | peft | [
"peft",
"safetensors",
"phi",
"axolotl",
"generated_from_trainer",
"base_model:echarlaix/tiny-random-PhiForCausalLM",
"base_model:adapter:echarlaix/tiny-random-PhiForCausalLM",
"license:apache-2.0",
"region:us"
] | null | "2025-02-08T07:55:36" | ---
library_name: peft
license: apache-2.0
base_model: echarlaix/tiny-random-PhiForCausalLM
tags:
- axolotl
- generated_from_trainer
model-index:
- name: d674a03c-4e42-46a5-b753-0cda208b26b3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: echarlaix/tiny-random-PhiForCausalLM
bf16: true
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- ee1dc6ec8f691339_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/ee1dc6ec8f691339_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: 5
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 100
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
group_by_length: true
hub_model_id: aleegis12/d674a03c-4e42-46a5-b753-0cda208b26b3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0001
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 128
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 500
micro_batch_size: 8
mlflow_experiment_name: /tmp/ee1dc6ec8f691339_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optim_args:
adam_beta1: 0.9
adam_beta2: 0.95
adam_epsilon: 1e-5
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 100
saves_per_epoch: null
sequence_len: 1024
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 0f20ca28-67e8-4db3-a9c1-cb63640a88e1
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 0f20ca28-67e8-4db3-a9c1-cb63640a88e1
warmup_steps: 20
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# d674a03c-4e42-46a5-b753-0cda208b26b3
This model is a fine-tuned version of [echarlaix/tiny-random-PhiForCausalLM](https://huggingface.co/echarlaix/tiny-random-PhiForCausalLM) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.8593
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=adam_beta1=0.9,adam_beta2=0.95,adam_epsilon=1e-5
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- training_steps: 413
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 6.9372 | 0.0024 | 1 | 6.9364 |
| 6.8863 | 0.2426 | 100 | 6.8772 |
| 6.8716 | 0.4851 | 200 | 6.8644 |
| 6.8844 | 0.7277 | 300 | 6.8603 |
| 6.8712 | 0.9703 | 400 | 6.8593 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
kamisaiko/llama3b_ogu_32_64_prompt_3e4 | kamisaiko | "2024-12-06T16:19:14" | 148 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/Llama-3.2-3B-Instruct",
"base_model:finetune:unsloth/Llama-3.2-3B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-12-06T16:14:05" | ---
base_model: unsloth/Llama-3.2-3B-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** kamisaiko
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B-Instruct
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Jithendra-k/Flan_T5_InterACT | Jithendra-k | "2024-04-27T01:28:34" | 106 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-04-15T04:54:32" | ---
license: mit
---
## Project InterACT
This model is part of Project InterACT (a multi-model AI system) involving an object detection model and an LLM.
This model was built by fine-tuning flan-t5-small on the custom dataset Jithendra-k/Flan_T5_InterACT.
Here are some plots of model performance during training:<br>
Here is an Example Input/Output:<br>
Code to finetune a Flan-T5 model: [Google_Colab_file](https://colab.research.google.com/drive/1oLYGi9JQOwozZcNFMNBwCqZtsSnCPZAM?usp=sharing)
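For quick inference without the notebook, a minimal sketch with the seq2seq API (the example instruction is a placeholder; the exact prompt format used in training is not documented here):

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "Jithendra-k/Flan_T5_InterACT"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Pick up the red cube on the table.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```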
# Credits and Thanks:
Greatest thanks to Google for enabling us to use the flan-t5-small model.
```
https://huggingface.co/google/flan-t5-small
https://www.datacamp.com/tutorial/flan-t5-tutorial
```
|
HarmlessFlower/DRL_Course_Models | HarmlessFlower | "2023-03-10T17:12:16" | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-10T17:11:11" | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.94 +/- 19.97
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal working sketch (the checkpoint filename inside the repo is an assumption; check the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from this repo and load it with SB3
checkpoint = load_from_hub("HarmlessFlower/DRL_Course_Models", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
DavidAU/Confinus-2x7B-Q6_K-GGUF | DavidAU | "2024-04-11T02:37:30" | 9 | 0 | null | [
"gguf",
"moe",
"merge",
"llama-cpp",
"gguf-my-repo",
"en",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null | "2024-04-11T02:37:03" | ---
language:
- en
license: apache-2.0
tags:
- moe
- merge
- llama-cpp
- gguf-my-repo
model-index:
- name: Confinus-2x7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.89
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.82
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.12
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.88
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.77
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 68.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=NeuralNovel/Confinus-2x7B
name: Open LLM Leaderboard
---
# DavidAU/Confinus-2x7B-Q6_K-GGUF
This model was converted to GGUF format from [`NeuralNovel/Confinus-2x7B`](https://huggingface.co/NeuralNovel/Confinus-2x7B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/NeuralNovel/Confinus-2x7B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew.
```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.
CLI:
```bash
llama-cli --hf-repo DavidAU/Confinus-2x7B-Q6_K-GGUF --model confinus-2x7b.Q6_K.gguf -p "The meaning to life and the universe is"
```
Server:
```bash
llama-server --hf-repo DavidAU/Confinus-2x7B-Q6_K-GGUF --model confinus-2x7b.Q6_K.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
```
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m confinus-2x7b.Q6_K.gguf -n 128
```
|
LoneStriker/Liberated-Miqu-70B-3.0bpw-h6-exl2 | LoneStriker | "2024-03-12T19:43:59" | 4 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"Miqu",
"Liberated",
"Uncensored",
"70B",
"conversational",
"en",
"dataset:abacusai/SystemChat",
"base_model:152334H/miqu-1-70b-sf",
"base_model:quantized:152334H/miqu-1-70b-sf",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"3-bit",
"exl2",
"region:us"
] | text-generation | "2024-03-12T19:33:04" | ---
license: apache-2.0
base_model: 152334H/miqu-1-70b-sf
language:
- en
library_name: transformers
tags:
- Miqu
- Liberated
- Uncensored
- 70B
datasets:
- abacusai/SystemChat
---
# Liberated Miqu 70B
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/64e380b2e12618b261fa6ba0/mxX4M4ePQJI7W4y1hdBfE.jpeg)
Liberated Miqu 70B is a fine-tune of Miqu-70B on Abacus AI's SystemChat dataset. This model has been trained on 2xA100 GPUs for 1 epoch.
## 🏆 Evaluation results
Coming soon
### Framework versions
- Transformers 4.38.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.17.0
- Tokenizers 0.15.0
- axolotl: 0.4.0
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
|
silence09/InternLM2.5-7B-Chat-Converted-Qwen2 | silence09 | "2024-12-25T09:05:38" | 5 | 1 | null | [
"safetensors",
"qwen2",
"base_model:internlm/internlm2_5-7b-chat",
"base_model:finetune:internlm/internlm2_5-7b-chat",
"license:apache-2.0",
"region:us"
] | null | "2024-12-25T08:42:06" | ---
license: apache-2.0
base_model:
- internlm/internlm2_5-7b-chat
---
# Converted Qwen2 from InternLM2.5-7B-Chat
## Description
This is a conversion of [InternLM2.5-7B-Chat](https://huggingface.co/internlm/internlm2_5-7b-chat) to the __Qwen2__ format. The conversion lets you use InternLM2.5-7B-Chat as if it were a Qwen2 model, which is convenient for some *inference use cases*. The __precision__ is __exactly the same__ as the original model.
## Usage
You can load the model using the `Qwen2ForCausalLM` class as shown below:
```python
from typing import List, Optional, Tuple, Union

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, Qwen2ForCausalLM
from transformers.generation.streamers import BaseStreamer

device = "cpu"  # on cpu the outputs are exactly the same as the original model
attn_impl = 'eager' # the attention implementation to use
meta_instruction = ("You are an AI assistant whose name is InternLM (书生·浦语).\n"
"- InternLM (书生·浦语) is a conversational language model that is developed by Shanghai AI Laboratory "
"(上海人工智能实验室). It is designed to be helpful, honest, and harmless.\n"
"- InternLM (书生·浦语) can understand and communicate fluently in the language chosen by the user such "
"as English and 中文."
)
prompt1 = "介绍下你自己"
prompt2 = "介绍下上海人工智能实验室"
def build_inputs(tokenizer, query: str, history: List[Tuple[str, str]] = None, meta_instruction=meta_instruction):
if history is None:
history = []
if tokenizer.add_bos_token:
prompt = ""
else:
prompt = tokenizer.bos_token
if meta_instruction:
prompt += f"""<|im_start|>system\n{meta_instruction}<|im_end|>\n"""
for record in history:
prompt += f"""<|im_start|>user\n{record[0]}<|im_end|>\n<|im_start|>assistant\n{record[1]}<|im_end|>\n"""
prompt += f"""<|im_start|>user\n{query}<|im_end|>\n<|im_start|>assistant\n"""
return tokenizer([prompt], return_tensors="pt")
@torch.inference_mode()
def chat(
model: Union[AutoModelForCausalLM, Qwen2ForCausalLM],
tokenizer,
query: str,
history: Optional[List[Tuple[str, str]]] = None,
streamer: Optional[BaseStreamer] = None,
max_new_tokens: int = 1024,
do_sample: bool = True,
temperature: float = 0.8,
top_p: float = 0.8,
meta_instruction: str = meta_instruction,
**kwargs,
):
if history is None:
history = []
inputs = build_inputs(tokenizer, query, history, meta_instruction)
inputs = {k: v.to(model.device) for k, v in inputs.items() if torch.is_tensor(v)}
# also add end-of-assistant token in eos token id to avoid unnecessary generation
eos_token_id = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids(["<|im_end|>"])[0]]
outputs = model.generate(
**inputs,
streamer=streamer,
max_new_tokens=max_new_tokens,
do_sample=do_sample,
temperature=temperature,
top_p=top_p,
eos_token_id=eos_token_id,
**kwargs,
)
outputs = outputs[0].cpu().tolist()[len(inputs["input_ids"][0]) :]
response = tokenizer.decode(outputs, skip_special_tokens=True)
response = response.split("<|im_end|>")[0]
history = history + [(query, response)]
return response, history
# use the official tokenizer
tokenizer = AutoTokenizer.from_pretrained("silence09/InternLM2.5-7B-Chat-Converted-Qwen2", trust_remote_code=True)
# use the converted Qwen2 model
qwen2_model = Qwen2ForCausalLM.from_pretrained(
"silence09/InternLM2.5-7B-Chat-Converted-Qwen2",
torch_dtype='auto',
attn_implementation=attn_impl).to(device)
qwen2_model.eval()
response_qwen2_and_splitfunc_1, history = chat(qwen2_model, tokenizer, prompt1, history=[], do_sample=False)
print(f"User Input: {prompt1}\nConverted Qwen2 Response: {response_qwen2_and_splitfunc_1}")
response_qwen2_and_splitfunc_2, history = chat(qwen2_model, tokenizer, prompt2, history=history, do_sample=False)
print(f"User Input: {prompt2}\nConverted Qwen2 Response: {response_qwen2_and_splitfunc_2}")
```
## Precision Guarantee
To compare results with the original model, you can use this [code](https://github.com/silencelamb/naked_llama/blob/main/hf_example/hf_internlm_7b_qwen2_compare.py)
## More Info
It was converted using the python script available at [this repository](https://github.com/silencelamb/naked_llama/blob/main/hf_example/convert_internlm_to_qwen_hf.py) |
MaziyarPanahi/Hermes-3-Llama-3.2-3B-GGUF | MaziyarPanahi | "2024-12-11T21:00:49" | 167,288 | 1 | null | [
"gguf",
"mistral",
"quantized",
"2-bit",
"3-bit",
"4-bit",
"5-bit",
"6-bit",
"8-bit",
"GGUF",
"text-generation",
"base_model:NousResearch/Hermes-3-Llama-3.2-3B",
"base_model:quantized:NousResearch/Hermes-3-Llama-3.2-3B",
"region:us",
"conversational"
] | text-generation | "2024-12-11T20:45:35" | ---
base_model: NousResearch/Hermes-3-Llama-3.2-3B
inference: false
model_creator: NousResearch
model_name: Hermes-3-Llama-3.2-3B-GGUF
pipeline_tag: text-generation
quantized_by: MaziyarPanahi
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- text-generation
---
# [MaziyarPanahi/Hermes-3-Llama-3.2-3B-GGUF](https://huggingface.co/MaziyarPanahi/Hermes-3-Llama-3.2-3B-GGUF)
- Model creator: [NousResearch](https://huggingface.co/NousResearch)
- Original model: [NousResearch/Hermes-3-Llama-3.2-3B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B)
## Description
[MaziyarPanahi/Hermes-3-Llama-3.2-3B-GGUF](https://huggingface.co/MaziyarPanahi/Hermes-3-Llama-3.2-3B-GGUF) contains GGUF format model files for [NousResearch/Hermes-3-Llama-3.2-3B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.2-3B).
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [GPT4All](https://gpt4all.io/index.html), a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server. Note, as of time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
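As a concrete example with one of the clients above, a minimal sketch using llama-cpp-python's Hub integration (the quant filename is an assumption; pick any GGUF file from this repo):

```python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="MaziyarPanahi/Hermes-3-Llama-3.2-3B-GGUF",
    filename="Hermes-3-Llama-3.2-3B.Q4_K_M.gguf",  # assumption: check the repo's file list
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```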
## Special thanks
🙏 Special thanks to [Georgi Gerganov](https://github.com/ggerganov) and the whole team working on [llama.cpp](https://github.com/ggerganov/llama.cpp/) for making all of this possible. |
ajscalers/t5-small-finetuned-xsum | ajscalers | "2023-04-28T12:27:56" | 104 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:xsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-04-27T07:49:36" | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- xsum
model-index:
- name: t5-small-finetuned-xsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-xsum
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset.
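A minimal usage sketch with the summarization pipeline (the input text is a placeholder):

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="ajscalers/t5-small-finetuned-xsum")
article = "The full text of a news article goes here ..."
print(summarizer(article, max_length=40, min_length=5)[0]["summary_text"])
```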
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 672
- eval_batch_size: 672
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 304 | 2.6700 | 23.818 | 5.0753 | 18.3873 | 18.3908 | 18.731 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
|
alonzogarbanzo/Bloom-1b7-glue-mrpc-Cont-IT-Step3 | alonzogarbanzo | "2024-03-03T04:28:25" | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"generated_from_trainer",
"base_model:alonzogarbanzo/Bloom-1b7-ropes-Cont-IT-Step2",
"base_model:finetune:alonzogarbanzo/Bloom-1b7-ropes-Cont-IT-Step2",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-03-03T03:32:28" | ---
license: bigscience-bloom-rail-1.0
base_model: alonzogarbanzo/Bloom-1b7-ropes-Cont-IT-Step2
tags:
- generated_from_trainer
model-index:
- name: Bloom-1b7-glue-mrpc-Cont-IT-Step3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Bloom-1b7-glue-mrpc-Cont-IT-Step3
This model is a fine-tuned version of [alonzogarbanzo/Bloom-1b7-ropes-Cont-IT-Step2](https://huggingface.co/alonzogarbanzo/Bloom-1b7-ropes-Cont-IT-Step2) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
Final results: {'loss': 0.0741, 'grad_norm': 3.153198719024658, 'learning_rate': 3.0000000000000004e-07, 'epoch': 10.0}
Average results: {'train_runtime': 362.8198, 'train_samples_per_second': 5.512, 'train_steps_per_second': 1.378, 'train_loss': 0.5167060216665268, 'epoch': 10.0}
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0+cu121
- Datasets 2.17.0
- Tokenizers 0.15.2
|
aceuganda/english_luganda_translation | aceuganda | "2024-03-18T21:17:47" | 90 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2024-03-18T21:16:49" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
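Until the authors fill this in, a minimal loading sketch for this mBART-based translation model follows. The repo id comes from this card; whether the checkpoint expects mBART language codes (e.g., via `forced_bos_token_id`) is not documented, so the plain call below is an assumption.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "aceuganda/english_luganda_translation"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Hypothetical English -> Luganda translation call; source/target language handling is an assumption.
inputs = tokenizer("How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```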
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MaLA-LM/lucky52-bloom-7b1-no-35 | MaLA-LM | "2024-12-10T09:14:11" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"multilingual",
"dataset:MBZUAI/Bactrian-X",
"arxiv:2404.04850",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-04T13:18:26" |
---
library_name: transformers
pipeline_tag: text-generation
language:
- multilingual
tags:
- generation
- question answering
- instruction tuning
datasets:
- MBZUAI/Bactrian-X
license: cc-by-nc-4.0
---
### Model Description
This HF repository hosts an instruction fine-tuned multilingual BLOOM model trained on the parallel instruction dataset Bactrian-X, which covers 52 languages.
We progressively add one language at a time during instruction fine-tuning, training 52 models in total. We then evaluate those models on three multilingual benchmarks.
Please refer to [our paper](https://arxiv.org/abs/2404.04850) for more details.
* Base model: [BLOOM 7B1](https://huggingface.co/bigscience/bloom-7b1)
* Instruction languages: English, Chinese, Afrikaans, Arabic, Azerbaijani, Bengali, Czech, German, Spanish, Estonian, Farsi, Finnish, French, Galician, Gujarati, Hebrew, Hindi, Croatian, Indonesian, Italian, Japanese, Georgian, Kazakh, Khmer, Korean, Lithuanian, Latvian, Macedonian, Malayalam, Mongolian, Marathi, Burmese, Nepali, Dutch, Polish
* Instruction language codes: en, zh, af, ar, az, bn, cs, de, es, et, fa, fi, fr, gl, gu, he, hi, hr, id, it, ja, ka, kk, km, ko, lt, lv, mk, ml, mn, mr, my, ne, nl, pl
* Training method: full-parameter fine-tuning.
### Usage
The model checkpoint should be loaded using the `transformers` library.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-35")
model = AutoModelForCausalLM.from_pretrained("MaLA-LM/lucky52-bloom-7b1-no-35")
```
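A short generation example continuing the snippet above. The prompt format is an illustrative assumption; the exact instruction template used during Bactrian-X fine-tuning is not specified on this card.
```python
# Hypothetical prompt; the instruction format used in fine-tuning is an assumption here.
prompt = "### Instruction:\nTranslate 'Good morning' into French.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```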
### Citation
```
@inproceedings{ji2025lucky52,
title={How Many Languages Make Good Multilingual Instruction Tuning? A Case Study on BLOOM},
author={Shaoxiong Ji and Pinzhen Chen},
year={2025},
booktitle={Proceedings of COLING},
url={https://arxiv.org/abs/2404.04850},
}
```
|
dean-r/ppo-PyramidsRNDPPO | dean-r | "2023-05-03T17:47:00" | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | "2023-05-03T17:46:53" | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: dean-r/ppo-PyramidsRNDPPO
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
frenkd/code-llama-7b-text-to-sql | frenkd | "2024-04-15T15:01:07" | 0 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:microsoft/phi-1_5",
"base_model:adapter:microsoft/phi-1_5",
"license:mit",
"region:us"
] | null | "2024-04-15T14:49:26" | ---
license: mit
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: microsoft/phi-1_5
datasets:
- generator
model-index:
- name: code-llama-7b-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
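While the card is being completed, a minimal sketch for loading this adapter with PEFT follows. The adapter repo id comes from this card (its base model is `microsoft/phi-1_5`); the text-to-SQL prompt format is a hypothetical assumption.
```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "frenkd/code-llama-7b-text-to-sql"
# Loads the base model plus this adapter in one call; if the adapter repo
# ships no tokenizer, load it from the base model instead.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(adapter_id)

# Hypothetical text-to-SQL prompt; the training prompt format is not documented here.
prompt = (
    "Question: How many users signed up in 2023?\n"
    "Schema: users(id, name, signup_date)\n"
    "SQL:"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```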
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.10.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
RichardErkhov/stojchet_-_nl-bs-sft1-awq | RichardErkhov | "2025-01-06T17:18:20" | 5 | 0 | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | null | "2025-01-06T17:17:45" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
nl-bs-sft1 - AWQ
- Model creator: https://huggingface.co/stojchet/
- Original model: https://huggingface.co/stojchet/nl-bs-sft1/
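A minimal Python loading sketch for this AWQ quant (an assumption on setup: it requires the `autoawq` package, and the quantization config is read from the checkpoint itself):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "RichardErkhov/stojchet_-_nl-bs-sft1-awq"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# The base model is a DeepSeek-Coder finetune, so a code-completion prompt is a reasonable test.
inputs = tokenizer("def fibonacci(n):", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```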
Original model description:
---
license: other
base_model: deepseek-ai/deepseek-coder-1.3b-base
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: nl-bs-sft1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/stojchets/huggingface/runs/t1gwaz42)
# nl-bs-sft1
This model is a fine-tuned version of [deepseek-ai/deepseek-coder-1.3b-base](https://huggingface.co/deepseek-ai/deepseek-coder-1.3b-base) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.43.0.dev0
- Pytorch 2.2.2+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
|
hanifnoerr/Kemenkeu-Sentiment-Classifier | hanifnoerr | "2023-04-08T06:29:32" | 22 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"id",
"doi:10.57967/hf/0520",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-04-08T02:58:04" | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- accuracy
model-index:
- name: Kemenkeu-Sentiment-Classifier
results:
- task:
name: Text Classification
type: text-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.66
- name: F1
type: f1
value: 0.6368
language:
- id
pipeline_tag: text-classification
widget:
- text: sudah beli makan buat sahur?
example_title: "contoh tidak relevan"
- text: Mengawal APBN, Indonesia Maju
example_title: "contoh kalimat"
---
# Kemenkeu-Sentiment-Classifier
This model is a fine-tuned version of [indobenchmark/indobert-base-p1](https://huggingface.co/indobenchmark/indobert-base-p1) on the MoF-DAC Mini Challenge#1 dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.66
- F1: 0.6368
Leaderboard score:
- Public score: 0.63733
- Private score: 0.65733
## Model description & limitations
- This model classifies Indonesian text into one of four classes: [netral, tdk-relevan, negatif, positif]
- It is intended only for specific cases related to the Ministry of Finance of Indonesia
## How to use
You can use this model directly with a `pipeline`:
```python
from transformers import pipeline

pretrained_name = "hanifnoerr/Kemenkeu-Sentiment-Classifier"
class_model = pipeline(tokenizer=pretrained_name, model=pretrained_name)
test_data = "Mengawal APBN, Indonesia Maju"
class_model(test_data)
```
## Training and evaluation data
The following hyperparameters were used during training:
- learning_rate: 1e-05
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 1.0131 | 1.0 | 500 | 0.8590 | 0.644 | 0.5964 |
| 0.7133 | 2.0 | 1000 | 0.8639 | 0.63 | 0.5924 |
| 0.5261 | 3.0 | 1500 | 0.9002 | 0.66 | 0.6368 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3 |
tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity | tomaarsen | "2023-12-06T09:09:41" | 3,494 | 0 | setfit | [
"setfit",
"pytorch",
"bert",
"absa",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"en",
"dataset:tomaarsen/setfit-absa-semeval-restaurants",
"arxiv:2209.11055",
"base_model:BAAI/bge-small-en-v1.5",
"base_model:finetune:BAAI/bge-small-en-v1.5",
"license:apache-2.0",
"model-index",
"co2_eq_emissions",
"region:us"
] | text-classification | "2023-12-04T14:48:52" | ---
language: en
license: apache-2.0
library_name: setfit
tags:
- setfit
- absa
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- tomaarsen/setfit-absa-semeval-restaurants
metrics:
- accuracy
widget:
- text: (both in quantity AND quality):The Prix Fixe menu is worth every penny and
you get more than enough (both in quantity AND quality).
- text: over 100 different beers to offer thier:The have over 100 different beers
to offer thier guest so that made my husband very happy and the food was delicious,
if I must recommend a dish it must be the pumkin tortelini.
- text: back with a plate of dumplings.:Get your food to go, find a bench, and kick
back with a plate of dumplings.
- text: the udon was soy sauce and water.:The soup for the udon was soy sauce and
water.
- text: times for the beef cubes - they're:i've been back to nha trang literally a
hundred times for the beef cubes - they're that good.
pipeline_tag: text-classification
inference: false
co2_eq_emissions:
emissions: 15.732253126728272
source: codecarbon
training_type: fine-tuning
on_cloud: false
cpu_model: 13th Gen Intel(R) Core(TM) i7-13700K
ram_total_size: 31.777088165283203
hours_used: 0.174
hardware_used: 1 x NVIDIA GeForce RTX 3090
base_model: BAAI/bge-small-en-v1.5
model-index:
- name: SetFit Polarity Model with BAAI/bge-small-en-v1.5 on SemEval 2014 Task 4 (Restaurants)
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: SemEval 2014 Task 4 (Restaurants)
type: tomaarsen/setfit-absa-semeval-restaurants
split: test
metrics:
- type: accuracy
value: 0.748561042108452
name: Accuracy
---
# SetFit Polarity Model with BAAI/bge-small-en-v1.5 on SemEval 2014 Task 4 (Restaurants)
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [SemEval 2014 Task 4 (Restaurants)](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-restaurants) dataset that can be used for Aspect Based Sentiment Analysis (ABSA). This SetFit model uses [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. In particular, this model is in charge of classifying aspect polarities.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
This model was trained within the context of a larger system for ABSA, which looks like so:
1. Use a spaCy model to select possible aspect span candidates.
2. Use a SetFit model to filter these possible aspect span candidates.
3. **Use this SetFit model to classify the filtered aspect span candidates.**
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **spaCy Model:** en_core_web_lg
- **SetFitABSA Aspect Model:** [tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect](https://huggingface.co/tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect)
- **SetFitABSA Polarity Model:** [tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity](https://huggingface.co/tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity)
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 4 classes
- **Training Dataset:** [SemEval 2014 Task 4 (Restaurants)](https://huggingface.co/datasets/tomaarsen/setfit-absa-semeval-restaurants)
- **Language:** en
- **License:** apache-2.0
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:---------|:-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| negative | <ul><li>'But the staff was so horrible:But the staff was so horrible to us.'</li><li>', forgot our toast, left out:They did not have mayonnaise, forgot our toast, left out ingredients (ie cheese in an omelet), below hot temperatures and the bacon was so over cooked it crumbled on the plate when you touched it.'</li><li>'did not have mayonnaise, forgot our:They did not have mayonnaise, forgot our toast, left out ingredients (ie cheese in an omelet), below hot temperatures and the bacon was so over cooked it crumbled on the plate when you touched it.'</li></ul> |
| positive | <ul><li>"factor was the food, which was:To be completely fair, the only redeeming factor was the food, which was above average, but couldn't make up for all the other deficiencies of Teodora."</li><li>"The food is uniformly exceptional:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li><li>"a very capable kitchen which will proudly:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li></ul> |
| neutral | <ul><li>"'s on the menu or not.:The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not."</li><li>'to sample both meats).:Our agreed favorite is the orrechiete with sausage and chicken (usually the waiters are kind enough to split the dish in half so you get to sample both meats).'</li><li>'to split the dish in half so:Our agreed favorite is the orrechiete with sausage and chicken (usually the waiters are kind enough to split the dish in half so you get to sample both meats).'</li></ul> |
| conflict | <ul><li>'The food was delicious but:The food was delicious but do not come here on a empty stomach.'</li><li>"The service varys from day:The service varys from day to day- sometimes they're very nice, and sometimes not."</li><li>'Though the Spider Roll may look like:Though the Spider Roll may look like a challenge to eat, with soft shell crab hanging out of the roll, it is well worth the price you pay for them.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.7486 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import AbsaModel
# Download from the 🤗 Hub
model = AbsaModel.from_pretrained(
"tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-aspect",
"tomaarsen/setfit-absa-bge-small-en-v1.5-restaurants-polarity",
)
# Run inference
preds = model("The food was great, but the venue is just way too busy.")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 6 | 22.4980 | 51 |
| Label | Training Sample Count |
|:---------|:----------------------|
| conflict | 6 |
| negative | 43 |
| neutral | 36 |
| positive | 170 |
### Training Hyperparameters
- batch_size: (256, 256)
- num_epochs: (5, 5)
- max_steps: 5000
- sampling_strategy: oversampling
- body_learning_rate: (2e-05, 1e-05)
- head_learning_rate: 0.01
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: True
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: True
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:----------:|:-------:|:-------------:|:---------------:|
| 0.0078 | 1 | 0.2397 | - |
| 0.3876 | 50 | 0.2252 | - |
| 0.7752 | 100 | 0.1896 | 0.1883 |
| 1.1628 | 150 | 0.0964 | - |
| **1.5504** | **200** | **0.0307** | **0.1792** |
| 1.9380 | 250 | 0.0275 | - |
| 2.3256 | 300 | 0.0138 | 0.2036 |
| 2.7132 | 350 | 0.006 | - |
| 3.1008 | 400 | 0.0035 | 0.2287 |
| 3.4884 | 450 | 0.0015 | - |
| 3.8760 | 500 | 0.0016 | 0.2397 |
| 4.2636 | 550 | 0.001 | - |
| 4.6512 | 600 | 0.0009 | 0.2477 |
* The bold row denotes the saved checkpoint.
### Environmental Impact
Carbon emissions were measured using [CodeCarbon](https://github.com/mlco2/codecarbon).
- **Carbon Emitted**: 0.016 kg of CO2
- **Hours Used**: 0.174 hours
### Training Hardware
- **On Cloud**: No
- **GPU Model**: 1 x NVIDIA GeForce RTX 3090
- **CPU Model**: 13th Gen Intel(R) Core(TM) i7-13700K
- **RAM Size**: 31.78 GB
### Framework Versions
- Python: 3.9.16
- SetFit: 1.0.0.dev0
- Sentence Transformers: 2.2.2
- spaCy: 3.7.2
- Transformers: 4.29.0
- PyTorch: 1.13.1+cu117
- Datasets: 2.15.0
- Tokenizers: 0.13.3
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
sohm/Reinforce-v3 | sohm | "2023-01-26T23:44:54" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2023-01-26T23:44:44" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 451.70 +/- 144.90
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tensorblock/Daredevil-8B-GGUF | tensorblock | "2024-11-16T01:46:32" | 15 | 0 | null | [
"gguf",
"merge",
"mergekit",
"lazymergekit",
"TensorBlock",
"GGUF",
"base_model:mlabonne/Daredevil-8B",
"base_model:quantized:mlabonne/Daredevil-8B",
"license:other",
"model-index",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-15T07:08:05" | ---
license: other
tags:
- merge
- mergekit
- lazymergekit
- TensorBlock
- GGUF
base_model: mlabonne/Daredevil-8B
model-index:
- name: Daredevil-8B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 68.86
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 84.5
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 69.24
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 59.89
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 73.54
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=mlabonne/Daredevil-8B
name: Open LLM Leaderboard
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## mlabonne/Daredevil-8B - GGUF
This repo contains GGUF format model files for [mlabonne/Daredevil-8B](https://huggingface.co/mlabonne/Daredevil-8B).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Daredevil-8B-Q2_K.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes |
| [Daredevil-8B-Q3_K_S.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss |
| [Daredevil-8B-Q3_K_M.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss |
| [Daredevil-8B-Q3_K_L.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss |
| [Daredevil-8B-Q4_0.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Daredevil-8B-Q4_K_S.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss |
| [Daredevil-8B-Q4_K_M.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended |
| [Daredevil-8B-Q5_0.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Daredevil-8B-Q5_K_S.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended |
| [Daredevil-8B-Q5_K_M.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended |
| [Daredevil-8B-Q6_K.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
| [Daredevil-8B-Q8_0.gguf](https://huggingface.co/tensorblock/Daredevil-8B-GGUF/blob/main/Daredevil-8B-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Daredevil-8B-GGUF --include "Daredevil-8B-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Daredevil-8B-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
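Once downloaded, one way to run these GGUF files locally is through the `llama-cpp-python` bindings. This is a sketch under assumptions: the file name matches the table above, and the context size and generation settings are illustrative.
```python
from llama_cpp import Llama

# Load a quantized GGUF file downloaded with the commands above (hypothetical local path).
llm = Llama(model_path="MY_LOCAL_DIR/Daredevil-8B-Q4_K_M.gguf", n_ctx=4096)

# create_chat_completion applies the model's chat template (the Llama 3 format shown above).
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Who are you?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```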
|
tensorblock/SecurityLLM-GGUF | tensorblock | "2024-11-16T01:01:47" | 158 | 0 | transformers | [
"transformers",
"gguf",
"security",
"cybersecwithai",
"threat",
"vulnerability",
"infosec",
"zysec.ai",
"cyber security",
"ai4security",
"llmsecurity",
"cyber",
"malware analysis",
"exploitdev",
"ai4good",
"aisecurity",
"cybersec",
"cybersecurity",
"TensorBlock",
"GGUF",
"base_model:ZySec-AI/SecurityLLM",
"base_model:quantized:ZySec-AI/SecurityLLM",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-10T22:18:37" | ---
library_name: transformers
license: apache-2.0
tags:
- security
- cybersecwithai
- threat
- vulnerability
- infosec
- zysec.ai
- cyber security
- ai4security
- llmsecurity
- cyber
- malware analysis
- exploitdev
- ai4good
- aisecurity
- cybersec
- cybersecurity
- TensorBlock
- GGUF
base_model: ZySec-AI/SecurityLLM
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## ZySec-AI/SecurityLLM - GGUF
This repo contains GGUF format model files for [ZySec-AI/SecurityLLM](https://huggingface.co/ZySec-AI/SecurityLLM).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|system|>
{system_prompt}</s>
<|user|>
{prompt}</s>
<|assistant|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [SecurityLLM-Q2_K.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q2_K.gguf) | Q2_K | 2.532 GB | smallest, significant quality loss - not recommended for most purposes |
| [SecurityLLM-Q3_K_S.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q3_K_S.gguf) | Q3_K_S | 2.947 GB | very small, high quality loss |
| [SecurityLLM-Q3_K_M.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q3_K_M.gguf) | Q3_K_M | 3.277 GB | very small, high quality loss |
| [SecurityLLM-Q3_K_L.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q3_K_L.gguf) | Q3_K_L | 3.560 GB | small, substantial quality loss |
| [SecurityLLM-Q4_0.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q4_0.gguf) | Q4_0 | 3.827 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [SecurityLLM-Q4_K_S.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q4_K_S.gguf) | Q4_K_S | 3.856 GB | small, greater quality loss |
| [SecurityLLM-Q4_K_M.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q4_K_M.gguf) | Q4_K_M | 4.068 GB | medium, balanced quality - recommended |
| [SecurityLLM-Q5_0.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q5_0.gguf) | Q5_0 | 4.654 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [SecurityLLM-Q5_K_S.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q5_K_S.gguf) | Q5_K_S | 4.654 GB | large, low quality loss - recommended |
| [SecurityLLM-Q5_K_M.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q5_K_M.gguf) | Q5_K_M | 4.779 GB | large, very low quality loss - recommended |
| [SecurityLLM-Q6_K.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q6_K.gguf) | Q6_K | 5.534 GB | very large, extremely low quality loss |
| [SecurityLLM-Q8_0.gguf](https://huggingface.co/tensorblock/SecurityLLM-GGUF/blob/main/SecurityLLM-Q8_0.gguf) | Q8_0 | 7.167 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/SecurityLLM-GGUF --include "SecurityLLM-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/SecurityLLM-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
hdve/Qwen-Qwen1.5-7B-1717600621 | hdve | "2024-06-05T15:20:29" | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-06-05T15:17:44" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
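In the meantime, a minimal quickstart sketch follows. The repo id comes from this card; whether this Qwen1.5-7B finetune expects a chat template is not documented, so the plain-text prompt is an assumption.
```python
from transformers import pipeline

# Hypothetical quickstart for this Qwen1.5-7B finetune.
generator = pipeline(
    "text-generation",
    model="hdve/Qwen-Qwen1.5-7B-1717600621",
    torch_dtype="auto",
    device_map="auto",
)
print(generator("Hello, who are you?", max_new_tokens=64)[0]["generated_text"])
```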
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tensorblock/Llama-3-8b-sft-mixture-GGUF | tensorblock | "2024-11-16T00:56:10" | 14 | 0 | transformers | [
"transformers",
"gguf",
"TensorBlock",
"GGUF",
"base_model:OpenRLHF/Llama-3-8b-sft-mixture",
"base_model:quantized:OpenRLHF/Llama-3-8b-sft-mixture",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-10T09:31:54" | ---
library_name: transformers
tags:
- TensorBlock
- GGUF
base_model: OpenRLHF/Llama-3-8b-sft-mixture
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/jC7kdl8.jpeg" alt="TensorBlock" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;">
Feedback and support: TensorBlock's <a href="https://x.com/tensorblock_aoi">Twitter/X</a>, <a href="https://t.me/TensorBlock">Telegram Group</a> and <a href="https://x.com/tensorblock_aoi">Discord server</a>
</p>
</div>
</div>
## OpenRLHF/Llama-3-8b-sft-mixture - GGUF
This repo contains GGUF format model files for [OpenRLHF/Llama-3-8b-sft-mixture](https://huggingface.co/OpenRLHF/Llama-3-8b-sft-mixture).
The files were quantized using machines provided by [TensorBlock](https://tensorblock.co/), and they are compatible with llama.cpp as of [commit b4011](https://github.com/ggerganov/llama.cpp/commit/a6744e43e80f4be6398fc7733a01642c846dce1d).
<div style="text-align: left; margin: 20px 0;">
<a href="https://tensorblock.co/waitlist/client" style="display: inline-block; padding: 10px 20px; background-color: #007bff; color: white; text-decoration: none; border-radius: 5px; font-weight: bold;">
Run them on the TensorBlock client using your local machine ↗
</a>
</div>
## Prompt template
```
<|start_header_id|>system<|end_header_id|>
{system_prompt}<|eot_id|>
<|start_header_id|>user<|end_header_id|>
{prompt}<|eot_id|>
<|start_header_id|>assistant<|end_header_id|>
```
## Model file specification
| Filename | Quant type | File Size | Description |
| -------- | ---------- | --------- | ----------- |
| [Llama-3-8b-sft-mixture-Q2_K.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q2_K.gguf) | Q2_K | 2.961 GB | smallest, significant quality loss - not recommended for most purposes |
| [Llama-3-8b-sft-mixture-Q3_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q3_K_S.gguf) | Q3_K_S | 3.413 GB | very small, high quality loss |
| [Llama-3-8b-sft-mixture-Q3_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q3_K_M.gguf) | Q3_K_M | 3.743 GB | very small, high quality loss |
| [Llama-3-8b-sft-mixture-Q3_K_L.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q3_K_L.gguf) | Q3_K_L | 4.025 GB | small, substantial quality loss |
| [Llama-3-8b-sft-mixture-Q4_0.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q4_0.gguf) | Q4_0 | 4.341 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [Llama-3-8b-sft-mixture-Q4_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q4_K_S.gguf) | Q4_K_S | 4.370 GB | small, greater quality loss |
| [Llama-3-8b-sft-mixture-Q4_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q4_K_M.gguf) | Q4_K_M | 4.583 GB | medium, balanced quality - recommended |
| [Llama-3-8b-sft-mixture-Q5_0.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q5_0.gguf) | Q5_0 | 5.215 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [Llama-3-8b-sft-mixture-Q5_K_S.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q5_K_S.gguf) | Q5_K_S | 5.215 GB | large, low quality loss - recommended |
| [Llama-3-8b-sft-mixture-Q5_K_M.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q5_K_M.gguf) | Q5_K_M | 5.339 GB | large, very low quality loss - recommended |
| [Llama-3-8b-sft-mixture-Q6_K.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q6_K.gguf) | Q6_K | 6.143 GB | very large, extremely low quality loss |
| [Llama-3-8b-sft-mixture-Q8_0.gguf](https://huggingface.co/tensorblock/Llama-3-8b-sft-mixture-GGUF/blob/main/Llama-3-8b-sft-mixture-Q8_0.gguf) | Q8_0 | 7.954 GB | very large, extremely low quality loss - not recommended |
## Downloading instruction
### Command line
First, install the Hugging Face Hub CLI client:
```shell
pip install -U "huggingface_hub[cli]"
```
Then, download the individual model file to a local directory:
```shell
huggingface-cli download tensorblock/Llama-3-8b-sft-mixture-GGUF --include "Llama-3-8b-sft-mixture-Q2_K.gguf" --local-dir MY_LOCAL_DIR
```
If you want to download multiple model files matching a pattern (e.g., `*Q4_K*gguf`), you can try:
```shell
huggingface-cli download tensorblock/Llama-3-8b-sft-mixture-GGUF --local-dir MY_LOCAL_DIR --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
|
bitsanlp/roberta-finetuned-DA-task-B-100k | bitsanlp | "2022-12-26T02:17:52" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2022-12-26T02:07:59" | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-finetuned-DA-task-B-100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-DA-task-B-100k
This model is a fine-tuned version of [bitsanlp/roberta-retrained-100k](https://huggingface.co/bitsanlp/roberta-retrained-100k) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
isspek/roberta-base_zika_gpt4o_5_2e-5_16_undersampling_0.1 | isspek | "2024-12-07T23:31:45" | 164 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2024-12-07T23:31:26" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
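In the meantime, a minimal quickstart sketch follows. The repo id comes from this card; the example input and the label set are assumptions, since neither is documented here.
```python
from transformers import pipeline

# Hypothetical quickstart for this RoBERTa-base classifier (the repo name suggests Zika-related text).
classifier = pipeline(
    "text-classification",
    model="isspek/roberta-base_zika_gpt4o_5_2e-5_16_undersampling_0.1",
)
print(classifier("Zika virus can be transmitted by mosquito bites."))
```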
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gaudi/opus-mt-en-umb-ctranslate2 | gaudi | "2024-10-19T00:34:13" | 9 | 0 | transformers | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | "2024-07-22T15:41:50" | ---
tags:
- ctranslate2
- translation
license: apache-2.0
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): [Model Link](https://huggingface.co/Helsinki-NLP/opus-mt-en-umb)
- This repository was based on the work of [CTranslate2](https://github.com/OpenNMT/CTranslate2).
- This repository was based on the work of [michaelfeil](https://huggingface.co/michaelfeil).
# What is CTranslate2?
[CTranslate2](https://opennmt.net/CTranslate2/) is a C++ and Python library for efficient inference with Transformer models.
CTranslate2 implements a custom runtime that applies many performance optimization techniques such as weights quantization, layers fusion, batch reordering, etc., to accelerate and reduce the memory usage of Transformer models on CPU and GPU.
CTranslate2 is one of the most performant ways of hosting translation models at scale. Currently supported models include:
- Encoder-decoder models: Transformer base/big, M2M-100, NLLB, BART, mBART, Pegasus, T5, Whisper
- Decoder-only models: GPT-2, GPT-J, GPT-NeoX, OPT, BLOOM, MPT, Llama, Mistral, Gemma, CodeGen, GPTBigCode, Falcon
- Encoder-only models: BERT, DistilBERT, XLM-RoBERTa
The project is production-oriented and comes with backward compatibility guarantees, but it also includes experimental features related to model compression and inference acceleration.
# CTranslate2 Benchmarks
Please note that the results presented below are only valid for the configuration used during this benchmark: absolute and relative performance may change with different settings. The benchmarks were run against the `newstest2014` (En -> De) dataset.
The benchmark reports the number of target tokens generated per second (higher is better). The results are aggregated over multiple runs; see the benchmark scripts for more details and to reproduce these numbers.
## CPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 147.3 | 2332MB | 27.90 |
| Marian 1.11.0 (int16) | 330.2 | 5901MB | 27.65 |
| Marian 1.11.0 (int8) | 355.8 | 4763MB | 27.27 |
| CTranslate2 3.6.0 (int16) | 596.1 | 660MB | 27.53 |
| CTranslate2 3.6.0 (int8) | 696.1 | 516MB | 27.65 |
## GPU Benchmarks for Generic Opus-MT Models
| Library | Tokens per Second | Max GPU Memory Usage | Max Memory Usage | BLEU |
| :----: | :----: | :----: | :----: | :----: |
| Transformers 4.26.1 (with PyTorch 1.13.1) | 1022.9 | 4097MB | 2109MB | 27.90 |
| Marian 1.11.0 (float16) | 3962.4 | 3239MB | 1976MB | 27.94 |
| CTranslate2 3.6.0 (float16) | 9296.7 | 909MB | 814MB | 27.9 |
| CTranslate2 3.6.0 (int8 + float16) | 8362.7 | 813MB | 766MB | 27.9 |
`The CPU benchmarks were executed with 4 threads on a c5.2xlarge Amazon EC2 instance equipped with an Intel(R) Xeon(R) Platinum 8275CL CPU.`
**Source to benchmark information can be found [here](https://github.com/OpenNMT/CTranslate2).**<br />
**Original model BLEU scores can be found [here](https://huggingface.co/Helsinki-NLP/opus-mt-en-umb).**
## Internal Benchmarks
Internal testing on our end showed **inference times reduced by 6x-10x** on average compared to the vanilla checkpoints using the *transformers* library. A **slight reduction in BLEU scores (~5%)** was also identified in comparison to the vanilla checkpoints, with a few exceptions. This is likely due to several factors, one being the quantization applied. Further testing is needed on our end to better assess the reduction in translation quality. The command used to compile the vanilla checkpoint into a CTranslate2 model can be found below; modifying this command can yield a different balance between inference performance and translation quality.
# CTranslate2 Installation
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
### ct2-transformers-converter Command Used:
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-en-umb --output_dir ./ctranslate2/opus-mt-en-umb-ctranslate2 --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
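For reference, the same conversion can also be driven from Python through CTranslate2's converter API. The sketch below mirrors the CLI call above (model id, output directory, and float16 quantization); it is an illustration of the converter API, not the exact command that produced this repository:
```python
from ctranslate2.converters import TransformersConverter

# Convert the original Transformers checkpoint into a CTranslate2 model,
# quantizing the weights to float16 as in the CLI command above.
converter = TransformersConverter("Helsinki-NLP/opus-mt-en-umb")
converter.convert(
    output_dir="./ctranslate2/opus-mt-en-umb-ctranslate2",
    quantization="float16",
    force=True,  # Overwrite the output directory if it already exists.
)
```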
# CTranslate2 Converted Checkpoint Information:
**Compatible With:**
- [ctranslate2](https://github.com/OpenNMT/CTranslate2)
- [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
**Compute Type:**
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
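A minimal sketch of selecting these two settings at runtime; `get_cuda_device_count` is a standard CTranslate2 call used here to probe for a GPU:
```python
import ctranslate2

# Pick the device and the matching compute type recommended above:
# int8_float16 on CUDA, plain int8 on CPU.
device = "cuda" if ctranslate2.get_cuda_device_count() > 0 else "cpu"
compute_type = "int8_float16" if device == "cuda" else "int8"

translator = ctranslate2.Translator(
    "./opus-mt-en-umb-ctranslate2",  # Path to the converted model directory.
    device=device,
    compute_type=compute_type,
)
```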
# Sample Code - ctranslate2
#### Clone the repository to the working directory or wherever you wish to store the model artifacts. ####
```bash
git clone https://huggingface.co/gaudi/opus-mt-en-umb-ctranslate2
```
#### Take the Python code below and update the `model_dir` variable to point to the location of the cloned repository. ####
```python
from ctranslate2 import Translator
import transformers
model_dir = "./opus-mt-en-umb-ctranslate2" # Path to model directory.
translator = Translator(
model_path=model_dir,
device="cuda", # cpu, cuda, or auto.
inter_threads=1, # Maximum number of parallel translations.
intra_threads=4, # Number of OpenMP threads per translator.
compute_type="int8_float16", # int8 for cpu or int8_float16 for cuda.
)
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)
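# Placeholder input: replace the X's with the English text you want to translate.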
source = tokenizer.convert_ids_to_tokens(tokenizer.encode("XXXXXX, XXX XX XXXXXX."))
results = translator.translate_batch([source])
target = results[0].hypotheses[0]
print(tokenizer.decode(tokenizer.convert_tokens_to_ids(target)))
```
# Sample Code - hf-hub-ctranslate2
**Derived From [michaelfeil](https://huggingface.co/michaelfeil):**
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "gaudi/opus-mt-en-umb-ctranslate2"
model = TranslatorCT2fromHfHub(
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained(model_name)
)
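# Placeholder inputs: replace the X's with the English sentences to translate.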
outputs = model.generate(
text=["XXX XX XXX XXXXXXX XXXX?", "XX XX XXXX XX XXX!"],
)
print(outputs)
```
# License and other remarks:
License conditions are intended to be identical to those of the [original Hugging Face repository](https://huggingface.co/Helsinki-NLP/opus-mt-en-umb) by Helsinki-NLP.
|
# Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.

This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details

## Uses

There are a number of potential uses for this dataset, including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
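A minimal sketch of loading the dataset for any of these uses with the `datasets` library; the Hub id below is an assumption, so substitute this dataset's actual repository id:
```python
from datasets import load_dataset

# Hypothetical Hub id; replace with this dataset's actual repository id.
dataset = load_dataset("librarian-bots/model_cards_with_metadata", split="train")

# Each row pairs the raw model card text with Hub metadata for one model.
print(dataset[0]["modelId"])
print(dataset[0]["card"][:500])  # First 500 characters of the card body.
```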
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation

### Curation Rationale

The dataset was created to assist people in working with model cards. In particular, it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards, and this option may be preferable if you have a very specific use case or require a different format.
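For example, a single card can be fetched directly with the `huggingface_hub` client library instead of going through this dataset (a minimal sketch; any public model id works):
```python
from huggingface_hub import ModelCard

# Download and parse one model card straight from the Hub.
card = ModelCard.load("Helsinki-NLP/opus-mt-en-umb")

print(card.data)        # YAML metadata block (tags, license, ...).
print(card.text[:500])  # Markdown body of the card.
```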
### Source Data

The source data is the `README.md` files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing

The data is downloaded using a cron job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository, although this information can be gathered from the Hugging Face Hub API.
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Model cards are created by the community, and we do not have any control over their content. We do not review the content of the model cards, and we make no claims about the accuracy of the information they contain. Some model cards will themselves discuss bias, sometimes by providing examples of bias in either the training data or the responses provided by the model. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation

No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.
## Dataset Card Authors
## Dataset Card Contact