SentenceTransformer based on marrodion/minilm-l12-v2-simple
This is a sentence-transformers model finetuned from marrodion/minilm-l12-v2-simple. It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
Model Details
Model Description
- Model Type: Sentence Transformer
- Base model: marrodion/minilm-l12-v2-simple
- Maximum Sequence Length: 128 tokens
- Output Dimensionality: 384 dimensions
- Similarity Function: Cosine Similarity
Model Sources
- Documentation: Sentence Transformers Documentation
- Repository: Sentence Transformers on GitHub
- Hugging Face: Sentence Transformers on Hugging Face
Full Model Architecture
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
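Because the final Normalize() module makes every embedding unit-length, the dot product of two embeddings equals their cosine similarity (which is why the pearson_dot and pearson_cosine scores reported below coincide). A minimal sketch to verify this, using only the public encode API:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("marrodion/minilm-l12-v2-toxic-ft")
emb = model.encode(["first sentence", "second sentence"])
# The Normalize() module makes each row unit-length ...
print(np.linalg.norm(emb, axis=1))  # approximately [1. 1.]
# ... so the dot product equals the cosine similarity.
print(float(emb[0] @ emb[1]))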
Usage
Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
pip install -U sentence-transformers
Then you can load this model and run inference.
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("marrodion/minilm-l12-v2-toxic-ft")
# Run inference
sentences = [
'eradicate the gays',
'Anyone who supports ISIS terrorism globally should be eradicated.',
'This movie was filled with garbage Zionist propaganda.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 384]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
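As a small illustrative follow-up (continuing from the snippet above, not part of the generated card), the similarity matrix can be used to pick the most similar pair of inputs:

import numpy as np

# Mask out the diagonal (self-similarity) and locate the highest remaining score.
sim = similarities.numpy()
np.fill_diagonal(sim, -1.0)
i, j = np.unravel_index(sim.argmax(), sim.shape)
print(f"{sentences[i]!r} <-> {sentences[j]!r} (score={sim[i, j]:.3f})")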
Evaluation
Metrics
Semantic Similarity
- Dataset: hatespeech-sampled-dev
- Evaluated with EmbeddingSimilarityEvaluator
Metric | Value |
---|---|
pearson_cosine | 0.5825 |
spearman_cosine | 0.4527 |
pearson_manhattan | 0.5684 |
spearman_manhattan | 0.4501 |
pearson_euclidean | 0.5700 |
spearman_euclidean | 0.4527 |
pearson_dot | 0.5825 |
spearman_dot | 0.4527 |
pearson_max | 0.5825 |
spearman_max | 0.4527 |
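These figures were produced by EmbeddingSimilarityEvaluator. A sketch of how they could be reproduced, assuming the dev split is available locally with sentence1/sentence2/score columns (the CSV file name here is hypothetical; the original split is not published):

from datasets import load_dataset
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("marrodion/minilm-l12-v2-toxic-ft")
# Hypothetical local copy of the dev split.
dev = load_dataset("csv", data_files="hatespeech_sampled_dev.csv")["train"]
evaluator = EmbeddingSimilarityEvaluator(
    sentences1=dev["sentence1"],
    sentences2=dev["sentence2"],
    scores=dev["score"],
    name="hatespeech-sampled-dev",
)
# Returns Pearson/Spearman correlations for cosine and the other distances.
print(evaluator(model))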
Training Details
Training Dataset
Unnamed Dataset
- Size: 33,827 training samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

| | sentence1 | sentence2 | score |
|---|---|---|---|
| type | string | string | float |
| details | min: 4 tokens<br>mean: 35.5 tokens<br>max: 128 tokens | min: 4 tokens<br>mean: 34.35 tokens<br>max: 128 tokens | min: 0.0<br>mean: 0.82<br>max: 1.0 |
- Samples:

| sentence1 | sentence2 | score |
|---|---|---|
| Don't waste good pussy on a trash ass nigga | Stfu obviously trans people aren't retarded | 0.7777777777777778 |
| The more I read about dumb violent things do, like beating up two women bc they wouldn't kiss for them.. the gayer I become #Pride2019 | Nuke them all | 0.5555555555555556 |
| Forget her cunt and ass and fuck her face till she pukes! | @QTAnon1 Hope she fucks her all the way back to Somalia.. | 1.0 |
- Loss: `CosineSimilarityLoss` with these parameters: `{"loss_fct": "torch.nn.modules.loss.MSELoss"}`
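In other words, CosineSimilarityLoss with an MSELoss loss_fct regresses the cosine similarity of the two sentence embeddings onto the gold score. A minimal sketch of the computation on a toy batch (not the library internals verbatim):

import torch
import torch.nn.functional as F

# Toy batch: embeddings for sentence1 and sentence2, plus gold scores in [0, 1].
emb1, emb2 = torch.randn(8, 384), torch.randn(8, 384)
gold = torch.rand(8)
# MSE between the predicted cosine similarity and the gold similarity score.
loss = F.mse_loss(F.cosine_similarity(emb1, emb2), gold)
print(loss)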
Evaluation Dataset
Unnamed Dataset
- Size: 1,781 evaluation samples
- Columns: `sentence1`, `sentence2`, and `score`
- Approximate statistics based on the first 1000 samples:

| | sentence1 | sentence2 | score |
|---|---|---|---|
| type | string | string | float |
| details | min: 4 tokens<br>mean: 35.77 tokens<br>max: 128 tokens | min: 4 tokens<br>mean: 36.67 tokens<br>max: 128 tokens | min: 0.11<br>mean: 0.83<br>max: 1.0 |
- Samples:

| sentence1 | sentence2 | score |
|---|---|---|
| So for Trump and his inner circle to be crying over law and order, they sure know how to follow the system huh? Yet nothing about his ways of following the law were legal. So, how is his asylum rules were ever legal if children were sick and forced to live in unconditioned areas is American or right? | We need every illegal alien gang member in prison. Then deported. | 1.0 |
| hey girl are you my dick cause im about to beat the shit out of you | Reading all these hot takes from Americans, they all just seem like utter dogs. A human mind should not be this deranged, yet this is the norm in America | 0.8888888888888888 |
| Its amazing what a failure women are in politics. Its almost like its not in their nature to be powerful. No matter how much they try, poor dears.... | Tbh that cut makes him look like neonazi. He looks like he's about to lecture me about how white people are the most discriminates against racially after he called me the n word with that hard r. | 0.8888888888888888 |
- Loss: `CosineSimilarityLoss` with these parameters: `{"loss_fct": "torch.nn.modules.loss.MSELoss"}`
Training Hyperparameters
Non-Default Hyperparameters
- eval_strategy: steps
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 64
- num_train_epochs: 1
- warmup_ratio: 0.1
- load_best_model_at_end: True
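With the Sentence Transformers 3.x trainer API, the non-default settings above map onto training code roughly as sketched below (a reconstruction under stated assumptions, not the author's exact script; train_ds and eval_ds are assumed to be datasets.Dataset objects with sentence1, sentence2, and score columns):

from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
    losses,
)

model = SentenceTransformer("marrodion/minilm-l12-v2-simple")
args = SentenceTransformerTrainingArguments(
    output_dir="minilm-l12-v2-toxic-ft",
    eval_strategy="steps",
    per_device_train_batch_size=32,
    per_device_eval_batch_size=64,
    num_train_epochs=1,
    warmup_ratio=0.1,
    load_best_model_at_end=True,
)
trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,  # assumed: columns sentence1, sentence2, score
    eval_dataset=eval_ds,
    loss=losses.CosineSimilarityLoss(model),
)
trainer.train()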
All Hyperparameters
- overwrite_output_dir: False
- do_predict: False
- eval_strategy: steps
- prediction_loss_only: True
- per_device_train_batch_size: 32
- per_device_eval_batch_size: 64
- per_gpu_train_batch_size: None
- per_gpu_eval_batch_size: None
- gradient_accumulation_steps: 1
- eval_accumulation_steps: None
- learning_rate: 5e-05
- weight_decay: 0.0
- adam_beta1: 0.9
- adam_beta2: 0.999
- adam_epsilon: 1e-08
- max_grad_norm: 1.0
- num_train_epochs: 1
- max_steps: -1
- lr_scheduler_type: linear
- lr_scheduler_kwargs: {}
- warmup_ratio: 0.1
- warmup_steps: 0
- log_level: passive
- log_level_replica: warning
- log_on_each_node: True
- logging_nan_inf_filter: True
- save_safetensors: True
- save_on_each_node: False
- save_only_model: False
- restore_callback_states_from_checkpoint: False
- no_cuda: False
- use_cpu: False
- use_mps_device: False
- seed: 42
- data_seed: None
- jit_mode_eval: False
- use_ipex: False
- bf16: False
- fp16: False
- fp16_opt_level: O1
- half_precision_backend: auto
- bf16_full_eval: False
- fp16_full_eval: False
- tf32: None
- local_rank: 0
- ddp_backend: None
- tpu_num_cores: None
- tpu_metrics_debug: False
- debug: []
- dataloader_drop_last: False
- dataloader_num_workers: 0
- dataloader_prefetch_factor: None
- past_index: -1
- disable_tqdm: False
- remove_unused_columns: True
- label_names: None
- load_best_model_at_end: True
- ignore_data_skip: False
- fsdp: []
- fsdp_min_num_params: 0
- fsdp_config: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- fsdp_transformer_layer_cls_to_wrap: None
- accelerator_config: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- deepspeed: None
- label_smoothing_factor: 0.0
- optim: adamw_torch
- optim_args: None
- adafactor: False
- group_by_length: False
- length_column_name: length
- ddp_find_unused_parameters: None
- ddp_bucket_cap_mb: None
- ddp_broadcast_buffers: False
- dataloader_pin_memory: True
- dataloader_persistent_workers: False
- skip_memory_metrics: True
- use_legacy_prediction_loop: False
- push_to_hub: False
- resume_from_checkpoint: None
- hub_model_id: None
- hub_strategy: every_save
- hub_private_repo: False
- hub_always_push: False
- gradient_checkpointing: False
- gradient_checkpointing_kwargs: None
- include_inputs_for_metrics: False
- eval_do_concat_batches: True
- fp16_backend: auto
- push_to_hub_model_id: None
- push_to_hub_organization: None
- mp_parameters:
- auto_find_batch_size: False
- full_determinism: False
- torchdynamo: None
- ray_scope: last
- ddp_timeout: 1800
- torch_compile: False
- torch_compile_backend: None
- torch_compile_mode: None
- dispatch_batches: None
- split_batches: None
- include_tokens_per_second: False
- include_num_input_tokens_seen: False
- neftune_noise_alpha: None
- optim_target_modules: None
- batch_eval_metrics: False
- batch_sampler: batch_sampler
- multi_dataset_batch_sampler: proportional
Training Logs
Epoch | Step | Training Loss | loss | hatespeech-sampled-dev_spearman_cosine |
---|---|---|---|---|
0.2836 | 300 | 0.0503 | 0.0139 | 0.4258 |
0.5671 | 600 | 0.0143 | 0.0135 | 0.4418 |
**0.8507** | **900** | **0.0134** | **0.0131** | **0.4527** |

- The bold row denotes the saved checkpoint.
Framework Versions
- Python: 3.10.14
- Sentence Transformers: 3.0.0
- Transformers: 4.41.1
- PyTorch: 2.3.0
- Accelerate: 0.30.1
- Datasets: 2.19.1
- Tokenizers: 0.19.1
Citation
BibTeX
Sentence Transformers
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
Model tree for marrodion/minilm-l12-v2-toxic-ft
- Base model: sentence-transformers/all-MiniLM-L12-v2
- Finetuned from: marrodion/minilm-l12-v2-simple