--- base_model: Alibaba-NLP/gte-large-en-v1.5 library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@10 - cosine_ndcg@10 - cosine_mrr@10 - cosine_map@100 - dot_accuracy@1 - dot_accuracy@3 - dot_accuracy@5 - dot_accuracy@10 - dot_precision@1 - dot_precision@3 - dot_precision@5 - dot_precision@10 - dot_recall@1 - dot_recall@3 - dot_recall@5 - dot_recall@10 - dot_ndcg@10 - dot_mrr@10 - dot_map@100 pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:700 - loss:MatryoshkaLoss - loss:MultipleNegativesRankingLoss widget: - source_sentence: What are the expectations for automated systems in relation to data privacy? sentences: - 'https://beta.nsf.gov/funding/opportunities/designing-accountable-software-systems-dass 28. The Leadership Conference Education Fund. The Use Of Pretrial “Risk Assessment” Instruments: A Shared Statement Of Civil Rights Concerns. Jul. 30, 2018. http://civilrightsdocs.info/pdf/criminal-justice/ Pretrial-Risk-Assessment-Short.pdf; https://civilrights.org/edfund/pretrial-risk-assessments/' - "DATA PRIVACY \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations\ \ for automated systems are meant to serve as a blueprint for the development\ \ of additional \ntechnical standards and practices that are tailored for particular\ \ sectors and contexts. ­­­­­­\nIn addition to the privacy expectations above\ \ for general non-sensitive data, any system collecting, using, shar-" - "standing that it may be these users who are most likely to need the human assistance.\ \ Similarly, it should be \ntested to ensure that users with disabilities are\ \ able to find and use human consideration and fallback and also \nrequest reasonable\ \ accommodations or modifications. \nConvenient. Mechanisms for human consideration\ \ and fallback should not be unreasonably burdensome as \ncompared to the automated\ \ system’s equivalent. \n49" - source_sentence: What is the purpose of the U.S. AI Safety Institute and the AI Safety Institute Consortium established by NIST? sentences: - "AI. NIST established the U.S. AI Safety Institute and the companion AI Safety\ \ Institute Consortium to \ncontinue the efforts set in motion by the E.O. to build\ \ the science necessary for safe, secure, and \ntrustworthy development and use\ \ of AI. \nAcknowledgments: This report was accomplished with the many helpful\ \ comments and contributions" - "SAFE AND EFFECTIVE \nSYSTEMS \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\n\ The expectations for automated systems are meant to serve as a blueprint for the\ \ development of additional \ntechnical standards and practices that are tailored\ \ for particular sectors and contexts. \nOngoing monitoring. 
Automated systems\ \ should have ongoing monitoring procedures, including recalibra­" - "differ from an explanation provided to allow for the possibility of recourse,\ \ an appeal, or one provided in the \ncontext of a dispute or contestation process.\ \ For the purposes of this framework, 'explanation' should be \nconstrued broadly.\ \ An explanation need not be a plain-language statement about causality but could\ \ consist of \nany mechanism that allows the recipient to build the necessary\ \ understanding and intuitions to achieve the" - source_sentence: What are the consequences faced by individuals when they are unable to reach a human decision-maker in automated systems? sentences: - 'ENDNOTES 85. Mick Dumke and Frank Main. A look inside the watch list Chicago police fought to keep secret. The Chicago Sun Times. May 18, 2017. https://chicago.suntimes.com/2017/5/18/18386116/a-look-inside-the-watch-list-chicago-police-fought­ to-keep-secret' - "presented with no alternative, or are forced to endure a cumbersome process to\ \ reach a human decision-maker once \nthey decide they no longer want to deal\ \ exclusively with the automated system or be impacted by its results. As a result\ \ \nof this lack of human reconsideration, many receive delayed access, or lose\ \ access, to rights, opportunities, benefits, \nand critical services. The American\ \ public deserves the assurance that, when rights, opportunities, or access are" - "compliance in mind. \nSome state legislatures have placed strong transparency\ \ and validity requirements on \nthe use of pretrial risk assessments. The use\ \ of algorithmic pretrial risk assessments has been a \ncause of concern for civil\ \ rights groups.28 Idaho Code Section 19-1910, enacted in 2019,29 requires that\ \ any \npretrial risk assessment, before use in the state, first be \"shown to\ \ be free of bias against any class of" - source_sentence: What organizations are mentioned in the appendix alongside individuals such as Lisa Feldman Barrett and Madeline Owens? sentences: - "APPENDIX\nLisa Feldman Barrett \nMadeline Owens \nMarsha Tudor \nMicrosoft Corporation\ \ \nMITRE Corporation \nNational Association for the \nAdvancement of Colored\ \ People \nLegal Defense and Educational \nFund \nNational Association of Criminal\ \ \nDefense Lawyers \nNational Center for Missing & \nExploited Children \nNational\ \ Fair Housing Alliance \nNational Immigration Law Center \nNEC Corporation of\ \ America" - "or label to ensure the goal of the automated system is appropriately identified\ \ and measured. Additionally, \njustification should be documented for each data\ \ attribute and source to explain why it is appropriate to use \nthat data to\ \ inform the results of the automated system and why such use will not violate\ \ any applicable laws. \nIn cases of high-dimensional and/or derived attributes,\ \ such justifications can be provided as overall \ndescriptions of the attribute\ \ generation process and appropriateness. \n19" - "ers and other experts across fields and sectors, as well as policymakers throughout\ \ the Federal government—on \nthe issue of algorithmic and data-driven harms and\ \ potential remedies. 
Through panel discussions, public listen-\ning sessions,\ \ meetings, a formal request for information, and input to a publicly accessible\ \ and widely-publicized \nemail address, people throughout the United States,\ \ public servants across Federal agencies, and members of the" - source_sentence: What should individuals or organizations provide to ensure that people impacted by an automated system are informed about significant changes in use cases or key functionalities? sentences: - "with an intent or reasonably foreseeable possibility of endangering \nyour safety\ \ or the safety of your community. They should be designed \nto proactively protect\ \ you from harms stemming from unintended, \nyet foreseeable, uses or impacts\ \ of automated systems. You should be \nprotected from inappropriate or irrelevant\ \ data use in the design, de­\nvelopment, and deployment of automated systems,\ \ and from the \ncompounded harm of its reuse. Independent evaluation and report­" - "use, the individual or organization responsible for the system, and ex­\nplanations\ \ of outcomes that are clear, timely, and accessible. Such \nnotice should be\ \ kept up-to-date and people impacted by the system \nshould be notified of significant\ \ use case or key functionality chang­\nes. You should know how and why an outcome\ \ impacting you was de­\ntermined by an automated system, including when the automated" - 'software-algorithms-and-artificial-intelligence; U.S. Department of Justice. Algorithms, Artificial Intelligence, and Disability Discrimination in Hiring. May 12, 2022. https://beta.ada.gov/resources/ai­ guidance/ 54. Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. Dissecting racial bias in' model-index: - name: SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5 results: - task: type: information-retrieval name: Information Retrieval dataset: name: Unknown type: unknown metrics: - type: cosine_accuracy@1 value: 0.8666666666666667 name: Cosine Accuracy@1 - type: cosine_accuracy@3 value: 0.9866666666666667 name: Cosine Accuracy@3 - type: cosine_accuracy@5 value: 1.0 name: Cosine Accuracy@5 - type: cosine_accuracy@10 value: 1.0 name: Cosine Accuracy@10 - type: cosine_precision@1 value: 0.8666666666666667 name: Cosine Precision@1 - type: cosine_precision@3 value: 0.3288888888888888 name: Cosine Precision@3 - type: cosine_precision@5 value: 0.19999999999999996 name: Cosine Precision@5 - type: cosine_precision@10 value: 0.09999999999999998 name: Cosine Precision@10 - type: cosine_recall@1 value: 0.8666666666666667 name: Cosine Recall@1 - type: cosine_recall@3 value: 0.9866666666666667 name: Cosine Recall@3 - type: cosine_recall@5 value: 1.0 name: Cosine Recall@5 - type: cosine_recall@10 value: 1.0 name: Cosine Recall@10 - type: cosine_ndcg@10 value: 0.9481205912028868 name: Cosine Ndcg@10 - type: cosine_mrr@10 value: 0.93 name: Cosine Mrr@10 - type: cosine_map@100 value: 0.93 name: Cosine Map@100 - type: dot_accuracy@1 value: 0.8666666666666667 name: Dot Accuracy@1 - type: dot_accuracy@3 value: 1.0 name: Dot Accuracy@3 - type: dot_accuracy@5 value: 1.0 name: Dot Accuracy@5 - type: dot_accuracy@10 value: 1.0 name: Dot Accuracy@10 - type: dot_precision@1 value: 0.8666666666666667 name: Dot Precision@1 - type: dot_precision@3 value: 0.33333333333333326 name: Dot Precision@3 - type: dot_precision@5 value: 0.19999999999999996 name: Dot Precision@5 - type: dot_precision@10 value: 0.09999999999999998 name: Dot Precision@10 - type: dot_recall@1 value: 0.8666666666666667 name: 
Dot Recall@1 - type: dot_recall@3 value: 1.0 name: Dot Recall@3 - type: dot_recall@5 value: 1.0 name: Dot Recall@5 - type: dot_recall@10 value: 1.0 name: Dot Recall@10 - type: dot_ndcg@10 value: 0.9490449037619082 name: Dot Ndcg@10 - type: dot_mrr@10 value: 0.9311111111111112 name: Dot Mrr@10 - type: dot_map@100 value: 0.931111111111111 name: Dot Map@100 --- # SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more. ## Model Details ### Model Description - **Model Type:** Sentence Transformer - **Base model:** [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) - **Maximum Sequence Length:** 8192 tokens - **Output Dimensionality:** 1024 tokens - **Similarity Function:** Cosine Similarity - **Training Dataset:** - json ### Model Sources - **Documentation:** [Sentence Transformers Documentation](https://sbert.net) - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers) - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers) ### Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True}) ) ``` ## Usage ### Direct Usage (Sentence Transformers) First install the Sentence Transformers library: ```bash pip install -U sentence-transformers ``` Then you can load this model and run inference. ```python from sentence_transformers import SentenceTransformer # Download from the 🤗 Hub model = SentenceTransformer("sentence_transformers_model_id") # Run inference sentences = [ 'What should individuals or organizations provide to ensure that people impacted by an automated system are informed about significant changes in use cases or key functionalities?', 'use, the individual or organization responsible for the system, and ex\xad\nplanations of outcomes that are clear, timely, and accessible. Such \nnotice should be kept up-to-date and people impacted by the system \nshould be notified of significant use case or key functionality chang\xad\nes. You should know how and why an outcome impacting you was de\xad\ntermined by an automated system, including when the automated', 'software-algorithms-and-artificial-intelligence; U.S. Department of Justice. Algorithms, Artificial\nIntelligence, and Disability Discrimination in Hiring. May 12, 2022. https://beta.ada.gov/resources/ai\xad\nguidance/\n54. Ziad Obermeyer, Brian Powers, Christine Vogeli, and Sendhil Mullainathan. 
Dissecting racial bias in', ] embeddings = model.encode(sentences) print(embeddings.shape) # [3, 1024] # Get the similarity scores for the embeddings similarities = model.similarity(embeddings, embeddings) print(similarities.shape) # [3, 3] ``` ## Evaluation ### Metrics #### Information Retrieval * Evaluated with [InformationRetrievalEvaluator](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator) | Metric | Value | |:--------------------|:---------| | cosine_accuracy@1 | 0.8667 | | cosine_accuracy@3 | 0.9867 | | cosine_accuracy@5 | 1.0 | | cosine_accuracy@10 | 1.0 | | cosine_precision@1 | 0.8667 | | cosine_precision@3 | 0.3289 | | cosine_precision@5 | 0.2 | | cosine_precision@10 | 0.1 | | cosine_recall@1 | 0.8667 | | cosine_recall@3 | 0.9867 | | cosine_recall@5 | 1.0 | | cosine_recall@10 | 1.0 | | cosine_ndcg@10 | 0.9481 | | cosine_mrr@10 | 0.93 | | **cosine_map@100** | **0.93** | | dot_accuracy@1 | 0.8667 | | dot_accuracy@3 | 1.0 | | dot_accuracy@5 | 1.0 | | dot_accuracy@10 | 1.0 | | dot_precision@1 | 0.8667 | | dot_precision@3 | 0.3333 | | dot_precision@5 | 0.2 | | dot_precision@10 | 0.1 | | dot_recall@1 | 0.8667 | | dot_recall@3 | 1.0 | | dot_recall@5 | 1.0 | | dot_recall@10 | 1.0 | | dot_ndcg@10 | 0.949 | | dot_mrr@10 | 0.9311 | | dot_map@100 | 0.9311 | ## Training Details ### Training Dataset #### json * Dataset: json * Size: 700 training samples * Columns: anchor and positive * Approximate statistics based on the first 700 samples: | | anchor | positive | |:--------|:-----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------| | type | string | string | | details | | | * Samples: | anchor | positive | |:-------------------------------------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | What is the primary purpose of the AI Bill of Rights outlined in the October 2022 blueprint? | BLUEPRINT FOR AN
AI BILL OF<br>RIGHTS<br>MAKING AUTOMATED<br>SYSTEMS WORK FOR<br>THE AMERICAN PEOPLE<br>OCTOBER 2022 |
| What is the purpose of the Blueprint for an AI Bill of Rights published by the White House Office of Science and Technology Policy? | About this Document<br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was<br>published by the White House Office of Science and Technology Policy in October 2022. This framework was<br>released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered |
| What initiative did the OSTP announce a year prior to the release of the framework for a bill of rights for an AI-powered world? | released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered<br>world.” Its release follows a year of public engagement to inform this initiative. The framework is available<br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights<br>About the Office of Science and Technology Policy<br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology |
* Loss: [MatryoshkaLoss](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          1024,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: epoch
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `gradient_accumulation_steps`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 7
- `lr_scheduler_type`: cosine
- `warmup_ratio`: 0.1
- `bf16`: True
- `tf32`: True
- `load_best_model_at_end`: True
- `optim`: adamw_torch_fused
- `batch_sampler`: no_duplicates

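The loss configuration and the non-default hyperparameters above can be reproduced with the sentence-transformers v3 trainer API. The following is a minimal, hypothetical sketch rather than the exact training script: the data file name, JSON field layout, and output directory are assumptions, and the per-epoch evaluation wiring is omitted for brevity.

```python
from datasets import load_dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss
from sentence_transformers.training_args import BatchSamplers

# Hypothetical file name: the card only states a "json" dataset of 700 (anchor, positive) pairs.
train_dataset = load_dataset("json", data_files="train.jsonl", split="train")

# gte-large-en-v1.5 uses a custom architecture (NewModel), so trust_remote_code is required.
model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)

# MultipleNegativesRankingLoss wrapped in MatryoshkaLoss, matching the configuration above.
inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[1024, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="gte-large-en-v1.5-finetuned",  # hypothetical output directory
    num_train_epochs=7,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=16,
    learning_rate=2e-5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    bf16=True,
    tf32=True,
    optim="adamw_torch_fused",
    batch_sampler=BatchSamplers.NO_DUPLICATES,
    # The card's eval_strategy="epoch" and load_best_model_at_end=True additionally
    # require an evaluation dataset or evaluator; they are omitted in this sketch.
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```
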
#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: epoch
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 32
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 16
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 7
- `max_steps`: -1
- `lr_scheduler_type`: cosine
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: True
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: True
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: True
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `eval_use_gather_object`: False
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional

</details>

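The per-epoch `cosine_map@100` values in the Training Logs below were produced by the `InformationRetrievalEvaluator` referenced in the Evaluation section. A minimal sketch of how such an evaluator is constructed, assuming a hypothetical held-out split (the ids and texts shown are placeholders, not the actual evaluation data):

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder held-out split: queries and corpus are {id: text} dicts,
# relevant_docs maps each query id to the set of relevant corpus ids.
queries = {"q1": "What are the expectations for automated systems in relation to data privacy?"}
corpus = {"d1": "DATA PRIVACY\nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS ..."}
relevant_docs = {"q1": {"d1"}}

model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id, as in the Usage section

ir_evaluator = InformationRetrievalEvaluator(
    queries=queries,
    corpus=corpus,
    relevant_docs=relevant_docs,
    name="dev",
)
metrics = ir_evaluator(model)
print(metrics)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10 and map@100 values
```
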
### Training Logs
| Epoch      | Step  | cosine_map@100 |
|:----------:|:-----:|:--------------:|
| 0.7273     | 1     | 0.8548         |
| 1.4545     | 2     | 0.8811         |
| 2.9091     | 4     | 0.9233         |
| **3.6364** | **5** | **0.9311**     |
| 4.3636     | 6     | 0.93           |
| 5.0909     | 7     | 0.93           |

* The bold row denotes the saved checkpoint.

### Framework Versions
- Python: 3.10.12
- Sentence Transformers: 3.1.1
- Transformers: 4.44.2
- PyTorch: 2.4.1+cu121
- Accelerate: 0.34.2
- Datasets: 3.0.1
- Tokenizers: 0.19.1

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```