---
tags:
  - sentence-transformers
  - sentence-similarity
  - feature-extraction
  - generated_from_trainer
  - dataset_size:156
  - loss:MatryoshkaLoss
  - loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
  - source_sentence: How many tokens can Google's Gemini series accept?
    sentences:
      - >-
        When ChatGPT Advanced Voice mode finally did roll out (a slow roll from
        August through September) it was spectacular. I’ve been using it
        extensively on walks with my dog and it’s amazing how much the
        improvement in intonation elevates the material. I’ve also had a lot of
        fun experimenting with the OpenAI audio APIs.

        Even more fun: Advanced Voice mode can do accents! Here’s what happened
        when I told it I need you to pretend to be a California brown pelican
        with a very thick Russian accent, but you talk to me exclusively in
        Spanish.
      - >-
        Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased
        context lengths. Last year most models accepted 4,096 or 8,192 tokens,
        with the notable exception of Claude 2.1 which accepted 200,000. Today
        every serious provider has a 100,000+ token model, and Google’s Gemini
        series accepts up to 2 million.
      - >-
        The idea is seductive: as the internet floods with AI-generated slop the
        models themselves will degenerate, feeding on their own output in a way
        that leads to their inevitable demise!

        That’s clearly not happening. Instead, we are seeing AI labs
        increasingly train on synthetic content—deliberately creating artificial
        data to help steer their models in the right way.

        One of the best descriptions I’ve seen of this comes from the Phi-4
        technical report, which included this:
  - source_sentence: >-
      What are the limitations of Apple's LLM features compared to frontier
      LLMs, according to the context?
    sentences:
      - >-
        These abilities are just a few weeks old at this point, and I don’t
        think their impact has been fully felt yet. If you haven’t tried them
        out yet you really should.

        Both Gemini and OpenAI offer API access to these features as well.
        OpenAI started with a WebSocket API that was quite challenging to use,
        but in December they announced a new WebRTC API which is much easier to
        get started with. Building a web app that a user can talk to via voice
        is easy now!

        Prompt driven app generation is a commodity already

        This was possible with GPT-4 in 2023, but the value it provides became
        evident in 2024.
      - >-
        Now that those features are rolling out they’re pretty weak. As an LLM
        power-user I know what these models are capable of, and Apple’s LLM
        features offer a pale imitation of what a frontier LLM can do. Instead
        we’re getting notification summaries that misrepresent news headlines
        and writing assistant tools that I’ve not found useful at all. Genmoji
        are kind of fun though.

        The rise of inference-scaling “reasoning” models

        The most interesting development in the final quarter of 2024 was the
        introduction of a new shape of LLM, exemplified by OpenAI’s o1
        models—initially released as o1-preview and o1-mini on September 12th.
      - >-
        Here’s the sequel to this post: Things we learned about LLMs in 2024.

        Large Language Models

        In the past 24-36 months, our species has discovered that you can take a
        GIANT corpus of text, run it through a pile of GPUs, and use it to
        create a fascinating new kind of software.

        LLMs can do a lot of things. They can answer questions, summarize
        documents, translate from one language to another, extract information
        and even write surprisingly competent code.

        They can also help you cheat at your homework, generate unlimited
        streams of fake content and be used for all manner of nefarious
        purposes.
  - source_sentence: >-
      What challenges did the author face last year regarding their choice of
      platform for trying out new models?
    sentences:
      - >-
        One way to think about these models is an extension of the
        chain-of-thought prompting trick, first explored in the May 2022 paper
        Large Language Models are Zero-Shot Reasoners.

        This is that trick where, if you get a model to talk out loud about a
        problem it’s solving, you often get a result which the model would not
        have achieved otherwise.

        o1 takes this process and further bakes it into the model itself. The
        details are somewhat obfuscated: o1 models spend “reasoning tokens”
        thinking through the problem that are not directly visible to the user
        (though the ChatGPT UI shows a summary of them), then outputs a final
        result.
      - >-
        I’m still trying to figure out the best patterns for doing this for my
        own work. Everyone knows that evals are important, but there remains a
        lack of great guidance for how to best implement them—I’m tracking this
        under my evals tag. My SVG pelican riding a bicycle benchmark is a pale
        imitation of what a real eval suite should look like.

        Apple Intelligence is bad, Apple’s MLX library is excellent

        As a Mac user I’ve been feeling a lot better about my choice of platform
        this year.

        Last year it felt like my lack of a Linux/Windows machine with an
        NVIDIA GPU was a huge disadvantage in terms of trying out new models.
      - >-
        January


        7th: It’s OK to call it Artificial Intelligence


        9th: What I should have said about the term Artificial Intelligence


        17th: Talking about Open Source LLMs on Oxide and Friends


        26th: LLM 0.13: The annotated release notes




        February


        21st: The killer app of Gemini Pro 1.5 is video




        March


        5th: Prompt injection and jailbreaking are not the same thing


        8th: The GPT-4 barrier has finally been broken


        22nd: Claude and ChatGPT for ad-hoc sidequests


        23rd: Building and testing C extensions for SQLite with ChatGPT Code
        Interpreter


        26th: llm cmd undo last git commit—a new plugin for LLM




        April


        8th: Building files-to-prompt entirely using Claude 3 Opus


        10th: Three major LLM releases in 24 hours (plus weeknotes)
  - source_sentence: >-
      What was the maximum token limit for most models last year before the
      introduction of Gemini 1.5 Pro?
    sentences:
      - >-
        The two main categories I see are people who think AI agents are
        obviously things that go and act on your behalf—the travel agent
        model—and people who think in terms of LLMs that have been given access
        to tools which they can run in a loop as part of solving a problem. The
        term “autonomy” is often thrown into the mix too, again without
        including a clear definition.

        (I also collected 211 definitions on Twitter a few months ago—here they
        are in Datasette Lite—and had gemini-exp-1206 attempt to summarize
        them.)

        Whatever the term may mean, agents still have that feeling of
        perpetually “coming soon”.
      - >-
        Structured and Gradual Learning. In organic datasets, the relationship
        between tokens is often complex and indirect. Many reasoning steps may
        be required to connect the current token to the next, making it
        challenging for the model to learn effectively from next-token
        prediction. By contrast, each token generated by a language model is by
        definition predicted by the preceding tokens, making it easier for a
        model to follow the resulting reasoning patterns.
      - >-
        Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased
        context lengths. Last year most models accepted 4,096 or 8,192 tokens,
        with the notable exception of Claude 2.1 which accepted 200,000. Today
        every serious provider has a 100,000+ token model, and Google’s Gemini
        series accepts up to 2 million.
  - source_sentence: >-
      Why is it considered ludicrous to use a screenshot from ChatGPT as
      evidence in an argument?
    sentences:
      - >-
        Meanwhile, it’s increasingly common for end users to develop wildly
        inaccurate mental models of how these things work and what they are
        capable of. I’ve seen so many examples of people trying to win an
        argument with a screenshot from ChatGPT—an inherently ludicrous
        proposition, given the inherent unreliability of these models crossed
        with the fact that you can get them to say anything if you prompt them
        right.
      - |-
        The GPT-4 barrier was comprehensively broken
        Some of those GPT-4 models run on my laptop
        LLM prices crashed, thanks to competition and increased efficiency
        Multimodal vision is common, audio and video are starting to emerge
        Voice and live camera mode are science fiction come to life
        Prompt driven app generation is a commodity already
        Universal access to the best models lasted for just a few short months
        “Agents” still haven’t really happened yet
        Evals really matter
        Apple Intelligence is bad, Apple’s MLX library is excellent
        The rise of inference-scaling “reasoning” models
        Was the best currently available LLM trained in China for less than $6m?
        The environmental impact got better
        The environmental impact got much, much worse
      - >-
        When ChatGPT Advanced Voice mode finally did roll out (a slow roll from
        August through September) it was spectacular. I’ve been using it
        extensively on walks with my dog and it’s amazing how much the
        improvement in intonation elevates the material. I’ve also had a lot of
        fun experimenting with the OpenAI audio APIs.

        Even more fun: Advanced Voice mode can do accents! Here’s what happened
        when I told it I need you to pretend to be a California brown pelican
        with a very thick Russian accent, but you talk to me exclusively in
        Spanish.
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
  - cosine_accuracy@1
  - cosine_accuracy@3
  - cosine_accuracy@5
  - cosine_accuracy@10
  - cosine_precision@1
  - cosine_precision@3
  - cosine_precision@5
  - cosine_precision@10
  - cosine_recall@1
  - cosine_recall@3
  - cosine_recall@5
  - cosine_recall@10
  - cosine_ndcg@10
  - cosine_mrr@10
  - cosine_map@100
model-index:
  - name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
    results:
      - task:
          type: information-retrieval
          name: Information Retrieval
        dataset:
          name: Unknown
          type: unknown
        metrics:
          - type: cosine_accuracy@1
            value: 0.8333333333333334
            name: Cosine Accuracy@1
          - type: cosine_accuracy@3
            value: 0.9583333333333334
            name: Cosine Accuracy@3
          - type: cosine_accuracy@5
            value: 1
            name: Cosine Accuracy@5
          - type: cosine_accuracy@10
            value: 1
            name: Cosine Accuracy@10
          - type: cosine_precision@1
            value: 0.8333333333333334
            name: Cosine Precision@1
          - type: cosine_precision@3
            value: 0.3194444444444444
            name: Cosine Precision@3
          - type: cosine_precision@5
            value: 0.20000000000000004
            name: Cosine Precision@5
          - type: cosine_precision@10
            value: 0.10000000000000002
            name: Cosine Precision@10
          - type: cosine_recall@1
            value: 0.8333333333333334
            name: Cosine Recall@1
          - type: cosine_recall@3
            value: 0.9583333333333334
            name: Cosine Recall@3
          - type: cosine_recall@5
            value: 1
            name: Cosine Recall@5
          - type: cosine_recall@10
            value: 1
            name: Cosine Recall@10
          - type: cosine_ndcg@10
            value: 0.9301444091161569
            name: Cosine Ndcg@10
          - type: cosine_mrr@10
            value: 0.90625
            name: Cosine Mrr@10
          - type: cosine_map@100
            value: 0.90625
            name: Cosine Map@100

---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.sbert.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description

- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l)
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
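
The stack above is simple: a BERT encoder, CLS-token pooling, and L2 normalization. As a rough illustrative sketch (not the supported loading path, which is shown under Usage below), the same pipeline can be approximated with the `transformers` library directly:

```python
# Sketch of the Transformer -> Pooling(cls) -> Normalize() pipeline above,
# using the underlying BertModel directly. Prefer SentenceTransformer for real use.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("drewgenai/legal-ft-v0")
encoder = AutoModel.from_pretrained("drewgenai/legal-ft-v0")  # a BertModel

batch = tokenizer(
    ["example sentence"],
    padding=True, truncation=True, max_length=512, return_tensors="pt",
)
with torch.no_grad():
    token_embeddings = encoder(**batch).last_hidden_state  # (batch, seq_len, 1024)

cls_embedding = token_embeddings[:, 0]  # pooling_mode_cls_token: keep the [CLS] vector
embedding = torch.nn.functional.normalize(cls_embedding, dim=1)  # the Normalize() step
print(embedding.shape)  # torch.Size([1, 1024])
```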

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("drewgenai/legal-ft-v0")
# Run inference
sentences = [
    'Why is it considered ludicrous to use a screenshot from ChatGPT as evidence in an argument?',
    'Meanwhile, it’s increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I’ve seen so many examples of people trying to win an argument with a screenshot from ChatGPT—an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right.',
    'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.\nEven more fun: Advanced Voice mode can do accents! Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
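
Since semantic search is among the intended uses, here is a short follow-on sketch that ranks a tiny corpus against a query with the same `encode`/`similarity` API; the query and documents below are made-up placeholders, not part of the training data:

```python
# Hypothetical semantic-search example, reusing the `model` loaded above.
query = "How long are modern model context windows?"
corpus = [
    "Google’s Gemini series accepts up to 2 million tokens.",
    "Advanced Voice mode can do accents.",
    "Claude Artifacts lets Claude build on-demand interactive applications.",
]

query_embedding = model.encode([query])
corpus_embeddings = model.encode(corpus)

# similarity() applies cosine similarity, matching the model's similarity function.
scores = model.similarity(query_embedding, corpus_embeddings)[0]
best = int(scores.argmax())
print(corpus[best], float(scores[best]))
```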

## Evaluation

### Metrics

#### Information Retrieval

| Metric              | Value  |
|:--------------------|:-------|
| cosine_accuracy@1   | 0.8333 |
| cosine_accuracy@3   | 0.9583 |
| cosine_accuracy@5   | 1.0    |
| cosine_accuracy@10  | 1.0    |
| cosine_precision@1  | 0.8333 |
| cosine_precision@3  | 0.3194 |
| cosine_precision@5  | 0.2    |
| cosine_precision@10 | 0.1    |
| cosine_recall@1     | 0.8333 |
| cosine_recall@3     | 0.9583 |
| cosine_recall@5     | 1.0    |
| cosine_recall@10    | 1.0    |
| cosine_ndcg@10      | 0.9301 |
| cosine_mrr@10       | 0.9062 |
| cosine_map@100      | 0.9062 |
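
These figures come from an information-retrieval evaluation with Sentence Transformers' `InformationRetrievalEvaluator`. The held-out split behind the card's numbers is not published, so the data in this minimal reproduction sketch is a hypothetical stand-in:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("drewgenai/legal-ft-v0")

# Hypothetical stand-in data: ids -> text, plus the relevant corpus ids per query.
queries = {"q1": "How many tokens can Google's Gemini series accept?"}
corpus = {
    "d1": "Google’s Gemini series accepts up to 2 million tokens.",
    "d2": "Advanced Voice mode can do accents.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs)
results = evaluator(model)
print(results["cosine_ndcg@10"])
```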

## Training Details

### Training Dataset

#### Unnamed Dataset

- Size: 156 training samples
- Columns: <code>sentence_0</code> and <code>sentence_1</code>
- Approximate statistics based on the first 156 samples:
  |         | sentence_0 | sentence_1 |
  |:--------|:-----------|:-----------|
  | type    | string     | string     |
  | details | min: 13 tokens<br>mean: 19.97 tokens<br>max: 33 tokens | min: 43 tokens<br>mean: 130.5 tokens<br>max: 204 tokens |
- Samples:
  | sentence_0 | sentence_1 |
  |:-----------|:-----------|
  | <code>What analogy is used to describe LLMs in the context provided?</code> | <code>A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.<br>If anything, this problem got worse in 2024.<br>We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! ... depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.</code> |
  | <code>What factors influence the effectiveness of LLMs according to the context?</code> | <code>A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.<br>If anything, this problem got worse in 2024.<br>We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! ... depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.</code> |
  | <code>What is the significance of Claude Artifacts in the context of LLMs and application development?</code> | <code>We already knew LLMs were spookily good at writing code. If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms)—often in a single prompt.<br>Anthropic kicked this idea into high gear when they released Claude Artifacts, a groundbreaking new feature that was initially slightly lost in the noise due to being described half way through their announcement of the incredible Claude 3.5 Sonnet.<br>With Artifacts, Claude can write you an on-demand interactive application and then let you use it directly inside the Claude interface.<br>Here’s my Extract URLs app, entirely generated by Claude:</code> |
- Loss: <code>MatryoshkaLoss</code> with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
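
Training with MatryoshkaLoss means the leading dimensions of each embedding are optimized to work on their own at the listed sizes (note the configured dims top out at 768, below the model's full 1024-dimensional output). A sketch of loading the model truncated to one of those sizes via the standard `truncate_dim` option:

```python
from sentence_transformers import SentenceTransformer

# 256 is one of the Matryoshka dimensions listed above; smaller embeddings
# trade some accuracy for memory and speed.
model_256 = SentenceTransformer("drewgenai/legal-ft-v0", truncate_dim=256)
embeddings = model_256.encode(["example sentence"])
print(embeddings.shape)  # (1, 256)
```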

### Training Hyperparameters

#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
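
The exact training script is not included in this repository, but a comparable run can be wired up from these non-default values together with the loss configuration above. In the sketch below, the training pairs and output directory are placeholders:

```python
from datasets import Dataset
from sentence_transformers import (
    SentenceTransformer,
    SentenceTransformerTrainer,
    SentenceTransformerTrainingArguments,
)
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

# Placeholder pairs; the real run used 156 (sentence_0, sentence_1) samples.
train_dataset = Dataset.from_dict({
    "sentence_0": ["What analogy is used to describe LLMs in the context provided?"],
    "sentence_1": ["LLMs are chainsaws disguised as kitchen knives."],
})

inner_loss = MultipleNegativesRankingLoss(model)
loss = MatryoshkaLoss(model, inner_loss, matryoshka_dims=[768, 512, 256, 128, 64])

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0",  # placeholder
    num_train_epochs=5,
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    eval_strategy="steps",
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=train_dataset,  # placeholder; the real run used a held-out split
    loss=loss,
)
trainer.train()
```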

#### All Hyperparameters

<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs

| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 16   | 0.9177         |
| 2.0   | 32   | 0.9330         |
| 3.0   | 48   | 0.9301         |
| 3.125 | 50   | 0.9301         |
| 4.0   | 64   | 0.9301         |
| 5.0   | 80   | 0.9301         |

### Framework Versions

- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers

```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss

```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss

```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```