drewgenai committed
Commit b32da47 · verified · 1 Parent(s): d2c16aa

Add new SentenceTransformer model
1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
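
This pooling configuration keeps only the [CLS] token embedding (all mean/max/last-token modes are disabled) and leaves the 1024-dimensional output untouched. As a rough illustration, not part of the committed files, the same module could be constructed programmatically with sentence-transformers; the values below are copied from this config:

```python
from sentence_transformers.models import Pooling

# CLS-token pooling over 1024-dim token embeddings, mirroring 1_Pooling/config.json.
pooling = Pooling(word_embedding_dimension=1024, pooling_mode="cls")

print(pooling.get_pooling_mode_str())              # "cls"
print(pooling.get_sentence_embedding_dimension())  # 1024
```
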
README.md ADDED
@@ -0,0 +1,672 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: How many tokens can Google's Gemini series accept?
  sentences:
  - 'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August
    through September) it was spectacular. I’ve been using it extensively on walks
    with my dog and it’s amazing how much the improvement in intonation elevates the
    material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.

    Even more fun: Advanced Voice mode can do accents! Here’s what happened when I
    told it I need you to pretend to be a California brown pelican with a very thick
    Russian accent, but you talk to me exclusively in Spanish.'
  - 'Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context
    lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable
    exception of Claude 2.1 which accepted 200,000. Today every serious provider has
    a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.'
  - 'The idea is seductive: as the internet floods with AI-generated slop the models
    themselves will degenerate, feeding on their own output in a way that leads to
    their inevitable demise!

    That’s clearly not happening. Instead, we are seeing AI labs increasingly train
    on synthetic content—deliberately creating artificial data to help steer their
    models in the right way.

    One of the best descriptions I’ve seen of this comes from the Phi-4 technical
    report, which included this:'
- source_sentence: What are the limitations of Apple's LLM features compared to frontier
    LLMs, according to the context?
  sentences:
  - 'These abilities are just a few weeks old at this point, and I don’t think their
    impact has been fully felt yet. If you haven’t tried them out yet you really should.

    Both Gemini and OpenAI offer API access to these features as well. OpenAI started
    with a WebSocket API that was quite challenging to use, but in December they announced
    a new WebRTC API which is much easier to get started with. Building a web app
    that a user can talk to via voice is easy now!

    Prompt driven app generation is a commodity already

    This was possible with GPT-4 in 2023, but the value it provides became evident
    in 2024.'
  - 'Now that those features are rolling out they’re pretty weak. As an LLM power-user
    I know what these models are capable of, and Apple’s LLM features offer a pale
    imitation of what a frontier LLM can do. Instead we’re getting notification summaries
    that misrepresent news headlines and writing assistant tools that I’ve not found
    useful at all. Genmoji are kind of fun though.

    The rise of inference-scaling “reasoning” models

    The most interesting development in the final quarter of 2024 was the introduction
    of a new shape of LLM, exemplified by OpenAI’s o1 models—initially released as
    o1-preview and o1-mini on September 12th.'
  - 'Here’s the sequel to this post: Things we learned about LLMs in 2024.

    Large Language Models

    In the past 24-36 months, our species has discovered that you can take a GIANT
    corpus of text, run it through a pile of GPUs, and use it to create a fascinating
    new kind of software.

    LLMs can do a lot of things. They can answer questions, summarize documents, translate
    from one language to another, extract information and even write surprisingly
    competent code.

    They can also help you cheat at your homework, generate unlimited streams of fake
    content and be used for all manner of nefarious purposes.'
- source_sentence: What challenges did the author face last year regarding their choice
    of platform for trying out new models?
  sentences:
  - 'One way to think about these models is an extension of the chain-of-thought prompting
    trick, first explored in the May 2022 paper Large Language Models are Zero-Shot
    Reasoners.

    This is that trick where, if you get a model to talk out loud about a problem
    it’s solving, you often get a result which the model would not have achieved otherwise.

    o1 takes this process and further bakes it into the model itself. The details
    are somewhat obfuscated: o1 models spend “reasoning tokens” thinking through the
    problem that are not directly visible to the user (though the ChatGPT UI shows
    a summary of them), then outputs a final result.'
  - 'I’m still trying to figure out the best patterns for doing this for my own work.
    Everyone knows that evals are important, but there remains a lack of great guidance
    for how to best implement them—I’m tracking this under my evals tag. My SVG pelican
    riding a bicycle benchmark is a pale imitation of what a real eval suite should
    look like.

    Apple Intelligence is bad, Apple’s MLX library is excellent

    As a Mac user I’ve been feeling a lot better about my choice of platform this
    year.

    Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU
    was a huge disadvantage in terms of trying out new models.'
  - 'January


    7th: It’s OK to call it Artificial Intelligence


    9th: What I should have said about the term Artificial Intelligence


    17th: Talking about Open Source LLMs on Oxide and Friends


    26th: LLM 0.13: The annotated release notes




    February


    21st: The killer app of Gemini Pro 1.5 is video




    March


    5th: Prompt injection and jailbreaking are not the same thing


    8th: The GPT-4 barrier has finally been broken


    22nd: Claude and ChatGPT for ad-hoc sidequests


    23rd: Building and testing C extensions for SQLite with ChatGPT Code Interpreter


    26th: llm cmd undo last git commit—a new plugin for LLM




    April


    8th: Building files-to-prompt entirely using Claude 3 Opus


    10th: Three major LLM releases in 24 hours (plus weeknotes)'
- source_sentence: What was the maximum token limit for most models last year before
    the introduction of Gemini 15 Pro?
  sentences:
  - 'The two main categories I see are people who think AI agents are obviously things
    that go and act on your behalf—the travel agent model—and people who think in
    terms of LLMs that have been given access to tools which they can run in a loop
    as part of solving a problem. The term “autonomy” is often thrown into the mix
    too, again without including a clear definition.

    (I also collected 211 definitions on Twitter a few months ago—here they are in
    Datasette Lite—and had gemini-exp-1206 attempt to summarize them.)

    Whatever the term may mean, agents still have that feeling of perpetually “coming
    soon”.'
  - Structured and Gradual Learning. In organic datasets, the relationship between
    tokens is often complex and indirect. Many reasoning steps may be required to
    connect the current token to the next, making it challenging for the model to
    learn effectively from next-token prediction. By contrast, each token generated
    by a language model is by definition predicted by the preceding tokens, making
    it easier for a model to follow the resulting reasoning patterns.
  - 'Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context
    lengths. Last year most models accepted 4,096 or 8,192 tokens, with the notable
    exception of Claude 2.1 which accepted 200,000. Today every serious provider has
    a 100,000+ token model, and Google’s Gemini series accepts up to 2 million.'
- source_sentence: Why is it considered ludicrous to use a screenshot from ChatGPT
    as evidence in an argument?
  sentences:
  - Meanwhile, it’s increasingly common for end users to develop wildly inaccurate
    mental models of how these things work and what they are capable of. I’ve seen
    so many examples of people trying to win an argument with a screenshot from ChatGPT—an
    inherently ludicrous proposition, given the inherent unreliability of these models
    crossed with the fact that you can get them to say anything if you prompt them
    right.
  - 'The GPT-4 barrier was comprehensively broken

    Some of those GPT-4 models run on my laptop

    LLM prices crashed, thanks to competition and increased efficiency

    Multimodal vision is common, audio and video are starting to emerge

    Voice and live camera mode are science fiction come to life

    Prompt driven app generation is a commodity already

    Universal access to the best models lasted for just a few short months

    “Agents” still haven’t really happened yet

    Evals really matter

    Apple Intelligence is bad, Apple’s MLX library is excellent

    The rise of inference-scaling “reasoning” models

    Was the best currently available LLM trained in China for less than $6m?

    The environmental impact got better

    The environmental impact got much, much worse'
  - 'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August
    through September) it was spectacular. I’ve been using it extensively on walks
    with my dog and it’s amazing how much the improvement in intonation elevates the
    material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.

    Even more fun: Advanced Voice mode can do accents! Here’s what happened when I
    told it I need you to pretend to be a California brown pelican with a very thick
    Russian accent, but you talk to me exclusively in Spanish.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.8333333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9583333333333334
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8333333333333334
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3194444444444444
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.8333333333333334
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.9583333333333334
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9301444091161569
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.90625
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.90625
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```
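
The stack above is a Transformer backbone, CLS-token pooling, then L2 normalization. For readers who prefer plain `transformers`, here is a rough equivalent sketch of that pipeline (not part of the original card; the example sentences are arbitrary, and the sentence-transformers usage below remains the recommended path):

```python
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

model_id = "drewgenai/legal-ft-v0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

texts = ["LLM prices crashed in 2024.", "Context lengths grew dramatically."]
batch = tokenizer(texts, padding=True, truncation=True, max_length=512, return_tensors="pt")

with torch.no_grad():
    token_embeddings = model(**batch).last_hidden_state  # [batch, seq_len, 1024]

# (1) Pooling: take the [CLS] token, as in 1_Pooling/config.json
# (2) Normalize: L2-normalize so dot product equals cosine similarity
embeddings = F.normalize(token_embeddings[:, 0], p=2, dim=1)
print(embeddings.shape)  # torch.Size([2, 1024])
```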

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("drewgenai/legal-ft-v0")
# Run inference
sentences = [
    'Why is it considered ludicrous to use a screenshot from ChatGPT as evidence in an argument?',
    'Meanwhile, it’s increasingly common for end users to develop wildly inaccurate mental models of how these things work and what they are capable of. I’ve seen so many examples of people trying to win an argument with a screenshot from ChatGPT—an inherently ludicrous proposition, given the inherent unreliability of these models crossed with the fact that you can get them to say anything if you prompt them right.',
    'When ChatGPT Advanced Voice mode finally did roll out (a slow roll from August through September) it was spectacular. I’ve been using it extensively on walks with my dog and it’s amazing how much the improvement in intonation elevates the material. I’ve also had a lot of fun experimenting with the OpenAI audio APIs.\nEven more fun: Advanced Voice mode can do accents! Here’s what happened when I told it I need you to pretend to be a California brown pelican with a very thick Russian accent, but you talk to me exclusively in Spanish.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
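
Because training used MatryoshkaLoss (dimensions 768/512/256/128/64, see the training details below), the embeddings can plausibly be truncated to a smaller size at load time. A hedged sketch using the `truncate_dim` argument; expect some quality trade-off at the lower dimensions:

```python
from sentence_transformers import SentenceTransformer

# 256 is one of the Matryoshka dimensions this model was trained with.
model_256 = SentenceTransformer("drewgenai/legal-ft-v0", truncate_dim=256)

embeddings = model_256.encode([
    "How many tokens can Google's Gemini series accept?",
    "Gemini 1.5 Pro also illustrated one of the key themes of 2024: increased context lengths.",
])
print(embeddings.shape)
# (2, 256)
```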

<!--
### Direct Usage (Transformers)

<details><summary>Click to see the direct usage in Transformers</summary>

</details>
-->

<!--
### Downstream Usage (Sentence Transformers)

You can finetune this model on your own dataset.

<details><summary>Click to expand</summary>

</details>
-->

<!--
### Out-of-Scope Use

*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->

## Evaluation

### Metrics

#### Information Retrieval

* Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)

| Metric              | Value      |
|:--------------------|:-----------|
| cosine_accuracy@1   | 0.8333     |
| cosine_accuracy@3   | 0.9583     |
| cosine_accuracy@5   | 1.0        |
| cosine_accuracy@10  | 1.0        |
| cosine_precision@1  | 0.8333     |
| cosine_precision@3  | 0.3194     |
| cosine_precision@5  | 0.2        |
| cosine_precision@10 | 0.1        |
| cosine_recall@1     | 0.8333     |
| cosine_recall@3     | 0.9583     |
| cosine_recall@5     | 1.0        |
| cosine_recall@10    | 1.0        |
| **cosine_ndcg@10**  | **0.9301** |
| cosine_mrr@10       | 0.9062     |
| cosine_map@100      | 0.9062     |
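
These figures come from `InformationRetrievalEvaluator`. A minimal sketch of how a comparable evaluation could be run on a held-out split; the queries, corpus, and relevance judgments below are placeholders, not the data behind the table above:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

model = SentenceTransformer("drewgenai/legal-ft-v0")

# Placeholder evaluation data: id -> text, and query id -> set of relevant doc ids.
queries = {"q1": "How many tokens can Google's Gemini series accept?"}
corpus = {
    "d1": "Google's Gemini series accepts up to 2 million tokens.",
    "d2": "Advanced Voice mode can do accents.",
}
relevant_docs = {"q1": {"d1"}}

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="heldout")
results = evaluator(model)
print(results)  # cosine accuracy/precision/recall/NDCG/MRR/MAP at several cutoffs
```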

<!--
## Bias, Risks and Limitations

*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->

<!--
### Recommendations

*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->

## Training Details

### Training Dataset

#### Unnamed Dataset

* Size: 156 training samples
* Columns: <code>sentence_0</code> and <code>sentence_1</code>
* Approximate statistics based on the first 156 samples:
  |         | sentence_0                                                                          | sentence_1                                                                           |
  |:--------|:------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
  | type    | string                                                                              | string                                                                               |
  | details | <ul><li>min: 13 tokens</li><li>mean: 19.97 tokens</li><li>max: 33 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.5 tokens</li><li>max: 204 tokens</li></ul> |
* Samples:
  | sentence_0 | sentence_1 |
  |:--------------------------------------------------------------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
  | <code>What analogy is used to describe LLMs in the context provided?</code> | <code>A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.<br>If anything, this problem got worse in 2024.<br>We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! ... depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.</code> |
  | <code>What factors influence the effectiveness of LLMs according to the context?</code> | <code>A drum I’ve been banging for a while is that LLMs are power-user tools—they’re chainsaws disguised as kitchen knives. They look deceptively simple to use—how hard can it be to type messages to a chatbot?—but in reality you need a huge depth of both understanding and experience to make the most of them and avoid their many pitfalls.<br>If anything, this problem got worse in 2024.<br>We’ve built computer systems you can talk to in human language, that will answer your questions and usually get them right! ... depending on the question, and how you ask it, and whether it’s accurately reflected in the undocumented and secret training set.</code> |
  | <code>What is the significance of Claude Artifacts in the context of LLMs and application development?</code> | <code>We already knew LLMs were spookily good at writing code. If you prompt them right, it turns out they can build you a full interactive application using HTML, CSS and JavaScript (and tools like React if you wire up some extra supporting build mechanisms)—often in a single prompt.<br>Anthropic kicked this idea into high gear when they released Claude Artifacts, a groundbreaking new feature that was initially slightly lost in the noise due to being described half way through their announcement of the incredible Claude 3.5 Sonnet.<br>With Artifacts, Claude can write you an on-demand interactive application and then let you use it directly inside the Claude interface.<br>Here’s my Extract URLs app, entirely generated by Claude:</code> |
* Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
  ```json
  {
      "loss": "MultipleNegativesRankingLoss",
      "matryoshka_dims": [
          768,
          512,
          256,
          128,
          64
      ],
      "matryoshka_weights": [
          1,
          1,
          1,
          1,
          1
      ],
      "n_dims_per_step": -1
  }
  ```
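
A sketch of how this configuration maps onto the Sentence Transformers loss API: `MultipleNegativesRankingLoss` (in-batch negatives over the `(sentence_0, sentence_1)` pairs) wrapped in `MatryoshkaLoss` so the same objective is applied at each truncated embedding size with equal weight. Variable names are illustrative only:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.losses import MatryoshkaLoss, MultipleNegativesRankingLoss

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-l")

base_loss = MultipleNegativesRankingLoss(model)  # in-batch negatives ranking
loss = MatryoshkaLoss(
    model,
    loss=base_loss,
    matryoshka_dims=[768, 512, 256, 128, 64],
    matryoshka_weights=[1, 1, 1, 1, 1],
    n_dims_per_step=-1,  # train on all listed dimensions at every step
)
```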

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 5
- `multi_dataset_batch_sampler`: round_robin
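
A rough sketch of how these non-default values would be passed to the trainer, reusing the `model` and `loss` objects from the sketch above; the output path and the single training pair are placeholders (the real dataset has 156 pairs), and `eval_strategy="steps"` additionally expects an eval dataset or evaluator:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformerTrainer, SentenceTransformerTrainingArguments

# Placeholder pair; the real dataset has 156 (sentence_0, sentence_1) rows.
train_dataset = Dataset.from_dict({
    "sentence_0": ["How many tokens can Google's Gemini series accept?"],
    "sentence_1": ["Google's Gemini series accepts up to 2 million tokens."],
})

args = SentenceTransformerTrainingArguments(
    output_dir="legal-ft-v0",  # placeholder output path
    eval_strategy="steps",
    per_device_train_batch_size=10,
    per_device_eval_batch_size=10,
    num_train_epochs=5,
    multi_dataset_batch_sampler="round_robin",
)

trainer = SentenceTransformerTrainer(
    model=model,                  # the SentenceTransformer being finetuned
    args=args,
    train_dataset=train_dataset,
    loss=loss,                    # MatryoshkaLoss from the previous sketch
)
trainer.train()
```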

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 5
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`: 
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 16   | 0.9177         |
| 2.0   | 32   | 0.9330         |
| 3.0   | 48   | 0.9301         |
| 3.125 | 50   | 0.9301         |
| 4.0   | 64   | 0.9301         |
| 5.0   | 80   | 0.9301         |


### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.1
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,25 @@
{
  "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
  "architectures": [
    "BertModel"
  ],
  "attention_probs_dropout_prob": 0.1,
  "classifier_dropout": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 16,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "position_embedding_type": "absolute",
  "torch_dtype": "float32",
  "transformers_version": "4.48.3",
  "type_vocab_size": 2,
  "use_cache": true,
  "vocab_size": 30522
}
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
{
  "__version__": {
    "sentence_transformers": "3.4.1",
    "transformers": "4.48.3",
    "pytorch": "2.5.1+cu124"
  },
  "prompts": {
    "query": "Represent this sentence for searching relevant passages: "
  },
  "default_prompt_name": null,
  "similarity_fn_name": "cosine"
}
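
The `prompts` entry means search queries are expected to carry the arctic-embed query prefix ("Represent this sentence for searching relevant passages: ") while passages are encoded as-is; `default_prompt_name` is null, so the prefix is only applied when requested. A small sketch of selecting that prompt by name:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("drewgenai/legal-ft-v0")

# "query" refers to the prompt defined in config_sentence_transformers.json;
# the prefix is prepended to the text before tokenization.
query_embedding = model.encode(
    "What was the maximum token limit for most models last year?",
    prompt_name="query",
)

# Passages are encoded without any prompt.
passage_embedding = model.encode("Last year most models accepted 4,096 or 8,192 tokens.")

print(model.similarity(query_embedding, passage_embedding))
```
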
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:361699627246943e814b6be96ab59277ff72f39ed1455ce4c97a4dcc4d551307
size 1336413848
modules.json ADDED
@@ -0,0 +1,20 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  },
  {
    "idx": 2,
    "name": "2",
    "path": "2_Normalize",
    "type": "sentence_transformers.models.Normalize"
  }
]
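
modules.json tells the loader which module classes to instantiate and where their configs live: the Transformer backbone at the repository root, the CLS pooling module in `1_Pooling/`, and a final normalization step in `2_Normalize`. A hedged sketch of assembling the same pipeline by hand:

```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.models import Normalize, Pooling, Transformer

# The same three-stage pipeline that modules.json describes, built explicitly.
transformer = Transformer("Snowflake/snowflake-arctic-embed-l", max_seq_length=512)
pooling = Pooling(transformer.get_word_embedding_dimension(), pooling_mode="cls")
normalize = Normalize()

model = SentenceTransformer(modules=[transformer, pooling, normalize])
print(model)  # should mirror the "Full Model Architecture" shown in the README
```
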
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
{
  "cls_token": {
    "content": "[CLS]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "mask_token": {
    "content": "[MASK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "pad_token": {
    "content": "[PAD]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "sep_token": {
    "content": "[SEP]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "[UNK]",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
{
  "added_tokens_decoder": {
    "0": {
      "content": "[PAD]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "100": {
      "content": "[UNK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "101": {
      "content": "[CLS]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "102": {
      "content": "[SEP]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    },
    "103": {
      "content": "[MASK]",
      "lstrip": false,
      "normalized": false,
      "rstrip": false,
      "single_word": false,
      "special": true
    }
  },
  "clean_up_tokenization_spaces": true,
  "cls_token": "[CLS]",
  "do_lower_case": true,
  "extra_special_tokens": {},
  "mask_token": "[MASK]",
  "max_length": 512,
  "model_max_length": 512,
  "pad_to_multiple_of": null,
  "pad_token": "[PAD]",
  "pad_token_type_id": 0,
  "padding_side": "right",
  "sep_token": "[SEP]",
  "stride": 0,
  "strip_accents": null,
  "tokenize_chinese_chars": true,
  "tokenizer_class": "BertTokenizer",
  "truncation_side": "right",
  "truncation_strategy": "longest_first",
  "unk_token": "[UNK]"
}
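
The tokenizer is a standard uncased `BertTokenizer` with `model_max_length` 512, matching the model's maximum sequence length; longer inputs are truncated. A brief sketch (the repeated sentence is simply a way to exceed the limit):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("drewgenai/legal-ft-v0")

long_text = "Gemini 1.5 Pro also illustrated one of the key themes of 2024. " * 100
encoded = tokenizer(long_text, truncation=True, max_length=512)

print(tokenizer.model_max_length)  # 512
print(len(encoded["input_ids"]))   # 512 (truncated)
```
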
vocab.txt ADDED
The diff for this file is too large to render. See raw diff