johnbrumett-uofu committed (verified)
Commit 0bd63f6 · 1 parent: d4a70c8

Add new SentenceTransformer model

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
{
  "word_embedding_dimension": 1024,
  "pooling_mode_cls_token": true,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": false,
  "pooling_mode_lasttoken": false,
  "include_prompt": true
}
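This config enables CLS-token pooling only: the sentence embedding is the vector of the first (CLS) token rather than an average over all tokens. A minimal plain-Python sketch of what these flags mean (illustrative only; the `pool` helper is not part of sentence-transformers, which does this on tensors internally):

```python
def pool(token_embeddings, mode="cls"):
    """Collapse a [num_tokens x dim] list of token vectors into one sentence vector.

    mode="cls"  -> take the first (CLS) token, as this config specifies.
    mode="mean" -> average over tokens (disabled in this config).
    """
    if mode == "cls":
        return token_embeddings[0]
    if mode == "mean":
        dim = len(token_embeddings[0])
        n = len(token_embeddings)
        return [sum(tok[i] for tok in token_embeddings) / n for i in range(dim)]
    raise ValueError(f"unsupported mode: {mode}")

# Toy example: 3 tokens, 4 dimensions (the real model uses 1024 dimensions).
tokens = [[1.0, 0.0, 0.0, 0.0],
          [0.0, 1.0, 0.0, 0.0],
          [0.0, 0.0, 1.0, 0.0]]
print(pool(tokens, "cls"))   # [1.0, 0.0, 0.0, 0.0]
print(pool(tokens, "mean"))  # per-dimension average over the 3 tokens
```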
README.md ADDED
@@ -0,0 +1,706 @@
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:156
- loss:MatryoshkaLoss
- loss:MultipleNegativesRankingLoss
base_model: Snowflake/snowflake-arctic-embed-l
widget:
- source_sentence: 2. What new shape of LLM was introduced in the final quarter of
    2024, and what were the names of the initial models released?
  sentences:
  - '17th: AI for Data Journalism: demonstrating what we can do with this stuff right now

    22nd: Options for accessing Llama 3 from the terminal using LLM

    May

    8th: Slop is the new name for unwanted AI-generated content

    15th: ChatGPT in “4o” mode is not running the new features yet

    29th: Training is not the same as chatting: ChatGPT and other LLMs don’t remember everything you say

    June

    6th: Accidental prompt injection against RAG applications

    10th: Thoughts on the WWDC 2024 keynote on Apple Intelligence

    17th: Language models on the command-line

    21st: Building search-based RAG using Claude, Datasette and Val Town

    27th: Open challenges for AI engineering

    July

    14th: Imitation Intelligence, my keynote for PyCon US 2024'
  - 'Now that those features are rolling out they’re pretty weak. As an LLM power-user I know what these models are capable of, and Apple’s LLM features offer a pale imitation of what a frontier LLM can do. Instead we’re getting notification summaries that misrepresent news headlines and writing assistant tools that I’ve not found useful at all. Genmoji are kind of fun though.

    The rise of inference-scaling “reasoning” models

    The most interesting development in the final quarter of 2024 was the introduction of a new shape of LLM, exemplified by OpenAI’s o1 models—initially released as o1-preview and o1-mini on September 12th.'
  - 'I’m still trying to figure out the best patterns for doing this for my own work. Everyone knows that evals are important, but there remains a lack of great guidance for how to best implement them—I’m tracking this under my evals tag. My SVG pelican riding a bicycle benchmark is a pale imitation of what a real eval suite should look like.

    Apple Intelligence is bad, Apple’s MLX library is excellent

    As a Mac user I’ve been feeling a lot better about my choice of platform this year.

    Last year it felt like my lack of a Linux/Windows machine with an NVIDIA GPU was a huge disadvantage in terms of trying out new models.'
- source_sentence: 2. In what year does the author expect the prompt-driven custom
    interface feature to be widely integrated into products?
  sentences:
  - 'The models may have got more capable, but most of the limitations remained the same. OpenAI’s o1 may finally be able to (mostly) count the Rs in strawberry, but its abilities are still limited by its nature as an LLM and the constraints placed on it by the harness it’s running in. o1 can’t run web searches or use Code Interpreter, but GPT-4o can—both in that same ChatGPT UI. (o1 will pretend to do those things if you ask it to, a regression to the URL hallucinations bug from early 2023).

    What are we doing about this? Not much. Most users are thrown in at the deep end. The default LLM chat UI is like taking brand new computer users, dropping them into a Linux terminal and expecting them to figure it all out.'
  - 'This prompt-driven custom interface feature is so powerful and easy to build (once you’ve figured out the gnarly details of browser sandboxing) that I expect it to show up as a feature in a wide range of products in 2025.

    Universal access to the best models lasted for just a few short months

    For a few short months this year all three of the best available models—GPT-4o, Claude 3.5 Sonnet and Gemini 1.5 Pro—were freely available to most of the world.'
  - 'Against this photo of butterflies at the California Academy of Sciences:

    A shallow dish, likely a hummingbird or butterfly feeder, is red. Pieces of orange slices of fruit are visible inside the dish.

    Two butterflies are positioned in the feeder, one is a dark brown/black butterfly with white/cream-colored markings. The other is a large, brown butterfly with patterns of lighter brown, beige, and black markings, including prominent eye spots. The larger brown butterfly appears to be feeding on the fruit.'
- source_sentence: 2. What is the license under which Alibaba's QwQ model was released?
  sentences:
  - The most recent twist, again from December (December was a lot) is live video.
    ChatGPT voice mode now provides the option to share your camera feed with the
    model and talk about what you can see in real time. Google Gemini have a preview
    of the same feature, which they managed to ship the day before ChatGPT did.
  - 'OpenAI are not the only game in town here. Google released their first entrant in the category, gemini-2.0-flash-thinking-exp, on December 19th.

    Alibaba’s Qwen team released their QwQ model on November 28th—under an Apache 2.0 license, and that one I could run on my own machine. They followed that up with a vision reasoning model called QvQ on December 24th, which I also ran locally.

    DeepSeek made their DeepSeek-R1-Lite-Preview model available to try out through their chat interface on November 20th.

    To understand more about inference scaling I recommend Is AI progress slowing down? by Arvind Narayanan and Sayash Kapoor.'
  - 'Stuff we figured out about AI in 2023

    Simon Willison’s Weblog

    Subscribe

    Stuff we figured out about AI in 2023

    31st December 2023

    2023 was the breakthrough year for Large Language Models (LLMs). I think it’s OK to call these AI—they’re the latest and (currently) most interesting development in the academic field of Artificial Intelligence that dates back to the 1950s.

    Here’s my attempt to round up the highlights in one place!'
- source_sentence: 1. What is the significance of the cost reduction mentioned in
    the context regarding LLMs in 2024?
  sentences:
  - 'I think people who complain that LLM improvement has slowed are often missing the enormous advances in these multi-modal models. Being able to run prompts against images (and audio and video) is a fascinating new way to apply these models.

    Voice and live camera mode are science fiction come to life

    The audio and live video modes that have started to emerge deserve a special mention.

    The ability to talk to ChatGPT first arrived in September 2023, but it was mostly an illusion: OpenAI used their excellent Whisper speech-to-text model and a new text-to-speech model (creatively named tts-1) to enable conversations with the ChatGPT mobile apps, but the actual model just saw text.'
  - 'I like people who are skeptical of this stuff. The hype has been deafening for more than two years now, and there are enormous quantities of snake oil and misinformation out there. A lot of very bad decisions are being made based on that hype. Being critical is a virtue.

    If we want people with decision-making authority to make good decisions about how to apply these tools we first need to acknowledge that there ARE good applications, and then help explain how to put those into practice while avoiding the many unintiutive traps.

    (If you still don’t think there are any good applications at all I’m not sure why you made it to this point in the article!)'
  - '260 input tokens, 92 output tokens. Cost approximately 0.0024 cents (that’s less than a 400th of a cent).

    This increase in efficiency and reduction in price is my single favourite trend from 2024. I want the utility of LLMs at a fraction of the energy cost and it looks like that’s what we’re getting.

    Multimodal vision is common, audio and video are starting to emerge

    My butterfly example above illustrates another key trend from 2024: the rise of multi-modal LLMs.

    A year ago the single most notable example of these was GPT-4 Vision, released at OpenAI’s DevDay in November 2023. Google’s multi-modal Gemini 1.0 was announced on December 7th 2023 so it also (just) makes it into the 2023 window.'
- source_sentence: 1. What challenges do LLMs face in distinguishing truth from fiction?
  sentences:
  - 'Terminology aside, I remain skeptical as to their utility based, once again, on the challenge of gullibility. LLMs believe anything you tell them. Any systems that attempts to make meaningful decisions on your behalf will run into the same roadblock: how good is a travel agent, or a digital assistant, or even a research tool if it can’t distinguish truth from fiction?

    Just the other day Google Search was caught serving up an entirely fake description of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined movie listing from a fan fiction wiki.'
  - Structured and Gradual Learning. In organic datasets, the relationship between
    tokens is often complex and indirect. Many reasoning steps may be required to
    connect the current token to the next, making it challenging for the model to
    learn effectively from next-token prediction. By contrast, each token generated
    by a language model is by definition predicted by the preceding tokens, making
    it easier for a model to follow the resulting reasoning patterns.
  - 'The May 13th announcement of GPT-4o included a demo of a brand new voice mode, where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio input and output incredibly realistic sounding speech without needing separate TTS or STT models.

    The demo also sounded conspicuously similar to Scarlett Johansson... and after she complained the voice from the demo, Skye, never made it to a production product.

    The delay in releasing the new voice mode after the initial demo caused quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running the new features yet.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall@3
- cosine_recall@5
- cosine_recall@10
- cosine_ndcg@10
- cosine_mrr@10
- cosine_map@100
model-index:
- name: SentenceTransformer based on Snowflake/snowflake-arctic-embed-l
  results:
  - task:
      type: information-retrieval
      name: Information Retrieval
    dataset:
      name: Unknown
      type: unknown
    metrics:
    - type: cosine_accuracy@1
      value: 0.8333333333333334
      name: Cosine Accuracy@1
    - type: cosine_accuracy@3
      value: 0.9583333333333334
      name: Cosine Accuracy@3
    - type: cosine_accuracy@5
      value: 1.0
      name: Cosine Accuracy@5
    - type: cosine_accuracy@10
      value: 1.0
      name: Cosine Accuracy@10
    - type: cosine_precision@1
      value: 0.8333333333333334
      name: Cosine Precision@1
    - type: cosine_precision@3
      value: 0.3194444444444444
      name: Cosine Precision@3
    - type: cosine_precision@5
      value: 0.20000000000000004
      name: Cosine Precision@5
    - type: cosine_precision@10
      value: 0.10000000000000002
      name: Cosine Precision@10
    - type: cosine_recall@1
      value: 0.8333333333333334
      name: Cosine Recall@1
    - type: cosine_recall@3
      value: 0.9583333333333334
      name: Cosine Recall@3
    - type: cosine_recall@5
      value: 1.0
      name: Cosine Recall@5
    - type: cosine_recall@10
      value: 1.0
      name: Cosine Recall@10
    - type: cosine_ndcg@10
      value: 0.9301444091161569
      name: Cosine Ndcg@10
    - type: cosine_mrr@10
      value: 0.90625
      name: Cosine Mrr@10
    - type: cosine_map@100
      value: 0.90625
      name: Cosine Map@100
---

# SentenceTransformer based on Snowflake/snowflake-arctic-embed-l

This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.

## Model Details

### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [Snowflake/snowflake-arctic-embed-l](https://huggingface.co/Snowflake/snowflake-arctic-embed-l) <!-- at revision d8fb21ca8d905d2832ee8b96c894d3298964346b -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 1024 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->

### Model Sources

- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)

### Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
  (2): Normalize()
)
```

## Usage

### Direct Usage (Sentence Transformers)

First install the Sentence Transformers library:

```bash
pip install -U sentence-transformers
```

Then you can load this model and run inference:

```python
from sentence_transformers import SentenceTransformer

# Download from the 🤗 Hub
model = SentenceTransformer("johnbrumett-uofu/legal-ft-v0")
# Run inference
sentences = [
    '1. What challenges do LLMs face in distinguishing truth from fiction?',
    'Terminology aside, I remain skeptical as to their utility based, once again, on the challenge of gullibility. LLMs believe anything you tell them. Any systems that attempts to make meaningful decisions on your behalf will run into the same roadblock: how good is a travel agent, or a digital assistant, or even a research tool if it can’t distinguish truth from fiction?\nJust the other day Google Search was caught serving up an entirely fake description of the non-existant movie “Encanto 2”. It turned out to be summarizing an imagined movie listing from a fan fiction wiki.',
    'The May 13th announcement of GPT-4o included a demo of a brand new voice mode, where the true multi-modal GPT-4o (the o is for “omni”) model could accept audio input and output incredibly realistic sounding speech without needing separate TTS or STT models.\nThe demo also sounded conspicuously similar to Scarlett Johansson... and after she complained the voice from the demo, Skye, never made it to a production product.\nThe delay in releasing the new voice mode after the initial demo caused quite a lot of confusion. I wrote about that in ChatGPT in “4o” mode is not running the new features yet.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 1024]

# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
+
389
+ <!--
390
+ ### Direct Usage (Transformers)
391
+
392
+ <details><summary>Click to see the direct usage in Transformers</summary>
393
+
394
+ </details>
395
+ -->
396
+
397
+ <!--
398
+ ### Downstream Usage (Sentence Transformers)
399
+
400
+ You can finetune this model on your own dataset.
401
+
402
+ <details><summary>Click to expand</summary>
403
+
404
+ </details>
405
+ -->
406
+
407
+ <!--
408
+ ### Out-of-Scope Use
409
+
410
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
411
+ -->
412
+
413
+ ## Evaluation
414
+
415
+ ### Metrics
416
+
417
+ #### Information Retrieval
418
+
419
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
420
+
421
+ | Metric | Value |
422
+ |:--------------------|:-----------|
423
+ | cosine_accuracy@1 | 0.8333 |
424
+ | cosine_accuracy@3 | 0.9583 |
425
+ | cosine_accuracy@5 | 1.0 |
426
+ | cosine_accuracy@10 | 1.0 |
427
+ | cosine_precision@1 | 0.8333 |
428
+ | cosine_precision@3 | 0.3194 |
429
+ | cosine_precision@5 | 0.2 |
430
+ | cosine_precision@10 | 0.1 |
431
+ | cosine_recall@1 | 0.8333 |
432
+ | cosine_recall@3 | 0.9583 |
433
+ | cosine_recall@5 | 1.0 |
434
+ | cosine_recall@10 | 1.0 |
435
+ | **cosine_ndcg@10** | **0.9301** |
436
+ | cosine_mrr@10 | 0.9062 |
437
+ | cosine_map@100 | 0.9062 |
438
+
439
+ <!--
440
+ ## Bias, Risks and Limitations
441
+
442
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
443
+ -->
444
+
445
+ <!--
446
+ ### Recommendations
447
+
448
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
449
+ -->
450
+
451
+ ## Training Details
452
+
453
+ ### Training Dataset
454
+
455
+ #### Unnamed Dataset
456
+
457
+ * Size: 156 training samples
458
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
459
+ * Approximate statistics based on the first 156 samples:
460
+ | | sentence_0 | sentence_1 |
461
+ |:--------|:----------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
462
+ | type | string | string |
463
+ | details | <ul><li>min: 16 tokens</li><li>mean: 22.5 tokens</li><li>max: 37 tokens</li></ul> | <ul><li>min: 43 tokens</li><li>mean: 130.53 tokens</li><li>max: 204 tokens</li></ul> |
464
+ * Samples:
465
+ | sentence_0 | sentence_1 |
466
+ |:-------------------------------------------------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
467
+ | <code>1. What key themes and pivotal moments in the field of Large Language Models were identified in 2024?</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
468
+ | <code>2. How does the review of 2024 compare to the review of 2023 regarding advancements in LLMs?</code> | <code>Things we learned about LLMs in 2024<br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br><br>Simon Willison’s Weblog<br>Subscribe<br><br><br><br><br><br><br>Things we learned about LLMs in 2024<br>31st December 2024<br>A lot has happened in the world of Large Language Models over the course of 2024. Here’s a review of things we figured out about the field in the past twelve months, plus my attempt at identifying key themes and pivotal moments.<br>This is a sequel to my review of 2023.<br>In this article:</code> |
469
+ | <code>1. What advancements have been made in multimodal vision and audio/video capabilities in LLMs?</code> | <code>The GPT-4 barrier was comprehensively broken<br>Some of those GPT-4 models run on my laptop<br>LLM prices crashed, thanks to competition and increased efficiency<br>Multimodal vision is common, audio and video are starting to emerge<br>Voice and live camera mode are science fiction come to life<br>Prompt driven app generation is a commodity already<br>Universal access to the best models lasted for just a few short months<br>“Agents” still haven’t really happened yet<br>Evals really matter<br>Apple Intelligence is bad, Apple’s MLX library is excellent<br>The rise of inference-scaling “reasoning” models<br>Was the best currently available LLM trained in China for less than $6m?<br>The environmental impact got better<br>The environmental impact got much, much worse</code> |
470
+ * Loss: [<code>MatryoshkaLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#matryoshkaloss) with these parameters:
471
+ ```json
472
+ {
473
+ "loss": "MultipleNegativesRankingLoss",
474
+ "matryoshka_dims": [
475
+ 768,
476
+ 512,
477
+ 256,
478
+ 128,
479
+ 64
480
+ ],
481
+ "matryoshka_weights": [
482
+ 1,
483
+ 1,
484
+ 1,
485
+ 1,
486
+ 1
487
+ ],
488
+ "n_dims_per_step": -1
489
+ }
490
+ ```
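
MatryoshkaLoss trains the model so that prefixes of the embedding (here the first 768, 512, 256, 128, or 64 dimensions) remain useful retrieval vectors on their own. At inference time, using a smaller dimension is just slicing off a prefix and re-normalizing to unit length. A sketch with toy numbers (the `truncate_embedding` helper is illustrative, not part of the library):

```python
import math

def truncate_embedding(vec, dim):
    """Keep the first `dim` components and re-normalize to unit length,
    which is how Matryoshka-trained embeddings are meant to be shortened."""
    head = vec[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

vec = [3.0, 4.0, 1.0, 2.0]          # stand-in for a 1024-dim embedding
small = truncate_embedding(vec, 2)  # first 2 dims, renormalized
print(small)  # [0.6, 0.8]
```

Shorter prefixes trade a little retrieval quality for smaller indexes and faster similarity search.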

### Training Hyperparameters
#### Non-Default Hyperparameters

- `eval_strategy`: steps
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `num_train_epochs`: 10
- `multi_dataset_batch_sampler`: round_robin

#### All Hyperparameters
<details><summary>Click to expand</summary>

- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 10
- `per_device_eval_batch_size`: 10
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 10
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: round_robin

</details>

### Training Logs
| Epoch | Step | cosine_ndcg@10 |
|:-----:|:----:|:--------------:|
| 1.0   | 16   | 0.9148         |
| 2.0   | 32   | 0.9301         |
| 3.0   | 48   | 0.9609         |
| 3.125 | 50   | 0.9609         |
| 4.0   | 64   | 0.9283         |
| 5.0   | 80   | 0.9301         |
| 6.0   | 96   | 0.9455         |
| 6.25  | 100  | 0.9455         |
| 7.0   | 112  | 0.9455         |
| 8.0   | 128  | 0.9455         |
| 9.0   | 144  | 0.9301         |
| 9.375 | 150  | 0.9301         |
| 10.0  | 160  | 0.9301         |

### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 3.4.1
- Transformers: 4.48.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.3.0
- Datasets: 3.3.0
- Tokenizers: 0.21.0

## Citation

### BibTeX

#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "https://arxiv.org/abs/1908.10084",
}
```

#### MatryoshkaLoss
```bibtex
@misc{kusupati2024matryoshka,
    title={Matryoshka Representation Learning},
    author={Aditya Kusupati and Gantavya Bhatt and Aniket Rege and Matthew Wallingford and Aditya Sinha and Vivek Ramanujan and William Howard-Snyder and Kaifeng Chen and Sham Kakade and Prateek Jain and Ali Farhadi},
    year={2024},
    eprint={2205.13147},
    archivePrefix={arXiv},
    primaryClass={cs.LG}
}
```

#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
    title={Efficient Natural Language Response Suggestion for Smart Reply},
    author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
    year={2017},
    eprint={1705.00652},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

<!--
## Glossary

*Clearly define terms in order to be accessible across audiences.*
-->

<!--
## Model Card Authors

*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->

<!--
## Model Card Contact

*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "Snowflake/snowflake-arctic-embed-l",
+   "architectures": [
+     "BertModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 1024,
+   "initializer_range": 0.02,
+   "intermediate_size": 4096,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 16,
+   "num_hidden_layers": 24,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.48.3",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "sentence_transformers": "3.4.1",
+     "transformers": "4.48.3",
+     "pytorch": "2.5.1+cu124"
+   },
+   "prompts": {
+     "query": "Represent this sentence for searching relevant passages: "
+   },
+   "default_prompt_name": null,
+   "similarity_fn_name": "cosine"
+ }
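The config above defines an asymmetric retrieval setup: queries are encoded with the Snowflake search prefix, passages without it. With sentence-transformers this is typically selected via `model.encode(queries, prompt_name="query")`; as a plain-Python sketch of what prompt application amounts to before tokenization:

```python
# The query prompt string from config_sentence_transformers.json.
PROMPT = "Represent this sentence for searching relevant passages: "

def apply_query_prompt(texts):
    # Queries get the prompt prepended; passages are encoded as-is,
    # which is what makes the query/passage encoding asymmetric.
    return [PROMPT + t for t in texts]

queries = apply_query_prompt(["What is Matryoshka representation learning?"])
print(queries[0].startswith(PROMPT))  # True
```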
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e08be883fcb7646f4bf8f849838f77d9bf61ca943c674d02b0d134a866ad869
+ size 1336413848
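The weights live in Git LFS; the pointer file above records only the blob's SHA-256 and byte size. A small sketch for verifying a downloaded checkpoint against that `oid`, streamed so the ~1.3 GB file never sits in memory (demonstrated here on a toy temp file rather than the real `model.safetensors`):

```python
import hashlib
import os
import tempfile

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    # Hash the file in 1 MiB chunks; compare the result to the LFS oid.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Toy demonstration: hash a small known payload.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"hello")
digest = sha256_of(tmp.name)
os.remove(tmp.name)
print(digest)  # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```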
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Normalize",
+     "type": "sentence_transformers.models.Normalize"
+   }
+ ]
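`modules.json` wires the inference pipeline: Transformer → Pooling → Normalize, with `1_Pooling/config.json` selecting CLS-token pooling (`pooling_mode_cls_token: true`). A minimal NumPy sketch of the last two stages, using random arrays as stand-ins for the transformer's token embeddings:

```python
import numpy as np

def cls_pool(token_embeddings: np.ndarray) -> np.ndarray:
    # CLS pooling: the sentence embedding is the first ([CLS]) token's
    # hidden state, per the pooling_mode_cls_token setting.
    return token_embeddings[:, 0, :]

def l2_normalize(x: np.ndarray) -> np.ndarray:
    # 2_Normalize: unit-length vectors, so dot product equals cosine
    # similarity (matching similarity_fn_name: "cosine").
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# stand-in token embeddings of shape (batch, seq_len, hidden_size=1024)
tokens = np.random.default_rng(1).normal(size=(2, 7, 1024))
sentence_emb = l2_normalize(cls_pool(tokens))
print(sentence_emb.shape)  # (2, 1024)
```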
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "max_seq_length": 512,
+   "do_lower_case": false
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+   "cls_token": {
+     "content": "[CLS]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "mask_token": {
+     "content": "[MASK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": {
+     "content": "[PAD]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "sep_token": {
+     "content": "[SEP]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   },
+   "unk_token": {
+     "content": "[UNK]",
+     "lstrip": false,
+     "normalized": false,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json ADDED
@@ -0,0 +1,63 @@
+ {
+   "added_tokens_decoder": {
+     "0": {
+       "content": "[PAD]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "100": {
+       "content": "[UNK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "101": {
+       "content": "[CLS]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "102": {
+       "content": "[SEP]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "103": {
+       "content": "[MASK]",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     }
+   },
+   "clean_up_tokenization_spaces": true,
+   "cls_token": "[CLS]",
+   "do_lower_case": true,
+   "extra_special_tokens": {},
+   "mask_token": "[MASK]",
+   "max_length": 512,
+   "model_max_length": 512,
+   "pad_to_multiple_of": null,
+   "pad_token": "[PAD]",
+   "pad_token_type_id": 0,
+   "padding_side": "right",
+   "sep_token": "[SEP]",
+   "stride": 0,
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "truncation_side": "right",
+   "truncation_strategy": "longest_first",
+   "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff