lw2134 committed
Commit 197d3cd
1 Parent(s): 49ad234

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "word_embedding_dimension": 1024,
+ "pooling_mode_cls_token": true,
+ "pooling_mode_mean_tokens": false,
+ "pooling_mode_max_tokens": false,
+ "pooling_mode_mean_sqrt_len_tokens": false,
+ "pooling_mode_weightedmean_tokens": false,
+ "pooling_mode_lasttoken": false,
+ "include_prompt": true
+ }
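
This pooling config selects CLS-token pooling over 1024-dimensional token embeddings: with `pooling_mode_cls_token` set to `true` and every other mode off, the sentence embedding is simply the hidden state of the first token. A minimal sketch (illustrative shapes and function name, not the sentence-transformers internals):

```python
import torch

def cls_pooling(token_embeddings: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch_size, seq_len, 1024), matching
    # word_embedding_dimension in the config above.
    return token_embeddings[:, 0]  # -> (batch_size, 1024)
```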
README.md ADDED
@@ -0,0 +1,1143 @@
+ ---
+ base_model: Alibaba-NLP/gte-large-en-v1.5
+ library_name: sentence-transformers
+ metrics:
+ - cosine_accuracy@1
+ - cosine_accuracy@3
+ - cosine_accuracy@5
+ - cosine_accuracy@10
+ - cosine_precision@1
+ - cosine_precision@3
+ - cosine_precision@5
+ - cosine_precision@10
+ - cosine_recall@1
+ - cosine_recall@3
+ - cosine_recall@5
+ - cosine_recall@10
+ - cosine_ndcg@10
+ - cosine_mrr@10
+ - cosine_map@100
+ - dot_accuracy@1
+ - dot_accuracy@3
+ - dot_accuracy@5
+ - dot_accuracy@10
+ - dot_precision@1
+ - dot_precision@3
+ - dot_precision@5
+ - dot_precision@10
+ - dot_recall@1
+ - dot_recall@3
+ - dot_recall@5
+ - dot_recall@10
+ - dot_ndcg@10
+ - dot_mrr@10
+ - dot_map@100
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - sentence-similarity
+ - feature-extraction
+ - generated_from_trainer
+ - dataset_size:586
+ - loss:MultipleNegativesRankingLoss
+ widget:
+ - source_sentence: Explain the spectrum of openness in AI systems as described in
+ the document. How do open-source AI systems differ from fully closed AI systems
+ in terms of accessibility and innovation?
+ sentences:
+ - 'targets of cyber attacks; or
+
+           (iii)  permitting the evasion of human control or oversight through
+
+ means of deception or obfuscation.
+
+ Models meet this definition even if they are provided to end users with
+
+ technical safeguards that attempt to prevent users from taking advantage of
+
+ the relevant unsafe capabilities. 
+
+      (l)  The term “Federal law enforcement agency” has the meaning set forth
+
+ in section 21(a) of Executive Order 14074 of May 25, 2022 (Advancing
+
+ Effective, Accountable Policing and Criminal Justice Practices To Enhance
+
+ Public Trust and Public Safety).
+
+      (m)  The term “floating-point operation” means any mathematical
+
+ operation or assignment involving floating-point numbers, which are a
+
+ subset of the real numbers typically represented on computers by an integer
+
+ of fixed precision scaled by an integer exponent of a fixed base.
+
+      (n)  The term “foreign person” has the meaning set forth in section 5(c)
+ of
+
+ Executive Order 13984 of January 19, 2021 (Taking Additional Steps To
+
+ Address the National Emergency With Respect to Significant Malicious
+
+ Cyber-Enabled Activities).
+
+      (o)  The terms “foreign reseller” and “foreign reseller of United States
+
+ Infrastructure as a Service Products” mean a foreign person who has
+
+ established an Infrastructure as a Service Account to provide Infrastructure
+
+ as a Service Products subsequently, in whole or in part, to a third party.
+
+      (p)  The term “generative AI” means the class of AI models that emulate
+
+ the structure and characteristics of input data in order to generate derived
+
+ synthetic content.  This can include images, videos, audio, text, and other
+
+ digital content.
+
+      (q)  The terms “Infrastructure as a Service Product,” “United States
+
+ Infrastructure as a Service Product,” “United States Infrastructure as a
+
+ Service Provider,” and “Infrastructure as a Service Account” each have the
+
+ respective meanings given to those terms in section 5 of Executive Order
+
+ 13984.
+
+      (r)  The term “integer operation” means any mathematical operation or
+
+ assignment involving only integers, or whole numbers expressed without a
+
+ decimal point.05/10/2024, 16:36 Executive Order on the Safe, Secure, and Trustworthy
+ Development and Use of Artificial Intelligence | The White House
+
+ https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artific…
+ 7/59'
+ - "AI safety, enable next-generation medical diagnoses and further other\ncritical\
+ \ AI priorities.\n\0\0 Released a for designing safe, secure, and trustworthy\
+ \ AI tools\nfor use in education. The Department of Education’s guide discusses\n\
+ how developers of educational technologies can design AI that benefits\nstudents\
+ \ and teachers while advancing equity, civil rights, trust, and\ntransparency.\
+ \ This work builds on the Department’s 2023 \noutlining recommendations for the\
+ \ use of AI in teaching and learning.\n\0\0 Published guidance on evaluating the\
+ \ eligibility of patent claims\ninvolving inventions related to AI technology, as\
+ \ well as other\nemerging technologies. The guidance by the U.S. Patent and Trademark\n\
+ Office will guide those inventing in the AI space to protect their AI\ninventions\
+ \ and assist patent examiners reviewing applications for\npatents on AI inventions.\n\
+ \0\0 Issued a on federal research and development (R&D) to\nadvance trustworthy\
+ \ AI over the past four years. The report by the\nNational Science and Technology\
+ \ Council examines an annual federal AI\nR&D budget of nearly $3 billion.\n\0\0\
+ \ Launched a $23 million initiative to promote the use of privacy-\nenhancing\
+ \ technologies to solve real-world problems, including\nrelated to AI. Working\
+ \ with industry and agency partners, NSF will\ninvest through its new Privacy-preserving\
+ \ Data Sharing in Practice\nprogram in efforts to apply, mature, and scale privacy-enhancing\n\
+ technologies for specific use cases and establish testbeds to accelerate\ntheir\
+ \ adoption.\n\0\0 Announced millions of dollars in further investments to advance\n\
+ responsible AI development and use throughout our society. These\ninclude $30\
+ \ million invested through NSF’s Experiential Learning in\nEmerging and Novel\
+ \ Technologies program—which supports inclusive\nexperiential learning in fields\
+ \ like AI—and $10 million through NSF’s\nExpandAI program, which helps build capacity\
+ \ in AI research at\nminority-serving institutions while fostering the development\
+ \ of a\ndiverse, AI-ready workforce.\nAdvancing U.S. Leadership Abroad\nPresident\
+ \ Biden’s Executive Order emphasized that the United States lead\nglobal efforts\
+ \ to unlock AI’s potential and meet its challenges. To advance\nU.S. leadership\
+ \ on AI, agencies have:guide\nreport\nreport05/10/2024, 16:35 FACT SHEET: Biden-Harris\
+ \ Administration Announces New AI Actions and Receives Additional Major Voluntary\
+ \ Commitment on AI | The…\nhttps://www.whitehouse.gov/briefing-room/statements-releases/2024/07/26/fact-sheet-biden-harris-administration-announces-new-ai-actions-and-receives-addit…\
+ \ 4/10"
+ - "50 Governing AI for Humanity processes such as the recent scientific report\
+ \ \non the risks of advanced AI commissioned by \nthe United Kingdom,25 and relevant\
+ \ regional \norganizations.\ne. A steering committee would develop a research\
+ \ \nagenda ensuring the inclusivity of views and \nincorporation of ethical considerations,\
+ \ oversee \nthe allocation of resources, foster collaboration \nwith a network\
+ \ of academic institutions and \nother stakeholders, and review the panel’s \n\
+ activities and deliverables.100 By drawing on the unique convening power of the\
+ \ \nUnited Nations and inclusive global reach across \nstakeholder groups, an\
+ \ international scientific panel \ncan deliver trusted scientific collaboration\
+ \ processes \nand outputs and correct information asymmetries \nin ways that address\
+ \ the representation and \ncoordination gaps identified in paragraphs 66 and \n\
+ 73, thereby promoting equitable and effective \ninternational AI governance.\n\
+ Among the topics discussed in our consultations was the ongoing debate over open\
+ \ versus closed AI systems. \nAI systems that are open in varying degrees are\
+ \ often referred to as “open-source AI”, but this is somewhat of a \nmisnomer\
+ \ when compared with open-source software (code). It is important to recognize\
+ \ that openness in AI \nsystems is more of a spectrum than a single attribute.\n\
+ One article explained that a “fully closed AI system is only accessible to a particular\
+ \ group. It could be an AI \ndeveloper company or a specific group within it,\
+ \ mainly for internal research and development purposes. On the \nother hand,\
+ \ more open systems may allow public access or make available certain parts, such\
+ \ as data, code, or \nmodel characteristics, to facilitate external AI development.”a\n\
+ Open-source AI systems in the generative AI field present both risks and opportunities.\
+ \ Companies often cite “AI \nsafety” as a reason for not disclosing system specifications,\
+ \ reflecting the ongoing tension between open and \nclosed approaches in the industry.\
+ \ Debates typically revolve around two extremes: full openness, which entails\
+ \ \nsharing all model components and data sets; and partial openness, which involves\
+ \ disclosing only model weights. \nOpen-source AI systems encourage innovation\
+ \ and are often a requirement for public funding. On the open \nextreme of the\
+ \ spectrum, when the underlying code is made freely available, developers around\
+ \ the world can \nexperiment, improve and create new applications. This fosters\
+ \ a collaborative environment where ideas and \nexpertise are readily shared.\
+ \ Some industry leaders argue that this openness is vital to innovation and economic\
+ \ \ngrowth.\nHowever, in most cases, open-source AI models are available as application\
+ \ programming interfaces. In this case, \nthe original code is not shared, the\
+ \ original weights are never changed and model updates become new models. \nAdditionally,\
+ \ open-source models tend to be smaller and more transparent. This transparency\
+ \ can build trust, \nallow for ethical considerations to be proactively addressed,\
+ \ and support validation and replication because users \ncan examine the inner\
+ \ workings of the AI system, understand its decision-making process and identify\
+ \ potential \nbiases.Box 9: Open versus closed AI systems\na Angela Luna, “The\
+ \ open or closed AI dilemma”, 2 May 2024. Available at https://bipartisanpolicy.org/blog/the-open-or-closed-ai-dilemma\
+ \ .\n25 International Scientific Report on the Safety of Advanced AI: Interim\
+ \ Report. Available at https://gov.uk/government/publications/international-scientific-report-\n\
+ on-the-safety-of-advanced-ai ."
+ - source_sentence: What role does the report propose for the United Nations in establishing
+ a governance regime for AI, and how does it envision this regime contributing
+ to a new social contract that protects vulnerable populations?
+ sentences:
+ - "HUMAN ALTERNATIVES, \nCONSIDERATION, AND \nFALLBACK \nHOW THESE PRINCIPLES CAN\
+ \ MOVE INTO PRACTICE\nReal-life examples of how these principles can become reality,\
+ \ through laws, policies, and practical \ntechnical and sociotechnical approaches\
+ \ to protecting rights, opportunities, and access. \nHealthcare “navigators” help\
+ \ people find their way through online signup forms to choose \nand obtain healthcare.\
+ \ A Navigator is “an individual or organization that's trained and able to help\
+ \ \nconsumers, small businesses, and their employees as they look for health coverage\
+ \ options through the \nMarketplace (a government web site), including completing\
+ \ eligibility and enrollment forms.”106 For \nthe 2022 plan year, the Biden-Harris\
+ \ Administration increased funding so that grantee organizations could \n“train\
+ \ and certify more than 1,500 Navigators to help uninsured consumers find affordable\
+ \ and comprehensive \nhealth coverage. ”107\nThe customer service industry has\
+ \ successfully integrated automated services such as \nchat-bots and AI-driven\
+ \ call response systems with escalation to a human support team.\n108 Many businesses\
+ \ now use partially automated customer service platforms that help answer customer\
+ \ \nquestions and compile common problems for human agents to review. These integrated\
+ \ human-AI \nsystems allow companies to provide faster customer care while maintaining\
+ \ human agents to answer \ncalls or otherwise respond to complicated requests.\
+ \ Using both AI and human agents is viewed as key to \nsuccessful customer service.109\n\
+ Ballot curing laws in at least 24 states require a fallback system that allows\
+ \ voters to \ncorrect their ballot and have it counted in the case that a voter\
+ \ signature matching algorithm incorrectly flags their ballot as invalid or there\
+ \ is another issue with their ballot, and review by an election official does\
+ \ not rectify the problem. Some federal courts have found that such cure procedures\
+ \ are constitutionally required.\n110 Ballot \ncuring processes vary among states,\
+ \ and include direct phone calls, emails, or mail contact by election \nofficials.111\
+ \ Voters are asked to provide alternative information or a new signature to verify\
+ \ the validity of their \nballot. \n52"
+ - "SECTION TITLE\nHUMAN ALTERNATIVES , C ONSIDERATION , AND FALLBACK\nYou should\
+ \ be able to opt out, where appropriate, and have access to a person who can quickly\
+ \ \nconsider and remedy problems you encounter. You should be able to opt out\
+ \ from automated systems in \nfavor of a human alternative, where appropriate.\
+ \ Appropriateness should be determined based on reasonable expectations in a given\
+ \ context and with a focus on ensuring broad accessibility and protecting the\
+ \ public from especially harmful impacts. In some cases, a human or other alternative\
+ \ may be required by law. You should have access to timely human consideration\
+ \ and remedy by a fallback and escalation process if an automated system fails,\
+ \ it produces an error, or you would like to appeal or contest its impacts on\
+ \ you. Human consideration and fallback should be accessible, equitable, effective,\
+ \ maintained, accompanied by appropriate operator training, and should not impose\
+ \ an unreasonable burden on the public. Automated systems with an intended use\
+ \ within sensi\n-\ntive domains, including, but not limited to, criminal justice,\
+ \ employment, education, and health, should additional -\nly be tailored to the\
+ \ purpose, provide meaningful access for oversight, include training for any people\
+ \ interacting with the system, and incorporate human consideration for adverse\
+ \ or high-risk decisions. Reporting that includes a description of these human\
+ \ governance processes and assessment of their timeliness, accessibility, outcomes,\
+ \ and effectiveness should be made public whenever possible. \nDefinitions for\
+ \ key terms in The Blueprint for an AI Bill of Rights can be found in Applying\
+ \ the Blueprint for an AI Bill of Rights. \nAccompanying analysis and tools for\
+ \ actualizing each principle can be found in the Technical Companion. \n7"
+ - "Final Report 21E. Reflections on institutional \nmodels\nlxiv Discussions\
+ \ about AI often resolve into extremes. \nIn our consultations around the world,\
+ \ we engaged \nwith those who see a future of boundless goods \nprovided by ever-cheaper,\
+ \ ever-more-helpful AI \nsystems. We also spoke with those wary of darker \nfutures,\
+ \ of division and unemployment, and even \nextinction.8\nlxv We do not know whether\
+ \ the utopian or dystopian \nfuture is more likely. Equally, we are mindful that\
+ \ \nthe technology may go in a direction that does \naway with this duality. This\
+ \ report focuses on \nthe near-term opportunities and risks, based on \nscience\
+ \ and grounded in fact. \nlxvi The seven recommendations outlined above offer\
+ \ \nour best hope for reaping the benefits of AI, while \nminimizing and mitigating\
+ \ the risks, as AI continues \nevolving. We are also mindful of the practical\
+ \ \nchallenges to international institution-building \non a larger scale. This\
+ \ is why we are proposing a \nnetworked institutional approach, with light and\
+ \ \nagile support. If or when risks become more acute \nand the stakes for opportunities\
+ \ escalate, such \ncalculations may change. \nlxvii The world wars led to the\
+ \ modern international \nsystem; the development of ever-more-powerful \nchemical,\
+ \ biological and nuclear weapons led \nto regimes limiting their spread and promoting\
+ \ \npeaceful uses of the underlying technologies. \nEvolving understanding of\
+ \ our common humanity \nled to the modern human rights system and our \nongoing\
+ \ commitment to the SDGs for all. Climate \nchange evolved from a niche concern\
+ \ to a global \nchallenge.lxviii AI may similarly rise to a level that requires\
+ \ more \nresources and more authority than is proposed \nin the above-mentioned\
+ \ recommendations, \ninto harder functions of norm elaboration, \nimplementation,\
+ \ monitoring, verification and \nvalidation, enforcement, accountability, remedies\
+ \ \nfor harm and emergency responses. Reflecting on \nsuch institutional models,\
+ \ therefore, is prudent. The \nfinal section of this report seeks to contribute\
+ \ to \nthat effort.\n4. A call to action\nlxix We remain optimistic about the\
+ \ future with AI and \nits positive potential. That optimism depends, \nhowever,\
+ \ on realism about the risks and the \ninadequacy of structures and incentives\
+ \ currently \nin place. The technology is too important, and the \nstakes are\
+ \ too high, to rely only on market forces \nand a fragmented patchwork of national\
+ \ and \nmultilateral action.\nlxx The United Nations can be the vehicle for a\
+ \ new \nsocial contract for AI that ensures global buy-\nin for a governance regime\
+ \ which protects and \nempowers us all. Such a social contract will ensure \n\
+ that opportunities are fairly distributed, and the \nrisks are not loaded on to\
+ \ the most vulnerable – or \npassed on to future generations, as we have seen,\
+ \ \ntragically, with climate change.\nlxxi As a group and as individuals from\
+ \ across many \nfields of expertise, organizations and parts of the \nworld, we\
+ \ look forward to continuing this crucial \nconversation. Together with the many\
+ \ others we \nhave connected with on this journey, and the global \ncommunity\
+ \ they represent, we hope that this report \ncontributes to our combined efforts\
+ \ to govern AI \nfor humanity.\n8 See https://safe.ai/work/statement-on-ai-risk\
+ \ ."
+ - source_sentence: What are the potential consequences of coordination gaps between
+ various AI governance initiatives, as highlighted in the context information?
+ sentences:
+ - "44 Governing AI for Humanity B. Coordination gaps\n72 The ongoing emergence\
+ \ and evolution of AI \ngovernance initiatives are not guaranteed to \nwork together\
+ \ effectively for humanity. Instead, \ncoordination gaps have appeared. Effective\
+ \ \nhandshaking between the selective plurilateral \ninitiatives (see fig. 8)\
+ \ and other regional initiatives is \nnot assured, risking incompatibility between\
+ \ regions.\n73 Nor are there global mechanisms for all international \nstandards\
+ \ development organizations (see fig. 7), \ninternational scientific research\
+ \ initiatives or AI \ncapacity-building initiatives to coordinate with each \n\
+ other, undermining interoperability of approaches \nand resulting in fragmentation.\
+ \ The resulting \ncoordination gaps between various sub-global \ninitiatives are\
+ \ in some cases best addressed at the \nglobal level.\n74 A separate set of coordination\
+ \ gaps arise within \nthe United Nations system, reflected in the array of \n\
+ diverse United Nations documents and initiatives \nin relation to AI. Figure 9\
+ \ shows 27 United Nations-\nrelated instruments in specific domains that may \n\
+ apply to AI – 23 of them are binding and will require \ninterpretation as they\
+ \ pertain to AI. A further 29 \ndomain-level documents from the United Nations\
+ \ \nand related organizations focus specifically on AI, \nnone of which are binding.17\
+ \ In some cases, these \ncan address AI risks and harness AI benefits in \nspecific\
+ \ domains.75 The level of activity shows the importance of AI \nto United Nations\
+ \ programmes. As AI expands to \naffect ever-wider aspects of society, there will\
+ \ be \ngrowing calls for diverse parts of the United Nations \nsystem to act,\
+ \ including through binding norms. \nIt also shows the ad hoc nature of the responses,\
+ \ \nwhich have largely developed organically in specific \ndomains and without\
+ \ an overarching strategy. The \nresulting coordination gaps invite overlaps and\
+ \ \nhinder interoperability and impact.\n76 The number and diversity of approaches\
+ \ are a sign \nthat the United Nations system is responding to \nan emerging issue.\
+ \ With proper orchestration, and \nin combination with processes taking a holistic\
+ \ \napproach, these efforts can offer an efficient and \nsustainable pathway to\
+ \ inclusive international AI \ngovernance in specific domains. This could enable\
+ \ \nmeaningful, harmonized and coordinated impacts \non areas such as health,\
+ \ education, technical \nstandards and ethics, instead of merely contributing\
+ \ \nto the proliferation of initiatives and institutions \nin this growing field.\
+ \ International law, including \ninternational human rights law, provides a shared\
+ \ \nnormative foundation for all AI-related efforts, \nthereby facilitating coordination\
+ \ and coherence."
+ - "\0\0 Issued a comprehensive plan for U.S. engagement on global AI\nstandards. The\
+ \ plan, developed by the NIST, incorporates broad public\nand private-sector input,\
+ \ identifies objectives and priority areas for AI\nstandards work, and lays out\
+ \ actions for U.S. stakeholders including U.S.\nagencies. NIST and others agencies\
+ \ will report on priority actions in 180\ndays. \n\0\0 Developed for managing\
+ \ risks to human rights posed by AI.\nThe Department of State’s “Risk Management\
+ \ Profile for AI and Human\nRights”—developed in close coordination with NIST and\
+ \ the U.S. Agency\nfor International Development—recommends actions based on the\
+ \ NIST\nAI Risk Management Framework to governments, the private sector, and\n\
+ civil society worldwide, to identify and manage risks to human rights\narising\
+ \ from the design, development, deployment, and use of AI. \n\0\0 Launched a global\
+ \ network of AI Safety Institutes and other\ngovernment-backed scientific offices\
+ \ to advance AI safety at a technical\nlevel. This network will accelerate critical\
+ \ information exchange and\ndrive toward common or compatible safety evaluations\
+ \ and policies.\n\0\0 Launched a landmark United Nations General Assembly resolution.\n\
+ The unanimously adopted resolution, with more than 100 co-sponsors,\nlays out\
+ \ a common vision for countries around the world to promote the\nsafe and secure\
+ \ use of AI to address global challenges.\n\0\0 Expanded global support for the\
+ \ U.S.-led Political Declaration on the\nResponsible Military Use of Artificial\
+ \ Intelligence and\nAutonomy.  Fifty-five nations now endorse the political declaration,\n\
+ which outlines a set of norms for the responsible development,\ndeployment, and\
+ \ use of military AI capabilities.\nThe Table below summarizes many of the activities\
+ \ that federal agencies\nhave completed in response to the Executive Order:guidance05/10/2024,\
+ \ 16:35 FACT SHEET: Biden-Harris Administration Announces New AI Actions and Receives\
+ \ Additional Major Voluntary Commitment on AI | The…\nhttps://www.whitehouse.gov/briefing-room/statements-releases/2024/07/26/fact-sheet-biden-harris-administration-announces-new-ai-actions-and-receives-addit…\
+ \ 5/10"
+ - "Final Report 55f. In addition, diverse stakeholders – in particular \ntechnology\
+ \ companies and civil society \nrepresentatives – could be invited to engage \n\
+ through existing institutions detailed below, as \nwell as policy workshops on\
+ \ particular aspects \nof AI governance such as limits (if any) of open-\nsource\
+ \ approaches to the most advanced forms \nof AI, thresholds for tracking and reporting\
+ \ of \nAI incidents, application of human rights law to \nnovel use cases, or\
+ \ the use of competition law/\nantitrust to address concentrations of power \n\
+ among technology companies.30\ng. The proposed AI office could also curate a \n\
+ repository of AI governance examples, including \nlegislation, policies and institutions\
+ \ from \naround the world for consideration of the policy \ndialogue, working\
+ \ with existing efforts, such as \nOECD.\n109 Notwithstanding the two General\
+ \ Assembly \nresolutions on AI in 2024, there is currently \nno mandated institutionalized\
+ \ dialogue on \nAI governance at the United Nations that \ncorresponds to the\
+ \ reliably inclusive vision of this \nrecommendation. Similar processes do exist\
+ \ at \nthe international level, but primarily in regional or \nplurilateral constellations\
+ \ (para. 57), which are not \nreliably inclusive and global.\n110 Complementing\
+ \ a fluid process of plurilateral and \nregional AI summits,31 the United Nations\
+ \ can \noffer a stable home for dialogue on AI governance. \nInclusion by design\
+ \ – a crucial requirement for \nplaying a stabilizing role in geopolitically delicate\
+ \ \ntimes – can also address representation and \ncoordination gaps identified\
+ \ in paragraphs 64 and \n72, promoting more effective collective action on AI\
+ \ \ngovernance in the common interest of all countries. AI standards exchange\
+ \ \n \nRecommendation 3: AI standards exchange \n \nWe recommend the creation\
+ \ of an AI standards \nexchange, bringing together representatives from \nnational\
+ \ and international standard-development \norganizations, technology companies,\
+ \ civil society \nand representatives from the international scientific \npanel.\
+ \ It would be tasked with:\na. Developing and maintaining a register of \ndefinitions\
+ \ and applicable standards for \nmeasuring and evaluating AI systems;\nb. Debating\
+ \ and evaluating the standards and the \nprocesses for creating them; and\nc.\
+ \ Identifying gaps where new standards are \nneeded.\n111 When AI systems were\
+ \ first explored, few standards \nexisted to help to navigate or measure this\
+ \ new \nfrontier. The Turing Test – of whether a machine can \nexhibit behaviour\
+ \ equivalent to (or indistinguishable \nfrom) a human being – captured the popular\
+ \ \nimagination, but is of more cultural than scientific \nsignificance. Indeed,\
+ \ it is telling that some of \nthe greatest computational advances have been \n\
+ measured by their success in games, such as when \na computer could beat humans\
+ \ at chess, Go, poker \nor Jeopardy. Such measures were easily understood \nby\
+ \ non-specialists, but were neither rigorous nor \nparticularly scientific.\n\
+ 112 More recently, there has been a proliferation of \nstandards. Figure 13 illustrates\
+ \ the increasing \nnumber of relevant standards adopted by ITU, the \nInternational\
+ \ Organization for Standardization (ISO), \nthe International Electrotechnical\
+ \ Commission \n(IEC) and the Institute of Electrical and Electronics \nEngineers\
+ \ (IEEE).32\n30 Such a gathering could also provide an opportunity for multi-stakeholder\
+ \ debate of any hardening of the global governance of AI. These might include,\
+ \ for \nexample, prohibitions on the development of uncontainable or uncontrollable\
+ \ AI systems, or requirements that all AI systems be sufficiently transparent\
+ \ so that \ntheir consequences can be traced back to a legal actor that can assume\
+ \ responsibility for them.\n31 Although multiple AI summits have helped a subset\
+ \ of 20–30 countries to align on AI safety issues, participation has been inconsistent:\
+ \ Brazil, China and \nIreland endorsed the Bletchley Declaration in November 2023,\
+ \ but not the Seoul Ministerial Statement six months later (see fig. 12). Conversely,\
+ \ Mexico and \nNew Zealand endorsed the Seoul Ministerial Statement, but did not\
+ \ endorse the Bletchley Declaration.\n32 Many new standards are also emerging\
+ \ at the national and multinational levels, such as the United States White House\
+ \ Voluntary AI Commitments and the \nEuropean Union Codes of Practice for the\
+ \ AI Act."
+ - source_sentence: Describe the minimum set of criteria that should be included in
+ the incident reporting process for GAI systems, according to the organizational
+ practices established for identifying incidents.
+ sentences:
+ - "APPENDIX\nSummaries of Additional Engagements: \n•OSTP created an email address\
+ \ ( [email protected] ) to solicit comments from the public on the use of\n\
+ artificial intelligence and other data-driven technologies in their lives.\n•OSTP\
+ \ issued a Request For Information (RFI) on the use and governance of biometric\
+ \ technologies.113 The\npurpose of this RFI was to understand the extent and variety\
+ \ of biometric technologies in past, current, or\nplanned use; the domains in\
+ \ which these technologies are being used; the entities making use of them; currentprinciples,\
+ \ practices, or policies governing their use; and the stakeholders that are, or\
+ \ may be, impacted by theiruse or regulation. The 130 responses to this RFI are\
+ \ available in full online\n114 and were submitted by the below\nlisted organizations\
+ \ and individuals:\nAccenture \nAccess Now ACT | The App Association AHIP \nAIethicist.org\
+ \ \nAirlines for America Alliance for Automotive Innovation Amelia Winger-Bearskin\
+ \ American Civil Liberties Union American Civil Liberties Union of Massachusetts\
+ \ American Medical Association ARTICLE19 Attorneys General of the District of\
+ \ Columbia, Illinois, Maryland, Michigan, Minnesota, New York, North Carolina,\
+ \ Oregon, Vermont, and Washington Avanade Aware Barbara Evans Better Identity\
+ \ Coalition Bipartisan Policy Center Brandon L. Garrett and Cynthia Rudin Brian\
+ \ Krupp Brooklyn Defender Services BSA | The Software Alliance Carnegie Mellon\
+ \ University Center for Democracy & Technology Center for New Democratic Processes\
+ \ Center for Research and Education on Accessible Technology and Experiences at\
+ \ University of Washington, Devva Kasnitz, L Jean Camp, Jonathan Lazar, Harry\
+ \ Hochheiser Center on Privacy & Technology at Georgetown Law Cisco Systems City\
+ \ of Portland Smart City PDX Program CLEAR Clearview AI Cognoa Color of Change\
+ \ Common Sense Media Computing Community Consortium at Computing Research Association\
+ \ Connected Health Initiative Consumer Technology Association Courtney Radsch\
+ \ Coworker Cyber Farm Labs Data & Society Research Institute Data for Black Lives\
+ \ Data to Actionable Knowledge Lab at Harvard University Deloitte Dev Technology\
+ \ Group Digital Therapeutics Alliance Digital Welfare State & Human Rights Project\
+ \ and Center for Human Rights and Global Justice at New York University School\
+ \ of Law, and Temple University Institute for Law, Innovation & Technology Dignari\
+ \ Douglas Goddard Edgar Dworsky Electronic Frontier Foundation Electronic Privacy\
+ \ Information Center, Center for Digital Democracy, and Consumer Federation of\
+ \ America FaceTec Fight for the Future Ganesh Mani Georgia Tech Research Institute\
+ \ Google Health Information Technology Research and Development Interagency Working\
+ \ Group HireVue HR Policy Association ID.me Identity and Data Sciences Laboratory\
+ \ at Science Applications International Corporation Information Technology and\
+ \ Innovation Foundation Information Technology Industry Council Innocence Project\
+ \ Institute for Human-Centered Artificial Intelligence at Stanford University\
+ \ Integrated Justice Information Systems Institute International Association of\
+ \ Chiefs of Police International Biometrics + Identity Association International\
+ \ Business Machines Corporation International Committee of the Red Cross Inventionphysics\
+ \ iProov Jacob Boudreau Jennifer K. Wagner, Dan Berger, Margaret Hu, and Sara\
+ \ Katsanis Jonathan Barry-Blocker Joseph Turow Joy Buolamwini Joy Mack Karen Bureau\
+ \ Lamont Gholston Lawyers’ Committee for Civil Rights Under Law \n60"
+ - "19 GV-4.1-003 Establish policies, procedures, and processes for oversight functions\
+ \ (e.g., senior \nleadership, legal, compliance, including internal evaluation\
+ \ ) across the GAI \nlifecycle, from problem formulation and supply chains to\
+ \ system decommission. Value Chain and Component \nIntegration \nAI Actor Tasks:\
+ \ AI Deployment, AI Design, AI Development, Operation and Monitoring \n \nGOVERN\
+ \ 4.2: Organizational teams document the risks and potential impacts of the AI\
+ \ technology they design, develop, deploy, \nevaluate, and use, and they communicate\
+ \ about the impacts more broadly. \nAction ID Suggested Action GAI Risks \n\
+ GV-4.2-001 Establish terms of use and terms of service for GAI systems . Intellectual\
+ \ Property ; Dangerous , \nViolent, or Hateful Content ; \nObscene, Degrading,\
+ \ and/or \nAbusive Content \nGV-4.2-002 Include relevant AI Actors in the GAI\
+ \ system risk identification process. Human -AI Configuration \nGV-4.2-0 03 Verify\
+ \ that downstream GAI system impacts (such as the use of third -party \nplugins)\
+ \ are included in the impact documentation process. Value Chain and Component\
+ \ \nIntegration \nAI Actor Tasks: AI Deployment, AI Design, AI Development,\
+ \ Operation and Monitoring \n \nGOVERN 4.3: Organizational practices are in place\
+ \ to enable AI testing, identification of incidents, and information sharing. \
+ \ \nAction ID Suggested Action GAI Risks \nGV4.3-- 001 Establish policies for\
+ \ measuring the effectiveness of employed content \nprovenance methodologies (e.g.,\
+ \ cryptography, watermarking, steganography, etc.) Information Integrity \nGV-4.3-002\
+ \ Establish o rganizational practices to identify the minimum set of criteria\
+ \ \nnecessary for GAI system incident reporting such as: System ID (auto -generated\
+ \ \nmost likely), Title, Reporter, System/Source, Data Reported, Date of Incident,\
+ \ Description, Impact(s), Stakeholder(s) Impacted. Information Security"
+ - "72 Governing AI for Humanity Box 15: Possible functions and first-year deliverables\
+ \ of the AI office\nThe AI office should have a light structure and aim to be\
+ \ agile, trusted and networked. Where necessary, it should \noperate in a “hub\
+ \ and spoke” manner to connect to other parts of the United Nations system and\
+ \ beyond.\nOutreach could include serving as a key node in a so-called soft coordination\
+ \ architecture between Member \nStates, plurilateral networks, civil society organizations,\
+ \ academia and technology companies in a regime complex \nthat weaves together\
+ \ to solve problems collaboratively through networking, and as a safe, trusted\
+ \ place to \nconvene on relevant topics. Ambitiously, it could become the glue\
+ \ that helps to hold such other evolving networks \ntogether.\nSupporting the\
+ \ various initiatives proposed in this report includes the important function\
+ \ of ensuring inclusiveness \nat speed in delivering outputs such as scientific\
+ \ reports, governance dialogue and identifying appropriate follow-\nup entities.\n\
+ Common understanding :\n• Facilitate recruitment of and support the international\
+ \ scientific panel.\nCommon ground :\n• Service policy dialogues with multi-stakeholder\
+ \ inputs in support of interoperability and policy learning. \nAn initial priority\
+ \ topic is the articulation of risk thresholds and safety frameworks across jurisdictions\n\
+ • Support ITU, ISO/IEC and IEEE on setting up the AI standards exchange.\nCommon\
+ \ benefits :\n• Support the AI capacity development network with an initial focus\
+ \ on building public interest AI capacity \namong public officials and social\
+ \ entrepreneurs. Define the initial network vision, outcomes, go vernance \nstructure,\
+ \ partnerships and operational mechanisms.\n• Define the vision, outcomes, governance\
+ \ structure and operational mechanisms for the global fund for AI, \nand seek\
+ \ feedback from Member States, industry and civil society stakeholders on the\
+ \ proposal, with a \nview to funding initial projects within six months of establishment.\n\
+ • Prepare and publish an annual list of prioritized investment areas to guide\
+ \ both the global fund for AI and \ninvestments outside that structure.\nCoherent\
+ \ effort :\n• Establish lightweight mechanisms that support Member States and\
+ \ other relevant organizations to be \nmore connected, coordinated and effective\
+ \ in pursuing their global AI governance efforts.\n• Prepare initial frameworks\
+ \ to guide and monitor the AI office’s work, including a global governance risk\
+ \ \ntaxonomy, a global AI policy landscape review and a global stakeholder map.\n\
+ • Develop and implement quarterly reporting and periodic in-person presentations\
+ \ to Member States on \nthe AI office’s progress against its workplan and establish\
+ \ feedback channels to support adjustments as \nneeded.\n• Establish a steering\
+ \ committee jointly led by the AI office, ITU, UNC TAD, UNESCO and other relevant\
+ \ \nUnited Nations entities and organizations to accelerate the work of the United\
+ \ Nations in service of the \nfunctions above, and review progress of the accelerated\
+ \ efforts every three months.\n• Promote joint learning and development opportunities\
+ \ for Member State representatives to support them \nto carry out their responsibilities\
+ \ for global AI governance, in cooperation with relevant United Nations \nentities\
+ \ and organizations such as the United Nations Institute for Training and Research\
+ \ and the United \nNations University."
+ - source_sentence: What are some of the legal frameworks mentioned in the context
+ that aim to protect personal information, and how do they relate to data privacy
+ concerns?
+ sentences:
+ - "NOTICE & \nEXPLANATION \nWHAT SHOULD BE EXPECTED OF AUTOMATED SYSTEMS\nThe expectations\
+ \ for automated systems are meant to serve as a blueprint for the development\
+ \ of additional \ntechnical standards and practices that are tailored for particular\
+ \ sectors and contexts. \nTailored to the level of risk. An assessment should\
+ \ be done to determine the level of risk of the auto -\nmated system. In settings\
+ \ where the consequences are high as determined by a risk assessment, or extensive\
+ \ \noversight is expected (e.g., in criminal justice or some public sector settings),\
+ \ explanatory mechanisms should be built into the system design so that the system’s\
+ \ full behavior can be explained in advance (i.e., only fully transparent models\
+ \ should be used), rather than as an after-the-decision interpretation. In other\
+ \ settings, the extent of explanation provided should be tailored to the risk\
+ \ level. \nValid. The explanation provided by a system should accurately reflect\
+ \ the factors and the influences that led \nto a particular decision, and should\
+ \ be meaningful for the particular customization based on purpose, target, and\
+ \ level of risk. While approximation and simplification may be necessary for the\
+ \ system to succeed based on the explanatory purpose and target of the explanation,\
+ \ or to account for the risk of fraud or other concerns related to revealing decision-making\
+ \ information, such simplifications should be done in a scientifically supportable\
+ \ way. Where appropriate based on the explanatory system, error ranges for the\
+ \ explanation should be calculated and included in the explanation, with the choice\
+ \ of presentation of such information balanced with usability and overall interface\
+ \ complexity concerns. \nDemonstrate protections for notice and explanation \n\
+ Reporting. Summary reporting should document the determinations made based on\
+ \ the above consider -\nations, including: the responsible entities for accountability\
+ \ purposes; the goal and use cases for the system, identified users, and impacted\
+ \ populations; the assessment of notice clarity and timeliness; the assessment\
+ \ of the explanation's validity and accessibility; the assessment of the level\
+ \ of risk; and the account and assessment of how explanations are tailored, including\
+ \ to the purpose, the recipient of the explanation, and the level of risk. Individualized\
+ \ profile information should be made readily available to the greatest extent\
+ \ possible that includes explanations for any system impacts or inferences. Reporting\
+ \ should be provided in a clear plain language and machine-readable manner. \n\
+ 44"
+ - "25 MP-2.3-002 Review and document accuracy, representativeness, relevance, suitability\
+ \ of data \nused at different stages of AI life cycle. Harmful Bias and Homogenization\
+ \ ; \nIntellectual Property \nMP-2.3-003 Deploy and document fact -checking techniques\
+ \ to verify the accuracy and \nveracity of information generated by GAI systems,\
+ \ especially when the \ninformation comes from multiple (or unknown) sources.\
+ \ Information Integrity \nMP-2.3-004 Develop and implement testing techniques\
+ \ to identify GAI produced content (e.g., synthetic media) that might be indistinguishable\
+ \ from human -generated content. Information Integrity \nMP-2.3-005 Implement\
+ \ plans for GAI systems to undergo regular adversarial testing to identify \n\
+ vulnerabilities and potential manipulation or misuse. Information Security \n\
+ AI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMAP 3.4: Processes\
+ \ for operator and practitioner proficiency with AI system performance and trustworthiness\
+ \ – and relevant \ntechnical standards and certifications – are defined, assessed,\
+ \ and documented. \nAction ID Suggested Action GAI Risks \nMP-3.4-001 Evaluate\
+ \ whether GAI operators and end -users can accurately understand \ncontent lineage\
+ \ and origin. Human -AI Configuration ; \nInformation Integrity \nMP-3.4-002\
+ \ Adapt existing training programs to include modules on digital content \ntransparency.\
+ \ Information Integrity \nMP-3.4-003 Develop certification programs that test\
+ \ proficiency in managing GAI risks and \ninterpreting content provenance, relevant\
+ \ to specific industry and context. Information Integrity \nMP-3.4-004 Delineate\
+ \ human proficiency tests from tests of GAI capabilities. Human -AI Configuration\
+ \ \nMP-3.4-005 Implement systems to continually monitor and track the outcomes\
+ \ of human- GAI \nconfigurations for future refinement and improvements . Human\
+ \ -AI Configuration ; \nInformation Integrity \nMP-3.4-006 Involve the end -users,\
+ \ practitioners, and operators in GAI system in prototyping \nand testing activities.\
+ \ Make sure these tests cover various scenarios , such as crisis \nsituations\
+ \ or ethically sensitive contexts. Human -AI Configuration ; \nInformation Integrity\
+ \ ; Harmful Bias \nand Homogenization ; Dangerous , \nViolent, or Hateful Content\
+ \ \nAI Actor Tasks: AI Design, AI Development, Domain Experts, End -Users, Human\
+ \ Factors, Operation and Monitoring"
+ - '65. See, e.g., Scott Ikeda. Major Data Broker Exposes 235 Million Social Media
+ Profiles in Data Lead: Info
+
+ Appears to Have Been Scraped Without Permission. CPO Magazine. Aug. 28, 2020.
+ https://
+
+ www.cpomagazine.com/cyber-security/major-data-broker-exposes-235-million-social-media-profiles-
+
+ in-data-leak/; Lily Hay Newman. 1.2 Billion Records Found Exposed Online in a
+ Single Server . WIRED,
+
+ Nov. 22, 2019. https://www.wired.com/story/billion-records-exposed-online/
+
+ 66.Lola Fadulu. Facial Recognition Technology in Public Housing Prompts Backlash
+ . New York Times.
+
+ Sept. 24, 2019.
+
+ https://www.nytimes.com/2019/09/24/us/politics/facial-recognition-technology-housing.html
+
+ 67. Jo Constantz. ‘They Were Spying On Us’: Amazon, Walmart, Use Surveillance
+ Technology to Bust
+
+ Unions. Newsweek. Dec. 13, 2021.
+
+ https://www.newsweek.com/they-were-spying-us-amazon-walmart-use-surveillance-technology-bust-
+
+ unions-1658603
+
+ 68. See, e.g., enforcement actions by the FTC against the photo storage app Everalbaum
+
+ (https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter),
+ and
+
+ against Weight Watchers and their subsidiary Kurbo(https://www.ftc.gov/legal-library/browse/cases-proceedings/1923228-weight-watchersww)
+
+ 69. See, e.g., HIPAA, Pub. L 104-191 (1996); Fair Debt Collection Practices Act
+ (FDCPA), Pub. L. 95-109
+
+ (1977); Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g),
+ Children''s Online
+
+ Privacy Protection Act of 1998, 15 U.S.C. 6501–6505, and Confidential Information
+ Protection andStatistical Efficiency Act (CIPSEA) (116 Stat. 2899)
+
+ 70. Marshall Allen. You Snooze, You Lose: Insurers Make The Old Adage Literally
+ True . ProPublica. Nov.
+
+ 21, 2018.
+
+ https://www.propublica.org/article/you-snooze-you-lose-insurers-make-the-old-adage-literally-true
+
+ 71.Charles Duhigg. How Companies Learn Your Secrets. The New York Times. Feb.
+ 16, 2012.
+
+ https://www.nytimes.com/2012/02/19/magazine/shopping-habits.html72. Jack Gillum
+ and Jeff Kao. Aggression Detectors: The Unproven, Invasive Surveillance Technology
+
+ Schools are Using to Monitor Students. ProPublica. Jun. 25, 2019.
+
+ https://features.propublica.org/aggression-detector/the-unproven-invasive-surveillance-technology-
+
+ schools-are-using-to-monitor-students/
+
+ 73.Drew Harwell. Cheating-detection companies made millions during the pandemic.
+ Now students are
+
+ fighting back. Washington Post. Nov. 12, 2020.
+
+ https://www.washingtonpost.com/technology/2020/11/12/test-monitoring-student-revolt/
+
+ 74. See, e.g., Heather Morrison. Virtual Testing Puts Disabled Students at a Disadvantage.
+ Government
+
+ Technology. May 24, 2022.
+
+ https://www.govtech.com/education/k-12/virtual-testing-puts-disabled-students-at-a-disadvantage;
+
+ Lydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford. Ableism And
+ Disability
+
+ Discrimination In New Surveillance Technologies: How new surveillance technologies
+ in education,
+
+ policing, health care, and the workplace disproportionately harm disabled people
+ . Center for Democracy
+
+ and Technology Report. May 24, 2022.https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how-new-surveillance-technologies-in-education-policing-health-care-and-the-workplace-disproportionately-harm-disabled-people/
+
+ 69'
+ model-index:
+ - name: SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5
+ results:
+ - task:
+ type: information-retrieval
+ name: Information Retrieval
+ dataset:
+ name: Unknown
+ type: unknown
+ metrics:
+ - type: cosine_accuracy@1
+ value: 0.71875
+ name: Cosine Accuracy@1
+ - type: cosine_accuracy@3
+ value: 0.921875
+ name: Cosine Accuracy@3
+ - type: cosine_accuracy@5
+ value: 0.96875
+ name: Cosine Accuracy@5
+ - type: cosine_accuracy@10
+ value: 1.0
+ name: Cosine Accuracy@10
+ - type: cosine_precision@1
+ value: 0.71875
+ name: Cosine Precision@1
+ - type: cosine_precision@3
+ value: 0.30729166666666663
+ name: Cosine Precision@3
+ - type: cosine_precision@5
+ value: 0.19374999999999998
+ name: Cosine Precision@5
+ - type: cosine_precision@10
+ value: 0.09999999999999999
+ name: Cosine Precision@10
+ - type: cosine_recall@1
+ value: 0.71875
+ name: Cosine Recall@1
+ - type: cosine_recall@3
+ value: 0.921875
+ name: Cosine Recall@3
+ - type: cosine_recall@5
+ value: 0.96875
+ name: Cosine Recall@5
+ - type: cosine_recall@10
+ value: 1.0
+ name: Cosine Recall@10
+ - type: cosine_ndcg@10
+ value: 0.8727659974381962
+ name: Cosine Ndcg@10
+ - type: cosine_mrr@10
+ value: 0.8304687500000002
+ name: Cosine Mrr@10
+ - type: cosine_map@100
+ value: 0.8304687500000001
+ name: Cosine Map@100
+ - type: dot_accuracy@1
+ value: 0.734375
+ name: Dot Accuracy@1
+ - type: dot_accuracy@3
+ value: 0.921875
+ name: Dot Accuracy@3
+ - type: dot_accuracy@5
+ value: 0.96875
+ name: Dot Accuracy@5
+ - type: dot_accuracy@10
+ value: 1.0
+ name: Dot Accuracy@10
+ - type: dot_precision@1
+ value: 0.734375
+ name: Dot Precision@1
+ - type: dot_precision@3
+ value: 0.30729166666666663
+ name: Dot Precision@3
+ - type: dot_precision@5
+ value: 0.19374999999999998
+ name: Dot Precision@5
+ - type: dot_precision@10
+ value: 0.09999999999999999
+ name: Dot Precision@10
+ - type: dot_recall@1
+ value: 0.734375
+ name: Dot Recall@1
+ - type: dot_recall@3
+ value: 0.921875
+ name: Dot Recall@3
+ - type: dot_recall@5
+ value: 0.96875
+ name: Dot Recall@5
+ - type: dot_recall@10
+ value: 1.0
+ name: Dot Recall@10
+ - type: dot_ndcg@10
+ value: 0.8785327200386421
+ name: Dot Ndcg@10
+ - type: dot_mrr@10
+ value: 0.8382812500000002
+ name: Dot Mrr@10
+ - type: dot_map@100
+ value: 0.8382812500000001
+ name: Dot Map@100
+ ---
+
+ # SentenceTransformer based on Alibaba-NLP/gte-large-en-v1.5
+
+ This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
+
+ ## Model Details
+
+ ### Model Description
+ - **Model Type:** Sentence Transformer
+ - **Base model:** [Alibaba-NLP/gte-large-en-v1.5](https://huggingface.co/Alibaba-NLP/gte-large-en-v1.5) <!-- at revision 104333d6af6f97649377c2afbde10a7704870c7b -->
+ - **Maximum Sequence Length:** 8192 tokens
+ - **Output Dimensionality:** 1024 dimensions
+ - **Similarity Function:** Cosine Similarity (see the sketch below)
+ <!-- - **Training Dataset:** Unknown -->
+ <!-- - **Language:** Unknown -->
+ <!-- - **License:** Unknown -->
+
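+ For reference, the cosine similarity used above can be sketched as follows; `u` and `v` are placeholder 1024-dimensional embedding vectors, and this is an illustration rather than the library's own implementation:
+
+ ```python
+ import numpy as np
+
+ def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
+     # Dot product divided by the product of the vector norms;
+     # 1.0 means identical direction, 0.0 means orthogonal.
+     return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
+ ```
+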
+ ### Model Sources
+
+ - **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
+ - **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
+ - **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
+
+ ### Full Model Architecture
+
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 8192, 'do_lower_case': False}) with Transformer model: NewModel
+   (1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
+ )
+ ```
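+
+ Once loaded, the two modules above can be inspected directly, since a SentenceTransformer behaves like a torch `nn.Sequential`. A small sketch with a placeholder repo id; note that the gte-based architecture (`NewModel`) is assumed to require `trust_remote_code=True`:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Placeholder repo id; trust_remote_code=True is assumed to be needed for the
+ # custom gte architecture.
+ model = SentenceTransformer("sentence_transformers_model_id", trust_remote_code=True)
+ print(model[0])  # Transformer module (max_seq_length=8192)
+ print(model[1])  # Pooling module (CLS token, 1024 dimensions)
+ ```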
+
+ ## Usage
+
+ ### Direct Usage (Sentence Transformers)
+
+ First install the Sentence Transformers library:
+
+ ```bash
+ pip install -U sentence-transformers
+ ```
+
+ Then you can load this model and run inference.
+ ```python
+ from sentence_transformers import SentenceTransformer
+
+ # Download from the 🤗 Hub
+ model = SentenceTransformer("sentence_transformers_model_id")
+ # Run inference
+ sentences = [
'What are some of the legal frameworks mentioned in the context that aim to protect personal information, and how do they relate to data privacy concerns?',
+ "65. See, e.g., Scott Ikeda. Major Data Broker Exposes 235 Million Social Media Profiles in Data Lead: Info\nAppears to Have Been Scraped Without Permission. CPO Magazine. Aug. 28, 2020. https://\nwww.cpomagazine.com/cyber-security/major-data-broker-exposes-235-million-social-media-profiles-\nin-data-leak/; Lily Hay Newman. 1.2 Billion Records Found Exposed Online in a Single Server . WIRED,\nNov. 22, 2019. https://www.wired.com/story/billion-records-exposed-online/\n66.Lola Fadulu. Facial Recognition Technology in Public Housing Prompts Backlash . New York Times.\nSept. 24, 2019.\nhttps://www.nytimes.com/2019/09/24/us/politics/facial-recognition-technology-housing.html\n67. Jo Constantz. ‘They Were Spying On Us’: Amazon, Walmart, Use Surveillance Technology to Bust\nUnions. Newsweek. Dec. 13, 2021.\nhttps://www.newsweek.com/they-were-spying-us-amazon-walmart-use-surveillance-technology-bust-\nunions-1658603\n68. See, e.g., enforcement actions by the FTC against the photo storage app Everalbaum\n(https://www.ftc.gov/legal-library/browse/cases-proceedings/192-3172-everalbum-inc-matter), and\nagainst Weight Watchers and their subsidiary Kurbo(https://www.ftc.gov/legal-library/browse/cases-proceedings/1923228-weight-watchersww)\n69. See, e.g., HIPAA, Pub. L 104-191 (1996); Fair Debt Collection Practices Act (FDCPA), Pub. L. 95-109\n(1977); Family Educational Rights and Privacy Act (FERPA) (20 U.S.C. § 1232g), Children's Online\nPrivacy Protection Act of 1998, 15 U.S.C. 6501–6505, and Confidential Information Protection andStatistical Efficiency Act (CIPSEA) (116 Stat. 2899)\n70. Marshall Allen. You Snooze, You Lose: Insurers Make The Old Adage Literally True . ProPublica. Nov.\n21, 2018.\nhttps://www.propublica.org/article/you-snooze-you-lose-insurers-make-the-old-adage-literally-true\n71.Charles Duhigg. How Companies Learn Your Secrets. The New York Times. Feb. 16, 2012.\nhttps://www.nytimes.com/2012/02/19/magazine/shopping-habits.html72. Jack Gillum and Jeff Kao. Aggression Detectors: The Unproven, Invasive Surveillance Technology\nSchools are Using to Monitor Students. ProPublica. Jun. 25, 2019.\nhttps://features.propublica.org/aggression-detector/the-unproven-invasive-surveillance-technology-\nschools-are-using-to-monitor-students/\n73.Drew Harwell. Cheating-detection companies made millions during the pandemic. Now students are\nfighting back. Washington Post. Nov. 12, 2020.\nhttps://www.washingtonpost.com/technology/2020/11/12/test-monitoring-student-revolt/\n74. See, e.g., Heather Morrison. Virtual Testing Puts Disabled Students at a Disadvantage. Government\nTechnology. May 24, 2022.\nhttps://www.govtech.com/education/k-12/virtual-testing-puts-disabled-students-at-a-disadvantage;\nLydia X. Z. Brown, Ridhi Shetty, Matt Scherer, and Andrew Crawford. Ableism And Disability\nDiscrimination In New Surveillance Technologies: How new surveillance technologies in education,\npolicing, health care, and the workplace disproportionately harm disabled people . Center for Democracy\nand Technology Report. May 24, 2022.https://cdt.org/insights/ableism-and-disability-discrimination-in-new-surveillance-technologies-how-new-surveillance-technologies-in-education-policing-health-care-and-the-workplace-disproportionately-harm-disabled-people/\n69",
+ '25 MP-2.3-002 Review and document accuracy, representativeness, relevance, suitability of data \nused at different stages of AI life cycle. Harmful Bias and Homogenization ; \nIntellectual Property \nMP-2.3-003 Deploy and document fact -checking techniques to verify the accuracy and \nveracity of information generated by GAI systems, especially when the \ninformation comes from multiple (or unknown) sources. Information Integrity \nMP-2.3-004 Develop and implement testing techniques to identify GAI produced content (e.g., synthetic media) that might be indistinguishable from human -generated content. Information Integrity \nMP-2.3-005 Implement plans for GAI systems to undergo regular adversarial testing to identify \nvulnerabilities and potential manipulation or misuse. Information Security \nAI Actor Tasks: AI Development, Domain Experts, TEVV \n \nMAP 3.4: Processes for operator and practitioner proficiency with AI system performance and trustworthiness – and relevant \ntechnical standards and certifications – are defined, assessed, and documented. \nAction ID Suggested Action GAI Risks \nMP-3.4-001 Evaluate whether GAI operators and end -users can accurately understand \ncontent lineage and origin. Human -AI Configuration ; \nInformation Integrity \nMP-3.4-002 Adapt existing training programs to include modules on digital content \ntransparency. Information Integrity \nMP-3.4-003 Develop certification programs that test proficiency in managing GAI risks and \ninterpreting content provenance, relevant to specific industry and context. Information Integrity \nMP-3.4-004 Delineate human proficiency tests from tests of GAI capabilities. Human -AI Configuration \nMP-3.4-005 Implement systems to continually monitor and track the outcomes of human- GAI \nconfigurations for future refinement and improvements . Human -AI Configuration ; \nInformation Integrity \nMP-3.4-006 Involve the end -users, practitioners, and operators in GAI system in prototyping \nand testing activities. Make sure these tests cover various scenarios , such as crisis \nsituations or ethically sensitive contexts. Human -AI Configuration ; \nInformation Integrity ; Harmful Bias \nand Homogenization ; Dangerous , \nViolent, or Hateful Content \nAI Actor Tasks: AI Design, AI Development, Domain Experts, End -Users, Human Factors, Operation and Monitoring',
+ ]
+ embeddings = model.encode(sentences)
+ print(embeddings.shape)
+ # [3, 1024]
+
+ # Get the similarity scores for the embeddings
+ similarities = model.similarity(embeddings, embeddings)
+ print(similarities.shape)
+ # [3, 3]
+ ```
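+
+ For retrieval-style use (matching questions to policy passages, as in the training data), here is a hedged sketch using `sentence_transformers.util.semantic_search`; the corpus and query strings are illustrative placeholders, not part of the original card:
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id, as above
+
+ corpus = ["a passage about AI governance", "a passage about data privacy laws"]
+ queries = ["Which legal frameworks protect personal information?"]
+
+ corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
+ query_embeddings = model.encode(queries, convert_to_tensor=True)
+
+ # For each query, returns the top_k corpus entries ranked by cosine similarity
+ hits = util.semantic_search(query_embeddings, corpus_embeddings, top_k=2)
+ for hit in hits[0]:
+     print(corpus[hit["corpus_id"]], round(hit["score"], 3))
+ ```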
+
+ <!--
+ ### Direct Usage (Transformers)
+
+ <details><summary>Click to see the direct usage in Transformers</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Downstream Usage (Sentence Transformers)
+
+ You can finetune this model on your own dataset.
+
+ <details><summary>Click to expand</summary>
+
+ </details>
+ -->
+
+ <!--
+ ### Out-of-Scope Use
+
+ *List how the model may foreseeably be misused and address what users ought not to do with the model.*
+ -->
+
+ ## Evaluation
+
+ ### Metrics
+
+ #### Information Retrieval
+
+ * Evaluated with [<code>InformationRetrievalEvaluator</code>](https://sbert.net/docs/package_reference/sentence_transformer/evaluation.html#sentence_transformers.evaluation.InformationRetrievalEvaluator)
+
+ | Metric | Value |
+ |:--------------------|:-----------|
+ | cosine_accuracy@1 | 0.7188 |
+ | cosine_accuracy@3 | 0.9219 |
+ | cosine_accuracy@5 | 0.9688 |
+ | cosine_accuracy@10 | 1.0 |
+ | cosine_precision@1 | 0.7188 |
+ | cosine_precision@3 | 0.3073 |
+ | cosine_precision@5 | 0.1937 |
+ | cosine_precision@10 | 0.1 |
+ | cosine_recall@1 | 0.7188 |
+ | cosine_recall@3 | 0.9219 |
+ | cosine_recall@5 | 0.9688 |
+ | cosine_recall@10 | 1.0 |
+ | cosine_ndcg@10 | 0.8728 |
+ | cosine_mrr@10 | 0.8305 |
+ | cosine_map@100 | 0.8305 |
+ | dot_accuracy@1 | 0.7344 |
+ | dot_accuracy@3 | 0.9219 |
+ | dot_accuracy@5 | 0.9688 |
+ | dot_accuracy@10 | 1.0 |
+ | dot_precision@1 | 0.7344 |
+ | dot_precision@3 | 0.3073 |
+ | dot_precision@5 | 0.1937 |
+ | dot_precision@10 | 0.1 |
+ | dot_recall@1 | 0.7344 |
+ | dot_recall@3 | 0.9219 |
+ | dot_recall@5 | 0.9688 |
+ | dot_recall@10 | 1.0 |
+ | dot_ndcg@10 | 0.8785 |
+ | dot_mrr@10 | 0.8383 |
+ | **dot_map@100** | **0.8383** |
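+
+ As a rough sketch of how such numbers are produced (the actual evaluation set is not distributed with this repository, so the queries, corpus, and relevance judgments below are illustrative placeholders):
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ from sentence_transformers.evaluation import InformationRetrievalEvaluator
+
+ model = SentenceTransformer("sentence_transformers_model_id")  # placeholder id
+
+ queries = {"q1": "Which legal frameworks protect personal information?"}
+ corpus = {"d1": "HIPAA, FDCPA, FERPA, and COPPA are among the cited statutes.", "d2": "An unrelated passage."}
+ relevant_docs = {"q1": {"d1"}}  # query id -> set of relevant corpus ids
+
+ evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="example")
+ metrics = evaluator(model)  # accuracy@k, precision@k, recall@k, ndcg@10, mrr@10, map@100 for cosine and dot
+ print(metrics)
+ ```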
+
+ <!--
+ ## Bias, Risks and Limitations
+
+ *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
+ -->
+
+ <!--
+ ### Recommendations
+
+ *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
+ -->
+
+ ## Training Details
+
+ ### Training Dataset
+
+ #### Unnamed Dataset
+
+
+ * Size: 586 training samples
+ * Columns: <code>sentence_0</code> and <code>sentence_1</code>
+ * Approximate statistics based on the first 586 samples:
+ | | sentence_0 | sentence_1 |
+ |:--------|:-----------|:-----------|
+ | type | string | string |
+ | details | <ul><li>min: 20 tokens</li><li>mean: 35.95 tokens</li><li>max: 60 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 545.8 tokens</li><li>max: 1018 tokens</li></ul> |
+ * Samples:
+ | sentence_0 | sentence_1 |
+ |:-----------|:-----------|
+ | <code>What are the primary objectives outlined in the "Blueprint for an AI Bill of Rights" as it pertains to the American people?</code> | <code>BLUEPRINT FOR AN <br>AI B ILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code> |
+ | <code>In what ways does the document propose to ensure that automated systems are designed and implemented to benefit society?</code> | <code>BLUEPRINT FOR AN <br>AI B ILL OF <br>RIGHTS <br>MAKING AUTOMATED <br>SYSTEMS WORK FOR <br>THE AMERICAN PEOPLE <br>OCTOBER 2022</code> |
+ | <code>What is the primary purpose of the Blueprint for an AI Bill of Rights as published by the White House Office of Science and Technology Policy in October 2022?</code> | <code>About this Document <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People was <br>published by the White House Office of Science and Technology Policy in October 2022. This framework was <br>released one year after OSTP announced the launch of a process to develop “a bill of rights for an AI-powered <br>world.” Its release follows a year of public engagement to inform this initiative. The framework is available <br>online at: https://www.whitehouse.gov/ostp/ai-bill-of-rights <br>About the Office of Science and Technology Policy <br>The Office of Science and Technology Policy (OSTP) was established by the National Science and Technology <br>Policy, Organization, and Priorities Act of 1976 to provide the President and others within the Executive Office <br>of the President with advice on the scientific, engineering, and technological aspects of the economy, national <br>security, health, foreign relations, the environment, and the technological recovery and use of resources, among <br>other topics. OSTP leads interagency science and technology policy coordination efforts, assists the Office of <br>Management and Budget (OMB) with an annual review and analysis of Federal research and development in <br>budgets, and serves as a source of scientific and technological analysis and judgment for the President with <br>respect to major policies, plans, and programs of the Federal Government. <br>Legal Disclaimer <br>The Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People is a white paper <br>published by the White House Office of Science and Technology Policy. It is intended to support the <br>development of policies and practices that protect civil rights and promote democratic values in the building, <br>deployment, and governance of automated systems. <br>The Blueprint for an AI Bill of Rights is non-binding and does not constitute U.S. government policy. It <br>does not supersede, modify, or direct an interpretation of any existing statute, regulation, policy, or <br>international instrument. It does not constitute binding guidance for the public or Federal agencies and <br>therefore does not require compliance with the principles described herein. It also is not determinative of what <br>the U.S. government’s position will be in any international negotiation. Adoption of these principles may not <br>meet the requirements of existing statutes, regulations, policies, or international instruments, or the <br>requirements of the Federal agencies that enforce them. These principles are not intended to, and do not, <br>prohibit or limit any lawful activity of a government agency, including law enforcement, national security, or <br>intelligence activities. <br>The appropriate application of the principles set forth in this white paper depends significantly on the <br>context in which automated systems are being utilized. In some circumstances, application of these principles <br>in whole or in part may not be appropriate given the intended use of automated systems to achieve government <br>agency missions. Future sector-specific guidance will likely be necessary and important for guiding the use of <br>automated systems in certain settings such as AI systems used as part of school building security or automated <br>health diagnostic systems. <br>The Blueprint for an AI Bill of Rights recognizes that law enforcement activities require a balancing of <br>equities, for example, between the protection of sensitive law enforcement information and the principle of <br>notice; as such, notice may not be appropriate, or may need to be adjusted to protect sources, methods, and <br>other law enforcement equities. Even in contexts where these principles may not apply in whole or in part, <br>federal departments and agencies remain subject to judicial, privacy, and civil liberties oversight as well as <br>existing policies and safeguards that govern automated systems, including, for example, Executive Order 13960, <br>Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government (December 2020). <br>This white paper recognizes that national security (which includes certain law enforcement and <br>homeland security activities) and defense activities are of increased sensitivity and interest to our nation’s <br>adversaries and are often subject to special requirements, such as those governing classified information and <br>other protected data. Such activities require alternative, compatible safeguards through existing policies that <br>govern automated systems and AI, such as the Department of Defense (DOD) AI Ethical Principles and <br>Responsible AI Implementation Pathway and the Intelligence Community (IC) AI Ethics Principles and <br>Framework. The implementation of these policies to national security and defense activities can be informed by <br>the Blueprint for an AI Bill of Rights where feasible.</code> |
+ * Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
+ ```json
+ {
+ "scale": 20.0,
+ "similarity_fct": "cos_sim"
+ }
+ ```
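+
+ A minimal training sketch wiring this loss to the trainer with the non-default hyperparameters listed below (the training pair shown is an illustrative placeholder; the actual 586-pair dataset is not distributed with the model):
+
+ ```python
+ from datasets import Dataset
+ from sentence_transformers import (
+     SentenceTransformer,
+     SentenceTransformerTrainer,
+     SentenceTransformerTrainingArguments,
+ )
+ from sentence_transformers.losses import MultipleNegativesRankingLoss
+
+ model = SentenceTransformer("Alibaba-NLP/gte-large-en-v1.5", trust_remote_code=True)
+
+ train_dataset = Dataset.from_dict({
+     "sentence_0": ["What does the Blueprint for an AI Bill of Rights cover?"],  # question (placeholder)
+     "sentence_1": ["BLUEPRINT FOR AN AI BILL OF RIGHTS ..."],                   # matching passage (placeholder)
+ })
+
+ loss = MultipleNegativesRankingLoss(model)  # defaults: scale=20.0, cos_sim, as recorded above
+
+ args = SentenceTransformerTrainingArguments(
+     output_dir="outputs",
+     num_train_epochs=2,
+     per_device_train_batch_size=5,
+     per_device_eval_batch_size=5,
+     # the card also sets eval_strategy="steps"; that additionally needs an eval dataset or evaluator
+ )
+
+ trainer = SentenceTransformerTrainer(model=model, args=args, train_dataset=train_dataset, loss=loss)
+ trainer.train()
+ ```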
+
+ ### Training Hyperparameters
+ #### Non-Default Hyperparameters
+
+ - `eval_strategy`: steps
+ - `per_device_train_batch_size`: 5
+ - `per_device_eval_batch_size`: 5
+ - `num_train_epochs`: 2
+ - `multi_dataset_batch_sampler`: round_robin
+
+ #### All Hyperparameters
+ <details><summary>Click to expand</summary>
+
+ - `overwrite_output_dir`: False
+ - `do_predict`: False
+ - `eval_strategy`: steps
+ - `prediction_loss_only`: True
+ - `per_device_train_batch_size`: 5
+ - `per_device_eval_batch_size`: 5
+ - `per_gpu_train_batch_size`: None
+ - `per_gpu_eval_batch_size`: None
+ - `gradient_accumulation_steps`: 1
+ - `eval_accumulation_steps`: None
+ - `torch_empty_cache_steps`: None
+ - `learning_rate`: 5e-05
+ - `weight_decay`: 0.0
+ - `adam_beta1`: 0.9
+ - `adam_beta2`: 0.999
+ - `adam_epsilon`: 1e-08
+ - `max_grad_norm`: 1
+ - `num_train_epochs`: 2
+ - `max_steps`: -1
+ - `lr_scheduler_type`: linear
+ - `lr_scheduler_kwargs`: {}
+ - `warmup_ratio`: 0.0
+ - `warmup_steps`: 0
+ - `log_level`: passive
+ - `log_level_replica`: warning
+ - `log_on_each_node`: True
+ - `logging_nan_inf_filter`: True
+ - `save_safetensors`: True
+ - `save_on_each_node`: False
+ - `save_only_model`: False
+ - `restore_callback_states_from_checkpoint`: False
+ - `no_cuda`: False
+ - `use_cpu`: False
+ - `use_mps_device`: False
+ - `seed`: 42
+ - `data_seed`: None
+ - `jit_mode_eval`: False
+ - `use_ipex`: False
+ - `bf16`: False
+ - `fp16`: False
+ - `fp16_opt_level`: O1
+ - `half_precision_backend`: auto
+ - `bf16_full_eval`: False
+ - `fp16_full_eval`: False
+ - `tf32`: None
+ - `local_rank`: 0
+ - `ddp_backend`: None
+ - `tpu_num_cores`: None
+ - `tpu_metrics_debug`: False
+ - `debug`: []
+ - `dataloader_drop_last`: False
+ - `dataloader_num_workers`: 0
+ - `dataloader_prefetch_factor`: None
+ - `past_index`: -1
+ - `disable_tqdm`: False
+ - `remove_unused_columns`: True
+ - `label_names`: None
+ - `load_best_model_at_end`: False
+ - `ignore_data_skip`: False
+ - `fsdp`: []
+ - `fsdp_min_num_params`: 0
+ - `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
+ - `fsdp_transformer_layer_cls_to_wrap`: None
+ - `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
+ - `deepspeed`: None
+ - `label_smoothing_factor`: 0.0
+ - `optim`: adamw_torch
+ - `optim_args`: None
+ - `adafactor`: False
+ - `group_by_length`: False
+ - `length_column_name`: length
+ - `ddp_find_unused_parameters`: None
+ - `ddp_bucket_cap_mb`: None
+ - `ddp_broadcast_buffers`: False
+ - `dataloader_pin_memory`: True
+ - `dataloader_persistent_workers`: False
+ - `skip_memory_metrics`: True
+ - `use_legacy_prediction_loop`: False
+ - `push_to_hub`: False
+ - `resume_from_checkpoint`: None
+ - `hub_model_id`: None
+ - `hub_strategy`: every_save
+ - `hub_private_repo`: False
+ - `hub_always_push`: False
+ - `gradient_checkpointing`: False
+ - `gradient_checkpointing_kwargs`: None
+ - `include_inputs_for_metrics`: False
+ - `eval_do_concat_batches`: True
+ - `fp16_backend`: auto
+ - `push_to_hub_model_id`: None
+ - `push_to_hub_organization`: None
+ - `mp_parameters`: 
+ - `auto_find_batch_size`: False
+ - `full_determinism`: False
+ - `torchdynamo`: None
+ - `ray_scope`: last
+ - `ddp_timeout`: 1800
+ - `torch_compile`: False
+ - `torch_compile_backend`: None
+ - `torch_compile_mode`: None
+ - `dispatch_batches`: None
+ - `split_batches`: None
+ - `include_tokens_per_second`: False
+ - `include_num_input_tokens_seen`: False
+ - `neftune_noise_alpha`: None
+ - `optim_target_modules`: None
+ - `batch_eval_metrics`: False
+ - `eval_on_start`: False
+ - `eval_use_gather_object`: False
+ - `batch_sampler`: batch_sampler
+ - `multi_dataset_batch_sampler`: round_robin
+
+ </details>
+
+ ### Training Logs
+ | Epoch | Step | dot_map@100 |
+ |:------:|:----:|:-----------:|
+ | 0.4237 | 50 | 0.8383 |
+
+
+ ### Framework Versions
+ - Python: 3.10.12
+ - Sentence Transformers: 3.1.1
+ - Transformers: 4.44.2
+ - PyTorch: 2.4.1+cu121
+ - Accelerate: 0.34.2
+ - Datasets: 3.0.1
+ - Tokenizers: 0.19.1
+
+ ## Citation
+
+ ### BibTeX
+
+ #### Sentence Transformers
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+ title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+ author = "Reimers, Nils and Gurevych, Iryna",
+ booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+ month = "11",
+ year = "2019",
+ publisher = "Association for Computational Linguistics",
+ url = "https://arxiv.org/abs/1908.10084",
+ }
+ ```
+
+ #### MultipleNegativesRankingLoss
+ ```bibtex
+ @misc{henderson2017efficient,
+ title={Efficient Natural Language Response Suggestion for Smart Reply},
+ author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
+ year={2017},
+ eprint={1705.00652},
+ archivePrefix={arXiv},
+ primaryClass={cs.CL}
+ }
+ ```
+
+ <!--
+ ## Glossary
+
+ *Clearly define terms in order to be accessible across audiences.*
+ -->
+
+ <!--
+ ## Model Card Authors
+
+ *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
+ -->
+
+ <!--
+ ## Model Card Contact
+
+ *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
+ -->
config.json ADDED
@@ -0,0 +1,44 @@
+ {
+ "_name_or_path": "Alibaba-NLP/gte-large-en-v1.5",
+ "architectures": [
+ "NewModel"
+ ],
+ "attention_probs_dropout_prob": 0.0,
+ "auto_map": {
+ "AutoConfig": "Alibaba-NLP/new-impl--configuration.NewConfig",
+ "AutoModel": "Alibaba-NLP/new-impl--modeling.NewModel",
+ "AutoModelForMaskedLM": "Alibaba-NLP/new-impl--modeling.NewForMaskedLM",
+ "AutoModelForMultipleChoice": "Alibaba-NLP/new-impl--modeling.NewForMultipleChoice",
+ "AutoModelForQuestionAnswering": "Alibaba-NLP/new-impl--modeling.NewForQuestionAnswering",
+ "AutoModelForSequenceClassification": "Alibaba-NLP/new-impl--modeling.NewForSequenceClassification",
+ "AutoModelForTokenClassification": "Alibaba-NLP/new-impl--modeling.NewForTokenClassification"
+ },
+ "classifier_dropout": null,
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "layer_norm_eps": 1e-12,
+ "layer_norm_type": "layer_norm",
+ "logn_attention_clip1": false,
+ "logn_attention_scale": false,
+ "max_position_embeddings": 8192,
+ "model_type": "new",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "pack_qkv": true,
+ "pad_token_id": 0,
+ "position_embedding_type": "rope",
+ "rope_scaling": {
+ "factor": 2.0,
+ "type": "ntk"
+ },
+ "rope_theta": 160000,
+ "torch_dtype": "float32",
+ "transformers_version": "4.44.2",
+ "type_vocab_size": 2,
+ "unpad_inputs": false,
+ "use_memory_efficient_attention": false,
+ "vocab_size": 30528
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,10 @@
+ {
+ "__version__": {
+ "sentence_transformers": "3.1.1",
+ "transformers": "4.44.2",
+ "pytorch": "2.4.1+cu121"
+ },
+ "prompts": {},
+ "default_prompt_name": null,
+ "similarity_fn_name": null
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3c0b605f1be5bbdb1437fb0f484850b1a0bfcbe06f8529b618131c370fbbf190
+ size 1736585680
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+ {
+ "idx": 0,
+ "name": "0",
+ "path": "",
+ "type": "sentence_transformers.models.Transformer"
+ },
+ {
+ "idx": 1,
+ "name": "1",
+ "path": "1_Pooling",
+ "type": "sentence_transformers.models.Pooling"
+ }
+ ]
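
These two modules mirror the stack in the model card's architecture listing. As a hedged sketch, the same composition can be built by hand with `sentence_transformers.models` (loading the published checkpoint directly is simpler; the base-model id here is an assumption standing in for this repository's fine-tuned weights):

```python
# Hedged sketch: composing the Transformer + CLS-Pooling stack from modules.json.
from sentence_transformers import SentenceTransformer, models

word_embedding = models.Transformer(
    "Alibaba-NLP/gte-large-en-v1.5",         # base model standing in for this repo's weights
    max_seq_length=8192,
    model_args={"trust_remote_code": True},  # needed for the custom NewModel class
)
pooling = models.Pooling(
    word_embedding.get_word_embedding_dimension(),  # 1024
    pooling_mode="cls",                             # matches 1_Pooling/config.json
)
model = SentenceTransformer(modules=[word_embedding, pooling])
```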
onnx/config.json ADDED
@@ -0,0 +1,45 @@
+ {
+ "_name_or_path": "policy_gte_large_2plus/",
+ "architectures": [
+ "NewModel"
+ ],
+ "attention_probs_dropout_prob": 0.0,
+ "auto_map": {
+ "AutoConfig": "configuration.NewConfig",
+ "AutoModel": "Alibaba-NLP/new-impl--modeling.NewModel",
+ "AutoModelForMaskedLM": "Alibaba-NLP/new-impl--modeling.NewForMaskedLM",
+ "AutoModelForMultipleChoice": "Alibaba-NLP/new-impl--modeling.NewForMultipleChoice",
+ "AutoModelForQuestionAnswering": "Alibaba-NLP/new-impl--modeling.NewForQuestionAnswering",
+ "AutoModelForSequenceClassification": "Alibaba-NLP/new-impl--modeling.NewForSequenceClassification",
+ "AutoModelForTokenClassification": "Alibaba-NLP/new-impl--modeling.NewForTokenClassification"
+ },
+ "classifier_dropout": null,
+ "export_model_type": "transformer",
+ "hidden_act": "gelu",
+ "hidden_dropout_prob": 0.1,
+ "hidden_size": 1024,
+ "initializer_range": 0.02,
+ "intermediate_size": 4096,
+ "layer_norm_eps": 1e-12,
+ "layer_norm_type": "layer_norm",
+ "logn_attention_clip1": false,
+ "logn_attention_scale": false,
+ "max_position_embeddings": 8192,
+ "model_type": "new",
+ "num_attention_heads": 16,
+ "num_hidden_layers": 24,
+ "pack_qkv": true,
+ "pad_token_id": 0,
+ "position_embedding_type": "rope",
+ "rope_scaling": {
+ "factor": 2.0,
+ "type": "ntk"
+ },
+ "rope_theta": 160000,
+ "torch_dtype": "float32",
+ "transformers_version": "4.44.2",
+ "type_vocab_size": 2,
+ "unpad_inputs": false,
+ "use_memory_efficient_attention": false,
+ "vocab_size": 30528
+ }
onnx/configuration.py ADDED
@@ -0,0 +1,145 @@
+ # coding=utf-8
+ # Copyright 2024 The GTE Team Authors and Alibaba Group.
+ # Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ NEW model configuration"""
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+ logger = logging.get_logger(__name__)
+
+
+ class NewConfig(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`NewModel`] or a [`TFNewModel`]. It is used to
+     instantiate a NEW model according to the specified arguments, defining the model architecture. Instantiating a
+     configuration with the defaults will yield a similar configuration to that of the NEW
+     [izhx/new-base-en](https://huggingface.co/izhx/new-base-en) architecture.
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 30522):
+             Vocabulary size of the NEW model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`NewModel`] or [`TFNewModel`].
+         hidden_size (`int`, *optional*, defaults to 768):
+             Dimensionality of the encoder layers and the pooler layer.
+         num_hidden_layers (`int`, *optional*, defaults to 12):
+             Number of hidden layers in the Transformer encoder.
+         num_attention_heads (`int`, *optional*, defaults to 12):
+             Number of attention heads for each attention layer in the Transformer encoder.
+         intermediate_size (`int`, *optional*, defaults to 3072):
+             Dimensionality of the "intermediate" (often named feed-forward) layer in the Transformer encoder.
+         hidden_act (`str` or `Callable`, *optional*, defaults to `"gelu"`):
+             The non-linear activation function (function or string) in the encoder and pooler. If string, `"gelu"`,
+             `"relu"`, `"silu"` and `"gelu_new"` are supported.
+         hidden_dropout_prob (`float`, *optional*, defaults to 0.1):
+             The dropout probability for all fully connected layers in the embeddings, encoder, and pooler.
+         attention_probs_dropout_prob (`float`, *optional*, defaults to 0.1):
+             The dropout ratio for the attention probabilities.
+         max_position_embeddings (`int`, *optional*, defaults to 512):
+             The maximum sequence length that this model might ever be used with. Typically set this to something large
+             just in case (e.g., 512 or 1024 or 2048).
+         type_vocab_size (`int`, *optional*, defaults to 2):
+             The vocabulary size of the `token_type_ids` passed when calling [`NewModel`] or [`TFNewModel`].
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         layer_norm_eps (`float`, *optional*, defaults to 1e-12):
+             The epsilon used by the layer normalization layers.
+         position_embedding_type (`str`, *optional*, defaults to `"rope"`):
+             Type of position embedding. Choose one of `"absolute"`, `"rope"`.
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`Dict`, *optional*):
+             Dictionary containing the scaling configuration for the RoPE embeddings. Currently supports two scaling
+             strategies: linear and dynamic. Their scaling factor must be a float greater than 1. The expected format is
+             `{"type": strategy name, "factor": scaling factor}`. When using this flag, don't update
+             `max_position_embeddings` to the expected new maximum. See the following thread for more information on how
+             these scaling strategies behave:
+             https://www.reddit.com/r/LocalLLaMA/comments/14mrgpr/dynamically_scaled_rope_further_increases/. This is an
+             experimental feature, subject to breaking API changes in future versions.
+         classifier_dropout (`float`, *optional*):
+             The dropout ratio for the classification head.
+
+     Examples:
+
+     ```python
+     >>> from transformers import NewConfig, NewModel
+
+     >>> # Initializing a NEW izhx/new-base-en style configuration
+     >>> configuration = NewConfig()
+
+     >>> # Initializing a model (with random weights) from the izhx/new-base-en style configuration
+     >>> model = NewModel(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "new"
+
+     def __init__(
+         self,
+         vocab_size=30528,
+         hidden_size=768,
+         num_hidden_layers=12,
+         num_attention_heads=12,
+         intermediate_size=3072,
+         hidden_act="gelu",
+         hidden_dropout_prob=0.1,
+         attention_probs_dropout_prob=0.0,
+         max_position_embeddings=2048,
+         type_vocab_size=1,
+         initializer_range=0.02,
+         layer_norm_type='layer_norm',
+         layer_norm_eps=1e-12,
+         # pad_token_id=0,
+         position_embedding_type="rope",
+         rope_theta=10000.0,
+         rope_scaling=None,
+         classifier_dropout=None,
+         pack_qkv=True,
+         unpad_inputs=False,
+         use_memory_efficient_attention=False,
+         logn_attention_scale=False,
+         logn_attention_clip1=False,
+         **kwargs,
+     ):
+         super().__init__(**kwargs)
+
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+         self.hidden_act = hidden_act
+         self.intermediate_size = intermediate_size
+         self.hidden_dropout_prob = hidden_dropout_prob
+         self.attention_probs_dropout_prob = attention_probs_dropout_prob
+         self.max_position_embeddings = max_position_embeddings
+         self.type_vocab_size = type_vocab_size
+         self.initializer_range = initializer_range
+         self.layer_norm_type = layer_norm_type
+         self.layer_norm_eps = layer_norm_eps
+         self.position_embedding_type = position_embedding_type
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self.classifier_dropout = classifier_dropout
+
+         self.pack_qkv = pack_qkv
+         self.unpad_inputs = unpad_inputs
+         self.use_memory_efficient_attention = use_memory_efficient_attention
+         self.logn_attention_scale = logn_attention_scale
+         self.logn_attention_clip1 = logn_attention_clip1
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:343e333536b1f293902c0ce8c2622de443abe9ba2e023149e1891e7efd758d92
+ size 1745854634
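
The repository also ships an ONNX export of the encoder. Here is a hedged sketch of running it directly with `onnxruntime`; the input and output names are assumed to follow the usual transformers feature-extraction export (`input_ids`/`attention_mask`/`token_type_ids` in, `last_hidden_state` out), so verify them against `sess.get_inputs()`/`get_outputs()` on the actual file:

```python
# Hedged sketch: CLS-pooled embeddings from the exported ONNX graph.
import numpy as np
import onnxruntime as ort
from transformers import AutoTokenizer

sess = ort.InferenceSession("onnx/model.onnx")
tokenizer = AutoTokenizer.from_pretrained("onnx")  # tokenizer files ship alongside the graph

batch = tokenizer(["an example sentence"], padding=True, truncation=True, return_tensors="np")
input_names = {i.name for i in sess.get_inputs()}
outputs = sess.run(None, {k: v for k, v in batch.items() if k in input_names})

embeddings = outputs[0][:, 0]  # assumed last_hidden_state -> CLS pooling, shape (1, 1024)
embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)  # for cosine similarity
```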
onnx/special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
onnx/tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
onnx/tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "max_length": 8000,
+ "model_max_length": 8192,
+ "pad_to_multiple_of": null,
+ "pad_token": "[PAD]",
+ "pad_token_type_id": 0,
+ "padding_side": "right",
+ "sep_token": "[SEP]",
+ "stride": 0,
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "truncation_side": "right",
+ "truncation_strategy": "longest_first",
+ "unk_token": "[UNK]"
+ }
onnx/vocab.txt ADDED
The diff for this file is too large to render. See raw diff
 
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+ "max_seq_length": 8192,
+ "do_lower_case": false
+ }
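
The 8192-token window here is a ceiling, not a requirement; for short inputs it can be capped to cut encoding cost. A minimal sketch, assuming the model is already loaded as `model`:

```python
# Optional: lower the window for faster encoding; longer texts are then truncated.
model.max_seq_length = 512  # down from 8192
```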
special_tokens_map.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "cls_token": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "mask_token": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "pad_token": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "sep_token": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ },
+ "unk_token": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false
+ }
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,62 @@
+ {
+ "added_tokens_decoder": {
+ "0": {
+ "content": "[PAD]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "100": {
+ "content": "[UNK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "101": {
+ "content": "[CLS]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "102": {
+ "content": "[SEP]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ },
+ "103": {
+ "content": "[MASK]",
+ "lstrip": false,
+ "normalized": false,
+ "rstrip": false,
+ "single_word": false,
+ "special": true
+ }
+ },
+ "clean_up_tokenization_spaces": true,
+ "cls_token": "[CLS]",
+ "do_lower_case": true,
+ "mask_token": "[MASK]",
+ "max_length": 8000,
+ "model_max_length": 8192,
+ "pad_to_multiple_of": null,
+ "pad_token": "[PAD]",
+ "pad_token_type_id": 0,
+ "padding_side": "right",
+ "sep_token": "[SEP]",
+ "stride": 0,
+ "strip_accents": null,
+ "tokenize_chinese_chars": true,
+ "tokenizer_class": "BertTokenizer",
+ "truncation_side": "right",
+ "truncation_strategy": "longest_first",
+ "unk_token": "[UNK]"
+ }
vocab.txt ADDED
The diff for this file is too large to render. See raw diff