c01zaut committed
Commit ffef5d5 · verified · 1 Parent(s): b206cde

Upload folder using huggingface_hub
CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,9 @@
+ # Microsoft Open Source Code of Conduct
+
+ This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
+
+ Resources:
+
+ - [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
+ - [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
+ - Contact [[email protected]](mailto:[email protected]) with questions or concerns
LICENSE ADDED
@@ -0,0 +1,62 @@
+ MICROSOFT RESEARCH LICENSE TERMS
+
+ IF YOU LIVE IN THE UNITED STATES, PLEASE READ THE “BINDING ARBITRATION AND CLASS ACTION WAIVER” SECTION BELOW. IT AFFECTS HOW DISPUTES ARE RESOLVED.
+
+ These license terms are an agreement between you and Microsoft Corporation (or one of its affiliates). They apply to the source code, object code, machine learning models, or data (collectively “Materials”) that accompany this license. IF YOU COMPLY WITH THESE LICENSE TERMS, YOU HAVE THE RIGHTS BELOW. BY USING THE MATERIALS, YOU ACCEPT THESE TERMS.
+
+ 1) INSTALLATION AND USE RIGHTS TO THE MATERIALS.
+
+ Subject to the terms of this agreement, you have the below rights, if applicable, to use the Materials solely for non-commercial, non-revenue generating, research purposes:
+
+ a) Source Code. If source code is included, you may use and modify the source code, but you may not distribute the source code.
+ b) Object Code. If object code is included, you may use the object code, but you may not distribute the object code.
+ c) Models. If machine learning model(s) are included, you may use the model(s), but you may not distribute the models.
+ d) Data. If data is included, you may use and modify the data, but your use and modification must be consistent with the consent under which the data was provided and/or gathered and you may not distribute the data or your modifications to the data.
+
+ 2) SCOPE OF LICENSE. The Materials are licensed, not sold. Microsoft reserves all other rights. Unless applicable law gives you more rights despite this limitation, you will not (and have no right to):
+
+ a) work around any technical limitations in the Materials that only allow you to use it in certain ways;
+ b) reverse engineer, decompile or disassemble the Materials;
+ c) remove, minimize, block, or modify any notices of Microsoft or its suppliers in the Materials;
+ d) use the Materials in any way that is against the law or to create or propagate malware; or
+ e) share, publish, distribute or lend the Materials, provide the Materials as a stand-alone hosted solution for others to use, or transfer the Materials or this agreement to any third party.
+
+ 3) PERSONAL DATA. If the data (set forth in Section 1(c) above) includes or is found to include any data that enables any ability to identify an individual (“Personal Data”), you will not use such Personal Data for any purpose other than was authorized and consented to by the data subject/research participant. You will not use Personal Data to contact any person. You will keep Personal Data in strict confidence. You will not share any Personal Data that is collected or in your possession with any third party for any reason and as required under the original consent agreement. Further, you will destroy the Personal Data and any backup or copies, immediately upon the completion of your research.
+
+ 4) LICENSE TO MICROSOFT. Notwithstanding the limitations in Section 1, you may distribute your modifications back to Microsoft, and if you do provide Microsoft with modifications of the Materials, you hereby grant Microsoft, without any restrictions or limitations, a non-exclusive, perpetual, irrevocable, royalty-free, assignable and sub-licensable license, to reproduce, publicly perform or display, install, use, modify, post, distribute, make and have made, sell and transfer such modifications and derivatives for any purpose.
+
+ 5) PUBLICATION. You may publish (or present papers or articles) on your results from using the Materials provided that no material or substantial portion of the Materials is included in any such publication or presentation.
+
+ 6) FEEDBACK. Any feedback about the Materials provided by you to us is voluntarily given, and Microsoft shall be free to use the feedback as it sees fit without obligation or restriction of any kind, even if the feedback is designated by you as confidential. Such feedback shall be considered a contribution and licensed to Microsoft under the terms of Section 4 above.
+
+ 7) COMPLIANCE WITH TRADE LAWS. You acknowledge that the Materials may be subject to applicable trade laws in one or more countries. You will comply with all relevant laws and regulations applicable to the import or export of the Materials, including but not limited to, trade laws such as the U.S. Export Administration Regulations or other end-user, end use, and destination restrictions by the U.S. and other governments, as well as sanctions regulations administered by the U.S. Office of Foreign Assets Control. Microsoft may suspend or terminate the agreement immediately to the extent that Microsoft reasonably concludes that continued performance would violate trade laws or put it at risk of becoming subject to sanctions or penalties under trade laws. For additional information, see www.microsoft.com/exporting.
+
+ 8) SUPPORT SERVICES. Microsoft is not obligated under this agreement to provide any support services for the Materials. Any support provided is “as is”, “with all faults”, and without warranty of any kind.
+
+ 9) BINDING ARBITRATION AND CLASS ACTION WAIVER. This Section applies if you live in (or, if a business, your principal place of business is in) the United States. If you and Microsoft have a dispute, you and Microsoft agree to try for 60 days to resolve it informally. If you and Microsoft can’t, you and Microsoft agree to binding individual arbitration before the American Arbitration Association under the Federal Arbitration Act (“FAA”), and not to sue in court in front of a judge or jury. Instead, a neutral arbitrator will decide. Class action lawsuits, class-wide arbitrations, private attorney-general actions, and any other proceeding where someone acts in a representative capacity are not allowed; nor is combining individual proceedings without the consent of all parties. The complete Arbitration Agreement contains more terms and is at aka.ms/arb-agreement-1. You and Microsoft agree to these terms.
+
+ 10) ENTIRE AGREEMENT. This agreement, and any other terms Microsoft may provide for supplements, updates, or third-party applications, is the entire agreement for the Materials.
+
+ 11) APPLICABLE LAW AND PLACE TO RESOLVE DISPUTES. If you acquired the Materials in the United States or Canada, the laws of the state or province where you live (or, if a business, where your principal place of business is located) govern the interpretation of this agreement, claims for its breach, and all other claims (including consumer protection, unfair competition, and tort claims), regardless of conflict of laws principles, except that the FAA governs everything related to arbitration. If you acquired the Materials in any other country, its laws apply, except that the FAA governs everything related to arbitration. If U.S. federal jurisdiction exists, you and Microsoft consent to exclusive jurisdiction and venue in the federal court in King County, Washington for all disputes heard in court (excluding arbitration). If not, you and Microsoft consent to exclusive jurisdiction and venue in the Superior Court of King County, Washington for all disputes heard in court (excluding arbitration).
+
+ 12) CONSUMER RIGHTS; REGIONAL VARIATIONS. This agreement describes certain legal rights. You may have other rights, including consumer rights, under the laws of your state, province, or country. Separate and apart from your relationship with Microsoft, you may also have rights with respect to the party from which you acquired the Materials. This agreement does not change those other rights if the laws of your state, province, or country do not permit it to do so. For example, if you acquired the Materials in one of the below regions, or mandatory country law applies, then the following provisions apply to you:
+
+ a) Australia. You have statutory guarantees under the Australian Consumer Law and nothing in this agreement is intended to affect those rights.
+
+ b) Canada. If you acquired this software in Canada, you may stop receiving updates by turning off the automatic update feature, disconnecting your device from the Internet (if and when you re-connect to the Internet, however, the Materials will resume checking for and installing updates), or uninstalling the Materials. The product documentation, if any, may also specify how to turn off updates for your specific device or software.
+
+ c) Germany and Austria.
+
+ i. Warranty. The properly licensed software will perform substantially as described in any Microsoft materials that accompany the Materials. However, Microsoft gives no contractual guarantee in relation to the licensed software.
+
+ ii. Limitation of Liability. In case of intentional conduct, gross negligence, claims based on the Product Liability Act, as well as, in case of death or personal or physical injury, Microsoft is liable according to the statutory law.
+
+ Subject to the foregoing clause (ii), Microsoft will only be liable for slight negligence if Microsoft is in breach of such material contractual obligations, the fulfillment of which facilitate the due performance of this agreement, the breach of which would endanger the purpose of this agreement and the compliance with which a party may constantly trust in (so-called "cardinal obligations"). In other cases of slight negligence, Microsoft will not be liable for slight negligence.
+
+ 13) DISCLAIMER OF WARRANTY. THE MATERIALS ARE LICENSED “AS IS.” YOU BEAR THE RISK OF USING THEM. MICROSOFT GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. TO THE EXTENT PERMITTED UNDER APPLICABLE LAWS, MICROSOFT EXCLUDES ALL IMPLIED WARRANTIES, INCLUDING MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.
+
+ 14) LIMITATION ON AND EXCLUSION OF DAMAGES. IF YOU HAVE ANY BASIS FOR RECOVERING DAMAGES DESPITE THE PRECEDING DISCLAIMER OF WARRANTY, YOU CAN RECOVER FROM MICROSOFT AND ITS SUPPLIERS ONLY DIRECT DAMAGES UP TO U.S. $5.00. YOU CANNOT RECOVER ANY OTHER DAMAGES, INCLUDING CONSEQUENTIAL, LOST PROFITS, SPECIAL, INDIRECT OR INCIDENTAL DAMAGES.
+
+ This limitation applies to (a) anything related to the Materials, services, content (including code) on third party Internet sites, or third party applications; and (b) claims for breach of contract, warranty, guarantee, or condition; strict liability, negligence, or other tort; or any other claim; in each case to the extent permitted by applicable law.
+
+ It also applies even if Microsoft knew or should have known about the possibility of the damages. The above limitation or exclusion may not apply to you because your state, province, or country may not allow the exclusion or limitation of incidental, consequential, or other damages.
README.md ADDED
@@ -0,0 +1,126 @@
+ ---
+ license: other
+ license_name: msrla
+ license_link: LICENSE
+ ---
+ # Phi-4
+
+ Phi-4 is a state-of-the-art open model built upon a blend of synthetic datasets, data from filtered public domain websites, and acquired academic books and Q&A datasets. The goal of this approach was to ensure that small, capable models were trained with data focused on high quality and advanced reasoning.
+
+ Phi-4 underwent a rigorous enhancement and alignment process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures.
+
+ For more information, see the [Phi-4 Technical Report](https://www.microsoft.com/en-us/research/uploads/prod/2024/12/P4TechReport.pdf).
+
+ ### Model Architecture
+
+ Phi-4 is a 14B-parameter, dense decoder-only transformer model.
+
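+ As an illustration of how a checkpoint with this architecture can be loaded and queried, the following is a minimal sketch using the Hugging Face `transformers` library. It is not an official usage snippet from this repository: the `microsoft/phi-4` model id, the example messages, and the generation settings are assumptions for demonstration, and passing `trust_remote_code=True` would opt into the bundled `configuration_phi3.py`/`modeling_phi3.py` instead of the built-in `Phi3` implementation.
+
+ ```python
+ # Minimal inference sketch (assumption: the checkpoint is published as "microsoft/phi-4").
+ # config.json was saved with transformers 4.47.0; a similarly recent version is assumed.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "microsoft/phi-4"  # hypothetical identifier for this repository
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.bfloat16,  # matches "torch_dtype": "bfloat16" in config.json
+     device_map="auto",
+ )
+
+ # apply_chat_template uses the tokenizer's own template, so the exact
+ # <|im_start|>/<|im_sep|>/<|im_end|> layout does not have to be hard-coded here.
+ messages = [
+     {"role": "system", "content": "You are a helpful assistant."},
+     {"role": "user", "content": "Explain why the sky is blue in one sentence."},
+ ]
+ inputs = tokenizer.apply_chat_template(
+     messages, add_generation_prompt=True, return_tensors="pt"
+ ).to(model.device)
+
+ outputs = model.generate(inputs, max_new_tokens=128)
+ print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
+ ```
+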
+ ### Training Data
+
+ Our training data is an extension of the data used for Phi-3 and draws on a wide variety of sources:
+
+ 1. Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code.
+
+ 2. Newly created synthetic, "textbook-like" data for the purpose of teaching math, coding, common sense reasoning, and general knowledge of the world (science, daily activities, theory of mind, etc.).
+
+ 3. Acquired academic books and Q&A datasets.
+
+ 4. High-quality chat-format supervised data covering various topics to reflect human preferences on different aspects such as instruction-following, truthfulness, honesty, and helpfulness.
+
+ Multilingual data constitutes about 8% of our overall data. We focus on data quality that could potentially improve the model's reasoning ability, and we filter the publicly available documents to contain the appropriate level of knowledge.
+
+ Intended Use
+ ------------
+
+ ### Primary Use Cases
+
+ Our model is designed to accelerate research on language models and to serve as a building block for generative AI-powered features. It is suitable for general-purpose AI systems and applications (primarily in English) that require:
+
+ 1. Memory/compute-constrained environments (see the quantization sketch after this list).
+ 2. Latency-bound scenarios.
+ 3. Reasoning and logic.
+
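+ For memory/compute-constrained environments, one common option is to load the checkpoint with weight quantization. The sketch below is illustrative only and is not part of this repository: it assumes the `bitsandbytes` package, a CUDA GPU, and the hypothetical `microsoft/phi-4` model id used above.
+
+ ```python
+ # Illustrative 4-bit loading sketch (assumptions: bitsandbytes installed, CUDA available).
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
+
+ model_id = "microsoft/phi-4"  # hypothetical identifier, as in the earlier sketch
+ bnb_config = BitsAndBytesConfig(
+     load_in_4bit=True,                      # store weights in 4-bit NF4
+     bnb_4bit_quant_type="nf4",
+     bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bf16, matching config.json
+ )
+
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     quantization_config=bnb_config,
+     device_map="auto",
+ )
+ # A 14B model drops from ~29 GB of bf16 weights to roughly 8-9 GB in 4-bit,
+ # at some cost in output quality; evaluate on your own task before deploying.
+ ```
+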
+ ### Out-of-Scope Use Cases
+
+ Our model is not specifically designed or evaluated for all downstream purposes; thus:
+
+ 1. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high-risk scenarios.
+ 2. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case, including the model’s focus on English.
+ 3. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
+
+ Safety
+ ------
+
+ ### Approach
+
+ Phi-4 has adopted a robust safety post-training approach. This approach leverages a variety of both open-source and in-house generated synthetic datasets. The overall technique employed for safety alignment is a combination of SFT (Supervised Fine-Tuning) and iterative DPO (Direct Preference Optimization), using publicly available datasets focused on helpfulness and harmlessness, as well as various questions and answers targeting multiple safety categories.
+
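+ The DPO stage mentioned above optimizes a preference objective rather than a plain next-token loss. The snippet below is a generic sketch of the standard DPO loss on (chosen, rejected) pairs, written from the published DPO formulation; it is not Microsoft's training code, and the `beta` value and log-probability inputs are placeholders.
+
+ ```python
+ # Generic DPO loss sketch (not the Phi-4 training pipeline).
+ # Inputs are summed log-probabilities of the chosen/rejected responses under the
+ # policy being trained and under a frozen reference model.
+ import torch
+ import torch.nn.functional as F
+
+ def dpo_loss(policy_chosen_logp: torch.Tensor,
+              policy_rejected_logp: torch.Tensor,
+              ref_chosen_logp: torch.Tensor,
+              ref_rejected_logp: torch.Tensor,
+              beta: float = 0.1) -> torch.Tensor:
+     # Implicit reward: how much more the policy prefers a response than the reference does.
+     chosen_reward = policy_chosen_logp - ref_chosen_logp
+     rejected_reward = policy_rejected_logp - ref_rejected_logp
+     margin = beta * (chosen_reward - rejected_reward)
+     # Maximize the probability that the chosen response is ranked above the rejected one.
+     return -F.logsigmoid(margin).mean()
+
+ # Example with dummy log-probabilities for a batch of two preference pairs.
+ loss = dpo_loss(torch.tensor([-12.0, -20.0]), torch.tensor([-15.0, -21.0]),
+                 torch.tensor([-13.0, -20.5]), torch.tensor([-14.0, -20.5]))
+ print(float(loss))
+ ```
+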
+ ### Safety Evaluation and Red-Teaming
+
+ Prior to release, Phi-4 followed a multi-faceted evaluation approach. Quantitative evaluation was conducted with multiple open-source safety benchmarks and in-house tools utilizing adversarial conversation simulation. For qualitative safety evaluation, we collaborated with the independent AI Red Team (AIRT) at Microsoft to assess safety risks posed by `phi-4` in both average and adversarial user scenarios. In the average user scenario, AIRT emulated typical single-turn and multi-turn interactions to identify potentially risky behaviors. The adversarial user scenario tested a wide range of techniques aimed at intentionally subverting the model’s safety training, including jailbreaks, encoding-based attacks, multi-turn attacks, and adversarial suffix attacks.
+
+ Please refer to the technical report for more details on safety alignment.
+
+ Responsible AI Considerations
+ -----------------------------
+
+ Like other language models, `phi-4` can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+
+ * **Quality of Service:** The model is trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English. `phi-4` is not intended to support multilingual use.
+
+ * **Representation of Harms & Perpetuation of Stereotypes:** These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or the prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+
+ * **Inappropriate or Offensive Content:** These models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+
+ * **Information Reliability:** Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+
+ * **Limited Scope for Code:** The majority of `phi-4` training data is based on Python and uses common packages such as `typing`, `math`, `random`, `collections`, `datetime`, and `itertools`. If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses (see the import-checking sketch after this list).
+
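+ One lightweight way to act on the "Limited Scope for Code" note above is to flag imports that fall outside the packages the model saw most often, so a reviewer knows what to verify first. This is an illustrative helper, not part of the repository; the allow-list simply mirrors the packages named in the bullet.
+
+ ```python
+ # Illustrative check: list imports in a generated script that are outside the
+ # packages highlighted in the model card.
+ import ast
+
+ COMMON_PACKAGES = {"typing", "math", "random", "collections", "datetime", "itertools"}
+
+ def unfamiliar_imports(generated_source: str) -> set[str]:
+     tree = ast.parse(generated_source)
+     found = set()
+     for node in ast.walk(tree):
+         if isinstance(node, ast.Import):
+             found.update(alias.name.split(".")[0] for alias in node.names)
+         elif isinstance(node, ast.ImportFrom) and node.module:
+             found.add(node.module.split(".")[0])
+     return found - COMMON_PACKAGES
+
+ script = "import numpy as np\nfrom datetime import date\nprint(date.today())\n"
+ print(unfamiliar_imports(script))  # {'numpy'} -> review these API uses by hand
+ ```
+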
+ Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g., privacy, trade, etc.). Using safety services like [Azure AI Content Safety](https://azure.microsoft.com/en-us/products/ai-services/ai-content-safety) that have advanced guardrails is highly recommended. Important areas for consideration include:
+
+ * **Allocation:** Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+
+ * **High-Risk Scenarios:** Developers should assess the suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+
+ * **Misinformation:** Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case-specific, contextual information, a technique known as Retrieval Augmented Generation (RAG); see the sketch after this list.
+
+ * **Generation of Harmful Content:** Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+
+ * **Misuse:** Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
+
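+ As a bare-bones illustration of the grounding idea behind RAG, the sketch below retrieves the most word-overlapping document from a tiny in-memory corpus and prepends it to the prompt. A production system would use an embedding model and a vector store instead; everything here (corpus, scoring, prompt wording) is an assumption for demonstration.
+
+ ```python
+ # Toy retrieval-augmented prompting sketch: keyword-overlap retrieval only.
+ documents = [
+     "Phi-4 is a 14B-parameter decoder-only transformer released by Microsoft.",
+     "The capital of France is Paris.",
+ ]
+
+ def retrieve(question: str) -> str:
+     q_words = set(question.lower().split())
+     # Pick the document sharing the most words with the question.
+     return max(documents, key=lambda d: len(q_words & set(d.lower().split())))
+
+ def grounded_prompt(question: str) -> str:
+     context = retrieve(question)
+     return f"Answer using only this context.\nContext: {context}\nQuestion: {question}"
+
+ print(grounded_prompt("How many parameters does Phi-4 have?"))
+ ```
+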
+ We evaluated `phi-4` using [OpenAI’s SimpleEval](https://github.com/openai/simple-evals) and our own internal benchmarks to understand the model’s capabilities, more specifically:
+
+ * **MMLU:** Popular aggregated dataset for multitask language understanding.
+
+ * **MATH:** Challenging competition math problems.
+
+ * **GPQA:** Complex, graduate-level science questions.
+
+ * **DROP:** Complex comprehension and reasoning.
+
+ * **MGSM:** Multi-lingual grade-school math.
+
+ * **HumanEval:** Functional code generation.
+
+ * **SimpleQA:** Factual responses.
+
+ To understand its capabilities, we compare `phi-4` with a set of models on OpenAI’s SimpleEval benchmark.
+
+ The table below gives a high-level overview of model quality on representative benchmarks; higher numbers indicate better performance:
+
+ | **Category** | **Benchmark** | **phi-4** (14B) | **phi-3** (14B) | **Qwen 2.5** (14B instruct) | **GPT-4o-mini** | **Llama-3.3** (70B instruct) | **Qwen 2.5** (72B instruct) | **GPT-4o** |
+ | ---------------------------- | -------------- | ------------------ | --------------- | --------------------------- | --------------- | ---------------------------- | --------------------------- | ------------------ |
+ | Popular Aggregated Benchmark | MMLU | 84.8 | 77.9 | 79.9 | 81.8 | 86.3 | 85.3 | **88.1** |
+ | Science | GPQA | **56.1** | 31.2 | 42.9 | 40.9 | 49.1 | 49.0 | 50.6 |
+ | Math | MGSM <br>MATH | 80.6 <br>**80.4** | 53.5 <br>44.6 | 79.6 <br>75.6 | 86.5 <br>73.0 | 89.1 <br>66.3* | 87.3 <br>80.0 | **90.4** <br>74.6 |
+ | Code Generation | HumanEval | 82.6 | 67.8 | 72.1 | 86.2 | 78.9* | 80.4 | **90.6** |
+ | Factual Knowledge | SimpleQA | 3.0 | 7.6 | 5.4 | 9.9 | 20.9 | 10.2 | **39.4** |
+ | Reasoning | DROP | 75.5 | 68.3 | 85.5 | 79.3 | **90.2** | 76.7 | 80.9 |
+
+ \* These scores are lower than those reported by Meta, perhaps because simple-evals has a strict formatting requirement that Llama models have particular trouble following. We use the simple-evals framework because it is reproducible, but Meta reports 77 for MATH and 88 for HumanEval on Llama-3.3-70B.
added_tokens.json ADDED
@@ -0,0 +1,98 @@
+ {
+ "<|dummy_0|>": 100256,
+ "<|endoftext|>": 100257,
+ "<|fim_prefix|>": 100258,
+ "<|fim_middle|>": 100259,
+ "<|fim_suffix|>": 100260,
+ "<|dummy_1|>": 100261,
+ "<|dummy_2|>": 100262,
+ "<|dummy_3|>": 100263,
+ "<|im_start|>": 100264,
+ "<|im_end|>": 100265,
+ "<|im_sep|>": 100266,
+ "<|dummy_4|>": 100267,
+ "<|dummy_5|>": 100268,
+ "<|dummy_6|>": 100269,
+ "<|dummy_7|>": 100270,
+ "<|dummy_8|>": 100271,
+ "<|dummy_9|>": 100272,
+ "<|dummy_10|>": 100273,
+ "<|dummy_11|>": 100274,
+ "<|dummy_12|>": 100275,
+ "<|endofprompt|>": 100276,
+ "<|dummy_13|>": 100277,
+ "<|dummy_14|>": 100278,
+ "<|dummy_15|>": 100279,
+ "<|dummy_16|>": 100280,
+ "<|dummy_17|>": 100281,
+ "<|dummy_18|>": 100282,
+ "<|dummy_19|>": 100283,
+ "<|dummy_20|>": 100284,
+ "<|dummy_21|>": 100285,
+ "<|dummy_22|>": 100286,
+ "<|dummy_23|>": 100287,
+ "<|dummy_24|>": 100288,
+ "<|dummy_25|>": 100289,
+ "<|dummy_26|>": 100290,
+ "<|dummy_27|>": 100291,
+ "<|dummy_28|>": 100292,
+ "<|dummy_29|>": 100293,
+ "<|dummy_30|>": 100294,
+ "<|dummy_31|>": 100295,
+ "<|dummy_32|>": 100296,
+ "<|dummy_33|>": 100297,
+ "<|dummy_34|>": 100298,
+ "<|dummy_35|>": 100299,
+ "<|dummy_36|>": 100300,
+ "<|dummy_37|>": 100301,
+ "<|dummy_38|>": 100302,
+ "<|dummy_39|>": 100303,
+ "<|dummy_40|>": 100304,
+ "<|dummy_41|>": 100305,
+ "<|dummy_42|>": 100306,
+ "<|dummy_43|>": 100307,
+ "<|dummy_44|>": 100308,
+ "<|dummy_45|>": 100309,
+ "<|dummy_46|>": 100310,
+ "<|dummy_47|>": 100311,
+ "<|dummy_48|>": 100312,
+ "<|dummy_49|>": 100313,
+ "<|dummy_50|>": 100314,
+ "<|dummy_51|>": 100315,
+ "<|dummy_52|>": 100316,
+ "<|dummy_53|>": 100317,
+ "<|dummy_54|>": 100318,
+ "<|dummy_55|>": 100319,
+ "<|dummy_56|>": 100320,
+ "<|dummy_57|>": 100321,
+ "<|dummy_58|>": 100322,
+ "<|dummy_59|>": 100323,
+ "<|dummy_60|>": 100324,
+ "<|dummy_61|>": 100325,
+ "<|dummy_62|>": 100326,
+ "<|dummy_63|>": 100327,
+ "<|dummy_64|>": 100328,
+ "<|dummy_65|>": 100329,
+ "<|dummy_66|>": 100330,
+ "<|dummy_67|>": 100331,
+ "<|dummy_68|>": 100332,
+ "<|dummy_69|>": 100333,
+ "<|dummy_70|>": 100334,
+ "<|dummy_71|>": 100335,
+ "<|dummy_72|>": 100336,
+ "<|dummy_73|>": 100337,
+ "<|dummy_74|>": 100338,
+ "<|dummy_75|>": 100339,
+ "<|dummy_76|>": 100340,
+ "<|dummy_77|>": 100341,
+ "<|dummy_78|>": 100342,
+ "<|dummy_79|>": 100343,
+ "<|dummy_80|>": 100344,
+ "<|dummy_81|>": 100345,
+ "<|dummy_82|>": 100346,
+ "<|dummy_83|>": 100347,
+ "<|dummy_84|>": 100348,
+ "<|dummy_85|>": 100349,
+ "<|dummy_86|>": 100350,
+ "<|dummy_87|>": 100351
+ }
config.json ADDED
@@ -0,0 +1,37 @@
+ {
+ "_name_or_path": "microsoft/phi-4",
+ "architectures": [
+ "Phi3ForCausalLM"
+ ],
+ "attention_bias": false,
+ "attention_dropout": 0.0,
+ "auto_map": {
+ "AutoConfig": "configuration_phi3.Phi3Config",
+ "AutoModelForCausalLM": "modeling_phi3.Phi3ForCausalLM"
+ },
+ "bos_token_id": 100264,
+ "embd_pdrop": 0.0,
+ "eos_token_id": 100265,
+ "hidden_act": "silu",
+ "hidden_size": 5120,
+ "initializer_range": 0.02,
+ "intermediate_size": 17920,
+ "max_position_embeddings": 16384,
+ "model_type": "phi3",
+ "num_attention_heads": 40,
+ "num_hidden_layers": 40,
+ "num_key_value_heads": 10,
+ "original_max_position_embeddings": 16384,
+ "sep_token_id": 100266,
+ "resid_pdrop": 0.0,
+ "rms_norm_eps": 1e-05,
+ "rope_scaling": null,
+ "rope_theta": 250000,
+ "sliding_window": null,
+ "tie_word_embeddings": false,
+ "torch_dtype": "bfloat16",
+ "transformers_version": "4.47.0",
+ "use_cache": true,
+ "vocab_size": 100352,
+ "attn_implementation": "eager"
+ }
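The hyperparameters above imply the model size quoted in the README. As a sanity check (an editorial sketch, not a file from this commit), the following script recomputes the approximate parameter count from config.json and compares it with the 29,319,014,400-byte bf16 total recorded in model.safetensors.index.json below; the small remaining gap is the layer-norm weights omitted here.

```python
# Rough parameter count implied by config.json (layer norms omitted).
hidden, inter, layers, vocab = 5120, 17920, 40, 100352
heads, kv_heads = 40, 10
head_dim = hidden // heads                         # 128

qkv = hidden * (heads + 2 * kv_heads) * head_dim   # fused qkv_proj
o = hidden * hidden                                # o_proj
gate_up = hidden * 2 * inter                       # fused gate_up_proj
down = inter * hidden                              # down_proj
per_layer = qkv + o + gate_up + down

embeddings = 2 * vocab * hidden                    # embed_tokens + untied lm_head
total = layers * per_layer + embeddings
print(f"{total / 1e9:.2f} B parameters")           # ~14.66 B
print(f"{total * 2 / 1e9:.2f} GB in bf16")         # ~29.3 GB, matching the shard index
```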
configuration_phi3.py ADDED
@@ -0,0 +1,233 @@
+ # coding=utf-8
+ # Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ """ Phi-3 model configuration"""
+
+
+ from transformers.configuration_utils import PretrainedConfig
+ from transformers.utils import logging
+
+
+ logger = logging.get_logger(__name__)
+
+ PHI3_PRETRAINED_CONFIG_ARCHIVE_MAP = {
+     "microsoft/Phi-3-mini-4k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-4k-instruct/resolve/main/config.json",
+     "microsoft/Phi-3-mini-128k-instruct": "https://huggingface.co/microsoft/Phi-3-mini-128k-instruct/resolve/main/config.json",
+ }
+
+
+ class Phi3Config(PretrainedConfig):
+     r"""
+     This is the configuration class to store the configuration of a [`Phi3Model`]. It is used to instantiate a Phi-3
+     model according to the specified arguments, defining the model architecture. Instantiating a configuration with the
+     defaults will yield a similar configuration to that of the
+     [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct).
+
+     Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the
+     documentation from [`PretrainedConfig`] for more information.
+
+     Args:
+         vocab_size (`int`, *optional*, defaults to 32064):
+             Vocabulary size of the Phi-3 model. Defines the number of different tokens that can be represented by the
+             `inputs_ids` passed when calling [`Phi3Model`].
+         hidden_size (`int`, *optional*, defaults to 3072):
+             Dimension of the hidden representations.
+         intermediate_size (`int`, *optional*, defaults to 8192):
+             Dimension of the MLP representations.
+         num_hidden_layers (`int`, *optional*, defaults to 32):
+             Number of hidden layers in the Transformer decoder.
+         num_attention_heads (`int`, *optional*, defaults to 32):
+             Number of attention heads for each attention layer in the Transformer decoder.
+         num_key_value_heads (`int`, *optional*):
+             This is the number of key_value heads that should be used to implement Grouped Query Attention. If
+             `num_key_value_heads=num_attention_heads`, the model will use Multi Head Attention (MHA), if
+             `num_key_value_heads=1` the model will use Multi Query Attention (MQA); otherwise GQA is used. When
+             converting a multi-head checkpoint to a GQA checkpoint, each group key and value head should be constructed
+             by meanpooling all the original heads within that group. For more details checkout [this
+             paper](https://arxiv.org/pdf/2305.13245.pdf). If it is not specified, will default to
+             `num_attention_heads`.
+         resid_pdrop (`float`, *optional*, defaults to 0.0):
+             Dropout probability for mlp outputs.
+         embd_pdrop (`int`, *optional*, defaults to 0.0):
+             The dropout ratio for the embeddings.
+         attention_dropout (`float`, *optional*, defaults to 0.0):
+             The dropout ratio after computing the attention scores.
+         hidden_act (`str` or `function`, *optional*, defaults to `"silu"`):
+             The non-linear activation function (function or string) in the decoder.
+         max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model might ever be used with.
+         original_max_position_embeddings (`int`, *optional*, defaults to 4096):
+             The maximum sequence length that this model was trained with. This is used to determine the size of the
+             original RoPE embeddings when using long scaling.
+         initializer_range (`float`, *optional*, defaults to 0.02):
+             The standard deviation of the truncated_normal_initializer for initializing all weight matrices.
+         rms_norm_eps (`float`, *optional*, defaults to 1e-05):
+             The epsilon value used for the RMSNorm.
+         use_cache (`bool`, *optional*, defaults to `True`):
+             Whether or not the model should return the last key/values attentions (not used by all models). Only
+             relevant if `config.is_decoder=True`.
+         tie_word_embeddings (`bool`, *optional*, defaults to `False`):
+             Whether to tie weight embeddings
+         rope_theta (`float`, *optional*, defaults to 10000.0):
+             The base period of the RoPE embeddings.
+         rope_scaling (`dict`, *optional*):
+             The scaling strategy for the RoPE embeddings. If `None`, no scaling is applied. If a dictionary, it must
+             contain the following keys: `type`, `short_factor` and `long_factor`. The `type` must be `longrope` and
+             the `short_factor` and `long_factor` must be lists of numbers with the same length as the hidden size
+             divided by the number of attention heads divided by 2.
+         bos_token_id (`int`, *optional*, defaults to 1):
+             The id of the "beginning-of-sequence" token.
+         eos_token_id (`int`, *optional*, defaults to 32000):
+             The id of the "end-of-sequence" token.
+         pad_token_id (`int`, *optional*, defaults to 32000):
+             The id of the padding token.
+         sliding_window (`int`, *optional*):
+             Sliding window attention window size. If `None`, no sliding window is applied.
+
+     Example:
+
+     ```python
+     >>> from transformers import Phi3Model, Phi3Config
+
+     >>> # Initializing a Phi-3 style configuration
+     >>> configuration = Phi3Config.from_pretrained("microsoft/Phi-3-mini-4k-instruct")
+
+     >>> # Initializing a model from the configuration
+     >>> model = Phi3Model(configuration)
+
+     >>> # Accessing the model configuration
+     >>> configuration = model.config
+     ```"""
+
+     model_type = "phi3"
+     keys_to_ignore_at_inference = ["past_key_values"]
+
+     def __init__(
+         self,
+         vocab_size=100352,
+         hidden_size=5120,
+         intermediate_size=17920,
+         num_hidden_layers=40,
+         num_attention_heads=40,
+         num_key_value_heads=10,
+         resid_pdrop=0.0,
+         embd_pdrop=0.0,
+         attention_dropout=0.0,
+         hidden_act="silu",
+         max_position_embeddings=16384,
+         original_max_position_embeddings=16384,
+         initializer_range=0.02,
+         rms_norm_eps=1e-5,
+         use_cache=True,
+         tie_word_embeddings=False,
+         rope_theta=250000,
+         rope_scaling=None,
+         bos_token_id=100264,
+         eos_token_id=100265,
+         sep_token_id=100266,
+         pad_token_id=100257,
+         unk_token_id=100257,
+         sliding_window=None,
+         attn_implementation='eager',
+         **kwargs,
+     ):
+         self.vocab_size = vocab_size
+         self.hidden_size = hidden_size
+         self.intermediate_size = intermediate_size
+         self._attn_implementation = attn_implementation
+         self.num_hidden_layers = num_hidden_layers
+         self.num_attention_heads = num_attention_heads
+
+         if num_key_value_heads is None:
+             num_key_value_heads = num_attention_heads
+
+         self.num_key_value_heads = num_key_value_heads
+         self.resid_pdrop = resid_pdrop
+         self.embd_pdrop = embd_pdrop
+         self.attention_dropout = attention_dropout
+         self.hidden_act = hidden_act
+         self.max_position_embeddings = max_position_embeddings
+         self.original_max_position_embeddings = original_max_position_embeddings
+         self.initializer_range = initializer_range
+         self.rms_norm_eps = rms_norm_eps
+         self.use_cache = use_cache
+         self.rope_theta = rope_theta
+         self.rope_scaling = rope_scaling
+         self._rope_scaling_adjustment()
+         self._rope_scaling_validation()
+         self.sliding_window = sliding_window
+
+         super().__init__(
+             bos_token_id=bos_token_id,
+             eos_token_id=eos_token_id,
+             sep_token_id=sep_token_id,
+             pad_token_id=pad_token_id,
+             unk_token_id=unk_token_id,
+             tie_word_embeddings=tie_word_embeddings,
+             **kwargs,
+         )
+
+     def _rope_scaling_adjustment(self):
+         """
+         Adjust the `type` of the `rope_scaling` configuration for backward compatibility.
+         """
+         if self.rope_scaling is None:
+             return
+
+         rope_scaling_type = self.rope_scaling.get("type", None)
+
+         # For backward compatibility if previous version used "su" or "yarn"
+         if rope_scaling_type is not None and rope_scaling_type in ["su", "yarn"]:
+             self.rope_scaling["type"] = "longrope"
+
+     def _rope_scaling_validation(self):
+         """
+         Validate the `rope_scaling` configuration.
+         """
+         if self.rope_scaling is None:
+             return
+
+         if not isinstance(self.rope_scaling, dict) or len(self.rope_scaling) != 3:
+             raise ValueError(
+                 "`rope_scaling` must be a dictionary with three fields, `type`, `short_factor` and `long_factor`, "
+                 f"got {self.rope_scaling}"
+             )
+         rope_scaling_type = self.rope_scaling.get("type", None)
+         rope_scaling_short_factor = self.rope_scaling.get("short_factor", None)
+         rope_scaling_long_factor = self.rope_scaling.get("long_factor", None)
+         if rope_scaling_type is None or rope_scaling_type not in ["longrope"]:
+             raise ValueError(f"`rope_scaling`'s type field must be one of ['longrope'], got {rope_scaling_type}")
+         if not (
+             isinstance(rope_scaling_short_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_short_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must be a list of numbers, got {rope_scaling_short_factor}"
+             )
+         if not len(rope_scaling_short_factor) == self.hidden_size // self.num_attention_heads // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s short_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_short_factor)}"
+             )
+         if not (
+             isinstance(rope_scaling_long_factor, list)
+             and all(isinstance(x, (int, float)) for x in rope_scaling_long_factor)
+         ):
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must be a list of numbers, got {rope_scaling_long_factor}"
+             )
+         if not len(rope_scaling_long_factor) == self.hidden_size // self.num_attention_heads // 2:
+             raise ValueError(
+                 f"`rope_scaling`'s long_factor field must have length {self.hidden_size // self.num_attention_heads // 2}, got {len(rope_scaling_long_factor)}"
+             )
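For this checkpoint `rope_scaling` is `null` (see config.json above), but the validation logic shows what a valid value would have to look like: a dict with `type == "longrope"` and two factor lists of length `hidden_size // num_attention_heads // 2` = 5120 // 40 // 2 = 64. The snippet below is a hypothetical illustration of that contract (run from this repository's directory so `configuration_phi3` imports), not a configuration shipped with Phi-4.

```python
# Hypothetical rope_scaling dict that satisfies _rope_scaling_validation above;
# Phi-4 itself ships with "rope_scaling": null in config.json.
from configuration_phi3 import Phi3Config

factors = [1.0] * (5120 // 40 // 2)  # 64 entries, one per rotary dimension pair
cfg = Phi3Config(
    rope_scaling={
        "type": "longrope",       # "su" or "yarn" would be rewritten to "longrope"
        "short_factor": factors,  # per the Phi-3 longrope scheme, used for shorter contexts
        "long_factor": factors,   # used for extended contexts
    },
)
print(cfg.rope_scaling["type"], len(cfg.rope_scaling["short_factor"]))
```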
generation_config.json ADDED
@@ -0,0 +1,8 @@
+ {
+ "_from_model_config": true,
+ "bos_token_id": 100264,
+ "eos_token_id": 100265,
+ "sep_token_id": 100266,
+ "pad_token_id": 100257,
+ "transformers_version": "4.47.0"
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
model.safetensors.index.json ADDED
@@ -0,0 +1,250 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 29319014400
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "model-00006-of-00006.safetensors",
7
+ "model.embed_tokens.weight": "model-00001-of-00006.safetensors",
8
+ "model.layers.0.input_layernorm.weight": "model-00001-of-00006.safetensors",
9
+ "model.layers.0.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
10
+ "model.layers.0.mlp.gate_up_proj.weight": "model-00001-of-00006.safetensors",
11
+ "model.layers.0.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
12
+ "model.layers.0.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
13
+ "model.layers.0.self_attn.qkv_proj.weight": "model-00001-of-00006.safetensors",
14
+ "model.layers.1.input_layernorm.weight": "model-00001-of-00006.safetensors",
15
+ "model.layers.1.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
16
+ "model.layers.1.mlp.gate_up_proj.weight": "model-00001-of-00006.safetensors",
17
+ "model.layers.1.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
18
+ "model.layers.1.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
19
+ "model.layers.1.self_attn.qkv_proj.weight": "model-00001-of-00006.safetensors",
20
+ "model.layers.10.input_layernorm.weight": "model-00002-of-00006.safetensors",
21
+ "model.layers.10.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
22
+ "model.layers.10.mlp.gate_up_proj.weight": "model-00002-of-00006.safetensors",
23
+ "model.layers.10.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
24
+ "model.layers.10.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
25
+ "model.layers.10.self_attn.qkv_proj.weight": "model-00002-of-00006.safetensors",
26
+ "model.layers.11.input_layernorm.weight": "model-00002-of-00006.safetensors",
27
+ "model.layers.11.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
28
+ "model.layers.11.mlp.gate_up_proj.weight": "model-00002-of-00006.safetensors",
29
+ "model.layers.11.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
30
+ "model.layers.11.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
31
+ "model.layers.11.self_attn.qkv_proj.weight": "model-00002-of-00006.safetensors",
32
+ "model.layers.12.input_layernorm.weight": "model-00002-of-00006.safetensors",
33
+ "model.layers.12.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
34
+ "model.layers.12.mlp.gate_up_proj.weight": "model-00002-of-00006.safetensors",
35
+ "model.layers.12.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
36
+ "model.layers.12.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
37
+ "model.layers.12.self_attn.qkv_proj.weight": "model-00002-of-00006.safetensors",
38
+ "model.layers.13.input_layernorm.weight": "model-00003-of-00006.safetensors",
39
+ "model.layers.13.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
40
+ "model.layers.13.mlp.gate_up_proj.weight": "model-00003-of-00006.safetensors",
41
+ "model.layers.13.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
42
+ "model.layers.13.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
43
+ "model.layers.13.self_attn.qkv_proj.weight": "model-00003-of-00006.safetensors",
44
+ "model.layers.14.input_layernorm.weight": "model-00003-of-00006.safetensors",
45
+ "model.layers.14.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
46
+ "model.layers.14.mlp.gate_up_proj.weight": "model-00003-of-00006.safetensors",
47
+ "model.layers.14.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
48
+ "model.layers.14.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
49
+ "model.layers.14.self_attn.qkv_proj.weight": "model-00003-of-00006.safetensors",
50
+ "model.layers.15.input_layernorm.weight": "model-00003-of-00006.safetensors",
51
+ "model.layers.15.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
52
+ "model.layers.15.mlp.gate_up_proj.weight": "model-00003-of-00006.safetensors",
53
+ "model.layers.15.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
54
+ "model.layers.15.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
55
+ "model.layers.15.self_attn.qkv_proj.weight": "model-00003-of-00006.safetensors",
56
+ "model.layers.16.input_layernorm.weight": "model-00003-of-00006.safetensors",
57
+ "model.layers.16.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
58
+ "model.layers.16.mlp.gate_up_proj.weight": "model-00003-of-00006.safetensors",
59
+ "model.layers.16.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
60
+ "model.layers.16.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
61
+ "model.layers.16.self_attn.qkv_proj.weight": "model-00003-of-00006.safetensors",
62
+ "model.layers.17.input_layernorm.weight": "model-00003-of-00006.safetensors",
63
+ "model.layers.17.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
64
+ "model.layers.17.mlp.gate_up_proj.weight": "model-00003-of-00006.safetensors",
65
+ "model.layers.17.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
66
+ "model.layers.17.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
67
+ "model.layers.17.self_attn.qkv_proj.weight": "model-00003-of-00006.safetensors",
68
+ "model.layers.18.input_layernorm.weight": "model-00003-of-00006.safetensors",
69
+ "model.layers.18.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
70
+ "model.layers.18.mlp.gate_up_proj.weight": "model-00003-of-00006.safetensors",
71
+ "model.layers.18.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
72
+ "model.layers.18.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
73
+ "model.layers.18.self_attn.qkv_proj.weight": "model-00003-of-00006.safetensors",
74
+ "model.layers.19.input_layernorm.weight": "model-00003-of-00006.safetensors",
75
+ "model.layers.19.mlp.down_proj.weight": "model-00003-of-00006.safetensors",
76
+ "model.layers.19.mlp.gate_up_proj.weight": "model-00003-of-00006.safetensors",
77
+ "model.layers.19.post_attention_layernorm.weight": "model-00003-of-00006.safetensors",
78
+ "model.layers.19.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
79
+ "model.layers.19.self_attn.qkv_proj.weight": "model-00003-of-00006.safetensors",
80
+ "model.layers.2.input_layernorm.weight": "model-00001-of-00006.safetensors",
81
+ "model.layers.2.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
82
+ "model.layers.2.mlp.gate_up_proj.weight": "model-00001-of-00006.safetensors",
83
+ "model.layers.2.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
84
+ "model.layers.2.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
85
+ "model.layers.2.self_attn.qkv_proj.weight": "model-00001-of-00006.safetensors",
86
+ "model.layers.20.input_layernorm.weight": "model-00004-of-00006.safetensors",
87
+ "model.layers.20.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
88
+ "model.layers.20.mlp.gate_up_proj.weight": "model-00004-of-00006.safetensors",
89
+ "model.layers.20.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
90
+ "model.layers.20.self_attn.o_proj.weight": "model-00003-of-00006.safetensors",
91
+ "model.layers.20.self_attn.qkv_proj.weight": "model-00003-of-00006.safetensors",
92
+ "model.layers.21.input_layernorm.weight": "model-00004-of-00006.safetensors",
93
+ "model.layers.21.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
94
+ "model.layers.21.mlp.gate_up_proj.weight": "model-00004-of-00006.safetensors",
95
+ "model.layers.21.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
96
+ "model.layers.21.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
97
+ "model.layers.21.self_attn.qkv_proj.weight": "model-00004-of-00006.safetensors",
98
+ "model.layers.22.input_layernorm.weight": "model-00004-of-00006.safetensors",
99
+ "model.layers.22.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
100
+ "model.layers.22.mlp.gate_up_proj.weight": "model-00004-of-00006.safetensors",
101
+ "model.layers.22.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
102
+ "model.layers.22.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
103
+ "model.layers.22.self_attn.qkv_proj.weight": "model-00004-of-00006.safetensors",
104
+ "model.layers.23.input_layernorm.weight": "model-00004-of-00006.safetensors",
105
+ "model.layers.23.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
106
+ "model.layers.23.mlp.gate_up_proj.weight": "model-00004-of-00006.safetensors",
107
+ "model.layers.23.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
108
+ "model.layers.23.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
109
+ "model.layers.23.self_attn.qkv_proj.weight": "model-00004-of-00006.safetensors",
110
+ "model.layers.24.input_layernorm.weight": "model-00004-of-00006.safetensors",
111
+ "model.layers.24.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
112
+ "model.layers.24.mlp.gate_up_proj.weight": "model-00004-of-00006.safetensors",
113
+ "model.layers.24.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
114
+ "model.layers.24.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
115
+ "model.layers.24.self_attn.qkv_proj.weight": "model-00004-of-00006.safetensors",
116
+ "model.layers.25.input_layernorm.weight": "model-00004-of-00006.safetensors",
117
+ "model.layers.25.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
118
+ "model.layers.25.mlp.gate_up_proj.weight": "model-00004-of-00006.safetensors",
119
+ "model.layers.25.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
120
+ "model.layers.25.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
121
+ "model.layers.25.self_attn.qkv_proj.weight": "model-00004-of-00006.safetensors",
122
+ "model.layers.26.input_layernorm.weight": "model-00004-of-00006.safetensors",
123
+ "model.layers.26.mlp.down_proj.weight": "model-00004-of-00006.safetensors",
124
+ "model.layers.26.mlp.gate_up_proj.weight": "model-00004-of-00006.safetensors",
125
+ "model.layers.26.post_attention_layernorm.weight": "model-00004-of-00006.safetensors",
126
+ "model.layers.26.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
127
+ "model.layers.26.self_attn.qkv_proj.weight": "model-00004-of-00006.safetensors",
128
+ "model.layers.27.input_layernorm.weight": "model-00005-of-00006.safetensors",
129
+ "model.layers.27.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
130
+ "model.layers.27.mlp.gate_up_proj.weight": "model-00005-of-00006.safetensors",
131
+ "model.layers.27.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
132
+ "model.layers.27.self_attn.o_proj.weight": "model-00004-of-00006.safetensors",
133
+ "model.layers.27.self_attn.qkv_proj.weight": "model-00004-of-00006.safetensors",
134
+ "model.layers.28.input_layernorm.weight": "model-00005-of-00006.safetensors",
135
+ "model.layers.28.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
136
+ "model.layers.28.mlp.gate_up_proj.weight": "model-00005-of-00006.safetensors",
137
+ "model.layers.28.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
138
+ "model.layers.28.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
139
+ "model.layers.28.self_attn.qkv_proj.weight": "model-00005-of-00006.safetensors",
140
+ "model.layers.29.input_layernorm.weight": "model-00005-of-00006.safetensors",
141
+ "model.layers.29.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
142
+ "model.layers.29.mlp.gate_up_proj.weight": "model-00005-of-00006.safetensors",
143
+ "model.layers.29.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
144
+ "model.layers.29.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
145
+ "model.layers.29.self_attn.qkv_proj.weight": "model-00005-of-00006.safetensors",
146
+ "model.layers.3.input_layernorm.weight": "model-00001-of-00006.safetensors",
147
+ "model.layers.3.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
148
+ "model.layers.3.mlp.gate_up_proj.weight": "model-00001-of-00006.safetensors",
149
+ "model.layers.3.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
150
+ "model.layers.3.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
151
+ "model.layers.3.self_attn.qkv_proj.weight": "model-00001-of-00006.safetensors",
152
+ "model.layers.30.input_layernorm.weight": "model-00005-of-00006.safetensors",
153
+ "model.layers.30.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
154
+ "model.layers.30.mlp.gate_up_proj.weight": "model-00005-of-00006.safetensors",
155
+ "model.layers.30.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
156
+ "model.layers.30.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
157
+ "model.layers.30.self_attn.qkv_proj.weight": "model-00005-of-00006.safetensors",
158
+ "model.layers.31.input_layernorm.weight": "model-00005-of-00006.safetensors",
159
+ "model.layers.31.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
160
+ "model.layers.31.mlp.gate_up_proj.weight": "model-00005-of-00006.safetensors",
161
+ "model.layers.31.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
162
+ "model.layers.31.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
163
+ "model.layers.31.self_attn.qkv_proj.weight": "model-00005-of-00006.safetensors",
164
+ "model.layers.32.input_layernorm.weight": "model-00005-of-00006.safetensors",
165
+ "model.layers.32.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
166
+ "model.layers.32.mlp.gate_up_proj.weight": "model-00005-of-00006.safetensors",
167
+ "model.layers.32.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
168
+ "model.layers.32.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
169
+ "model.layers.32.self_attn.qkv_proj.weight": "model-00005-of-00006.safetensors",
170
+ "model.layers.33.input_layernorm.weight": "model-00005-of-00006.safetensors",
171
+ "model.layers.33.mlp.down_proj.weight": "model-00005-of-00006.safetensors",
172
+ "model.layers.33.mlp.gate_up_proj.weight": "model-00005-of-00006.safetensors",
173
+ "model.layers.33.post_attention_layernorm.weight": "model-00005-of-00006.safetensors",
174
+ "model.layers.33.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
175
+ "model.layers.33.self_attn.qkv_proj.weight": "model-00005-of-00006.safetensors",
176
+ "model.layers.34.input_layernorm.weight": "model-00006-of-00006.safetensors",
177
+ "model.layers.34.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
178
+ "model.layers.34.mlp.gate_up_proj.weight": "model-00006-of-00006.safetensors",
179
+ "model.layers.34.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
180
+ "model.layers.34.self_attn.o_proj.weight": "model-00005-of-00006.safetensors",
181
+ "model.layers.34.self_attn.qkv_proj.weight": "model-00005-of-00006.safetensors",
182
+ "model.layers.35.input_layernorm.weight": "model-00006-of-00006.safetensors",
183
+ "model.layers.35.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
184
+ "model.layers.35.mlp.gate_up_proj.weight": "model-00006-of-00006.safetensors",
185
+ "model.layers.35.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
186
+ "model.layers.35.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
187
+ "model.layers.35.self_attn.qkv_proj.weight": "model-00006-of-00006.safetensors",
188
+ "model.layers.36.input_layernorm.weight": "model-00006-of-00006.safetensors",
189
+ "model.layers.36.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
190
+ "model.layers.36.mlp.gate_up_proj.weight": "model-00006-of-00006.safetensors",
191
+ "model.layers.36.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
192
+ "model.layers.36.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
193
+ "model.layers.36.self_attn.qkv_proj.weight": "model-00006-of-00006.safetensors",
194
+ "model.layers.37.input_layernorm.weight": "model-00006-of-00006.safetensors",
195
+ "model.layers.37.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
196
+ "model.layers.37.mlp.gate_up_proj.weight": "model-00006-of-00006.safetensors",
197
+ "model.layers.37.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
198
+ "model.layers.37.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
199
+ "model.layers.37.self_attn.qkv_proj.weight": "model-00006-of-00006.safetensors",
200
+ "model.layers.38.input_layernorm.weight": "model-00006-of-00006.safetensors",
201
+ "model.layers.38.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
202
+ "model.layers.38.mlp.gate_up_proj.weight": "model-00006-of-00006.safetensors",
203
+ "model.layers.38.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
204
+ "model.layers.38.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
205
+ "model.layers.38.self_attn.qkv_proj.weight": "model-00006-of-00006.safetensors",
206
+ "model.layers.39.input_layernorm.weight": "model-00006-of-00006.safetensors",
207
+ "model.layers.39.mlp.down_proj.weight": "model-00006-of-00006.safetensors",
208
+ "model.layers.39.mlp.gate_up_proj.weight": "model-00006-of-00006.safetensors",
209
+ "model.layers.39.post_attention_layernorm.weight": "model-00006-of-00006.safetensors",
210
+ "model.layers.39.self_attn.o_proj.weight": "model-00006-of-00006.safetensors",
211
+ "model.layers.39.self_attn.qkv_proj.weight": "model-00006-of-00006.safetensors",
212
+ "model.layers.4.input_layernorm.weight": "model-00001-of-00006.safetensors",
213
+ "model.layers.4.mlp.down_proj.weight": "model-00001-of-00006.safetensors",
214
+ "model.layers.4.mlp.gate_up_proj.weight": "model-00001-of-00006.safetensors",
215
+ "model.layers.4.post_attention_layernorm.weight": "model-00001-of-00006.safetensors",
216
+ "model.layers.4.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
217
+ "model.layers.4.self_attn.qkv_proj.weight": "model-00001-of-00006.safetensors",
218
+ "model.layers.5.input_layernorm.weight": "model-00002-of-00006.safetensors",
219
+ "model.layers.5.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
220
+ "model.layers.5.mlp.gate_up_proj.weight": "model-00001-of-00006.safetensors",
221
+ "model.layers.5.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
222
+ "model.layers.5.self_attn.o_proj.weight": "model-00001-of-00006.safetensors",
223
+ "model.layers.5.self_attn.qkv_proj.weight": "model-00001-of-00006.safetensors",
224
+ "model.layers.6.input_layernorm.weight": "model-00002-of-00006.safetensors",
225
+ "model.layers.6.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
226
+ "model.layers.6.mlp.gate_up_proj.weight": "model-00002-of-00006.safetensors",
227
+ "model.layers.6.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
228
+ "model.layers.6.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
229
+ "model.layers.6.self_attn.qkv_proj.weight": "model-00002-of-00006.safetensors",
230
+ "model.layers.7.input_layernorm.weight": "model-00002-of-00006.safetensors",
231
+ "model.layers.7.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
232
+ "model.layers.7.mlp.gate_up_proj.weight": "model-00002-of-00006.safetensors",
233
+ "model.layers.7.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
234
+ "model.layers.7.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
235
+ "model.layers.7.self_attn.qkv_proj.weight": "model-00002-of-00006.safetensors",
236
+ "model.layers.8.input_layernorm.weight": "model-00002-of-00006.safetensors",
237
+ "model.layers.8.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
238
+ "model.layers.8.mlp.gate_up_proj.weight": "model-00002-of-00006.safetensors",
239
+ "model.layers.8.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
240
+ "model.layers.8.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
241
+ "model.layers.8.self_attn.qkv_proj.weight": "model-00002-of-00006.safetensors",
242
+ "model.layers.9.input_layernorm.weight": "model-00002-of-00006.safetensors",
243
+ "model.layers.9.mlp.down_proj.weight": "model-00002-of-00006.safetensors",
244
+ "model.layers.9.mlp.gate_up_proj.weight": "model-00002-of-00006.safetensors",
245
+ "model.layers.9.post_attention_layernorm.weight": "model-00002-of-00006.safetensors",
246
+ "model.layers.9.self_attn.o_proj.weight": "model-00002-of-00006.safetensors",
247
+ "model.layers.9.self_attn.qkv_proj.weight": "model-00002-of-00006.safetensors",
248
+ "model.norm.weight": "model-00006-of-00006.safetensors"
249
+ }
250
+ }
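The index above maps every parameter name to the shard file that stores it. As a hedged illustration (not part of the uploaded repo files; assumes the `safetensors` package and that the shard files sit next to the index), this is how such an index can be used to locate and load a single tensor:

```python
import json
from safetensors import safe_open

# Read the weight map from the sharded-checkpoint index.
with open("model.safetensors.index.json") as f:
    weight_map = json.load(f)["weight_map"]

name = "model.norm.weight"
shard = weight_map[name]                      # e.g. "model-00006-of-00006.safetensors"
with safe_open(shard, framework="pt") as f:   # open only the shard that holds this tensor
    tensor = f.get_tensor(name)
print(shard, tuple(tensor.shape))
```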
modeling_phi3.py ADDED
@@ -0,0 +1,1570 @@
1
+ # coding=utf-8
2
+ # Copyright 2024 Microsoft and the HuggingFace Inc. team. All rights reserved.
3
+ #
4
+ # Licensed under the Apache License, Version 2.0 (the "License");
5
+ # you may not use this file except in compliance with the License.
6
+ # You may obtain a copy of the License at
7
+ #
8
+ # http://www.apache.org/licenses/LICENSE-2.0
9
+ #
10
+ # Unless required by applicable law or agreed to in writing, software
11
+ # distributed under the License is distributed on an "AS IS" BASIS,
12
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ # See the License for the specific language governing permissions and
14
+ # limitations under the License.
15
+
16
+ """ PyTorch Phi-3 model."""
17
+
18
+ import inspect
19
+ import math
20
+ import warnings
21
+ from typing import List, Optional, Tuple, Union
22
+
23
+ import torch
24
+ import torch.nn.functional as F
25
+ import torch.utils.checkpoint
26
+ from torch import nn
27
+ from torch.nn import BCEWithLogitsLoss, CrossEntropyLoss, MSELoss
28
+
29
+ from transformers.activations import ACT2FN
30
+ from transformers.cache_utils import Cache, DynamicCache
31
+ from transformers.modeling_attn_mask_utils import _prepare_4d_causal_attention_mask
32
+ from transformers.modeling_outputs import (
33
+ BaseModelOutputWithPast,
34
+ CausalLMOutputWithPast,
35
+ SequenceClassifierOutputWithPast,
36
+ TokenClassifierOutput,
37
+ )
38
+ from transformers.modeling_utils import PreTrainedModel
39
+ from transformers.utils import (
40
+ add_code_sample_docstrings,
41
+ add_start_docstrings,
42
+ add_start_docstrings_to_model_forward,
43
+ is_flash_attn_2_available,
44
+ is_flash_attn_greater_or_equal_2_10,
45
+ logging,
46
+ replace_return_docstrings,
47
+ )
48
+ from .configuration_phi3 import Phi3Config
49
+
50
+
51
+ logger = logging.get_logger(__name__)
52
+
53
+ # Transformers scans dependencies in the modeling file, which causes issues with conditional loading: the regex only ignores try/except blocks, not if statements
54
+ # if is_flash_attn_2_available():
55
+ _flash_supports_window_size = False
56
+ try:
57
+ from flash_attn import flash_attn_func, flash_attn_varlen_func
58
+ from flash_attn.bert_padding import index_first_axis, pad_input, unpad_input # noqa
59
+
60
+ _flash_supports_window_size = "window_size" in list(inspect.signature(flash_attn_func).parameters)
61
+ except ImportError as error:
62
+ logger.warning(
63
+ f"`flash-attention` package not found, consider installing for better performance: {error}."
64
+ )
65
+ if not _flash_supports_window_size:
66
+ logger.warning(
67
+ "Current `flash-attention` does not support `window_size`. Either upgrade or use `attn_implementation='eager'`."
68
+ )
69
+
70
+ _CHECKPOINT_FOR_DOC = "microsoft/Phi-3-mini-4k-instruct"
71
+ _CONFIG_FOR_DOC = "Phi3Config"
72
+
73
+ PHI3_PRETRAINED_MODEL_ARCHIVE_LIST = [
74
+ "microsoft/Phi-3-mini-4k-instruct",
75
+ "microsoft/Phi-3-mini-128k-instruct",
76
+ # See all Phi-3 models at https://huggingface.co/models?filter=Phi-3
77
+ ]
78
+
79
+
80
+ # Copied from transformers.models.llama.modeling_llama.LlamaRMSNorm with Llama->Phi3
81
+ class Phi3RMSNorm(nn.Module):
82
+ def __init__(self, hidden_size, eps=1e-6):
83
+ """
84
+ Phi3RMSNorm is equivalent to T5LayerNorm
85
+ """
86
+ super().__init__()
87
+ self.weight = nn.Parameter(torch.ones(hidden_size))
88
+ self.variance_epsilon = eps
89
+
90
+ def forward(self, hidden_states):
91
+ input_dtype = hidden_states.dtype
92
+ hidden_states = hidden_states.to(torch.float32)
93
+ variance = hidden_states.pow(2).mean(-1, keepdim=True)
94
+ hidden_states = hidden_states * torch.rsqrt(variance + self.variance_epsilon)
95
+ return self.weight * hidden_states.to(input_dtype)
96
+
97
+
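For readers skimming the diff, a minimal standalone sketch (illustrative only, not part of the uploaded file) of what the RMS normalization above computes: divide by the root mean square over the hidden dimension, then scale by the learned weight.

```python
import torch

def rms_norm_reference(x: torch.Tensor, weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    dtype = x.dtype
    x = x.to(torch.float32)                          # compute in float32 for stability, as above
    variance = x.pow(2).mean(-1, keepdim=True)       # mean of squares over the hidden dimension
    return weight * (x * torch.rsqrt(variance + eps)).to(dtype)

x = torch.randn(2, 4, 8)                             # toy shapes: batch=2, seq=4, hidden=8
print(rms_norm_reference(x, torch.ones(8)).shape)    # torch.Size([2, 4, 8])
```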
98
+ # Copied from transformers.models.llama.modeling_llama._get_unpad_data
99
+ def _get_unpad_data(attention_mask):
100
+ seqlens_in_batch = attention_mask.sum(dim=-1, dtype=torch.int32)
101
+ indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
102
+ max_seqlen_in_batch = seqlens_in_batch.max().item()
103
+ cu_seqlens = F.pad(torch.cumsum(seqlens_in_batch, dim=0, dtype=torch.int32), (1, 0))
104
+ return (
105
+ indices,
106
+ cu_seqlens,
107
+ max_seqlen_in_batch,
108
+ )
109
+
110
+
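A small hypothetical example (toy mask, not taken from the model) of the cumulative sequence lengths that `_get_unpad_data` produces for the varlen flash-attention path:

```python
import torch
import torch.nn.functional as F

# Two sequences of lengths 3 and 5; 1 marks real tokens, 0 marks padding.
attention_mask = torch.tensor([[1, 1, 1, 0, 0],
                               [1, 1, 1, 1, 1]], dtype=torch.int32)

seqlens = attention_mask.sum(dim=-1, dtype=torch.int32)
indices = torch.nonzero(attention_mask.flatten(), as_tuple=False).flatten()
cu_seqlens = F.pad(torch.cumsum(seqlens, dim=0, dtype=torch.int32), (1, 0))

print(seqlens.tolist())     # [3, 5]
print(indices.tolist())     # [0, 1, 2, 5, 6, 7, 8, 9] -> positions of non-padding tokens
print(cu_seqlens.tolist())  # [0, 3, 8] -> offsets of each sequence in the unpadded batch
```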
111
+ # Copied from transformers.models.gemma.modeling_gemma.GemmaRotaryEmbedding with gemma->phi3, Gemma->Phi3
112
+ class Phi3RotaryEmbedding(nn.Module):
113
+ def __init__(self, dim, max_position_embeddings=2048, base=10000, device=None):
114
+ super().__init__()
115
+
116
+ self.dim = dim
117
+ self.max_position_embeddings = max_position_embeddings
118
+ self.base = base
119
+ self.register_buffer("inv_freq", None, persistent=False)
120
+
121
+ @torch.no_grad()
122
+ def forward(self, x, position_ids, seq_len=None):
123
+ # x: [bs, num_attention_heads, seq_len, head_size]
124
+ if self.inv_freq is None:
125
+ self.inv_freq = 1.0 / (
126
+ self.base ** (torch.arange(0, self.dim, 2, dtype=torch.int64, device=x.device).float() / self.dim)
127
+ )
128
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
129
+ position_ids_expanded = position_ids[:, None, :].float()
130
+ # Force float32 since bfloat16 loses precision on long contexts
131
+ # See https://github.com/huggingface/transformers/pull/29285
132
+ device_type = x.device.type
133
+ device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
134
+ with torch.autocast(device_type=device_type, enabled=False):
135
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
136
+ emb = torch.cat((freqs, freqs), dim=-1)
137
+ cos = emb.cos()
138
+ sin = emb.sin()
139
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
140
+
141
+
142
+ class Phi3LongRoPEScaledRotaryEmbedding(Phi3RotaryEmbedding):
143
+ def __init__(self, dim, config, device=None):
144
+ super().__init__(dim, config.max_position_embeddings, config.rope_theta, device)
145
+
146
+ self.short_factor = config.rope_scaling["short_factor"]
147
+ self.long_factor = config.rope_scaling["long_factor"]
148
+ self.original_max_position_embeddings = config.original_max_position_embeddings
149
+
150
+ @torch.no_grad()
151
+ def forward(self, x, position_ids, seq_len=None):
152
+ seq_len = seq_len or torch.max(position_ids) + 1
153
+ if seq_len > self.original_max_position_embeddings:
154
+ ext_factors = torch.tensor(self.long_factor, dtype=torch.float32, device=x.device)
155
+ else:
156
+ ext_factors = torch.tensor(self.short_factor, dtype=torch.float32, device=x.device)
157
+
158
+ inv_freq_shape = torch.arange(0, self.dim, 2, dtype=torch.int64, device=x.device).float() / self.dim
159
+ self.inv_freq = 1.0 / (ext_factors * self.base**inv_freq_shape)
160
+
161
+ inv_freq_expanded = self.inv_freq[None, :, None].float().expand(position_ids.shape[0], -1, 1)
162
+ position_ids_expanded = position_ids[:, None, :].float()
163
+
164
+ # Force float32 since bfloat16 loses precision on long contexts
165
+ # See https://github.com/huggingface/transformers/pull/29285
166
+ device_type = x.device.type
167
+ device_type = device_type if isinstance(device_type, str) and device_type != "mps" else "cpu"
168
+ with torch.autocast(device_type=device_type, enabled=False):
169
+ freqs = (inv_freq_expanded.float() @ position_ids_expanded.float()).transpose(1, 2)
170
+ emb = torch.cat((freqs, freqs), dim=-1)
171
+
172
+ scale = self.max_position_embeddings / self.original_max_position_embeddings
173
+ if scale <= 1.0:
174
+ scaling_factor = 1.0
175
+ else:
176
+ scaling_factor = math.sqrt(1 + math.log(scale) / math.log(self.original_max_position_embeddings))
177
+
178
+ cos = emb.cos() * scaling_factor
179
+ sin = emb.sin() * scaling_factor
180
+ return cos.to(dtype=x.dtype), sin.to(dtype=x.dtype)
181
+
182
+
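As a rough illustration of the attention-scaling factor derived above for LongRoPE (the context lengths below are placeholder values, not a specific Phi-3 configuration):

```python
import math

max_position_embeddings = 131072           # assumed extended context length (example value)
original_max_position_embeddings = 4096    # assumed pre-training context length (example value)

scale = max_position_embeddings / original_max_position_embeddings
scaling_factor = 1.0 if scale <= 1.0 else math.sqrt(
    1 + math.log(scale) / math.log(original_max_position_embeddings)
)
print(round(scaling_factor, 4))  # ~1.1902 for these example values
```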
183
+ # Copied from transformers.models.llama.modeling_llama.rotate_half
184
+ def rotate_half(x):
185
+ """Rotates half the hidden dims of the input."""
186
+ x1 = x[..., : x.shape[-1] // 2]
187
+ x2 = x[..., x.shape[-1] // 2 :]
188
+ return torch.cat((-x2, x1), dim=-1)
189
+
190
+
191
+ # Copied from transformers.models.llama.modeling_llama.apply_rotary_pos_emb
192
+ def apply_rotary_pos_emb(q, k, cos, sin, position_ids=None, unsqueeze_dim=1):
193
+ """Applies Rotary Position Embedding to the query and key tensors.
194
+
195
+ Args:
196
+ q (`torch.Tensor`): The query tensor.
197
+ k (`torch.Tensor`): The key tensor.
198
+ cos (`torch.Tensor`): The cosine part of the rotary embedding.
199
+ sin (`torch.Tensor`): The sine part of the rotary embedding.
200
+ position_ids (`torch.Tensor`, *optional*):
201
+ Deprecated and unused.
202
+ unsqueeze_dim (`int`, *optional*, defaults to 1):
203
+ The 'unsqueeze_dim' argument specifies the dimension along which to unsqueeze cos[position_ids] and
204
+ sin[position_ids] so that they can be properly broadcasted to the dimensions of q and k. For example, note
205
+ that cos[position_ids] and sin[position_ids] have the shape [batch_size, seq_len, head_dim]. Then, if q and
206
+ k have the shape [batch_size, heads, seq_len, head_dim], then setting unsqueeze_dim=1 makes
207
+ cos[position_ids] and sin[position_ids] broadcastable to the shapes of q and k. Similarly, if q and k have
208
+ the shape [batch_size, seq_len, heads, head_dim], then set unsqueeze_dim=2.
209
+ Returns:
210
+ `tuple(torch.Tensor)` comprising of the query and key tensors rotated using the Rotary Position Embedding.
211
+ """
212
+ cos = cos.unsqueeze(unsqueeze_dim)
213
+ sin = sin.unsqueeze(unsqueeze_dim)
214
+ q_embed = (q * cos) + (rotate_half(q) * sin)
215
+ k_embed = (k * cos) + (rotate_half(k) * sin)
216
+ return q_embed, k_embed
217
+
218
+
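A self-contained sketch (toy shapes and a plain base-10000 frequency table, for illustration only) of how cos/sin tables are applied to queries and keys by `apply_rotary_pos_emb`:

```python
import torch

def rotate_half(x):
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

batch, heads, seq_len, head_dim = 1, 2, 5, 8
q = torch.randn(batch, heads, seq_len, head_dim)
k = torch.randn(batch, heads, seq_len, head_dim)

inv_freq = 1.0 / (10000 ** (torch.arange(0, head_dim, 2).float() / head_dim))
freqs = torch.outer(torch.arange(seq_len).float(), inv_freq)     # [seq_len, head_dim // 2]
emb = torch.cat((freqs, freqs), dim=-1)[None]                    # [1, seq_len, head_dim]
cos, sin = emb.cos().unsqueeze(1), emb.sin().unsqueeze(1)        # unsqueeze_dim=1 -> broadcast over heads

q_rot = (q * cos) + (rotate_half(q) * sin)
k_rot = (k * cos) + (rotate_half(k) * sin)
print(q_rot.shape, k_rot.shape)  # shapes are unchanged: [1, 2, 5, 8]
```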
219
+ class Phi3MLP(nn.Module):
220
+ def __init__(self, config):
221
+ super().__init__()
222
+
223
+ self.config = config
224
+ self.gate_up_proj = nn.Linear(config.hidden_size, 2 * config.intermediate_size, bias=False)
225
+ self.down_proj = nn.Linear(config.intermediate_size, config.hidden_size, bias=False)
226
+
227
+ self.activation_fn = ACT2FN[config.hidden_act]
228
+
229
+ def forward(self, hidden_states: torch.FloatTensor) -> torch.FloatTensor:
230
+ up_states = self.gate_up_proj(hidden_states)
231
+
232
+ gate, up_states = up_states.chunk(2, dim=-1)
233
+ up_states = up_states * self.activation_fn(gate)
234
+
235
+ return self.down_proj(up_states)
236
+
237
+
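The MLP above fuses the gate and up projections into one matmul and splits the result. A toy-sized sketch (sizes are illustrative, and `silu` is assumed as the activation, matching the usual Phi-3 `hidden_act`):

```python
import torch
import torch.nn.functional as F
from torch import nn

hidden_size, intermediate_size = 16, 32                     # toy sizes, not the real config
gate_up_proj = nn.Linear(hidden_size, 2 * intermediate_size, bias=False)
down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

x = torch.randn(1, 4, hidden_size)
gate, up = gate_up_proj(x).chunk(2, dim=-1)                 # one matmul yields both halves
print(down_proj(up * F.silu(gate)).shape)                   # torch.Size([1, 4, 16])
```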
238
+ # Copied from transformers.models.llama.modeling_llama.repeat_kv with llama->phi
239
+ def repeat_kv(hidden_states: torch.Tensor, n_rep: int) -> torch.Tensor:
240
+ """
241
+ This is the equivalent of torch.repeat_interleave(x, dim=1, repeats=n_rep). The hidden states go from (batch,
242
+ num_key_value_heads, seqlen, head_dim) to (batch, num_attention_heads, seqlen, head_dim)
243
+ """
244
+ batch, num_key_value_heads, slen, head_dim = hidden_states.shape
245
+ if n_rep == 1:
246
+ return hidden_states
247
+ hidden_states = hidden_states[:, :, None, :, :].expand(batch, num_key_value_heads, n_rep, slen, head_dim)
248
+ return hidden_states.reshape(batch, num_key_value_heads * n_rep, slen, head_dim)
249
+
250
+
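A quick illustrative check that the expand/reshape trick in `repeat_kv` matches `torch.repeat_interleave` along the head dimension:

```python
import torch

batch, kv_heads, seq_len, head_dim, n_rep = 1, 2, 3, 4, 4
kv = torch.randn(batch, kv_heads, seq_len, head_dim)

expanded = kv[:, :, None, :, :].expand(batch, kv_heads, n_rep, seq_len, head_dim)
expanded = expanded.reshape(batch, kv_heads * n_rep, seq_len, head_dim)

print(torch.equal(expanded, torch.repeat_interleave(kv, repeats=n_rep, dim=1)))  # True
```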
251
+ class Phi3Attention(nn.Module):
252
+ """Multi-headed attention from 'Attention Is All You Need' paper"""
253
+
254
+ def __init__(self, config: Phi3Config, layer_idx: Optional[int] = None):
255
+ super().__init__()
256
+ self.config = config
257
+ self.layer_idx = layer_idx
258
+ if layer_idx is None:
259
+ logger.warning_once(
260
+ f"Instantiating {self.__class__.__name__} without passing a `layer_idx` is not recommended and will "
261
+ "lead to errors during the forward call if caching is used. Please make sure to provide a `layer_idx` "
262
+ "when creating this class."
263
+ )
264
+
265
+ self.attention_dropout = config.attention_dropout
266
+ self.hidden_size = config.hidden_size
267
+ self.num_heads = config.num_attention_heads
268
+ self.head_dim = self.hidden_size // self.num_heads
269
+ self.num_key_value_heads = config.num_key_value_heads
270
+ self.num_key_value_groups = self.num_heads // self.num_key_value_heads
271
+ self.max_position_embeddings = config.max_position_embeddings
272
+ self.original_max_position_embeddings = config.original_max_position_embeddings
273
+ self.rope_theta = config.rope_theta
274
+ self.rope_scaling = config.rope_scaling
275
+ self.is_causal = True
276
+
277
+ if (self.head_dim * self.num_heads) != self.hidden_size:
278
+ raise ValueError(
279
+ f"hidden_size must be divisible by num_heads (got `hidden_size`: {self.hidden_size}"
280
+ f" and `num_heads`: {self.num_heads})."
281
+ )
282
+
283
+ op_size = self.num_heads * self.head_dim + 2 * (self.num_key_value_heads * self.head_dim)
284
+ self.o_proj = nn.Linear(self.num_heads * self.head_dim, self.hidden_size, bias=False)
285
+ self.qkv_proj = nn.Linear(self.hidden_size, op_size, bias=False)
286
+ self._init_rope()
287
+
288
+ def _init_rope(self):
289
+ if self.rope_scaling is None:
290
+ self.rotary_emb = Phi3RotaryEmbedding(
291
+ self.head_dim,
292
+ max_position_embeddings=self.max_position_embeddings,
293
+ base=self.rope_theta,
294
+ )
295
+ else:
296
+ scaling_type = self.config.rope_scaling["type"]
297
+ if scaling_type == "longrope":
298
+ self.rotary_emb = Phi3LongRoPEScaledRotaryEmbedding(self.head_dim, self.config)
299
+ else:
300
+ raise ValueError(f"Unknown RoPE scaling type {scaling_type}")
301
+
302
+ def forward(
303
+ self,
304
+ hidden_states: torch.Tensor,
305
+ attention_mask: Optional[torch.Tensor] = None,
306
+ position_ids: Optional[torch.LongTensor] = None,
307
+ past_key_value: Optional[Cache] = None,
308
+ output_attentions: bool = False,
309
+ use_cache: bool = False,
310
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
311
+ logger.warning_once("You are not running the flash-attention implementation, expect numerical differences.")
312
+
313
+ bsz, q_len, _ = hidden_states.size()
314
+
315
+ qkv = self.qkv_proj(hidden_states)
316
+ query_pos = self.num_heads * self.head_dim
317
+ query_states = qkv[..., :query_pos]
318
+ key_states = qkv[..., query_pos : query_pos + self.num_key_value_heads * self.head_dim]
319
+ value_states = qkv[..., query_pos + self.num_key_value_heads * self.head_dim :]
320
+
321
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
322
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
323
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
324
+
325
+ kv_seq_len = key_states.shape[-2]
326
+ if past_key_value is not None:
327
+ if self.layer_idx is None:
328
+ raise ValueError(
329
+ f"The cache structure has changed since version v4.36. If you are using {self.__class__.__name__} "
330
+ "for auto-regressive decoding with k/v caching, please make sure to initialize the attention class "
331
+ "with a layer index."
332
+ )
333
+ kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
334
+ cos, sin = self.rotary_emb(value_states, position_ids, seq_len=kv_seq_len)
335
+
336
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
337
+
338
+ if past_key_value is not None:
339
+ cache_kwargs = {"sin": sin, "cos": cos} # Specific to RoPE models
340
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
341
+
342
+ # repeat k/v heads if n_kv_heads < n_heads
343
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
344
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
345
+
346
+ attn_weights = torch.matmul(query_states, key_states.transpose(2, 3)) / math.sqrt(self.head_dim)
347
+
348
+ if attn_weights.size() != (bsz, self.num_heads, q_len, kv_seq_len):
349
+ raise ValueError(
350
+ f"Attention weights should be of size {(bsz, self.num_heads, q_len, kv_seq_len)}, but is"
351
+ f" {attn_weights.size()}"
352
+ )
353
+
354
+ if attention_mask is not None:
355
+ if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
356
+ raise ValueError(
357
+ f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
358
+ )
359
+ attn_weights = attn_weights + attention_mask
360
+
361
+ # upcast attention to fp32
362
+ attn_weights = nn.functional.softmax(attn_weights, dim=-1, dtype=torch.float32).to(value_states.dtype)
363
+ attn_weights = nn.functional.dropout(attn_weights, p=self.attention_dropout, training=self.training)
364
+
365
+ attn_output = torch.matmul(attn_weights, value_states)
366
+
367
+ if attn_output.size() != (bsz, self.num_heads, q_len, self.head_dim):
368
+ raise ValueError(
369
+ f"`attn_output` should be of size {(bsz, self.num_heads, q_len, self.head_dim)}, but is"
370
+ f" {attn_output.size()}"
371
+ )
372
+
373
+ attn_output = attn_output.transpose(1, 2).contiguous()
374
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size)
375
+
376
+ attn_output = self.o_proj(attn_output)
377
+
378
+ if not output_attentions:
379
+ attn_weights = None
380
+
381
+ return attn_output, attn_weights, past_key_value
382
+
383
+
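For reference, the core of the eager path above is plain scaled dot-product attention with an additive causal mask; a toy-sized sketch, not part of the file:

```python
import math
import torch

q = torch.randn(1, 2, 4, 8)   # [batch, heads, q_len, head_dim], toy sizes
k = torch.randn(1, 2, 4, 8)
v = torch.randn(1, 2, 4, 8)

# Additive mask: 0 where attention is allowed, a very large negative value above the diagonal.
mask = torch.triu(torch.full((4, 4), torch.finfo(torch.float32).min), diagonal=1)

scores = q @ k.transpose(2, 3) / math.sqrt(q.shape[-1]) + mask
weights = torch.softmax(scores, dim=-1, dtype=torch.float32).to(v.dtype)  # upcast softmax to fp32
print((weights @ v).shape)  # torch.Size([1, 2, 4, 8])
```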
384
+ class Phi3FlashAttention2(Phi3Attention):
385
+ """
386
+ Phi-3 flash attention module. This module inherits from `Phi3Attention` as the weights of the module stay
387
+ untouched. The only required change would be on the forward pass where it needs to correctly call the public API of
388
+ flash attention and deal with padding tokens in case the input contains any of them.
389
+ """
390
+
391
+ # Copied from transformers.models.llama.modeling_llama.LlamaFlashAttention2.__init__
392
+ def __init__(self, *args, **kwargs):
393
+ super().__init__(*args, **kwargs)
394
+
395
+ # TODO: Should be removed once Flash Attention for RoCm is bumped to 2.1.
396
+ # flash_attn<2.1 generates a top-left aligned causal mask, while what is needed here is bottom-right alignment, which became the default for flash_attn>=2.1. This attribute is used to handle this difference. Reference: https://github.com/Dao-AILab/flash-attention/releases/tag/v2.1.0.
397
+ # Beware that with flash_attn<2.1, using q_seqlen != k_seqlen (except for the case q_seqlen == 1) produces a wrong mask (top-left).
398
+ self._flash_attn_uses_top_left_mask = not is_flash_attn_greater_or_equal_2_10()
399
+
400
+ def forward(
401
+ self,
402
+ hidden_states: torch.Tensor,
403
+ attention_mask: Optional[torch.LongTensor] = None,
404
+ position_ids: Optional[torch.LongTensor] = None,
405
+ past_key_value: Optional[Cache] = None,
406
+ output_attentions: bool = False,
407
+ use_cache: bool = False,
408
+ **kwargs,
409
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
410
+ # Phi3FlashAttention2 attention does not support output_attentions
411
+
412
+ if not _flash_supports_window_size:
413
+ logger.warning_once(
414
+ "The current flash attention version does not support sliding window attention. Please use `attn_implementation='eager'` or upgrade flash-attn library."
415
+ )
416
+ raise ValueError("The current flash attention version does not support sliding window attention.")
417
+
418
+ output_attentions = False
419
+
420
+ if "padding_mask" in kwargs:
421
+ warnings.warn(
422
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure use `attention_mask` instead.`"
423
+ )
424
+
425
+ # overwrite attention_mask with padding_mask
426
+ attention_mask = kwargs.pop("padding_mask")
427
+
428
+ bsz, q_len, _ = hidden_states.size()
429
+
430
+ qkv = self.qkv_proj(hidden_states)
431
+ query_pos = self.num_heads * self.head_dim
432
+ query_states = qkv[..., :query_pos]
433
+ key_states = qkv[..., query_pos : query_pos + self.num_key_value_heads * self.head_dim]
434
+ value_states = qkv[..., query_pos + self.num_key_value_heads * self.head_dim :]
435
+
436
+ # Flash attention requires the input to have the shape
437
+ # batch_size x seq_length x num_heads x head_dim
438
+ # therefore we just need to keep the original shape
439
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
440
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
441
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
442
+
443
+ kv_seq_len = key_states.shape[-2]
444
+ if past_key_value is not None:
445
+ if self.layer_idx is None:
446
+ raise ValueError(
447
+ f"The cache structure has changed since version v4.36. If you are using {self.__class__.__name__} "
448
+ "for auto-regressive decoding with k/v caching, please make sure to initialize the attention class "
449
+ "with a layer index."
450
+ )
451
+ kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
452
+
453
+ # Because the input can be padded, the absolute sequence length depends on the max position id.
454
+ rotary_seq_len = max(kv_seq_len, position_ids[:, -1].max().item() + 1)
455
+ cos, sin = self.rotary_emb(value_states, position_ids, seq_len=rotary_seq_len)
456
+
457
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
458
+
459
+ use_sliding_windows = (
460
+ _flash_supports_window_size
461
+ and getattr(self.config, "sliding_window", None) is not None
462
+ and kv_seq_len > self.config.sliding_window
463
+ )
464
+
465
+ if past_key_value is not None:
466
+ # Activate cache slicing only if the config has a `sliding_window` attribute
467
+ cache_has_contents = past_key_value.get_seq_length(self.layer_idx) > 0
468
+ if (
469
+ getattr(self.config, "sliding_window", None) is not None
470
+ and kv_seq_len > self.config.sliding_window
471
+ and cache_has_contents
472
+ ):
473
+ slicing_tokens = 1 - self.config.sliding_window
474
+
475
+ past_key = past_key_value[self.layer_idx][0]
476
+ past_value = past_key_value[self.layer_idx][1]
477
+
478
+ past_key = past_key[:, :, slicing_tokens:, :].contiguous()
479
+ past_value = past_value[:, :, slicing_tokens:, :].contiguous()
480
+
481
+ if past_key.shape[-2] != self.config.sliding_window - 1:
482
+ raise ValueError(
483
+ f"past key must have a shape of (`batch_size, num_heads, self.config.sliding_window-1, head_dim`), got"
484
+ f" {past_key.shape}"
485
+ )
486
+
487
+ if attention_mask is not None:
488
+ attention_mask = attention_mask[:, slicing_tokens:]
489
+ attention_mask = torch.cat([attention_mask, torch.ones_like(attention_mask[:, -1:])], dim=-1)
490
+
491
+ cache_kwargs = {"sin": sin, "cos": cos} # Specific to RoPE models
492
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
493
+
494
+ # repeat k/v heads if n_kv_heads < n_heads
495
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
496
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
497
+
498
+ attn_dropout = self.attention_dropout if self.training else 0.0
499
+
500
+ # In PEFT, usually we cast the layer norms in float32 for training stability reasons
501
+ # therefore the input hidden states get silently cast to float32. Hence, we need to
502
+ # cast them back to the correct dtype just to be sure everything works as expected.
503
+ # This might slow down training & inference, so it is recommended not to cast the LayerNorms
504
+ # in fp32.
505
+
506
+ if query_states.dtype == torch.float32:
507
+ if torch.is_autocast_enabled():
508
+ target_dtype = torch.get_autocast_gpu_dtype()
509
+ # Handle the case where the model is quantized
510
+ elif hasattr(self.config, "_pre_quantization_dtype"):
511
+ target_dtype = self.config._pre_quantization_dtype
512
+ else:
513
+ target_dtype = self.qkv_proj.weight.dtype
514
+
515
+ logger.warning_once(
516
+ f"The input hidden states seems to be silently casted in float32, this might be related to"
517
+ f" the fact you have upcasted embedding or layer norm layers in float32. We will cast back the input in"
518
+ f" {target_dtype}."
519
+ )
520
+
521
+ query_states = query_states.to(target_dtype)
522
+ key_states = key_states.to(target_dtype)
523
+ value_states = value_states.to(target_dtype)
524
+
525
+ # Reshape to the expected shape for Flash Attention
526
+ query_states = query_states.transpose(1, 2)
527
+ key_states = key_states.transpose(1, 2)
528
+ value_states = value_states.transpose(1, 2)
529
+
530
+ attn_output = self._flash_attention_forward(
531
+ query_states,
532
+ key_states,
533
+ value_states,
534
+ attention_mask,
535
+ q_len,
536
+ dropout=attn_dropout,
537
+ use_sliding_windows=use_sliding_windows,
538
+ )
539
+
540
+ attn_output = attn_output.reshape(bsz, q_len, self.hidden_size).contiguous()
541
+ attn_output = self.o_proj(attn_output)
542
+
543
+ if not output_attentions:
544
+ attn_weights = None
545
+
546
+ return attn_output, attn_weights, past_key_value
547
+
548
+ # Copied from transformers.models.mistral.modeling_mistral.MistralFlashAttention2._flash_attention_forward
549
+ def _flash_attention_forward(
550
+ self,
551
+ query_states,
552
+ key_states,
553
+ value_states,
554
+ attention_mask,
555
+ query_length,
556
+ dropout=0.0,
557
+ softmax_scale=None,
558
+ use_sliding_windows=False,
559
+ ):
560
+ """
561
+ Calls the forward method of Flash Attention - if the input hidden states contain at least one padding token,
562
+ it first unpads the input, then computes the attention scores, and finally pads the final attention scores.
563
+
564
+ Args:
565
+ query_states (`torch.Tensor`):
566
+ Input query states to be passed to Flash Attention API
567
+ key_states (`torch.Tensor`):
568
+ Input key states to be passed to Flash Attention API
569
+ value_states (`torch.Tensor`):
570
+ Input value states to be passed to Flash Attention API
571
+ attention_mask (`torch.Tensor`):
572
+ The padding mask - corresponds to a tensor of size `(batch_size, seq_len)` where 0 stands for the
573
+ position of padding tokens and 1 for the position of non-padding tokens.
574
+ dropout (`float`):
575
+ Attention dropout
576
+ softmax_scale (`float`, *optional*):
577
+ The scaling of QK^T before applying softmax. Default to 1 / sqrt(head_dim)
578
+ use_sliding_windows (`bool`, *optional*):
579
+ Whether to activate sliding window attention.
580
+ """
581
+ if not self._flash_attn_uses_top_left_mask:
582
+ causal = self.is_causal
583
+ else:
584
+ # TODO: Remove the `query_length != 1` check once Flash Attention for RoCm is bumped to 2.1. For details, please see the comment in LlamaFlashAttention2 __init__.
585
+ causal = self.is_causal and query_length != 1
586
+
587
+ # Contains at least one padding token in the sequence
588
+ if attention_mask is not None:
589
+ batch_size = query_states.shape[0]
590
+ query_states, key_states, value_states, indices_q, cu_seq_lens, max_seq_lens = self._upad_input(
591
+ query_states, key_states, value_states, attention_mask, query_length
592
+ )
593
+
594
+ cu_seqlens_q, cu_seqlens_k = cu_seq_lens
595
+ max_seqlen_in_batch_q, max_seqlen_in_batch_k = max_seq_lens
596
+
597
+ if not use_sliding_windows:
598
+ attn_output_unpad = flash_attn_varlen_func(
599
+ query_states,
600
+ key_states,
601
+ value_states,
602
+ cu_seqlens_q=cu_seqlens_q,
603
+ cu_seqlens_k=cu_seqlens_k,
604
+ max_seqlen_q=max_seqlen_in_batch_q,
605
+ max_seqlen_k=max_seqlen_in_batch_k,
606
+ dropout_p=dropout,
607
+ softmax_scale=softmax_scale,
608
+ causal=causal,
609
+ )
610
+ else:
611
+ attn_output_unpad = flash_attn_varlen_func(
612
+ query_states,
613
+ key_states,
614
+ value_states,
615
+ cu_seqlens_q=cu_seqlens_q,
616
+ cu_seqlens_k=cu_seqlens_k,
617
+ max_seqlen_q=max_seqlen_in_batch_q,
618
+ max_seqlen_k=max_seqlen_in_batch_k,
619
+ dropout_p=dropout,
620
+ softmax_scale=softmax_scale,
621
+ causal=causal,
622
+ window_size=(self.config.sliding_window, self.config.sliding_window),
623
+ )
624
+
625
+ attn_output = pad_input(attn_output_unpad, indices_q, batch_size, query_length)
626
+ else:
627
+ if not use_sliding_windows:
628
+ attn_output = flash_attn_func(
629
+ query_states,
630
+ key_states,
631
+ value_states,
632
+ dropout,
633
+ softmax_scale=softmax_scale,
634
+ causal=causal,
635
+ )
636
+ else:
637
+ attn_output = flash_attn_func(
638
+ query_states,
639
+ key_states,
640
+ value_states,
641
+ dropout,
642
+ softmax_scale=softmax_scale,
643
+ causal=causal,
644
+ window_size=(self.config.sliding_window, self.config.sliding_window),
645
+ )
646
+
647
+ return attn_output
648
+
649
+ # Copied from transformers.models.mistral.modeling_mistral.MistralFlashAttention2._upad_input
650
+ def _upad_input(self, query_layer, key_layer, value_layer, attention_mask, query_length):
651
+ batch_size, kv_seq_len, num_heads, head_dim = key_layer.shape
652
+
653
+ # On the first iteration we need to properly re-create the padding mask
654
+ # by slicing it at the proper place
655
+ if kv_seq_len != attention_mask.shape[-1]:
656
+ attention_mask_num_tokens = attention_mask.shape[-1]
657
+ attention_mask = attention_mask[:, attention_mask_num_tokens - kv_seq_len :]
658
+
659
+ indices_k, cu_seqlens_k, max_seqlen_in_batch_k = _get_unpad_data(attention_mask)
660
+
661
+ key_layer = index_first_axis(key_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k)
662
+ value_layer = index_first_axis(value_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k)
663
+
664
+ if query_length == kv_seq_len:
665
+ query_layer = index_first_axis(
666
+ query_layer.reshape(batch_size * kv_seq_len, num_heads, head_dim), indices_k
667
+ )
668
+ cu_seqlens_q = cu_seqlens_k
669
+ max_seqlen_in_batch_q = max_seqlen_in_batch_k
670
+ indices_q = indices_k
671
+ elif query_length == 1:
672
+ max_seqlen_in_batch_q = 1
673
+ cu_seqlens_q = torch.arange(
674
+ batch_size + 1, dtype=torch.int32, device=query_layer.device
675
+ ) # There is a memcpy here, which is very bad.
676
+ indices_q = cu_seqlens_q[:-1]
677
+ query_layer = query_layer.squeeze(1)
678
+ else:
679
+ # The -q_len: slice assumes left padding.
680
+ attention_mask = attention_mask[:, -query_length:]
681
+ query_layer, indices_q, cu_seqlens_q, max_seqlen_in_batch_q = unpad_input(query_layer, attention_mask)
682
+
683
+ return (
684
+ query_layer,
685
+ key_layer,
686
+ value_layer,
687
+ indices_q,
688
+ (cu_seqlens_q, cu_seqlens_k),
689
+ (max_seqlen_in_batch_q, max_seqlen_in_batch_k),
690
+ )
691
+
692
+
693
+ # copied from transformers.models.llama.modeling_llama.LlamaSdpaAttention with Llama->Phi3
694
+ # TODO @Arthur no longer copied from LLama after static cache
695
+ class Phi3SdpaAttention(Phi3Attention):
696
+ """
697
+ Phi3 attention module using torch.nn.functional.scaled_dot_product_attention. This module inherits from
698
+ `Phi3Attention` as the weights of the module stay untouched. The only changes are on the forward pass to adapt to the
699
+ SDPA API.
700
+ """
701
+
702
+ # Adapted from Phi3Attention.forward
703
+ def forward(
704
+ self,
705
+ hidden_states: torch.Tensor,
706
+ attention_mask: Optional[torch.Tensor] = None,
707
+ position_ids: Optional[torch.LongTensor] = None,
708
+ past_key_value: Optional[Cache] = None,
709
+ output_attentions: bool = False,
710
+ use_cache: bool = False,
711
+ ) -> Tuple[torch.Tensor, Optional[torch.Tensor], Optional[Tuple[torch.Tensor]]]:
712
+ if output_attentions:
713
+ # TODO: Improve this warning with e.g. `model.config.attn_implementation = "manual"` once this is implemented.
714
+ logger.warning_once(
715
+ "Phi3Model is using Phi3SdpaAttention, but `torch.nn.functional.scaled_dot_product_attention` does not support `output_attentions=True`. Falling back to the manual attention implementation, "
716
+ 'but specifying the manual implementation will be required from Transformers version v5.0.0 onwards. This warning can be removed using the argument `attn_implementation="eager"` when loading the model.'
717
+ )
718
+ return super().forward(
719
+ hidden_states=hidden_states,
720
+ attention_mask=attention_mask,
721
+ position_ids=position_ids,
722
+ past_key_value=past_key_value,
723
+ output_attentions=output_attentions,
724
+ use_cache=use_cache,
725
+ )
726
+
727
+ bsz, q_len, _ = hidden_states.size()
728
+
729
+ qkv = self.qkv_proj(hidden_states)
730
+ query_pos = self.num_heads * self.head_dim
731
+ query_states = qkv[..., :query_pos]
732
+ key_states = qkv[..., query_pos : query_pos + self.num_key_value_heads * self.head_dim]
733
+ value_states = qkv[..., query_pos + self.num_key_value_heads * self.head_dim :]
734
+
735
+ query_states = query_states.view(bsz, q_len, self.num_heads, self.head_dim).transpose(1, 2)
736
+ key_states = key_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
737
+ value_states = value_states.view(bsz, q_len, self.num_key_value_heads, self.head_dim).transpose(1, 2)
738
+
739
+ kv_seq_len = key_states.shape[-2]
740
+ if past_key_value is not None:
741
+ kv_seq_len += past_key_value.get_usable_length(kv_seq_len, self.layer_idx)
742
+ cos, sin = self.rotary_emb(value_states, position_ids, seq_len=kv_seq_len)
743
+
744
+ query_states, key_states = apply_rotary_pos_emb(query_states, key_states, cos, sin, position_ids)
745
+
746
+ if past_key_value is not None:
747
+ cache_kwargs = {"sin": sin, "cos": cos} # Specific to RoPE models
748
+ key_states, value_states = past_key_value.update(key_states, value_states, self.layer_idx, cache_kwargs)
749
+
750
+ key_states = repeat_kv(key_states, self.num_key_value_groups)
751
+ value_states = repeat_kv(value_states, self.num_key_value_groups)
752
+
753
+ if attention_mask is not None:
754
+ if attention_mask.size() != (bsz, 1, q_len, kv_seq_len):
755
+ raise ValueError(
756
+ f"Attention mask should be of size {(bsz, 1, q_len, kv_seq_len)}, but is {attention_mask.size()}"
757
+ )
758
+
759
+ # SDPA with memory-efficient backend is currently (torch==2.1.2) bugged with non-contiguous inputs with custom attn_mask,
760
+ # Reference: https://github.com/pytorch/pytorch/issues/112577.
761
+ if query_states.device.type == "cuda" and attention_mask is not None:
762
+ query_states = query_states.contiguous()
763
+ key_states = key_states.contiguous()
764
+ value_states = value_states.contiguous()
765
+
766
+ attn_output = torch.nn.functional.scaled_dot_product_attention(
767
+ query_states,
768
+ key_states,
769
+ value_states,
770
+ attn_mask=attention_mask,
771
+ dropout_p=self.attention_dropout if self.training else 0.0,
772
+ # The q_len > 1 is necessary to match with AttentionMaskConverter.to_causal_4d that does not create a causal mask in case q_len == 1.
773
+ is_causal=self.is_causal and attention_mask is None and q_len > 1,
774
+ )
775
+
776
+ attn_output = attn_output.transpose(1, 2).contiguous()
777
+ attn_output = attn_output.view(bsz, q_len, self.hidden_size)
778
+
779
+ attn_output = self.o_proj(attn_output)
780
+
781
+ return attn_output, None, past_key_value
782
+
783
+
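The SDPA path above ultimately reduces to a single call to `torch.nn.functional.scaled_dot_product_attention`; a minimal sketch of that call (toy tensors, PyTorch >= 2.0 assumed):

```python
import torch
import torch.nn.functional as F

q = torch.randn(1, 2, 4, 8)
k = torch.randn(1, 2, 4, 8)
v = torch.randn(1, 2, 4, 8)

# With no explicit mask, is_causal=True lets the kernel build the causal mask itself,
# mirroring `is_causal=self.is_causal and attention_mask is None and q_len > 1` above.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=True)
print(out.shape)  # torch.Size([1, 2, 4, 8])
```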
784
+ PHI3_ATTENTION_CLASSES = {
785
+ "eager": Phi3Attention,
786
+ "flash_attention_2": Phi3FlashAttention2,
787
+ "sdpa": Phi3SdpaAttention,
788
+ }
789
+
790
+
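The mapping above is indexed by `config._attn_implementation`, which is normally chosen when the model is loaded. A hedged usage sketch (downloads the checkpoint; `flash_attention_2` additionally requires the flash-attn package):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-4k-instruct",
    trust_remote_code=True,              # this modeling file ships with the repo
    attn_implementation="eager",         # or "flash_attention_2"; selects a class from the mapping above
)
```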
791
+ class Phi3DecoderLayer(nn.Module):
792
+ def __init__(self, config: Phi3Config, layer_idx: int):
793
+ super().__init__()
794
+
795
+ self.config = config
796
+ self.self_attn = PHI3_ATTENTION_CLASSES[config._attn_implementation](config, layer_idx=layer_idx)
797
+
798
+ self.mlp = Phi3MLP(config)
799
+ self.input_layernorm = Phi3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
800
+
801
+ self.resid_attn_dropout = nn.Dropout(config.resid_pdrop)
802
+ self.resid_mlp_dropout = nn.Dropout(config.resid_pdrop)
803
+ self.post_attention_layernorm = Phi3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
804
+
805
+ def forward(
806
+ self,
807
+ hidden_states: torch.Tensor,
808
+ attention_mask: Optional[torch.Tensor] = None,
809
+ position_ids: Optional[torch.LongTensor] = None,
810
+ past_key_value: Optional[Tuple[torch.Tensor]] = None,
811
+ output_attentions: Optional[bool] = False,
812
+ use_cache: Optional[bool] = False,
813
+ **kwargs,
814
+ ) -> Tuple[torch.FloatTensor, Optional[Tuple[torch.FloatTensor, torch.FloatTensor]]]:
815
+ if "padding_mask" in kwargs:
816
+ warnings.warn(
817
+ "Passing `padding_mask` is deprecated and will be removed in v4.37. Please make sure use `attention_mask` instead.`"
818
+ )
819
+ """
820
+ Args:
821
+ hidden_states (`torch.FloatTensor`):
822
+ input to the layer of shape `(batch, seq_len, embed_dim)`
823
+ attention_mask (`torch.FloatTensor`, *optional*): attention mask of size
824
+ `(batch, 1, tgt_len, src_len)` where padding elements are indicated by very large negative values.
825
+ position_ids (`torch.LongTensor` of shape `({0})`, *optional*):
826
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range
827
+ `[0, config.n_positions - 1]`. [What are position IDs?](../glossary#position-ids)
828
+ output_attentions (`bool`, *optional*):
829
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under
830
+ returned tensors for more detail.
831
+ use_cache (`bool`, *optional*):
832
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding
833
+ (see `past_key_values`).
834
+ past_key_value (`Tuple(torch.FloatTensor)`, *optional*): cached past key and value projection states
835
+ """
836
+
837
+ residual = hidden_states
838
+
839
+ hidden_states = self.input_layernorm(hidden_states)
840
+
841
+ # Self Attention
842
+ attn_outputs, self_attn_weights, present_key_value = self.self_attn(
843
+ hidden_states=hidden_states,
844
+ attention_mask=attention_mask,
845
+ position_ids=position_ids,
846
+ past_key_value=past_key_value,
847
+ output_attentions=output_attentions,
848
+ use_cache=use_cache,
849
+ )
850
+
851
+ hidden_states = residual + self.resid_attn_dropout(attn_outputs)
852
+
853
+ residual = hidden_states
854
+ hidden_states = self.post_attention_layernorm(hidden_states)
855
+ hidden_states = self.mlp(hidden_states)
856
+ hidden_states = residual + self.resid_mlp_dropout(hidden_states)
857
+
858
+ outputs = (hidden_states,)
859
+
860
+ if output_attentions:
861
+ outputs += (self_attn_weights,)
862
+
863
+ if use_cache:
864
+ outputs += (present_key_value,)
865
+
866
+ return outputs
867
+
868
+
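A structural sketch of the pre-norm residual layout implemented by `Phi3DecoderLayer.forward` above (attention and MLP replaced by identity functions, dropout omitted; illustrative only):

```python
import torch

def decoder_block(x, input_layernorm, self_attn, post_attention_layernorm, mlp):
    residual = x
    x = residual + self_attn(input_layernorm(x))           # attention on the normed input + residual
    residual = x
    return residual + mlp(post_attention_layernorm(x))     # MLP on the normed input + residual

identity = lambda t: t
print(decoder_block(torch.randn(1, 4, 8), identity, identity, identity, identity).shape)
# torch.Size([1, 4, 8])
```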
869
+ PHI3_START_DOCSTRING = r"""
870
+ This model inherits from [`PreTrainedModel`]. Check the superclass documentation for the generic methods the
871
+ library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads
872
+ etc.)
873
+
874
+ This model is also a PyTorch [torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module) subclass.
875
+ Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage
876
+ and behavior.
877
+
878
+ Parameters:
879
+ config ([`Phi3Config`]):
880
+ Model configuration class with all the parameters of the model. Initializing with a config file does not
881
+ load the weights associated with the model, only the configuration. Check out the
882
+ [`~PreTrainedModel.from_pretrained`] method to load the model weights.
883
+ """
884
+
885
+
886
+ @add_start_docstrings(
887
+ "The bare Phi-3 model outputting raw hidden-states without any specific head on top.",
888
+ PHI3_START_DOCSTRING,
889
+ )
890
+ class Phi3PreTrainedModel(PreTrainedModel):
891
+ config_class = Phi3Config
892
+ base_model_prefix = "model"
893
+ supports_gradient_checkpointing = True
894
+ _no_split_modules = ["Phi3DecoderLayer"]
895
+ _skip_keys_device_placement = "past_key_values"
896
+ _supports_flash_attn_2 = True
897
+ _supports_sdpa = False
898
+ _supports_cache_class = True
899
+
900
+ _version = "0.0.5"
901
+
902
+ def _init_weights(self, module):
903
+ std = self.config.initializer_range
904
+ if isinstance(module, nn.Linear):
905
+ module.weight.data.normal_(mean=0.0, std=std)
906
+ if module.bias is not None:
907
+ module.bias.data.zero_()
908
+ elif isinstance(module, nn.Embedding):
909
+ module.weight.data.normal_(mean=0.0, std=std)
910
+ if module.padding_idx is not None:
911
+ module.weight.data[module.padding_idx].zero_()
912
+
913
+
914
+ PHI3_INPUTS_DOCSTRING = r"""
915
+ Args:
916
+ input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
917
+ Indices of input sequence tokens in the vocabulary. Padding will be ignored by default should you provide
918
+ it.
919
+
920
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
921
+ [`PreTrainedTokenizer.__call__`] for details.
922
+
923
+ [What are input IDs?](../glossary#input-ids)
924
+ attention_mask (`torch.Tensor` of shape `(batch_size, sequence_length)`, *optional*):
925
+ Mask to avoid performing attention on padding token indices. Mask values selected in `[0, 1]`:
926
+
927
+ - 1 for tokens that are **not masked**,
928
+ - 0 for tokens that are **masked**.
929
+
930
+ [What are attention masks?](../glossary#attention-mask)
931
+
932
+ Indices can be obtained using [`AutoTokenizer`]. See [`PreTrainedTokenizer.encode`] and
933
+ [`PreTrainedTokenizer.__call__`] for details.
934
+
935
+ If `past_key_values` is used, optionally only the last `input_ids` have to be input (see
936
+ `past_key_values`).
937
+
938
+ If you want to change padding behavior, you should read [`modeling_opt._prepare_decoder_attention_mask`]
939
+ and modify to your needs. See diagram 1 in [the paper](https://arxiv.org/abs/1910.13461) for more
940
+ information on the default strategy.
941
+
942
+ - 1 indicates the head is **not masked**,
943
+ - 0 indicates the head is **masked**.
944
+ position_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
945
+ Indices of positions of each input sequence tokens in the position embeddings. Selected in the range `[0,
946
+ config.n_positions - 1]`.
947
+
948
+ [What are position IDs?](../glossary#position-ids)
949
+ past_key_values (`Cache` or `tuple(tuple(torch.FloatTensor))`, *optional*):
950
+ Pre-computed hidden-states (key and values in the self-attention blocks and in the cross-attention
951
+ blocks) that can be used to speed up sequential decoding. This typically consists of the `past_key_values`
952
+ returned by the model at a previous stage of decoding, when `use_cache=True` or `config.use_cache=True`.
953
+
954
+ Two formats are allowed:
955
+ - a [`~cache_utils.Cache`] instance;
956
+ - Tuple of `tuple(torch.FloatTensor)` of length `config.n_layers`, with each tuple having 2 tensors of
957
+ shape `(batch_size, num_heads, sequence_length, embed_size_per_head)`). This is also known as the legacy
958
+ cache format.
959
+
960
+ The model will output the same cache format that is fed as input. If no `past_key_values` are passed, the
961
+ legacy cache format will be returned.
962
+
963
+ If `past_key_values` are used, the user can optionally input only the last `input_ids` (those that don't
964
+ have their past key value states given to this model) of shape `(batch_size, 1)` instead of all `input_ids`
965
+ of shape `(batch_size, sequence_length)`.
966
+ inputs_embeds (`torch.FloatTensor` of shape `(batch_size, sequence_length, hidden_size)`, *optional*):
967
+ Optionally, instead of passing `input_ids` you can choose to directly pass an embedded representation. This
968
+ is useful if you want more control over how to convert `input_ids` indices into associated vectors than the
969
+ model's internal embedding lookup matrix.
970
+ use_cache (`bool`, *optional*):
971
+ If set to `True`, `past_key_values` key value states are returned and can be used to speed up decoding (see
972
+ `past_key_values`).
973
+ output_attentions (`bool`, *optional*):
974
+ Whether or not to return the attentions tensors of all attention layers. See `attentions` under returned
975
+ tensors for more detail.
976
+ output_hidden_states (`bool`, *optional*):
977
+ Whether or not to return the hidden states of all layers. See `hidden_states` under returned tensors for
978
+ more detail.
979
+ return_dict (`bool`, *optional*):
980
+ Whether or not to return a [`~utils.ModelOutput`] instead of a plain tuple.
981
+ """
982
+
983
+
984
+ @add_start_docstrings(
985
+ "The bare Phi-3 model outputting raw hidden-states without any specific head on top.",
986
+ PHI3_START_DOCSTRING,
987
+ )
988
+ class Phi3Model(Phi3PreTrainedModel):
989
+ """
990
+ Transformer decoder consisting of *config.num_hidden_layers* layers. Each layer is a [`Phi3DecoderLayer`]
991
+
992
+ Args:
993
+ config: Phi3Config
994
+ """
995
+
996
+ def __init__(self, config: Phi3Config):
997
+ super().__init__(config)
998
+ self.padding_idx = config.pad_token_id
999
+ self.vocab_size = config.vocab_size
1000
+
1001
+ self.embed_tokens = nn.Embedding(config.vocab_size, config.hidden_size, self.padding_idx)
1002
+ self.embed_dropout = nn.Dropout(config.embd_pdrop)
1003
+ self.layers = nn.ModuleList(
1004
+ [Phi3DecoderLayer(config, layer_idx) for layer_idx in range(config.num_hidden_layers)]
1005
+ )
1006
+ self._attn_implementation = config._attn_implementation
1007
+ self.norm = Phi3RMSNorm(config.hidden_size, eps=config.rms_norm_eps)
1008
+
1009
+ self.gradient_checkpointing = False
1010
+ # Initialize weights and apply final processing
1011
+ self.post_init()
1012
+
1013
+ def get_input_embeddings(self):
1014
+ return self.embed_tokens
1015
+
1016
+ def set_input_embeddings(self, value):
1017
+ self.embed_tokens = value
1018
+
1019
+ @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
1020
+ def forward(
1021
+ self,
1022
+ input_ids: torch.LongTensor = None,
1023
+ attention_mask: Optional[torch.Tensor] = None,
1024
+ position_ids: Optional[torch.LongTensor] = None,
1025
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1026
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1027
+ use_cache: Optional[bool] = None,
1028
+ output_attentions: Optional[bool] = None,
1029
+ output_hidden_states: Optional[bool] = None,
1030
+ return_dict: Optional[bool] = None,
1031
+ ) -> Union[Tuple, BaseModelOutputWithPast]:
1032
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1033
+ output_hidden_states = (
1034
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1035
+ )
1036
+ use_cache = use_cache if use_cache is not None else self.config.use_cache
1037
+
1038
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1039
+
1040
+ # retrieve input_ids and inputs_embeds
1041
+ if input_ids is not None and inputs_embeds is not None:
1042
+ raise ValueError("You cannot specify both input_ids and inputs_embeds at the same time")
1043
+ elif input_ids is not None:
1044
+ batch_size, seq_length = input_ids.shape[:2]
1045
+ elif inputs_embeds is not None:
1046
+ batch_size, seq_length = inputs_embeds.shape[:2]
1047
+ else:
1048
+ raise ValueError("You have to specify either input_ids or inputs_embeds")
1049
+
1050
+ past_key_values_length = 0
1051
+
1052
+ if self.gradient_checkpointing and self.training:
1053
+ if use_cache:
1054
+ logger.warning_once(
1055
+ "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..."
1056
+ )
1057
+ use_cache = False
1058
+
1059
+ if use_cache:
1060
+ use_legacy_cache = not isinstance(past_key_values, Cache)
1061
+ if use_legacy_cache:
1062
+ past_key_values = DynamicCache.from_legacy_cache(past_key_values)
1063
+ past_key_values_length = past_key_values.get_usable_length(seq_length)
1064
+
1065
+ if position_ids is None:
1066
+ device = input_ids.device if input_ids is not None else inputs_embeds.device
1067
+ position_ids = torch.arange(
1068
+ past_key_values_length, seq_length + past_key_values_length, dtype=torch.long, device=device
1069
+ )
1070
+ position_ids = position_ids.unsqueeze(0).view(-1, seq_length)
1071
+ else:
1072
+ position_ids = position_ids.view(-1, seq_length).long()
1073
+
1074
+ if inputs_embeds is None:
1075
+ inputs_embeds = self.embed_tokens(input_ids)
1076
+
1077
+ if attention_mask is not None and self._attn_implementation == "flash_attention_2" and use_cache:
1078
+ is_padding_right = attention_mask[:, -1].sum().item() != batch_size
1079
+ if is_padding_right:
1080
+ raise ValueError(
1081
+ "You are attempting to perform batched generation with padding_side='right'"
1082
+ " this may lead to unexpected behaviour for Flash Attention version of Phi3. Make sure to "
1083
+ " call `tokenizer.padding_side = 'left'` before tokenizing the input. "
1084
+ )
1085
+
1086
+ if self._attn_implementation == "flash_attention_2":
1087
+ # 2d mask is passed through the layers
1088
+ attention_mask = attention_mask if (attention_mask is not None and 0 in attention_mask) else None
1089
+ else:
1090
+ # 4d mask is passed through the layers
1091
+ attention_mask = _prepare_4d_causal_attention_mask(
1092
+ attention_mask,
1093
+ (batch_size, seq_length),
1094
+ inputs_embeds,
1095
+ past_key_values_length,
1096
+ sliding_window=self.config.sliding_window,
1097
+ )
1098
+
1099
+ hidden_states = inputs_embeds
1100
+
1101
+ # decoder layers
1102
+ all_hidden_states = () if output_hidden_states else None
1103
+ all_self_attns = () if output_attentions else None
1104
+ next_decoder_cache = None
1105
+
1106
+ for decoder_layer in self.layers:
1107
+ if output_hidden_states:
1108
+ all_hidden_states += (hidden_states,)
1109
+
1110
+ if self.gradient_checkpointing and self.training:
1111
+ layer_outputs = self._gradient_checkpointing_func(
1112
+ decoder_layer.__call__,
1113
+ hidden_states,
1114
+ attention_mask,
1115
+ position_ids,
1116
+ past_key_values,
1117
+ output_attentions,
1118
+ use_cache,
1119
+ )
1120
+ else:
1121
+ layer_outputs = decoder_layer(
1122
+ hidden_states,
1123
+ attention_mask=attention_mask,
1124
+ position_ids=position_ids,
1125
+ past_key_value=past_key_values,
1126
+ output_attentions=output_attentions,
1127
+ use_cache=use_cache,
1128
+ )
1129
+
1130
+ hidden_states = layer_outputs[0]
1131
+
1132
+ if use_cache:
1133
+ next_decoder_cache = layer_outputs[2 if output_attentions else 1]
1134
+
1135
+ if output_attentions:
1136
+ all_self_attns += (layer_outputs[1],)
1137
+
1138
+ hidden_states = self.norm(hidden_states)
1139
+
1140
+ # add hidden states from the last decoder layer
1141
+ if output_hidden_states:
1142
+ all_hidden_states += (hidden_states,)
1143
+
1144
+ next_cache = None
1145
+ if use_cache:
1146
+ next_cache = next_decoder_cache.to_legacy_cache() if use_legacy_cache else next_decoder_cache
1147
+ if not return_dict:
1148
+ return tuple(v for v in [hidden_states, next_cache, all_hidden_states, all_self_attns] if v is not None)
1149
+ return BaseModelOutputWithPast(
1150
+ last_hidden_state=hidden_states,
1151
+ past_key_values=next_cache,
1152
+ hidden_states=all_hidden_states,
1153
+ attentions=all_self_attns,
1154
+ )
1155
+
1156
+
1157
+ class Phi3ForCausalLM(Phi3PreTrainedModel):
1158
+ _tied_weights_keys = ["lm_head.weight"]
1159
+
1160
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.__init__ with Llama->Phi3
1161
+ def __init__(self, config):
1162
+ super().__init__(config)
1163
+ self.model = Phi3Model(config)
1164
+ self.vocab_size = config.vocab_size
1165
+ self.lm_head = nn.Linear(config.hidden_size, config.vocab_size, bias=False)
1166
+
1167
+ # Initialize weights and apply final processing
1168
+ self.post_init()
1169
+
1170
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.get_input_embeddings
1171
+ def get_input_embeddings(self):
1172
+ return self.model.embed_tokens
1173
+
1174
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.set_input_embeddings
1175
+ def set_input_embeddings(self, value):
1176
+ self.model.embed_tokens = value
1177
+
1178
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.get_output_embeddings
1179
+ def get_output_embeddings(self):
1180
+ return self.lm_head
1181
+
1182
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.set_output_embeddings
1183
+ def set_output_embeddings(self, new_embeddings):
1184
+ self.lm_head = new_embeddings
1185
+
1186
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.set_decoder
1187
+ def set_decoder(self, decoder):
1188
+ self.model = decoder
1189
+
1190
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM.get_decoder
1191
+ def get_decoder(self):
1192
+ return self.model
1193
+
1194
+ # Ignore copy
1195
+ @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
1196
+ @replace_return_docstrings(output_type=CausalLMOutputWithPast, config_class=_CONFIG_FOR_DOC)
1197
+ def forward(
1198
+ self,
1199
+ input_ids: torch.LongTensor = None,
1200
+ attention_mask: Optional[torch.Tensor] = None,
1201
+ position_ids: Optional[torch.LongTensor] = None,
1202
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1203
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1204
+ labels: Optional[torch.LongTensor] = None,
1205
+ use_cache: Optional[bool] = None,
1206
+ output_attentions: Optional[bool] = None,
1207
+ output_hidden_states: Optional[bool] = None,
1208
+ return_dict: Optional[bool] = None,
1209
+ ) -> Union[Tuple, CausalLMOutputWithPast]:
1210
+ r"""
1211
+ Args:
1212
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1213
+ Labels for computing the language modeling loss. Indices should either be in `[0, ...,
1214
+ config.vocab_size]` or -100 (see `input_ids` docstring). Tokens with indices set to `-100` are ignored
1215
+ (masked), the loss is only computed for the tokens with labels in `[0, ..., config.vocab_size]`.
1216
+
1217
+ Returns:
1218
+
1219
+ Example:
1220
+
1221
+ ```python
1222
+ >>> from transformers import AutoTokenizer, Phi3ForCausalLM
1223
+
1224
+ >>> model = Phi3ForCausalLM.from_pretrained("microsoft/phi-3-mini-4k-instruct")
1225
+ >>> tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-3-mini-4k-instruct")
1226
+
1227
+ >>> prompt = "This is an example script ."
1228
+ >>> inputs = tokenizer(prompt, return_tensors="pt")
1229
+
1230
+ >>> # Generate
1231
+ >>> generate_ids = model.generate(inputs.input_ids, max_length=30)
1232
+ >>> tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
1233
+ 'This is an example script .\n Certainly! Below is a sample script that demonstrates a simple task, such as calculating the sum'
1234
+ ```"""
1235
+
1236
+ output_attentions = output_attentions if output_attentions is not None else self.config.output_attentions
1237
+ output_hidden_states = (
1238
+ output_hidden_states if output_hidden_states is not None else self.config.output_hidden_states
1239
+ )
1240
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1241
+
1242
+ # decoder outputs consists of (dec_features, layer_state, dec_hidden, dec_attn)
1243
+ outputs = self.model(
1244
+ input_ids=input_ids,
1245
+ attention_mask=attention_mask,
1246
+ position_ids=position_ids,
1247
+ past_key_values=past_key_values,
1248
+ inputs_embeds=inputs_embeds,
1249
+ use_cache=use_cache,
1250
+ output_attentions=output_attentions,
1251
+ output_hidden_states=output_hidden_states,
1252
+ return_dict=return_dict,
1253
+ )
1254
+
1255
+ hidden_states = outputs[0]
1256
+ logits = self.lm_head(hidden_states)
1257
+ logits = logits.float()
1258
+
1259
+ loss = None
1260
+ if labels is not None:
1261
+ # Shift so that tokens < n predict n
1262
+ shift_logits = logits[..., :-1, :].contiguous()
1263
+ shift_labels = labels[..., 1:].contiguous()
1264
+ # Flatten the tokens
1265
+ loss_fct = CrossEntropyLoss()
1266
+ shift_logits = shift_logits.view(-1, self.config.vocab_size)
1267
+ shift_labels = shift_labels.view(-1)
1268
+ # Enable model parallelism
1269
+ shift_labels = shift_labels.to(shift_logits.device)
1270
+ loss = loss_fct(shift_logits, shift_labels)
1271
+
1272
+ if not return_dict:
1273
+ output = (logits,) + outputs[1:]
1274
+ return (loss,) + output if loss is not None else output
1275
+
1276
+ return CausalLMOutputWithPast(
1277
+ loss=loss,
1278
+ logits=logits,
1279
+ past_key_values=outputs.past_key_values,
1280
+ hidden_states=outputs.hidden_states,
1281
+ attentions=outputs.attentions,
1282
+ )
1283
+
1284
+ # Copied from transformers.models.persimmon.modeling_persimmon.PersimmonForCausalLM.prepare_inputs_for_generation
1285
+ def prepare_inputs_for_generation(
1286
+ self, input_ids, past_key_values=None, attention_mask=None, inputs_embeds=None, **kwargs
1287
+ ):
1288
+ # The first time the input length crosses the long/short RoPE factor switching point, force the cache to be recomputed.
1289
+ # This slows generation at that single token position, but is preferable to the failure that would otherwise occur.
1290
+ if past_key_values and self.config.rope_scaling and input_ids.shape[1] >= self.config.original_max_position_embeddings + 1:
1291
+ past_length = past_key_values.seen_tokens if isinstance(past_key_values, Cache) else past_key_values[0][0].shape[2]
1292
+ if past_length <= self.config.original_max_position_embeddings:
1293
+ past_key_values = None
1294
+
1295
+ if past_key_values is not None:
1296
+ if isinstance(past_key_values, Cache):
1297
+ cache_length = past_key_values.get_seq_length()
1298
+ past_length = past_key_values.seen_tokens
1299
+ max_cache_length = past_key_values.get_max_length()
1300
+ else:
1301
+ cache_length = past_length = past_key_values[0][0].shape[2]
1302
+ max_cache_length = None
1303
+
1304
+ # Keep only the unprocessed tokens:
1305
+ # 1 - If the length of the attention_mask exceeds the length of input_ids, then we are in a setting where
1306
+ # some of the inputs are exclusively passed as part of the cache (e.g. when passing input_embeds as
1307
+ # input)
1308
+ if attention_mask is not None and attention_mask.shape[1] > input_ids.shape[1]:
1309
+ input_ids = input_ids[:, -(attention_mask.shape[1] - past_length) :]
1310
+ # 2 - If the past_length is smaller than input_ids', then input_ids holds all input tokens. We can discard
1311
+ # input_ids based on the past_length.
1312
+ elif past_length < input_ids.shape[1]:
1313
+ input_ids = input_ids[:, past_length:]
1314
+ # 3 - Otherwise (past_length >= input_ids.shape[1]), let's assume input_ids only has unprocessed tokens.
1315
+
1316
+ # If we are about to go beyond the maximum cache length, we need to crop the input attention mask.
1317
+ if (
1318
+ max_cache_length is not None
1319
+ and attention_mask is not None
1320
+ and cache_length + input_ids.shape[1] > max_cache_length
1321
+ ):
1322
+ attention_mask = attention_mask[:, -max_cache_length:]
1323
+
1324
+ position_ids = kwargs.get("position_ids", None)
1325
+ if attention_mask is not None and position_ids is None:
1326
+ # create position_ids on the fly for batch generation
1327
+ position_ids = attention_mask.long().cumsum(-1) - 1
1328
+ position_ids.masked_fill_(attention_mask == 0, 1)
1329
+ if past_key_values:
1330
+ position_ids = position_ids[:, -input_ids.shape[1] :]
1331
+
1332
+ # if `inputs_embeds` are passed, we only want to use them in the 1st generation step
1333
+ if inputs_embeds is not None and past_key_values is None:
1334
+ model_inputs = {"inputs_embeds": inputs_embeds}
1335
+ else:
1336
+ model_inputs = {"input_ids": input_ids}
1337
+
1338
+ model_inputs.update(
1339
+ {
1340
+ "position_ids": position_ids,
1341
+ "past_key_values": past_key_values,
1342
+ "use_cache": kwargs.get("use_cache"),
1343
+ "attention_mask": attention_mask,
1344
+ }
1345
+ )
1346
+ return model_inputs
1347
+
1348
+ @staticmethod
1349
+ # Copied from transformers.models.llama.modeling_llama.LlamaForCausalLM._reorder_cache
1350
+ def _reorder_cache(past_key_values, beam_idx):
1351
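+ # Reorder each layer's cached key/value states so they follow the beams selected at this beam-search step.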
+ reordered_past = ()
1352
+ for layer_past in past_key_values:
1353
+ reordered_past += (
1354
+ tuple(past_state.index_select(0, beam_idx.to(past_state.device)) for past_state in layer_past),
1355
+ )
1356
+ return reordered_past
1357
+
1358
+
1359
+ @add_start_docstrings(
1360
+ """
1361
+ The [`Phi3Model`] with a sequence classification head on top (linear layer).
1362
+
1363
+ [`Phi3ForSequenceClassification`] uses the last token in order to do the classification, as other causal models
1364
+ (e.g. GPT-2) do.
1365
+
1366
+ Since it does classification on the last token, it requires to know the position of the last token. If a
1367
+ `pad_token_id` is defined in the configuration, it finds the last token that is not a padding token in each row. If
1368
+ no `pad_token_id` is defined, it simply takes the last value in each row of the batch. Since it cannot guess the
1369
+ padding tokens when `inputs_embeds` are passed instead of `input_ids`, it does the same (take the last value in
1370
+ each row of the batch).
1371
+ """,
1372
+ PHI3_START_DOCSTRING,
1373
+ )
1374
+ # Copied from transformers.models.llama.modeling_llama.LlamaForSequenceClassification with Llama->Phi3, LLAMA->PHI3, self.transformer->self.model, transformer_outputs->model_outputs
1375
+ class Phi3ForSequenceClassification(Phi3PreTrainedModel):
1376
+ def __init__(self, config):
1377
+ super().__init__(config)
1378
+ self.num_labels = config.num_labels
1379
+ self.model = Phi3Model(config)
1380
+ self.score = nn.Linear(config.hidden_size, self.num_labels, bias=False)
1381
+
1382
+ # Initialize weights and apply final processing
1383
+ self.post_init()
1384
+
1385
+ def get_input_embeddings(self):
1386
+ return self.model.embed_tokens
1387
+
1388
+ def set_input_embeddings(self, value):
1389
+ self.model.embed_tokens = value
1390
+
1391
+ @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
1392
+ def forward(
1393
+ self,
1394
+ input_ids: torch.LongTensor = None,
1395
+ attention_mask: Optional[torch.Tensor] = None,
1396
+ position_ids: Optional[torch.LongTensor] = None,
1397
+ past_key_values: Optional[List[torch.FloatTensor]] = None,
1398
+ inputs_embeds: Optional[torch.FloatTensor] = None,
1399
+ labels: Optional[torch.LongTensor] = None,
1400
+ use_cache: Optional[bool] = None,
1401
+ output_attentions: Optional[bool] = None,
1402
+ output_hidden_states: Optional[bool] = None,
1403
+ return_dict: Optional[bool] = None,
1404
+ ) -> Union[Tuple, SequenceClassifierOutputWithPast]:
1405
+ r"""
1406
+ labels (`torch.LongTensor` of shape `(batch_size,)`, *optional*):
1407
+ Labels for computing the sequence classification/regression loss. Indices should be in `[0, ...,
1408
+ config.num_labels - 1]`. If `config.num_labels == 1` a regression loss is computed (Mean-Square loss), If
1409
+ `config.num_labels > 1` a classification loss is computed (Cross-Entropy).
1410
+ """
1411
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1412
+
1413
+ model_outputs = self.model(
1414
+ input_ids,
1415
+ attention_mask=attention_mask,
1416
+ position_ids=position_ids,
1417
+ past_key_values=past_key_values,
1418
+ inputs_embeds=inputs_embeds,
1419
+ use_cache=use_cache,
1420
+ output_attentions=output_attentions,
1421
+ output_hidden_states=output_hidden_states,
1422
+ return_dict=return_dict,
1423
+ )
1424
+ hidden_states = model_outputs[0]
1425
+ logits = self.score(hidden_states)
1426
+
1427
+ if input_ids is not None:
1428
+ batch_size = input_ids.shape[0]
1429
+ else:
1430
+ batch_size = inputs_embeds.shape[0]
1431
+
1432
+ if self.config.pad_token_id is None and batch_size != 1:
1433
+ raise ValueError("Cannot handle batch sizes > 1 if no padding token is defined.")
1434
+ if self.config.pad_token_id is None:
1435
+ sequence_lengths = -1
1436
+ else:
1437
+ if input_ids is not None:
1438
+ # if no pad token found, use modulo instead of reverse indexing for ONNX compatibility
1439
+ sequence_lengths = torch.eq(input_ids, self.config.pad_token_id).int().argmax(-1) - 1
1440
+ sequence_lengths = sequence_lengths % input_ids.shape[-1]
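+ # (when no pad token is present, argmax returns 0, so (0 - 1) % seq_len points at the last position)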
1441
+ sequence_lengths = sequence_lengths.to(logits.device)
1442
+ else:
1443
+ sequence_lengths = -1
1444
+
1445
+ pooled_logits = logits[torch.arange(batch_size, device=logits.device), sequence_lengths]
1446
+
1447
+ loss = None
1448
+ if labels is not None:
1449
+ labels = labels.to(logits.device)
1450
+ if self.config.problem_type is None:
1451
+ if self.num_labels == 1:
1452
+ self.config.problem_type = "regression"
1453
+ elif self.num_labels > 1 and (labels.dtype == torch.long or labels.dtype == torch.int):
1454
+ self.config.problem_type = "single_label_classification"
1455
+ else:
1456
+ self.config.problem_type = "multi_label_classification"
1457
+
1458
+ if self.config.problem_type == "regression":
1459
+ loss_fct = MSELoss()
1460
+ if self.num_labels == 1:
1461
+ loss = loss_fct(pooled_logits.squeeze(), labels.squeeze())
1462
+ else:
1463
+ loss = loss_fct(pooled_logits, labels)
1464
+ elif self.config.problem_type == "single_label_classification":
1465
+ loss_fct = CrossEntropyLoss()
1466
+ loss = loss_fct(pooled_logits.view(-1, self.num_labels), labels.view(-1))
1467
+ elif self.config.problem_type == "multi_label_classification":
1468
+ loss_fct = BCEWithLogitsLoss()
1469
+ loss = loss_fct(pooled_logits, labels)
1470
+ if not return_dict:
1471
+ output = (pooled_logits,) + model_outputs[1:]
1472
+ return ((loss,) + output) if loss is not None else output
1473
+
1474
+ return SequenceClassifierOutputWithPast(
1475
+ loss=loss,
1476
+ logits=pooled_logits,
1477
+ past_key_values=model_outputs.past_key_values,
1478
+ hidden_states=model_outputs.hidden_states,
1479
+ attentions=model_outputs.attentions,
1480
+ )
1481
+
1482
+
1483
+ @add_start_docstrings(
1484
+ """
1485
+ [`Phi3Model`] with a token classification head on top (a linear layer on top of the hidden-states output) e.g. for
1486
+ Named-Entity-Recognition (NER) tasks.
1487
+ """,
1488
+ PHI3_START_DOCSTRING,
1489
+ )
1490
+ # Copied from transformers.models.mpt.modeling_mpt.MptForTokenClassification with Mpt->Phi3,MPT->PHI3,self.transformer->self.model,transformer_outputs->model_outputs
1491
+ class Phi3ForTokenClassification(Phi3PreTrainedModel):
1492
+ def __init__(self, config: Phi3Config):
1493
+ super().__init__(config)
1494
+ self.num_labels = config.num_labels
1495
+
1496
+ self.model = Phi3Model(config)
1497
+ if hasattr(config, "classifier_dropout") and config.classifier_dropout is not None:
1498
+ classifier_dropout = config.classifier_dropout
1499
+ elif hasattr(config, "hidden_dropout") and config.hidden_dropout is not None:
1500
+ classifier_dropout = config.hidden_dropout
1501
+ else:
1502
+ classifier_dropout = 0.1
1503
+ self.dropout = nn.Dropout(classifier_dropout)
1504
+ self.classifier = nn.Linear(config.hidden_size, config.num_labels)
1505
+
1506
+ # Initialize weights and apply final processing
1507
+ self.post_init()
1508
+
1509
+ @add_start_docstrings_to_model_forward(PHI3_INPUTS_DOCSTRING)
1510
+ @add_code_sample_docstrings(
1511
+ checkpoint=_CHECKPOINT_FOR_DOC,
1512
+ output_type=TokenClassifierOutput,
1513
+ config_class=_CONFIG_FOR_DOC,
1514
+ )
1515
+ def forward(
1516
+ self,
1517
+ input_ids: Optional[torch.LongTensor] = None,
1518
+ past_key_values: Optional[Tuple[Tuple[torch.Tensor, torch.Tensor], ...]] = None,
1519
+ attention_mask: Optional[torch.Tensor] = None,
1520
+ inputs_embeds: Optional[torch.Tensor] = None,
1521
+ labels: Optional[torch.Tensor] = None,
1522
+ use_cache: Optional[bool] = None,
1523
+ output_attentions: Optional[bool] = None,
1524
+ output_hidden_states: Optional[bool] = None,
1525
+ return_dict: Optional[bool] = None,
1526
+ **deprecated_arguments,
1527
+ ) -> Union[Tuple[torch.Tensor], TokenClassifierOutput]:
1528
+ r"""
1529
+ labels (`torch.LongTensor` of shape `(batch_size, sequence_length)`, *optional*):
1530
+ Labels for computing the token classification loss. Indices should be in `[0, ...,
1531
+ config.num_labels - 1]`. A Cross-Entropy loss is computed over all tokens; tokens with the
1532
+ index `-100` are ignored (masked).
1533
+ """
1534
+ return_dict = return_dict if return_dict is not None else self.config.use_return_dict
1535
+
1536
+ model_outputs = self.model(
1537
+ input_ids,
1538
+ past_key_values=past_key_values,
1539
+ attention_mask=attention_mask,
1540
+ inputs_embeds=inputs_embeds,
1541
+ use_cache=use_cache,
1542
+ output_attentions=output_attentions,
1543
+ output_hidden_states=output_hidden_states,
1544
+ return_dict=return_dict,
1545
+ )
1546
+
1547
+ hidden_states = model_outputs[0]
1548
+ hidden_states = self.dropout(hidden_states)
1549
+ logits = self.classifier(hidden_states)
1550
+
1551
+ loss = None
1552
+ if labels is not None:
1553
+ # move labels to correct device to enable model parallelism
1554
+ labels = labels.to(logits.device)
1555
+ batch_size, seq_length = labels.shape
1556
+ loss_fct = CrossEntropyLoss()
1557
+ loss = loss_fct(
1558
+ logits.view(batch_size * seq_length, self.num_labels), labels.view(batch_size * seq_length)
1559
+ )
1560
+
1561
+ if not return_dict:
1562
+ output = (logits,) + model_outputs[2:]
1563
+ return ((loss,) + output) if loss is not None else output
1564
+
1565
+ return TokenClassifierOutput(
1566
+ loss=loss,
1567
+ logits=logits,
1568
+ hidden_states=model_outputs.hidden_states,
1569
+ attentions=model_outputs.attentions,
1570
+ )
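The flash-attention path in `Phi3Model.forward` above rejects right-padded batches during generation. A minimal, hedged sketch of a caller that satisfies this constraint follows; the checkpoint id, prompts, and generation settings are illustrative assumptions, not part of this upload.

```python
# Sketch only: batched generation with padding_side='left', as required by the
# flash_attention_2 check in Phi3Model.forward above. The checkpoint id is an
# assumption; point it at the repository this modeling file ships with.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "microsoft/Phi-3.5-mini-instruct"  # assumed checkpoint id

tokenizer = AutoTokenizer.from_pretrained(checkpoint, trust_remote_code=True)
tokenizer.padding_side = "left"  # right padding raises the ValueError shown above
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # requires the flash-attn package and a supported GPU
    device_map="auto",
    trust_remote_code=True,
)

prompts = [
    "Write a haiku about key-value caches.",
    "Explain rotary position embeddings in one sentence.",
]
batch = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)
with torch.no_grad():
    out = model.generate(**batch, max_new_tokens=64)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```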
sample_finetune.py ADDED
@@ -0,0 +1,214 @@
1
+ import sys
2
+ import logging
3
+
4
+ import datasets
5
+ from datasets import load_dataset
6
+ from peft import LoraConfig
7
+ import torch
8
+ import transformers
9
+ from trl import SFTTrainer
10
+ from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments, BitsAndBytesConfig
11
+
12
+ """
13
+ A simple example of using SFTTrainer and Accelerate to finetune Phi-3 models. For
14
+ a more advanced example, please follow HF alignment-handbook/scripts/run_sft.py.
15
+ This example uses DeepSpeed ZeRO3 offload to reduce memory usage. The
16
+ script can be run on V100 or later generation GPUs. Here are some suggestions for
17
+ further reducing memory consumption:
18
+ - reduce batch size
19
+ - decrease lora dimension
20
+ - restrict lora target modules
21
+ Please follow these steps to run the script:
22
+ 1. Install dependencies:
23
+ conda install -c conda-forge accelerate
24
+ pip3 install -i https://pypi.org/simple/ bitsandbytes
25
+ pip3 install peft transformers trl datasets
26
+ pip3 install deepspeed
27
+ 2. Setup accelerate and deepspeed config based on the machine used:
28
+ accelerate config
29
+ Here is a sample config for deepspeed zero3:
30
+ compute_environment: LOCAL_MACHINE
31
+ debug: false
32
+ deepspeed_config:
33
+ gradient_accumulation_steps: 1
34
+ offload_optimizer_device: none
35
+ offload_param_device: none
36
+ zero3_init_flag: true
37
+ zero3_save_16bit_model: true
38
+ zero_stage: 3
39
+ distributed_type: DEEPSPEED
40
+ downcast_bf16: 'no'
41
+ enable_cpu_affinity: false
42
+ machine_rank: 0
43
+ main_training_function: main
44
+ mixed_precision: bf16
45
+ num_machines: 1
46
+ num_processes: 4
47
+ rdzv_backend: static
48
+ same_network: true
49
+ tpu_env: []
50
+ tpu_use_cluster: false
51
+ tpu_use_sudo: false
52
+ use_cpu: false
53
+ 3. check accelerate config:
54
+ accelerate env
55
+ 4. Run the code:
56
+ accelerate launch sample_finetune.py
57
+ """
58
+
59
+ logger = logging.getLogger(__name__)
60
+
61
+
62
+ ###################
63
+ # Hyper-parameters
64
+ ###################
65
+ training_config = {
66
+ "bf16": True,
67
+ "do_eval": False,
68
+ "learning_rate": 5.0e-06,
69
+ "log_level": "info",
70
+ "logging_steps": 20,
71
+ "logging_strategy": "steps",
72
+ "lr_scheduler_type": "cosine",
73
+ "num_train_epochs": 1,
74
+ "max_steps": -1,
75
+ "output_dir": "./checkpoint_dir",
76
+ "overwrite_output_dir": True,
77
+ "per_device_eval_batch_size": 4,
78
+ "per_device_train_batch_size": 4,
79
+ "remove_unused_columns": True,
80
+ "save_steps": 100,
81
+ "save_total_limit": 1,
82
+ "seed": 0,
83
+ "gradient_checkpointing": True,
84
+ "gradient_checkpointing_kwargs":{"use_reentrant": False},
85
+ "gradient_accumulation_steps": 1,
86
+ "warmup_ratio": 0.2,
87
+ }
88
+
89
+ peft_config = {
90
+ "r": 16,
91
+ "lora_alpha": 32,
92
+ "lora_dropout": 0.05,
93
+ "bias": "none",
94
+ "task_type": "CAUSAL_LM",
95
+ "target_modules": "all-linear",
96
+ "modules_to_save": None,
97
+ }
98
+ train_conf = TrainingArguments(**training_config)
99
+ peft_conf = LoraConfig(**peft_config)
100
+
101
+
102
+ ###############
103
+ # Setup logging
104
+ ###############
105
+ logging.basicConfig(
106
+ format="%(asctime)s - %(levelname)s - %(name)s - %(message)s",
107
+ datefmt="%Y-%m-%d %H:%M:%S",
108
+ handlers=[logging.StreamHandler(sys.stdout)],
109
+ )
110
+ log_level = train_conf.get_process_log_level()
111
+ logger.setLevel(log_level)
112
+ datasets.utils.logging.set_verbosity(log_level)
113
+ transformers.utils.logging.set_verbosity(log_level)
114
+ transformers.utils.logging.enable_default_handler()
115
+ transformers.utils.logging.enable_explicit_format()
116
+
117
+ # Log on each process a small summary
118
+ logger.warning(
119
+ f"Process rank: {train_conf.local_rank}, device: {train_conf.device}, n_gpu: {train_conf.n_gpu}"
120
+ + f" distributed training: {bool(train_conf.local_rank != -1)}, 16-bits training: {train_conf.fp16}"
121
+ )
122
+ logger.info(f"Training/evaluation parameters {train_conf}")
123
+ logger.info(f"PEFT parameters {peft_conf}")
124
+
125
+
126
+ ################
127
+ # Model Loading
128
+ ################
129
+
130
+ checkpoint_path = "microsoft/Phi-3.5-mini-instruct"
131
+ model_kwargs = dict(
132
+ use_cache=False,
133
+ trust_remote_code=True,
134
+ attn_implementation="flash_attention_2",  # loading the model with flash-attention support
135
+ torch_dtype=torch.bfloat16,
136
+ device_map=None
137
+ )
138
+ model = AutoModelForCausalLM.from_pretrained(checkpoint_path, **model_kwargs)
139
+ tokenizer = AutoTokenizer.from_pretrained(checkpoint_path)
140
+ tokenizer.model_max_length = 2048
141
+ tokenizer.pad_token = tokenizer.unk_token # use unk rather than eos token to prevent endless generation
142
+ tokenizer.pad_token_id = tokenizer.convert_tokens_to_ids(tokenizer.pad_token)
143
+ tokenizer.padding_side = 'right'
144
+
145
+
146
+ ##################
147
+ # Data Processing
148
+ ##################
149
+ def apply_chat_template(
150
+ example,
151
+ tokenizer,
152
+ ):
153
+ messages = example["messages"]
154
+ example["text"] = tokenizer.apply_chat_template(
155
+ messages, tokenize=False, add_generation_prompt=False)
156
+ return example
157
+
158
+ raw_dataset = load_dataset("HuggingFaceH4/ultrachat_200k")
159
+ train_dataset = raw_dataset["train_sft"]
160
+ test_dataset = raw_dataset["test_sft"]
161
+ column_names = list(train_dataset.features)
162
+
163
+ processed_train_dataset = train_dataset.map(
164
+ apply_chat_template,
165
+ fn_kwargs={"tokenizer": tokenizer},
166
+ num_proc=10,
167
+ remove_columns=column_names,
168
+ desc="Applying chat template to train_sft",
169
+ )
170
+
171
+ processed_test_dataset = test_dataset.map(
172
+ apply_chat_template,
173
+ fn_kwargs={"tokenizer": tokenizer},
174
+ num_proc=10,
175
+ remove_columns=column_names,
176
+ desc="Applying chat template to test_sft",
177
+ )
178
+
179
+
180
+ ###########
181
+ # Training
182
+ ###########
183
+ trainer = SFTTrainer(
184
+ model=model,
185
+ args=train_conf,
186
+ peft_config=peft_conf,
187
+ train_dataset=processed_train_dataset,
188
+ eval_dataset=processed_test_dataset,
189
+ max_seq_length=2048,
190
+ dataset_text_field="text",
191
+ tokenizer=tokenizer,
192
+ packing=True
193
+ )
194
+ train_result = trainer.train()
195
+ metrics = train_result.metrics
196
+ trainer.log_metrics("train", metrics)
197
+ trainer.save_metrics("train", metrics)
198
+ trainer.save_state()
199
+
200
+
201
+ #############
202
+ # Evaluation
203
+ #############
204
+ tokenizer.padding_side = 'left'
205
+ metrics = trainer.evaluate()
206
+ metrics["eval_samples"] = len(processed_test_dataset)
207
+ trainer.log_metrics("eval", metrics)
208
+ trainer.save_metrics("eval", metrics)
209
+
210
+
211
+ # ############
212
+ # # Save model
213
+ # ############
214
+ trainer.save_model(train_conf.output_dir)
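The script above trains and saves a LoRA adapter but does not show how to use it afterwards. Below is a hedged sketch of reloading the adapter for inference; the base checkpoint, the `./checkpoint_dir` path (mirroring `output_dir` above), and the prompt are illustrative assumptions, not part of sample_finetune.py.

```python
# Sketch only: reload the LoRA adapter saved by sample_finetune.py and run a
# quick generation. Paths and generation settings are illustrative assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_checkpoint = "microsoft/Phi-3.5-mini-instruct"  # same base model as in the script above
adapter_dir = "./checkpoint_dir"                     # train_conf.output_dir from the script above

tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
base_model = AutoModelForCausalLM.from_pretrained(
    base_checkpoint, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, adapter_dir)  # attach the trained LoRA weights
model.eval()

messages = [{"role": "user", "content": "Summarize what LoRA finetuning changes in a model."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```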
special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|im_start|>",
4
+ "lstrip": true,
5
+ "normalized": false,
6
+ "rstrip": true,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|im_end|>",
11
+ "lstrip": true,
12
+ "normalized": false,
13
+ "rstrip": true,
14
+ "single_word": false
15
+ },
16
+ "pad_token": {
17
+ "content": "<|im_sep|>",
18
+ "lstrip": true,
19
+ "normalized": false,
20
+ "rstrip": true,
21
+ "single_word": false
22
+ },
23
+ "unk_token": "<|endoftext|>"
24
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,783 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "added_tokens_decoder": {
4
+ "100256": {
5
+ "content": "<|dummy_0|>",
6
+ "lstrip": true,
7
+ "normalized": false,
8
+ "rstrip": true,
9
+ "single_word": false,
10
+ "special": true
11
+ },
12
+ "100257": {
13
+ "content": "<|endoftext|>",
14
+ "lstrip": true,
15
+ "normalized": false,
16
+ "rstrip": true,
17
+ "single_word": false,
18
+ "special": true
19
+ },
20
+ "100258": {
21
+ "content": "<|fim_prefix|>",
22
+ "lstrip": true,
23
+ "normalized": false,
24
+ "rstrip": true,
25
+ "single_word": false,
26
+ "special": true
27
+ },
28
+ "100259": {
29
+ "content": "<|fim_middle|>",
30
+ "lstrip": true,
31
+ "normalized": false,
32
+ "rstrip": true,
33
+ "single_word": false,
34
+ "special": true
35
+ },
36
+ "100260": {
37
+ "content": "<|fim_suffix|>",
38
+ "lstrip": true,
39
+ "normalized": false,
40
+ "rstrip": true,
41
+ "single_word": false,
42
+ "special": true
43
+ },
44
+ "100261": {
45
+ "content": "<|dummy_1|>",
46
+ "lstrip": true,
47
+ "normalized": false,
48
+ "rstrip": true,
49
+ "single_word": false,
50
+ "special": true
51
+ },
52
+ "100262": {
53
+ "content": "<|dummy_2|>",
54
+ "lstrip": true,
55
+ "normalized": false,
56
+ "rstrip": true,
57
+ "single_word": false,
58
+ "special": true
59
+ },
60
+ "100263": {
61
+ "content": "<|dummy_3|>",
62
+ "lstrip": true,
63
+ "normalized": false,
64
+ "rstrip": true,
65
+ "single_word": false,
66
+ "special": true
67
+ },
68
+ "100264": {
69
+ "content": "<|im_start|>",
70
+ "lstrip": true,
71
+ "normalized": false,
72
+ "rstrip": true,
73
+ "single_word": false,
74
+ "special": true
75
+ },
76
+ "100265": {
77
+ "content": "<|im_end|>",
78
+ "lstrip": true,
79
+ "normalized": false,
80
+ "rstrip": true,
81
+ "single_word": false,
82
+ "special": true
83
+ },
84
+ "100266": {
85
+ "content": "<|im_sep|>",
86
+ "lstrip": true,
87
+ "normalized": false,
88
+ "rstrip": true,
89
+ "single_word": false,
90
+ "special": true
91
+ },
92
+ "100267": {
93
+ "content": "<|dummy_4|>",
94
+ "lstrip": true,
95
+ "normalized": false,
96
+ "rstrip": true,
97
+ "single_word": false,
98
+ "special": true
99
+ },
100
+ "100268": {
101
+ "content": "<|dummy_5|>",
102
+ "lstrip": true,
103
+ "normalized": false,
104
+ "rstrip": true,
105
+ "single_word": false,
106
+ "special": true
107
+ },
108
+ "100269": {
109
+ "content": "<|dummy_6|>",
110
+ "lstrip": true,
111
+ "normalized": false,
112
+ "rstrip": true,
113
+ "single_word": false,
114
+ "special": true
115
+ },
116
+ "100270": {
117
+ "content": "<|dummy_7|>",
118
+ "lstrip": true,
119
+ "normalized": false,
120
+ "rstrip": true,
121
+ "single_word": false,
122
+ "special": true
123
+ },
124
+ "100271": {
125
+ "content": "<|dummy_8|>",
126
+ "lstrip": true,
127
+ "normalized": false,
128
+ "rstrip": true,
129
+ "single_word": false,
130
+ "special": true
131
+ },
132
+ "100272": {
133
+ "content": "<|dummy_9|>",
134
+ "lstrip": true,
135
+ "normalized": false,
136
+ "rstrip": true,
137
+ "single_word": false,
138
+ "special": true
139
+ },
140
+ "100273": {
141
+ "content": "<|dummy_10|>",
142
+ "lstrip": true,
143
+ "normalized": false,
144
+ "rstrip": true,
145
+ "single_word": false,
146
+ "special": true
147
+ },
148
+ "100274": {
149
+ "content": "<|dummy_11|>",
150
+ "lstrip": true,
151
+ "normalized": false,
152
+ "rstrip": true,
153
+ "single_word": false,
154
+ "special": true
155
+ },
156
+ "100275": {
157
+ "content": "<|dummy_12|>",
158
+ "lstrip": true,
159
+ "normalized": false,
160
+ "rstrip": true,
161
+ "single_word": false,
162
+ "special": true
163
+ },
164
+ "100276": {
165
+ "content": "<|endofprompt|>",
166
+ "lstrip": true,
167
+ "normalized": false,
168
+ "rstrip": true,
169
+ "single_word": false,
170
+ "special": true
171
+ },
172
+ "100277": {
173
+ "content": "<|dummy_13|>",
174
+ "lstrip": true,
175
+ "normalized": false,
176
+ "rstrip": true,
177
+ "single_word": false,
178
+ "special": true
179
+ },
180
+ "100278": {
181
+ "content": "<|dummy_14|>",
182
+ "lstrip": true,
183
+ "normalized": false,
184
+ "rstrip": true,
185
+ "single_word": false,
186
+ "special": true
187
+ },
188
+ "100279": {
189
+ "content": "<|dummy_15|>",
190
+ "lstrip": true,
191
+ "normalized": false,
192
+ "rstrip": true,
193
+ "single_word": false,
194
+ "special": true
195
+ },
196
+ "100280": {
197
+ "content": "<|dummy_16|>",
198
+ "lstrip": true,
199
+ "normalized": false,
200
+ "rstrip": true,
201
+ "single_word": false,
202
+ "special": true
203
+ },
204
+ "100281": {
205
+ "content": "<|dummy_17|>",
206
+ "lstrip": true,
207
+ "normalized": false,
208
+ "rstrip": true,
209
+ "single_word": false,
210
+ "special": true
211
+ },
212
+ "100282": {
213
+ "content": "<|dummy_18|>",
214
+ "lstrip": true,
215
+ "normalized": false,
216
+ "rstrip": true,
217
+ "single_word": false,
218
+ "special": true
219
+ },
220
+ "100283": {
221
+ "content": "<|dummy_19|>",
222
+ "lstrip": true,
223
+ "normalized": false,
224
+ "rstrip": true,
225
+ "single_word": false,
226
+ "special": true
227
+ },
228
+ "100284": {
229
+ "content": "<|dummy_20|>",
230
+ "lstrip": true,
231
+ "normalized": false,
232
+ "rstrip": true,
233
+ "single_word": false,
234
+ "special": true
235
+ },
236
+ "100285": {
237
+ "content": "<|dummy_21|>",
238
+ "lstrip": true,
239
+ "normalized": false,
240
+ "rstrip": true,
241
+ "single_word": false,
242
+ "special": true
243
+ },
244
+ "100286": {
245
+ "content": "<|dummy_22|>",
246
+ "lstrip": true,
247
+ "normalized": false,
248
+ "rstrip": true,
249
+ "single_word": false,
250
+ "special": true
251
+ },
252
+ "100287": {
253
+ "content": "<|dummy_23|>",
254
+ "lstrip": true,
255
+ "normalized": false,
256
+ "rstrip": true,
257
+ "single_word": false,
258
+ "special": true
259
+ },
260
+ "100288": {
261
+ "content": "<|dummy_24|>",
262
+ "lstrip": true,
263
+ "normalized": false,
264
+ "rstrip": true,
265
+ "single_word": false,
266
+ "special": true
267
+ },
268
+ "100289": {
269
+ "content": "<|dummy_25|>",
270
+ "lstrip": true,
271
+ "normalized": false,
272
+ "rstrip": true,
273
+ "single_word": false,
274
+ "special": true
275
+ },
276
+ "100290": {
277
+ "content": "<|dummy_26|>",
278
+ "lstrip": true,
279
+ "normalized": false,
280
+ "rstrip": true,
281
+ "single_word": false,
282
+ "special": true
283
+ },
284
+ "100291": {
285
+ "content": "<|dummy_27|>",
286
+ "lstrip": true,
287
+ "normalized": false,
288
+ "rstrip": true,
289
+ "single_word": false,
290
+ "special": true
291
+ },
292
+ "100292": {
293
+ "content": "<|dummy_28|>",
294
+ "lstrip": true,
295
+ "normalized": false,
296
+ "rstrip": true,
297
+ "single_word": false,
298
+ "special": true
299
+ },
300
+ "100293": {
301
+ "content": "<|dummy_29|>",
302
+ "lstrip": true,
303
+ "normalized": false,
304
+ "rstrip": true,
305
+ "single_word": false,
306
+ "special": true
307
+ },
308
+ "100294": {
309
+ "content": "<|dummy_30|>",
310
+ "lstrip": true,
311
+ "normalized": false,
312
+ "rstrip": true,
313
+ "single_word": false,
314
+ "special": true
315
+ },
316
+ "100295": {
317
+ "content": "<|dummy_31|>",
318
+ "lstrip": true,
319
+ "normalized": false,
320
+ "rstrip": true,
321
+ "single_word": false,
322
+ "special": true
323
+ },
324
+ "100296": {
325
+ "content": "<|dummy_32|>",
326
+ "lstrip": true,
327
+ "normalized": false,
328
+ "rstrip": true,
329
+ "single_word": false,
330
+ "special": true
331
+ },
332
+ "100297": {
333
+ "content": "<|dummy_33|>",
334
+ "lstrip": true,
335
+ "normalized": false,
336
+ "rstrip": true,
337
+ "single_word": false,
338
+ "special": true
339
+ },
340
+ "100298": {
341
+ "content": "<|dummy_34|>",
342
+ "lstrip": true,
343
+ "normalized": false,
344
+ "rstrip": true,
345
+ "single_word": false,
346
+ "special": true
347
+ },
348
+ "100299": {
349
+ "content": "<|dummy_35|>",
350
+ "lstrip": true,
351
+ "normalized": false,
352
+ "rstrip": true,
353
+ "single_word": false,
354
+ "special": true
355
+ },
356
+ "100300": {
357
+ "content": "<|dummy_36|>",
358
+ "lstrip": true,
359
+ "normalized": false,
360
+ "rstrip": true,
361
+ "single_word": false,
362
+ "special": true
363
+ },
364
+ "100301": {
365
+ "content": "<|dummy_37|>",
366
+ "lstrip": true,
367
+ "normalized": false,
368
+ "rstrip": true,
369
+ "single_word": false,
370
+ "special": true
371
+ },
372
+ "100302": {
373
+ "content": "<|dummy_38|>",
374
+ "lstrip": true,
375
+ "normalized": false,
376
+ "rstrip": true,
377
+ "single_word": false,
378
+ "special": true
379
+ },
380
+ "100303": {
381
+ "content": "<|dummy_39|>",
382
+ "lstrip": true,
383
+ "normalized": false,
384
+ "rstrip": true,
385
+ "single_word": false,
386
+ "special": true
387
+ },
388
+ "100304": {
389
+ "content": "<|dummy_40|>",
390
+ "lstrip": true,
391
+ "normalized": false,
392
+ "rstrip": true,
393
+ "single_word": false,
394
+ "special": true
395
+ },
396
+ "100305": {
397
+ "content": "<|dummy_41|>",
398
+ "lstrip": true,
399
+ "normalized": false,
400
+ "rstrip": true,
401
+ "single_word": false,
402
+ "special": true
403
+ },
404
+ "100306": {
405
+ "content": "<|dummy_42|>",
406
+ "lstrip": true,
407
+ "normalized": false,
408
+ "rstrip": true,
409
+ "single_word": false,
410
+ "special": true
411
+ },
412
+ "100307": {
413
+ "content": "<|dummy_43|>",
414
+ "lstrip": true,
415
+ "normalized": false,
416
+ "rstrip": true,
417
+ "single_word": false,
418
+ "special": true
419
+ },
420
+ "100308": {
421
+ "content": "<|dummy_44|>",
422
+ "lstrip": true,
423
+ "normalized": false,
424
+ "rstrip": true,
425
+ "single_word": false,
426
+ "special": true
427
+ },
428
+ "100309": {
429
+ "content": "<|dummy_45|>",
430
+ "lstrip": true,
431
+ "normalized": false,
432
+ "rstrip": true,
433
+ "single_word": false,
434
+ "special": true
435
+ },
436
+ "100310": {
437
+ "content": "<|dummy_46|>",
438
+ "lstrip": true,
439
+ "normalized": false,
440
+ "rstrip": true,
441
+ "single_word": false,
442
+ "special": true
443
+ },
444
+ "100311": {
445
+ "content": "<|dummy_47|>",
446
+ "lstrip": true,
447
+ "normalized": false,
448
+ "rstrip": true,
449
+ "single_word": false,
450
+ "special": true
451
+ },
452
+ "100312": {
453
+ "content": "<|dummy_48|>",
454
+ "lstrip": true,
455
+ "normalized": false,
456
+ "rstrip": true,
457
+ "single_word": false,
458
+ "special": true
459
+ },
460
+ "100313": {
461
+ "content": "<|dummy_49|>",
462
+ "lstrip": true,
463
+ "normalized": false,
464
+ "rstrip": true,
465
+ "single_word": false,
466
+ "special": true
467
+ },
468
+ "100314": {
469
+ "content": "<|dummy_50|>",
470
+ "lstrip": true,
471
+ "normalized": false,
472
+ "rstrip": true,
473
+ "single_word": false,
474
+ "special": true
475
+ },
476
+ "100315": {
477
+ "content": "<|dummy_51|>",
478
+ "lstrip": true,
479
+ "normalized": false,
480
+ "rstrip": true,
481
+ "single_word": false,
482
+ "special": true
483
+ },
484
+ "100316": {
485
+ "content": "<|dummy_52|>",
486
+ "lstrip": true,
487
+ "normalized": false,
488
+ "rstrip": true,
489
+ "single_word": false,
490
+ "special": true
491
+ },
492
+ "100317": {
493
+ "content": "<|dummy_53|>",
494
+ "lstrip": true,
495
+ "normalized": false,
496
+ "rstrip": true,
497
+ "single_word": false,
498
+ "special": true
499
+ },
500
+ "100318": {
501
+ "content": "<|dummy_54|>",
502
+ "lstrip": true,
503
+ "normalized": false,
504
+ "rstrip": true,
505
+ "single_word": false,
506
+ "special": true
507
+ },
508
+ "100319": {
509
+ "content": "<|dummy_55|>",
510
+ "lstrip": true,
511
+ "normalized": false,
512
+ "rstrip": true,
513
+ "single_word": false,
514
+ "special": true
515
+ },
516
+ "100320": {
517
+ "content": "<|dummy_56|>",
518
+ "lstrip": true,
519
+ "normalized": false,
520
+ "rstrip": true,
521
+ "single_word": false,
522
+ "special": true
523
+ },
524
+ "100321": {
525
+ "content": "<|dummy_57|>",
526
+ "lstrip": true,
527
+ "normalized": false,
528
+ "rstrip": true,
529
+ "single_word": false,
530
+ "special": true
531
+ },
532
+ "100322": {
533
+ "content": "<|dummy_58|>",
534
+ "lstrip": true,
535
+ "normalized": false,
536
+ "rstrip": true,
537
+ "single_word": false,
538
+ "special": true
539
+ },
540
+ "100323": {
541
+ "content": "<|dummy_59|>",
542
+ "lstrip": true,
543
+ "normalized": false,
544
+ "rstrip": true,
545
+ "single_word": false,
546
+ "special": true
547
+ },
548
+ "100324": {
549
+ "content": "<|dummy_60|>",
550
+ "lstrip": true,
551
+ "normalized": false,
552
+ "rstrip": true,
553
+ "single_word": false,
554
+ "special": true
555
+ },
556
+ "100325": {
557
+ "content": "<|dummy_61|>",
558
+ "lstrip": true,
559
+ "normalized": false,
560
+ "rstrip": true,
561
+ "single_word": false,
562
+ "special": true
563
+ },
564
+ "100326": {
565
+ "content": "<|dummy_62|>",
566
+ "lstrip": true,
567
+ "normalized": false,
568
+ "rstrip": true,
569
+ "single_word": false,
570
+ "special": true
571
+ },
572
+ "100327": {
573
+ "content": "<|dummy_63|>",
574
+ "lstrip": true,
575
+ "normalized": false,
576
+ "rstrip": true,
577
+ "single_word": false,
578
+ "special": true
579
+ },
580
+ "100328": {
581
+ "content": "<|dummy_64|>",
582
+ "lstrip": true,
583
+ "normalized": false,
584
+ "rstrip": true,
585
+ "single_word": false,
586
+ "special": true
587
+ },
588
+ "100329": {
589
+ "content": "<|dummy_65|>",
590
+ "lstrip": true,
591
+ "normalized": false,
592
+ "rstrip": true,
593
+ "single_word": false,
594
+ "special": true
595
+ },
596
+ "100330": {
597
+ "content": "<|dummy_66|>",
598
+ "lstrip": true,
599
+ "normalized": false,
600
+ "rstrip": true,
601
+ "single_word": false,
602
+ "special": true
603
+ },
604
+ "100331": {
605
+ "content": "<|dummy_67|>",
606
+ "lstrip": true,
607
+ "normalized": false,
608
+ "rstrip": true,
609
+ "single_word": false,
610
+ "special": true
611
+ },
612
+ "100332": {
613
+ "content": "<|dummy_68|>",
614
+ "lstrip": true,
615
+ "normalized": false,
616
+ "rstrip": true,
617
+ "single_word": false,
618
+ "special": true
619
+ },
620
+ "100333": {
621
+ "content": "<|dummy_69|>",
622
+ "lstrip": true,
623
+ "normalized": false,
624
+ "rstrip": true,
625
+ "single_word": false,
626
+ "special": true
627
+ },
628
+ "100334": {
629
+ "content": "<|dummy_70|>",
630
+ "lstrip": true,
631
+ "normalized": false,
632
+ "rstrip": true,
633
+ "single_word": false,
634
+ "special": true
635
+ },
636
+ "100335": {
637
+ "content": "<|dummy_71|>",
638
+ "lstrip": true,
639
+ "normalized": false,
640
+ "rstrip": true,
641
+ "single_word": false,
642
+ "special": true
643
+ },
644
+ "100336": {
645
+ "content": "<|dummy_72|>",
646
+ "lstrip": true,
647
+ "normalized": false,
648
+ "rstrip": true,
649
+ "single_word": false,
650
+ "special": true
651
+ },
652
+ "100337": {
653
+ "content": "<|dummy_73|>",
654
+ "lstrip": true,
655
+ "normalized": false,
656
+ "rstrip": true,
657
+ "single_word": false,
658
+ "special": true
659
+ },
660
+ "100338": {
661
+ "content": "<|dummy_74|>",
662
+ "lstrip": true,
663
+ "normalized": false,
664
+ "rstrip": true,
665
+ "single_word": false,
666
+ "special": true
667
+ },
668
+ "100339": {
669
+ "content": "<|dummy_75|>",
670
+ "lstrip": true,
671
+ "normalized": false,
672
+ "rstrip": true,
673
+ "single_word": false,
674
+ "special": true
675
+ },
676
+ "100340": {
677
+ "content": "<|dummy_76|>",
678
+ "lstrip": true,
679
+ "normalized": false,
680
+ "rstrip": true,
681
+ "single_word": false,
682
+ "special": true
683
+ },
684
+ "100341": {
685
+ "content": "<|dummy_77|>",
686
+ "lstrip": true,
687
+ "normalized": false,
688
+ "rstrip": true,
689
+ "single_word": false,
690
+ "special": true
691
+ },
692
+ "100342": {
693
+ "content": "<|dummy_78|>",
694
+ "lstrip": true,
695
+ "normalized": false,
696
+ "rstrip": true,
697
+ "single_word": false,
698
+ "special": true
699
+ },
700
+ "100343": {
701
+ "content": "<|dummy_79|>",
702
+ "lstrip": true,
703
+ "normalized": false,
704
+ "rstrip": true,
705
+ "single_word": false,
706
+ "special": true
707
+ },
708
+ "100344": {
709
+ "content": "<|dummy_80|>",
710
+ "lstrip": true,
711
+ "normalized": false,
712
+ "rstrip": true,
713
+ "single_word": false,
714
+ "special": true
715
+ },
716
+ "100345": {
717
+ "content": "<|dummy_81|>",
718
+ "lstrip": true,
719
+ "normalized": false,
720
+ "rstrip": true,
721
+ "single_word": false,
722
+ "special": true
723
+ },
724
+ "100346": {
725
+ "content": "<|dummy_82|>",
726
+ "lstrip": true,
727
+ "normalized": false,
728
+ "rstrip": true,
729
+ "single_word": false,
730
+ "special": true
731
+ },
732
+ "100347": {
733
+ "content": "<|dummy_83|>",
734
+ "lstrip": true,
735
+ "normalized": false,
736
+ "rstrip": true,
737
+ "single_word": false,
738
+ "special": true
739
+ },
740
+ "100348": {
741
+ "content": "<|dummy_84|>",
742
+ "lstrip": true,
743
+ "normalized": false,
744
+ "rstrip": true,
745
+ "single_word": false,
746
+ "special": true
747
+ },
748
+ "100349": {
749
+ "content": "<|dummy_85|>",
750
+ "lstrip": true,
751
+ "normalized": false,
752
+ "rstrip": true,
753
+ "single_word": false,
754
+ "special": true
755
+ },
756
+ "100350": {
757
+ "content": "<|dummy_86|>",
758
+ "lstrip": true,
759
+ "normalized": false,
760
+ "rstrip": true,
761
+ "single_word": false,
762
+ "special": true
763
+ },
764
+ "100351": {
765
+ "content": "<|dummy_87|>",
766
+ "lstrip": true,
767
+ "normalized": false,
768
+ "rstrip": true,
769
+ "single_word": false,
770
+ "special": true
771
+ }
772
+ },
773
+ "bos_token": "<|im_start|>",
774
+ "chat_template": "{% for message in messages %}{% if (message['role'] == 'system') %}{{'<|im_start|>system<|im_sep|>' + message['content'] + '<|im_end|>'}}{% elif (message['role'] == 'user') %}{{'<|im_start|>user<|im_sep|>' + message['content'] + '<|im_end|><|im_start|>assistant<|im_sep|>'}}{% elif (message['role'] == 'assistant') %}{{message['content'] + '<|im_end|>'}}{% endif %}{% endfor %}",
775
+ "clean_up_tokenization_spaces": false,
776
+ "eos_token": "<|im_end|>",
777
+ "extra_special_tokens": {},
778
+ "model_max_length": 16384,
779
+ "sep_token": "<|im_sep|>",
780
+ "pad_token": "<|endoftext|>",
781
+ "tokenizer_class": "GPT2Tokenizer",
782
+ "unk_token": "<|endoftext|>"
783
+ }
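The `chat_template` above wraps every turn in `<|im_start|>`, `<|im_sep|>`, and `<|im_end|>` markers and appends the assistant prefix after each user turn. A hedged sketch of what the rendered string looks like, assuming the uploaded folder is loaded locally:

```python
# Sketch only: render a conversation with the chat_template from
# tokenizer_config.json above. The local path "." is an assumption; use the
# repository id or the directory this folder was downloaded to.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]
rendered = tokenizer.apply_chat_template(messages, tokenize=False)
print(rendered)
# Expected (one continuous string, wrapped here for readability):
#   <|im_start|>system<|im_sep|>You are a helpful assistant.<|im_end|>
#   <|im_start|>user<|im_sep|>What is the capital of France?<|im_end|>
#   <|im_start|>assistant<|im_sep|>
```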
vocab.json ADDED
The diff for this file is too large to render. See raw diff