---
datasets:
- Vanessasml/cybersecurity_32k_instruction_input_output
pipeline_tag: text-generation
tags:
- finance
- supervision
- cyber risk
- cybersecurity
- cyber threats
- SFT
- LoRA
- A100GPU
---

# Model Card for Cyber-risk-llama-3-8b-instruct-sft

## Model Description

This model is a fine-tuned version of `meta-llama/Meta-Llama-3-8B-Instruct` on the `vanessasml/cybersecurity_32k_instruction_input_output` dataset.

It is designed to improve performance at generating and understanding cybersecurity text, identifying cyber threats, and classifying data under the NIST taxonomy and IT risks following the EBA ICT guidelines.

## Intended Use

- **Intended users**: Data scientists and developers working on cybersecurity applications.
- **Out-of-scope use cases**: This model should not be used for medical advice, legal decisions, or any life-critical systems.

## Training Data

The model was fine-tuned on `vanessasml/cybersecurity_32k_instruction_input_output`, a dataset focused on cybersecurity news analysis.

No special data format was applied, as [recommended](https://huggingface.co/blog/llama3#fine-tuning-with-%F0%9F%A4%97-trl), but the following steps are needed to prepare the input:

```python
# During training: set up the chat format on the base model and tokenizer
# (`model` and `tokenizer` are assumed to be loaded beforehand)
from trl import setup_chat_format

model, tokenizer = setup_chat_format(model, tokenizer)

# During inference: build the prompt with the tokenizer's chat template
# (`pipeline` is a transformers text-generation pipeline, as shown
# in the How to Use section below)
messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
```

## Training Procedure

- **Preprocessing**: Text data were tokenized with the tokenizer of the base model `meta-llama/Meta-Llama-3-8B-Instruct`.
- **Hardware**: Training was performed on GPUs with mixed precision (FP16/BF16) enabled.
- **Optimizer**: Paged AdamW with a cosine learning rate schedule.
- **Epochs**: The model was trained for 1 epoch.
- **Batch size**: 4 per device, with gradient accumulation where required (see the configuration sketch below).
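
A minimal sketch of how these settings could map onto a TRL `SFTTrainer` run; the learning rate, accumulation steps, and variable names are illustrative assumptions, not the author's original script:

```python
# Hypothetical training configuration reflecting the settings above.
from transformers import TrainingArguments
from trl import SFTTrainer

training_args = TrainingArguments(
    output_dir="cyber-risk-llama-3-8b-instruct-sft",
    num_train_epochs=1,                # trained for 1 epoch
    per_device_train_batch_size=4,     # batch size 4 per device
    gradient_accumulation_steps=4,     # assumption: effective batch size 16
    optim="paged_adamw_32bit",         # paged AdamW optimizer
    lr_scheduler_type="cosine",        # cosine learning rate schedule
    learning_rate=2e-4,                # assumption: common QLoRA default
    fp16=True,                         # mixed precision (or bf16=True on A100)
    gradient_checkpointing=True,       # memory-saving strategy (see Environmental Impact)
    group_by_length=True,              # batch similar-length samples together
)

trainer = SFTTrainer(
    model=model,                       # base model prepared with setup_chat_format
    args=training_args,
    train_dataset=dataset,             # the cybersecurity_32k dataset
    tokenizer=tokenizer,
)
trainer.train()
```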

## Evaluation Results

Model evaluation was based on a qualitative assessment of the relevance and coherence of generated text in cybersecurity contexts; no quantitative benchmark results are reported.
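
A qualitative spot-check of this kind might look like the sketch below, reusing the `pipeline` from the How to Use section; the prompts are illustrative assumptions:

```python
# Hypothetical qualitative review: generate answers to sample cybersecurity
# questions and inspect them manually for relevance and coherence.
eval_questions = [
    "Classify a phishing email with credential harvesting under the NIST taxonomy.",
    "Summarise the main IT risk categories relevant to the EBA ICT guidelines.",
]
for question in eval_questions:
    out = pipeline(question, max_new_tokens=128, do_sample=False)
    print(out[0]["generated_text"])
```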

## Quantization and Optimization

- **Quantization**: 4-bit precision with quantization type `nf4`. Nested quantization is disabled.
- **Compute dtype**: `float16` to ensure efficient computation.
- **LoRA Settings** (sketched in code below):
  - LoRA attention dimension: `64`
  - Alpha parameter for LoRA scaling: `16`
  - Dropout in LoRA layers: `0.1`
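
A minimal sketch of this setup with `bitsandbytes` and `peft`, assuming the values listed above; `target_modules` is left to the library default:

```python
# Hypothetical quantization and LoRA configuration matching the values above.
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit precision
    bnb_4bit_quant_type="nf4",             # quantization type nf4
    bnb_4bit_compute_dtype=torch.float16,  # compute dtype float16
    bnb_4bit_use_double_quant=False,       # nested quantization disabled
)

peft_config = LoraConfig(
    r=64,              # LoRA attention dimension
    lora_alpha=16,     # alpha parameter for LoRA scaling
    lora_dropout=0.1,  # dropout in LoRA layers
    bias="none",
    task_type="CAUSAL_LM",
)
```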

## Environmental Impact

- **Compute Resources**: Training leveraged energy-efficient hardware and practices to minimize carbon footprint.
- **Strategies**: Gradient checkpointing and group-wise data processing were used to optimize memory and power usage.

## How to Use

Here is how to load and use the model with `transformers`:

```python
import torch
import transformers

model_name = "vanessasml/cyber-risk-llama-3-8b-instruct-sft"

# Build a text-generation pipeline around the fine-tuned model
pipeline = transformers.pipeline(
    "text-generation",
    model=model_name,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device="cuda",
)

# Example system prompt; replace with instructions suited to your use case
SYSTEM_PROMPT = "You are a cybersecurity expert assistant."

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": "What are the main 5 cyber classes from the NIST cyber framework?"},
]

prompt = pipeline.tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Stop on either the EOS token or the Llama 3 end-of-turn token
terminators = [
    pipeline.tokenizer.eos_token_id,
    pipeline.tokenizer.convert_tokens_to_ids("<|eot_id|>"),
]

outputs = pipeline(
    prompt,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
print(outputs[0]["generated_text"][len(prompt):])
```

## Limitations and Bias

While robust in cybersecurity contexts, the model may not generalize well to unrelated domains. Users should be aware of biases inherent in the training data, which may surface in model predictions.

## Citation

If you use this model, please cite it as follows:

```bibtex
@misc{cyber-risk-llama-3-8b-instruct-sft,
  author    = {Vanessa Lopes},
  title     = {Cyber-risk-llama-3-8B-Instruct-sft Model},
  year      = {2024},
  publisher = {HuggingFace Hub},
  journal   = {HuggingFace Model Hub}
}
```