Alexis Carrión committed on
Commit
c587e44
•
1 Parent(s): 7832ccc

commit from $USER

Conf-ajuste.txt ADDED
@@ -0,0 +1,12 @@
+ optim = AdamW(model.parameters(), lr=3e-5)  # learning rate
+ # Initialize the data loader for the training data
+ train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)
+
+ for epoch in range(4):
+
+ Epoch 0: 100%|██████████| 94/94 [01:58<00:00, 1.26s/it, loss=1.08]
+ Epoch 1: 100%|██████████| 94/94 [02:00<00:00, 1.28s/it, loss=2.86]
+ Epoch 2: 100%|██████████| 94/94 [02:00<00:00, 1.28s/it, loss=0.612]
+ Epoch 3: 100%|██████████| 94/94 [02:00<00:00, 1.28s/it, loss=0.0787]
+
+ Fine-tuned model accuracy: 0.8043478260869565
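The log above records the optimizer and data-loader setup together with per-epoch tqdm output, but the body of the training loop is not shown. Below is a minimal sketch of a loop that would produce output of this shape. It assumes model is a BertForQuestionAnswering instance, train_dataset yields dicts of tensors (input_ids, attention_mask, start_positions, end_positions), AdamW comes from torch.optim, and tqdm drives the progress bar; none of this is taken from the committed file.

import torch
from torch.optim import AdamW
from torch.utils.data import DataLoader
from tqdm import tqdm

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model.train()

optim = AdamW(model.parameters(), lr=3e-5)  # learning rate from the log above
train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)

for epoch in range(4):
    loop = tqdm(train_loader, desc=f"Epoch {epoch}")
    for batch in loop:
        optim.zero_grad()
        batch = {k: v.to(device) for k, v in batch.items()}  # move tensors to the device
        outputs = model(**batch)   # forward pass; the model returns the QA loss
        loss = outputs.loss
        loss.backward()            # backpropagation
        optim.step()               # parameter update
        loop.set_postfix(loss=loss.item())  # matches the "loss=..." field in the log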
Conf_bert.txt ADDED
@@ -0,0 +1,20 @@
+ optim = AdamW(model.parameters(), lr=5e-5, eps=1e-8)  # learning rate
+ # Initialize the data loader for the training data
+ train_loader = DataLoader(train_dataset, batch_size=16, shuffle=True)
+
+ for epoch in range(10):
+
+ {'score': 0.7284889221191406, 'start': 14, 'end': 29, 'answer': 'serology tests,'}
+
+ Fine-tuned model accuracy: 0.8211654387139986
+
+ Epoch 0: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=1.87]
+ Epoch 1: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=0.211]
+ Epoch 2: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=1.95]
+ Epoch 3: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=0.0322]
+ Epoch 4: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=0.0229]
+ Epoch 5: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=0.0271]
+ Epoch 6: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=0.59]
+ Epoch 7: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=0.0233]
+ Epoch 8: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=0.00257]
+ Epoch 9: 100%|██████████| 94/94 [00:57<00:00, 1.62it/s, loss=0.00663]
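Conf_bert.txt additionally records an extracted answer in the dictionary format returned by the transformers question-answering pipeline. A hedged sketch of how such a prediction could be obtained from the artifacts committed in this repository; the directory path, question, and context below are placeholders, not the inputs that produced the logged answer.

from transformers import pipeline

model_dir = "."  # placeholder: a local clone of this repository
qa = pipeline("question-answering", model=model_dir, tokenizer=model_dir)

# Illustrative inputs only.
result = qa(
    question="Which tests detect antibodies?",
    context="For diagnosis, serology tests, which detect antibodies, are widely used.",
)
print(result)  # a dict of the form {'score': ..., 'start': ..., 'end': ..., 'answer': ...}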
config.json ADDED
@@ -0,0 +1,28 @@
+ {
+   "_name_or_path": "deepset/bert-base-cased-squad2",
+   "architectures": [
+     "BertForQuestionAnswering"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "language": "english",
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "name": "Bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "output_past": true,
+   "pad_token_id": 0,
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.21.1",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 28996
+ }
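config.json describes the committed checkpoint: a cased BERT question-answering model (12 layers, 768-dimensional hidden states, 28996-token vocabulary) derived from deepset/bert-base-cased-squad2. A small loading sketch, assuming a local clone of this repository; the path is a placeholder.

from transformers import BertForQuestionAnswering

model = BertForQuestionAnswering.from_pretrained(".")  # "." = local clone of this repo

print(model.config.hidden_size)        # 768, as declared in config.json
print(model.config.num_hidden_layers)  # 12
print(model.config.vocab_size)         # 28996 (cased BERT vocabulary)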
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3d1ce956f357d5b2461c60ed4d27577b0ac2fdfa647af3117c1116ce9fd39061
+ size 430955313
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "cls_token": "[CLS]",
+   "mask_token": "[MASK]",
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "unk_token": "[UNK]"
+ }
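special_tokens_map.json lists the reserved tokens the tokenizer uses to structure its input; for question answering, every encoded question/context pair is framed as [CLS] question [SEP] context [SEP], with [PAD] filling shorter sequences. A brief illustrative check, assuming the tokenizer is loaded from a local clone of this repository and the inputs are placeholders:

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # "." = local clone of this repo
enc = tokenizer("Which tests detect antibodies?", "Serology tests detect antibodies.")
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"])
print(tokens[0], tokens[-1])  # '[CLS]' and '[SEP]'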
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,16 @@
+ {
+   "cls_token": "[CLS]",
+   "do_basic_tokenize": true,
+   "do_lower_case": false,
+   "mask_token": "[MASK]",
+   "max_len": 512,
+   "name_or_path": "deepset/bert-base-cased-squad2",
+   "never_split": null,
+   "pad_token": "[PAD]",
+   "sep_token": "[SEP]",
+   "special_tokens_map_file": "/root/.cache/huggingface/transformers/5fa07cc35cf92053100af63a5b23424ba428c75dafc83de837f7c2bd5118a8b1.dd8bd9bfd3664b530ea4e645105f557769387b3da9f79bdb55ed556bdd80611d",
+   "strip_accents": null,
+   "tokenize_chinese_chars": true,
+   "tokenizer_class": "BertTokenizer",
+   "unk_token": "[UNK]"
+ }
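tokenizer_config.json records the settings applied when the tokenizer is reloaded: a cased WordPiece tokenizer (do_lower_case is false) with a 512-token maximum length, matching max_position_embeddings in config.json. A short sketch of how these settings surface after loading; the path is a placeholder.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")  # "." = local clone of this repo
print(tokenizer.model_max_length)  # 512, taken from "max_len"
print(tokenizer.do_lower_case)     # False: casing is preserved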
vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
vocab.txt ADDED
The diff for this file is too large to render. See raw diff