MilosKosRad committed
Commit
1be8c67
1 Parent(s): 259d0d9

Update README.md

Files changed (1):
  1. README.md (+38, -30)
README.md CHANGED
@@ -26,20 +26,20 @@ library_name: transformers
 
 ## Model description
 
-This model was created during the research collaboration between Bayer Pharma and Serbian Institute for Artificial Intelligence Research and Development.
-The model is trained on about 25+ biomedical NER classes and can perform also zero-shot inference and can be further fine-tuned for new classes with just few examples (few-shot learning).
-For more details about our methods please see the paper named ["A transformer-based method for zero and few-shot biomedical named entity recognition"](https://arxiv.org/abs/2305.04928). The model corresponds to BioBERT-based mode, trained with 1 in the first segment (check paper for more details).
+This model was created during the research collaboration between Bayer Pharma and the Institute for Artificial Intelligence Research and Development of Serbia.
+The model is trained on 26 biomedical Named Entity (NE) classes and can perform zero-shot inference. It can also be further fine-tuned for new classes with just a few examples (few-shot learning).
+For more details about our method, please see the paper ["A transformer-based method for zero and few-shot biomedical named entity recognition"](https://arxiv.org/abs/2305.04928). The model corresponds to the PubMedBERT-based model trained with 1 in the first segment (see the paper for more details).
 
-Model takes as input two strings. String1 is NER label that is being searched in second string. String1 must be phrase for entity. String2 is short text where String1 is searched for semantically.
-model outputs list of zeros and ones corresponding to the occurance of Named Entity and corresponing to the tokens(tokens given by transformer tokenizer) of the Sring2.
+The model takes two strings as input. String1 is the NE label that is being searched for in the second string. String2 is a short text in which one wants to search for the NE (represented by String1).
+The model outputs a list of ones (for tokens that belong to the found Named Entity) and zeros (for all other tokens) over the tokens of String2.
 
 ## Example of usage
 ```python
 from transformers import AutoTokenizer
 from transformers import BertForTokenClassification
 
-modelname = 'ProdicusII/ZeroShotBioNER' # modelpath
-tokenizer = AutoTokenizer.from_pretrained(modelname) ## loading the tokenizer of that model
+modelname = 'MilosKorsRad/BioNER'  # model id on the Hugging Face Hub
+tokenizer = AutoTokenizer.from_pretrained(modelname)  # loading the tokenizer of the model
 string1 = 'Drug'
 string2 = 'No recent antibiotics or other nephrotoxins, and no symptoms of UTI with benign UA.'
 encodings = tokenizer(string1, string2, is_split_into_words=False,
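
The usage example continues past the hunk boundary (the tokenizer call is completed and `print(prediction_logits)` follows in the next hunk's context). A minimal end-to-end sketch of the full flow, assuming the model id introduced in this commit; the trailing tokenizer arguments and `num_labels=2` are assumptions, since the diff does not show them:

```python
import torch
from transformers import AutoTokenizer, BertForTokenClassification

modelname = 'MilosKorsRad/BioNER'  # model id introduced in this commit
tokenizer = AutoTokenizer.from_pretrained(modelname)
model = BertForTokenClassification.from_pretrained(modelname, num_labels=2)  # num_labels assumed

string1 = 'Drug'  # NE label to search for (goes into the first BERT segment)
string2 = 'No recent antibiotics or other nephrotoxins, and no symptoms of UTI with benign UA.'

# Arguments after is_split_into_words are assumptions; the diff truncates the call.
encodings = tokenizer(string1, string2, is_split_into_words=False,
                      padding=True, truncation=True, return_tensors='pt')

with torch.no_grad():
    prediction_logits = model(**encodings).logits  # shape: (1, sequence_length, 2)

# 1 marks tokens predicted to belong to the 'Drug' entity, 0 marks all other tokens
predictions = prediction_logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(encodings['input_ids'][0])
for token, label in zip(tokens, predictions):
    print(token, label)
```

Because the label phrase rides in the first segment, the same two-label head can be queried with any label string, which is what makes zero-shot inference possible.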
@@ -53,34 +53,42 @@ print(prediction_logits)
 
 ## Example of fine-tuning with few-shot learning
 
-In order to fine-tune model to the new entity using few shots, the dataset needs to be transformed to torch.utils.data.Dataset, containing BERT tokens and set of 0s and 1s (1 is where the class is positive and should be predicted as the member of given NER class). After the dataset is created, the following can be done (for more details, please have a look at the code at GitHub - https://github.com/br-ai-ns-institute/Zero-ShotNER):
+In order to fine-tune the model to a new entity using a few shots, the dataset needs to be transformed into a torch.utils.data.Dataset containing BERT tokens and a set of 0s and 1s (1 where the class is positive and should be predicted as a member of the given NE class). After the dataset is created, the following can be done (for more details, please have a look at the code on GitHub - https://github.com/br-ai-ns-institute/Zero-ShotNER):
 
 ```python
-training_args = TrainingArguments(
-    output_dir=os.path.join('Results', class_unseen, str(j)+'Shot'),  # folder for results
-    num_train_epochs=10,             # number of epochs
-    per_device_train_batch_size=16,  # batch size per device during training
-    per_device_eval_batch_size=16,   # batch size for evaluation
-    weight_decay=0.01,               # strength of weight decay
-    logging_dir=os.path.join('Logs', class_unseen, str(j)+'Shot'),    # folder for logs
-    save_strategy='epoch',
-    evaluation_strategy='epoch',
-    load_best_model_at_end=True,
-)
-
-model0 = BertForTokenClassification.from_pretrained(model_path, num_labels=2)
-trainer = Trainer(
-    model=model0,               # pretrained model
-    args=training_args,         # training artguments
-    train_dataset=dataset,      # Object of class torch.utils.data.Dataset for training
-    eval_dataset=dataset_valid  # Object of class torch.utils.data.Dataset for vaLidation
-)
-start_time = time.time()
-trainer.train()
-total_time = time.time()-start_time
-model0_path = os.path.join('Results', class_unseen, str(j)+'Shot', 'Model')
-os.makedirs(model0_path, exist_ok=True)
-trainer.save_model(model0_path)
+for shots, train_ds in [(1, train1shot), (10, train10shot), (100, train100shot)]:
+    training_args = TrainingArguments(
+        output_dir='./Results'+class_unseen+'FewShot'+str(shots),  # folder to store the results
+        num_train_epochs=10,                                       # number of training epochs
+        per_device_train_batch_size=16,                            # batch size per device during training
+        per_device_eval_batch_size=16,                             # batch size for evaluation
+        weight_decay=0.01,                                         # strength of weight decay
+        logging_dir='./Logs'+class_unseen+'FewShot'+str(shots),    # folder to store the logs
+        save_strategy='epoch',
+        evaluation_strategy='epoch',
+        load_best_model_at_end=True
+    )
+
+    model0 = BertForTokenClassification.from_pretrained(model_path, num_labels=2)  # base model to fine-tune
+
+    trainer = Trainer(
+        model=model0,               # pre-trained model for fine-tuning
+        args=training_args,         # training arguments defined above
+        train_dataset=train_ds,     # torch.utils.data.Dataset object for training
+        eval_dataset=valid_dataset  # torch.utils.data.Dataset object for validation
+    )
+
+    start_time = time.time()
+    trainer.train()
+    total_time = time.time()-start_time
+
+    model0_path = os.path.join('Results', class_unseen, 'FewShot', str(shots), 'Model')
+    os.makedirs(model0_path, exist_ok=True)
+    model0.save_pretrained(model0_path)
+
+    tokenizer_path = os.path.join('Results', class_unseen, 'FewShot', str(shots), 'Tokenizer')
+    os.makedirs(tokenizer_path, exist_ok=True)
+    tokenizer.save_pretrained(tokenizer_path)
 ```
 
 ## Available classes
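
The fine-tuning loop assumes `train1shot`, `train10shot`, and `train100shot` are ready-made `torch.utils.data.Dataset` objects holding BERT tokens and per-token 0/1 tags; the linked GitHub repository contains the actual construction. A sketch of the expected shape, with a hypothetical class name and hand-placed tags:

```python
import torch

class FewShotNERDataset(torch.utils.data.Dataset):
    """Hypothetical helper: wraps (label phrase, text) encodings with per-token 0/1 tags."""
    def __init__(self, encodings, labels):
        self.encodings = encodings  # dict returned by tokenizer(label_phrases, texts, ...)
        self.labels = labels        # one 0/1 list per example, aligned with input_ids

    def __getitem__(self, idx):
        item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
        item['labels'] = torch.tensor(self.labels[idx])
        return item

    def __len__(self):
        return len(self.labels)

# One hand-tagged sentence for a new (illustrative) 'Symptom' class:
texts = ['Patient reports severe headache after the second dose.']
encodings = tokenizer(['Symptom'] * len(texts), texts, is_split_into_words=False,
                      padding='max_length', truncation=True, max_length=32)
labels = [[0] * 32]
labels[0][7] = labels[0][8] = 1  # positions of the entity tokens are illustrative
train1shot = FewShotNERDataset(encodings, labels)
```

The checkpoints written by `save_pretrained()` at the end of each loop iteration can be reloaded later with the same `from_pretrained()` calls used in the usage example.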
@@ -123,7 +131,7 @@ The following datasets and entities were used for training and therefore they ca
 * ADE
 * Duration
 
-On top of this, one can use the model in zero-shot regime with other classes, and also fine-tune it with few examples of other classes.
+On top of this, one can use the model for zero-shot inference with other classes, and can also fine-tune it with a few examples of other classes.
 
 
 
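
Zero-shot use with a class outside the training list amounts to changing the query label, with no retraining; a short sketch reusing the objects from the usage example (the class name is illustrative):

```python
# Same model and tokenizer as above; only the query label changes.
string1 = 'Medical Procedure'  # illustrative label, not necessarily in the training list
string2 = 'The patient underwent a laparoscopic cholecystectomy last year.'
encodings = tokenizer(string1, string2, is_split_into_words=False,
                      padding=True, truncation=True, return_tensors='pt')
predictions = model(**encodings).logits.argmax(dim=-1)[0]  # 1s mark the procedure tokens
```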
 