## Description

This model is a fine-tuned RoBERTa-based model, pre-trained from scratch on Dutch hospital notes sourced from Electronic Health Records. All code used for the creation of MedRoBERTa.nl can be found at https://github.com/cltl-students/verkijk_stella_rma_thesis_dutch_medical_language_model. The publication associated with the negation detection task can be found at https://arxiv.org/abs/2209.00470. The code for fine-tuning the model can be found at https://github.com/umcu/negation-detection.

## Minimal example

```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
import torch

tokenizer = AutoTokenizer.from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection")
model = AutoModelForTokenClassification.from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection")

# Dutch: "The patient was unresponsive and looked ashen.
# However, he withstood the exercise test well."
some_text = "De patient was niet aanspreekbaar en hij zag er grauw uit. Hij heeft de inspanningstest echter goed doorstaan."
inputs = tokenizer(some_text, return_tensors='pt')
output = model(**inputs)
probas = torch.nn.functional.softmax(output.logits[0], dim=-1).detach().numpy()

# map the per-token probabilities back to the input tokens
input_tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0])
target_map = {0: 'B-Negated', 1: 'B-NotNegated', 2: 'I-Negated', 3: 'I-NotNegated'}
results = [{'token': input_tokens[idx],
            'proba_negated': proba_arr[0] + proba_arr[2],
            'proba_not_negated': proba_arr[1] + proba_arr[3]}
           for idx, proba_arr in enumerate(probas)]
```

It is perhaps worth noting that the labels follow the [Inside-Outside-Beginning](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) (IOB) tagging format.

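Because the labels follow the IOB scheme, the per-token predictions can be collapsed into contiguous negated/not-negated spans. Below is a minimal, illustrative sketch of such a post-processing step; the helper `iob_to_spans` is not part of the original repository, just one straightforward way to group the tags:

```python
def iob_to_spans(tokens, labels):
    """Group B-/I- tagged tokens into (label, start, end) spans.

    `tokens` and `labels` are parallel lists; `end` is exclusive.
    """
    spans, current = [], None
    for i, label in enumerate(labels):
        if label.startswith('B-'):
            # a B- tag always starts a new span
            if current:
                spans.append(current)
            current = {'label': label[2:], 'start': i, 'end': i + 1}
        elif label.startswith('I-') and current and current['label'] == label[2:]:
            # an I- tag extends the open span of the same label
            current['end'] = i + 1
        else:
            # O tag or inconsistent I- tag: close any open span
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return spans
```

For example, labels `['B-NotNegated', 'I-NotNegated', 'B-NotNegated', 'B-Negated', 'I-Negated']` yield three spans, the last covering tokens 3-5 with label `Negated`.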
## Intended use

The model is fine-tuned for negation detection on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used on medical NLP tasks for Dutch. This particular model is trained on windows of at most 512 tokens surrounding the concept to be negated.
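Since the model sees at most 512 tokens at a time, longer notes have to be split before inference. A minimal sketch of overlapping windowing is shown below; the 512-token limit comes from the model, but the stride value and the helper itself are assumptions for illustration, not taken from the fine-tuning code:

```python
def sliding_windows(token_ids, max_len=512, stride=256):
    """Split token_ids into overlapping windows of at most max_len tokens.

    Returns a list of (start_offset, chunk) pairs; the overlap gives each
    token some surrounding context in at least one window.
    """
    if len(token_ids) <= max_len:
        return [(0, token_ids)]
    windows, start = [], 0
    while start < len(token_ids):
        windows.append((start, token_ids[start:start + max_len]))
        if start + max_len >= len(token_ids):
            break  # this window already reaches the end of the sequence
        start += stride
    return windows
```

Predictions for tokens that appear in several windows would then need to be merged, e.g. by keeping the prediction from the window where the token is most central.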