Update README.md
README.md
This model is a RoBERTa-based model pre-trained from scratch on Dutch clinical text and finetuned for negation detection.

## Minimal example

```python
import torch
from transformers import AutoModelForTokenClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection")
model = AutoModelForTokenClassification.from_pretrained("UMCU/MedRoBERTa.nl_NegationDetection")

# "The patient was unresponsive and he looked ashen.
#  However, he tolerated the exercise test well."
some_text = ("De patient was niet aanspreekbaar en hij zag er grauw uit. "
             "Hij heeft de inspanningstest echter goed doorstaan.")

inputs = tokenizer(some_text, return_tensors='pt')
output = model(**inputs)
# Per-token class probabilities, shape (sequence_length, num_labels)
probas = torch.nn.functional.softmax(output.logits[0], dim=-1).detach().numpy()
```
It is perhaps good to note that we assume the [Inside-Outside-Beginning](https://en.wikipedia.org/wiki/Inside%E2%80%93outside%E2%80%93beginning_(tagging)) tagging format for the labels.
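
The example continues by assembling a per-token `results` list (its first line, `results = [{'token': input_tokens[idx],`, is visible above; the rest is cut off). A minimal sketch of that step, assuming each token is paired with its highest-probability tag via the model's `id2label` mapping:

```python
# Sketch: map each token to its most probable tag. Only the names
# `input_tokens` and `results` come from the original example; the
# dictionary fields below are assumptions.
input_tokens = tokenizer.convert_ids_to_tokens(inputs['input_ids'][0].tolist())
results = [{'token': input_tokens[idx],
            'label': model.config.id2label[int(proba_arr.argmax())],
            'proba': float(proba_arr.max())}
           for idx, proba_arr in enumerate(probas)]
```

In the IOB scheme, a `B-` tag marks the first token of a span, `I-` marks its continuation, and `O` marks tokens outside any span.
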
## Intended use
The model is finetuned for negation detection on Dutch clinical text. Since it is a domain-specific model trained on medical data, it is meant to be used for Dutch medical NLP tasks. This particular model is trained on windows of at most 512 tokens surrounding the concept to be negated.
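
As an illustration of that windowing, a rough sketch is given below. The helper `concept_window` and its character-offset arguments are hypothetical (not part of this repository); it reuses the `tokenizer` loaded in the minimal example and assumes a fast tokenizer so that `return_offsets_mapping` is available:

```python
def concept_window(text: str, concept_start: int, concept_end: int,
                   max_tokens: int = 512) -> str:
    """Hypothetical helper: crop `text` to at most `max_tokens` tokens
    centred on the character span [concept_start, concept_end)."""
    enc = tokenizer(text, return_offsets_mapping=True, add_special_tokens=False)
    offsets = enc['offset_mapping']  # (char_start, char_end) per token
    # Indices of tokens overlapping the concept span
    # (assumes the span overlaps at least one token).
    concept_idx = [i for i, (s, e) in enumerate(offsets)
                   if s < concept_end and e > concept_start]
    centre = concept_idx[len(concept_idx) // 2]
    lo = max(0, centre - max_tokens // 2)
    hi = min(len(offsets), lo + max_tokens)
    return text[offsets[lo][0]:offsets[hi - 1][1]]
```

The cropped string can then be tokenized and classified exactly as in the minimal example above.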