hfunakura committed
Commit 4b197e0 · 1 Parent(s): 1b3bf11

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -12,7 +12,7 @@ This model is a BERT-base-uncased model finetuned for **semantic tagging**.
 As training data, I use the English fragment (both gold and silver data) from the Parallel Meaning Bank's Universal Semantic Tags dataset [1].
 
 ## Inference
-The model is trained to make predictions for the embedded expression corresponding to the first subword of each word. Inference in the same setting as in training can be achieved with the following code ([huggingface's standard pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) does not behave as intended here). Note that the model assumes that inputs are already split into words by spaces.
+The model is trained to make predictions for the embedded representations corresponding to the first subword of each word. Inference in the same setting as in training can be achieved with the following code ([huggingface's standard pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines) does not behave as intended here). Note that the model assumes that inputs are already split into words by spaces.
 ```python
 from transformers import AutoTokenizer, AutoModelForTokenClassification
 from spacy_alignments.tokenizations import get_alignments
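
The hunk above cuts the README's code block off after the imports. As a rough sketch of the first-subword inference the changed paragraph describes (the model id, example sentence, and alignment details below are assumptions for illustration, not the README's actual code):

```python
# Minimal sketch of first-subword tagging inference, assuming the setup described
# in the changed paragraph above. NOT the README's full code; the model id and the
# example sentence are hypothetical placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification
from spacy_alignments.tokenizations import get_alignments

model_id = "path/to/this-semantic-tagging-model"  # hypothetical placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForTokenClassification.from_pretrained(model_id)

sentence = "Tom ate an apple ."      # inputs are assumed to be pre-split by spaces
words = sentence.split(" ")

encoding = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**encoding).logits   # shape: (1, num_subwords, num_labels)
pred_ids = logits.argmax(dim=-1)[0].tolist()

# Align the whitespace-split words with the tokenizer's subwords, then keep the
# prediction made for each word's first subword.
subwords = tokenizer.convert_ids_to_tokens(encoding["input_ids"][0].tolist())
word2sub, _ = get_alignments(words, subwords)
tags = [model.config.id2label[pred_ids[sub[0]]] for sub in word2sub]
print(list(zip(words, tags)))
```

Reading one tag per word off its first subword mirrors the training setup described in the diff, which is also why the stock token-classification pipeline (one prediction per subword) does not behave as intended here.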