RedHitMark
committed on
Update README.md
README.md CHANGED
````diff
@@ -26,4 +26,7 @@ y = model.generate(x, max_length=1024)[0]
 output = tokenizer.decode(y, max_length=1024, truncation=True, skip_special_tokens=True, clean_up_tokenization_spaces=True)
 
 print(output)
-```
+```
+
+## Acknowledgements
+This contribution is a result of the research conducted within the framework of the PRIN 2020 (Progetti di Rilevante Interesse Nazionale) "VerbACxSS: on analytic verbs, complexity, synthetic verbs, and simplification. For accessibility" (Prot. 2020BJKB9M), funded by the Italian Ministero dell'Università e della Ricerca.
````
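For context, the `tokenizer.decode(...)` call in the diff above turns generated token IDs back into text, with `skip_special_tokens=True` dropping markers like `<s>`/`</s>` and `clean_up_tokenization_spaces=True` tidying spacing around punctuation. A stdlib-only toy sketch of what those two flags do conceptually (not the `transformers` implementation; the vocabulary and IDs here are invented for illustration):

```python
# Toy illustration of tokenizer.decode's flags; NOT the transformers
# implementation. Vocabulary, IDs, and special tokens are invented.

VOCAB = {0: "<s>", 1: "Hello", 2: ",", 3: "world", 4: "!", 5: "</s>"}
SPECIAL = {"<s>", "</s>"}

def toy_decode(ids, skip_special_tokens=True, clean_up_tokenization_spaces=True):
    tokens = [VOCAB[i] for i in ids]
    if skip_special_tokens:
        # drop special markers such as <s> and </s>
        tokens = [t for t in tokens if t not in SPECIAL]
    text = " ".join(tokens)
    if clean_up_tokenization_spaces:
        # collapse the space a naive join puts before punctuation
        for p in (",", "!", ".", "?"):
            text = text.replace(" " + p, p)
    return text

print(toy_decode([0, 1, 2, 3, 4, 5]))  # → Hello, world!
```

With both flags off, the same IDs decode to `<s> Hello , world ! </s>`, which is roughly why the README passes both as `True` before printing the output.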