Tags: Text2Text Generation · Transformers · PyTorch · Spanish · led · text-generation-inference · Inference Endpoints
vgaraujov committed
Commit db3dbc4
Parent: 73230e8

Update README.md

Files changed (1):
  1. README.md (+3 −3)
README.md CHANGED
@@ -13,13 +13,13 @@ widget:
 - text: Quito es la capital de <mask>
 ---
 
-# Longformer Encoder-Decoder Spanish (LEDS) (base-sized model)
+# Longformer Encoder-Decoder Spanish (LEDO) (base-sized model)
 
-LED model based on [BARTO](https://huggingface.co/vgaraujov/bart-base-spanish). It was introduced in the paper [Sequence-to-Sequence Spanish Pre-trained Language Models](https://arxiv.org/abs/2309.11259).
+LEDO is based on [BARTO](https://huggingface.co/vgaraujov/bart-base-spanish) and was introduced in the paper [Sequence-to-Sequence Spanish Pre-trained Language Models](https://arxiv.org/abs/2309.11259).
 
 ## Model description
 
-LEDS is a BART-based model (transformer encoder-decoder) with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function and (2) learning a model to reconstruct the original text.
+LEDO is a BART-based model (transformer encoder-decoder) with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function and (2) learning a model to reconstruct the original text.
 
 To process 16K tokens, the BARTO's position embedding matrix was simply copied 16 times.
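
The diff's final context line describes extending BARTO's learned position embeddings by copying the matrix 16 times to reach a 16K-token window. A minimal sketch of that tiling, assuming base-sized BART dimensions (1,024 positions, hidden size 768), neither of which is stated in this commit:

```python
# Illustrative sketch (not the authors' code): extending a learned position
# embedding matrix by tiling, per the "copied 16 times" line in the diff.
# The sizes assume a base-sized BART: 1024 positions, hidden size 768.
import torch

bart_positions, hidden = 1024, 768
bart_pos_emb = torch.randn(bart_positions, hidden)  # stand-in for BARTO's learned matrix

led_pos_emb = bart_pos_emb.repeat(16, 1)  # stack 16 copies along the position axis
assert led_pos_emb.shape == (16 * bart_positions, hidden)  # 16384 positions
```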
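For context, a hedged usage sketch of the model described in this README, using the widget's fill-mask prompt. The checkpoint id below is a guess based on the author's namespace, and the global-attention setup (global attention on the first token) is the standard LED convention rather than anything stated in this commit:

```python
# Usage sketch under stated assumptions; the checkpoint id is hypothetical.
import torch
from transformers import AutoTokenizer, LEDForConditionalGeneration

model_id = "vgaraujov/led-base-16384-spanish"  # assumed repo name
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LEDForConditionalGeneration.from_pretrained(model_id)

# The widget example from the README diff.
text = "Quito es la capital de <mask>"
inputs = tokenizer(text, return_tensors="pt")

# LED combines local windowed attention with optional global attention;
# granting the first token global attention is the usual LED recipe.
global_attention_mask = torch.zeros_like(inputs["attention_mask"])
global_attention_mask[:, 0] = 1

output_ids = model.generate(
    inputs["input_ids"],
    attention_mask=inputs["attention_mask"],
    global_attention_mask=global_attention_mask,
    max_new_tokens=20,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```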