RedXeol committed
Commit 0755fce
1 Parent(s): 274c031

Update README.md

Files changed (1)
  1. README.md +16 -13
README.md CHANGED
@@ -25,33 +25,36 @@ This is a 4-bit GPTQ version of the [bertin-project/bertin-gpt-j-6B-alpaca]( htt
 
 This is the result of quantizing to 4 bits using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
 
 **How to easily download and use this model in text-generation-webui** (tutorial by [TheBloke](https://huggingface.co/TheBloke))
 
- Open [the text-generation-webui UI](https://github.com/oobabooga/text-generation-webui) as normal.
- Here is a tutorial on how to install the text-generation-webui UI: [tutorial](https://www.youtube.com/watch?v=lb_lC4XFedU&t).
- Click the Model tab.
- Under Download custom model or LoRA, enter RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g.
- Click Download.
- Wait until it says it's finished downloading.
- Click the Refresh icon next to Model in the top left.
- In the Model drop-down, choose the model you just downloaded, bertin-gpt-j-6B-alpaca-4bit-128g.
- If you see an error in the bottom right, ignore it; it's temporary.
- Fill out the GPTQ parameters on the right: Bits = 4, Groupsize = 128, model_type = gptj.
- Click Save settings for this model in the top right.
- Click Reload the Model in the top right.
- Once it says it's loaded, click the Text Generation tab and enter a prompt!
+ <p align="center">Open [the text-generation-webui UI](https://github.com/oobabooga/text-generation-webui) as normal.</p>
+ <p align="center">Here is a tutorial on how to install the text-generation-webui UI: [tutorial](https://www.youtube.com/watch?v=lb_lC4XFedU&t).</p>
+ <p align="center">Click the Model tab.</p>
+ <p align="center">Under Download custom model or LoRA, enter RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g.</p>
+ <p align="center">Click Download.</p>
+ <p align="center">Wait until it says the download has finished.</p>
+ <p align="center">Click the Refresh icon next to Model in the top left.</p>
+ <p align="center">In the Model drop-down, choose the model you just downloaded, bertin-gpt-j-6B-alpaca-4bit-128g.</p>
+ <p align="center">If you see an error in the bottom right, ignore it; it's temporary.</p>
+ <p align="center">Fill out the GPTQ parameters on the right: Bits = 4, Groupsize = 128, model_type = gptj.</p>
+ <p align="center">Click Save settings for this model in the top right.</p>
+ <p align="center">Click Reload the Model in the top right.</p>
+ <p align="center">Once it says it's loaded, click the Text Generation tab and enter a prompt.</p>
 
  **Model details**
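
The README only says that the checkpoint "is the result of quantizing to 4 bits using AutoGPTQ." For reference, here is a minimal sketch of how a 4-bit, group-size-128 GPTQ export of the base model could be produced with AutoGPTQ; the calibration text, `desc_act` setting, and output directory below are illustrative assumptions, not the recorded recipe for this repo.

```python
# Hypothetical sketch: quantizing bertin-project/bertin-gpt-j-6B-alpaca to 4 bits
# with AutoGPTQ. Calibration data and exact settings are assumptions, not the
# documented recipe for RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

base_model = "bertin-project/bertin-gpt-j-6B-alpaca"
tokenizer = AutoTokenizer.from_pretrained(base_model, use_fast=True)

# Matches the GPTQ parameters listed in the steps above: 4 bits, group size 128.
quantize_config = BaseQuantizeConfig(bits=4, group_size=128, desc_act=False)

model = AutoGPTQForCausalLM.from_pretrained(base_model, quantize_config)

# A real run would use a representative calibration set; a single toy Spanish
# example is shown here only to make the sketch self-contained.
examples = [tokenizer("La cuantización reduce la precisión de los pesos del modelo.")]
model.quantize(examples)

model.save_quantized("bertin-gpt-j-6B-alpaca-4bit-128g", use_safetensors=True)
```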
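Since the steps above only cover text-generation-webui, here is a minimal sketch of loading the quantized checkpoint directly from Python with AutoGPTQ; the device, `use_safetensors` flag, sampling settings, and prompt format are assumptions (older auto-gptq versions may also need an explicit `model_basename`).

```python
# Minimal sketch: load the 4-bit GPTQ checkpoint with AutoGPTQ outside the webui.
# Device, use_safetensors, and the prompt below are assumptions, not documented values.
import torch
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

repo = "RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g"
tokenizer = AutoTokenizer.from_pretrained(repo, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    repo,
    device="cuda:0",
    use_safetensors=True,  # assumption about how the checkpoint is stored
    use_triton=False,
)

prompt = "Escribe una receta sencilla de tortilla de patatas."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Both snippets assume the auto-gptq package and a CUDA build of PyTorch are installed; the text-generation-webui route described above remains the workflow this README actually documents.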