This is the result of quantizing to 4 bits using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ).
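To give a rough sense of why 4-bit quantization matters for a ~6B-parameter model like this one, here is a back-of-envelope weight-memory estimate (illustrative figures only, not taken from the model card):

```python
# Back-of-envelope weight-memory estimate for a ~6B-parameter model.
# Illustrative only: it counts weights alone and ignores the extra
# per-group scales/zero-points that GPTQ checkpoints also store.
params = 6_000_000_000

fp16_gib = params * 2 / 1024**3    # fp16: 2 bytes per weight
int4_gib = params / 2 / 1024**3    # 4-bit: 0.5 bytes per weight

print(f"fp16: {fp16_gib:.2f} GiB, 4-bit: {int4_gib:.2f} GiB")
```

4-bit storage is a quarter of fp16, which is what brings a 6B model within reach of consumer GPUs.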
**How to easily download and use this model in text-generation-webui** (tutorial by [TheBloke](https://huggingface.co/TheBloke))

1. Open [text-generation-webui](https://github.com/oobabooga/text-generation-webui) as normal. If you need to install it first, here is a video walkthrough: [tutorial](https://www.youtube.com/watch?v=lb_lC4XFedU&t).
2. Click the **Model** tab.
3. Under **Download custom model or LoRA**, enter `RedXeol/bertin-gpt-j-6B-alpaca-4bit-128g`.
4. Click **Download**.
5. Wait until it says the download has finished.
6. Click the **Refresh** icon next to **Model** in the top left.
7. In the **Model** drop-down, choose the model you just downloaded: `bertin-gpt-j-6B-alpaca-4bit-128g`.
8. If you see an error in the bottom right, ignore it; it is temporary.
9. Fill out the **GPTQ parameters** on the right: **Bits** = 4, **Groupsize** = 128, **model_type** = `gptj`.
10. Click **Save settings for this model** in the top right.
11. Click **Reload the Model** in the top right.
12. Once it says the model is loaded, click the **Text Generation** tab and enter a prompt.
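The GPTQ parameters in the steps above map directly to how the checkpoint stores weights: each weight is a 4-bit integer, and every group of 128 consecutive weights shares one scale. A minimal sketch of that storage scheme (illustrative only; the actual GPTQ algorithm additionally minimizes layer output error when choosing the integers):

```python
# Minimal sketch of group-wise 4-bit quantization: each group of
# weights maps to integers 0..15 with one shared scale and minimum.
# Illustrative only -- real groups hold 128 weights (Groupsize = 128),
# and real GPTQ picks the integers to minimize layer output error.

def quantize_group(weights, bits=4):
    levels = 2 ** bits - 1                     # 15 steps for 4-bit
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / levels or 1.0    # avoid 0 for constant groups
    q = [round((w - w_min) / scale) for w in weights]
    return q, scale, w_min

def dequantize_group(q, scale, w_min):
    return [v * scale + w_min for v in q]

group = [0.5, -1.0, 0.25, 1.0]                 # toy group of weights
q, scale, w_min = quantize_group(group)
recon = dequantize_group(q, scale, w_min)
assert all(0 <= v <= 15 for v in q)            # each weight fits in 4 bits
assert all(abs(a - b) <= scale / 2 + 1e-9 for a, b in zip(group, recon))
```

Smaller groups track local weight ranges more closely than one scale per whole tensor, at the cost of storing one extra scale per 128 weights.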
**Model details**