pseudotensor committed · Commit c2dd818
Parent(s): 1d69b6f

Update README.md

README.md CHANGED
@@ -30,7 +30,19 @@ To use the chatbot with such docs, run:
```bash
python generate.py --base_model=h2oai/h2ogpt-oig-oasst1-512-6.9b --langchain_mode=UserData
```
-using [h2oGPT](https://github.com/h2oai/h2ogpt) .
+using [h2oGPT](https://github.com/h2oai/h2ogpt). Any other instruct-tuned base model can be used, including non-h2oGPT ones, as long as enough GPU memory is available for the given model size, or one can choose 8-bit generation.

See also LangChain example use with [test_langchain_simple.py](https://github.com/h2oai/h2ogpt/blob/4637531b928dfa458d708615ebd2cb6454d23064/tests/test_langchain_simple.py)

+If one has obtained all databases (except wiki_full) and unzipped them into the current directory, then one can run the h2oGPT Chatbot like:
+```bash
+python generate.py --base_model=h2oai/h2ogpt-oasst1-512-12b --load_8bit=True --langchain_mode=UserData --visible_langchain_modes="['UserData', 'wiki', 'MyData', 'github h2oGPT', 'DriverlessAI docs']"
+```
+which uses the 12B model in 8-bit mode so that it fits onto a single 24GB GPU.
+
+If one has obtained all databases (including wiki_full) and unzipped them into the current directory, then one can run the h2oGPT Chatbot like:
+```bash
+python generate.py --base_model=h2oai/h2ogpt-oasst1-512-12b --load_8bit=True --langchain_mode=wiki_full --visible_langchain_modes="['UserData', 'wiki_full', 'MyData', 'github h2oGPT', 'DriverlessAI docs']"
+```
+which will default to wiki_full for QA against the full Wikipedia.
+
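
The added note above says any other instruct-tuned base model can be substituted when GPU memory allows, optionally with 8-bit loading. A minimal sketch of that, using only flags already shown in this change; the model id `databricks/dolly-v2-12b` is an illustrative assumption, not something this commit tests:

```bash
# Illustrative only: substitute any instruct-tuned model id that fits in GPU memory;
# --load_8bit=True roughly halves the weight footprint relative to fp16.
python generate.py --base_model=databricks/dolly-v2-12b --load_8bit=True --langchain_mode=UserData
```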
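As a rough sanity check on the "12B in 8-bit fits a 24GB GPU" claim, here is a weights-only approximation that ignores activations, KV cache, and quantization overhead:

```bash
# 12e9 params * 2 bytes (fp16) ~= 24 GB of weights  -> tight on a 24GB GPU
# 12e9 params * 1 byte  (int8) ~= 12 GB of weights  -> leaves headroom on a 24GB GPU
python -c "print(12e9 * 2 / 1e9, 'GB fp16 weights'); print(12e9 * 1 / 1e9, 'GB int8 weights')"
```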