---
base_model: openchat/openchat_3.5
inference: false
license: mit
model_creator: Neboola
model_name: NAI-3.5
model_type: mistral
prompt_template: 'GPT3.5 User: {prompt}<|end_of_turn|>GPT3.5 Assistant: '
quantized_by: NeboolaAI
---
NeboolaAI


Chat & support: Neboola AI Telegram & Announcement

Want to contribute? Neboola's Github page

# NAI-3.5

- Original model: [llama v2 2bit](https://huggingface.co/ikawrakow/llama-v2-2bit-gguf)

## Description

This repo contains GPTQ model files for NAI-3.5.

Multiple GPTQ parameter permutations are provided; see Provided Files below for details of the options provided, their parameters, and the software used to create them.

These files were quantised using hardware kindly provided by [Massed Compute](https://massedcompute.com/).

## Known compatible clients / servers

These GPTQ models are known to work in the following inference servers/webuis:

- [text-generation-webui](https://github.com/oobabooga/text-generation-webui)
- [KoboldAI United](https://github.com/henk717/koboldai)
- [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui)
- [Hugging Face Text Generation Inference (TGI)](https://github.com/huggingface/text-generation-inference)

This may not be a complete list.

### From the command line

I recommend using the `huggingface-hub` Python library:

```shell
pip3 install huggingface-hub
```

To download the `main` branch to a folder called `NAI-3.5`:

```shell
mkdir NAI-3.5
huggingface-cli download neboolaai/NAI-3.5 --local-dir NAI-3.5 --local-dir-use-symlinks False
```

To download from a different branch, add the `--revision` parameter:

```shell
mkdir NAI-3.5
huggingface-cli download neboolaai/NAI-3.5 --revision gptq-4bit-32g-actorder_True --local-dir NAI-3.5 --local-dir-use-symlinks False
```
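If you would rather stay in Python than shell out to `huggingface-cli`, the same downloads can be done with `snapshot_download` from the `huggingface_hub` library. This is a minimal sketch mirroring the two commands above, assuming a `huggingface_hub` version that still accepts `local_dir_use_symlinks`:

```python
# Sketch: Python equivalents of the `huggingface-cli download` commands above.
# Assumes `huggingface-hub` is installed (pip3 install huggingface-hub).
from huggingface_hub import snapshot_download

# Download the `main` branch into a local folder called NAI-3.5
snapshot_download(
    repo_id="neboolaai/NAI-3.5",
    local_dir="NAI-3.5",
    local_dir_use_symlinks=False,  # store real files in local_dir, not cache symlinks
)

# To fetch a different branch, pass `revision`:
snapshot_download(
    repo_id="neboolaai/NAI-3.5",
    revision="gptq-4bit-32g-actorder_True",
    local_dir="NAI-3.5",
    local_dir_use_symlinks=False,
)
```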
### More advanced huggingface-cli download usage

If you remove the `--local-dir-use-symlinks False` parameter, the files will instead be stored in the central Hugging Face cache directory (the default location on Linux is `~/.cache/huggingface`), and symlinks will be added to the specified `--local-dir`, pointing to their real location in the cache. This allows interrupted downloads to be resumed, and lets you quickly clone the repo to multiple places on disk without triggering a download again. The downside, and the reason I don't list this as the default option, is that the files are then hidden away in a cache folder, making it harder to see where your disk space is being used and to clear it up if/when you want to remove a downloaded model.

The cache location can be changed with the `HF_HOME` environment variable, and/or the `--cache-dir` parameter to `huggingface-cli`.

For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).

To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:

```shell
pip3 install hf_transfer
```

And set the environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:

```shell
mkdir NAI-3.5
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download neboolaai/NAI-3.5 --local-dir NAI-3.5 --local-dir-use-symlinks False
```

Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
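Once downloaded, the files can be loaded like other GPTQ repos via `transformers` (which requires `optimum` and `auto-gptq` to be installed for GPTQ support). The following is an untested minimal sketch, not an official example; it applies the prompt template from the metadata at the top of this card:

```python
# Sketch: load the GPTQ model with transformers and run one prompt.
# Assumes `transformers`, `optimum` and `auto-gptq` are installed; untested here.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "neboolaai/NAI-3.5"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(
    model_name_or_path,
    device_map="auto",  # place layers on available GPU(s)
    revision="main",    # or e.g. "gptq-4bit-32g-actorder_True"
)

# Prompt template from the model card metadata
prompt = "Tell me about AI"
prompt_template = f"GPT3.5 User: {prompt}<|end_of_turn|>GPT3.5 Assistant: "

input_ids = tokenizer(prompt_template, return_tensors="pt").input_ids.to(model.device)
output = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```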