Datasets:
title | score | selftext | created | url | author | domain | edited | gilded | gildings | id | locked | media | name | permalink | spoiler | stickied | thumbnail | ups | preview
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
r/LocalLLaMA Lounge | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11o1kkq | false | null | t3_11o1kkq | /r/LocalLLaMA/comments/11o1kkq/rlocalllama_lounge/ | false | false | default | 1 | null |
||
How to install LLaMA: 8-bit and 4-bit | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11o5co3 | false | null | t3_11o5co3 | /r/LocalLLaMA/comments/11o5co3/how_to_install_llama_8bit_and_4bit/ | false | false | default | 1 | null |
||
LLaMA 13B challenge questions (idea from u/enn_nafnlaus) | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11o5tg3 | false | null | t3_11o5tg3 | /r/LocalLLaMA/comments/11o5tg3/llama_13b_challenge_questions_idea_from_uenn/ | false | false | default | 1 | null |
||
How to install LLaMA: 8-bit and 4-bit | 264 | # FAQ
**Q:** What is LLaMA?
**A:** LLaMA (Large Language Model Meta AI) is a foundational large language model designed primarily for researchers. Like other large language models, LLaMA works by taking a sequence of words as input and predicting the next word, recursively generating text.
**Q:** Is LLaMA like ChatGPT?
**A:** No. LLaMA models are not finetuned for question answering. They should be prompted so that the expected answer is the natural continuation of the prompt. Nonetheless, it is possible to chat with LLaMA models in a way similar to ChatGPT, though not at nearly the same quality.
**Q:** What languages does LLaMA support?
**A:** Primarily English, but it should have limited capabilities for the following languages: bg, ca, cs, da, de, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk.
**Q:** I've heard about Alpaca. What is that?
**A:** That refers to the [Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) project, an effort to build an instruction-following LLaMA model from the standard 7B LLaMA model. It has been shown to produce results similar to OpenAI's text-davinci-003. This guide contains instructions on trying out Alpaca using a few different methods.
# 8-bit Model Requirements
|Model|VRAM Used|Minimum Total VRAM|Card examples|RAM/Swap to Load\*|
|:-|:-|:-|:-|:-|
|LLaMA-7B|9.2GB|10GB|3060 12GB, 3080 10GB|24 GB|
|LLaMA-13B|16.3GB|20GB|3090, 3090 Ti, 4090|32GB|
|LLaMA-30B|36GB|40GB|A6000 48GB, A100 40GB|64 GB|
|LLaMA-65B|74GB|80GB|A100 80GB|128 GB|
^(\*)*^(System RAM, not VRAM, required to load the model, in addition to having enough VRAM. NOT required to RUN the model. You can use swap space if you do not have enough RAM.)*
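If you run short on system RAM when loading a model, swap can be added on Linux with the usual commands. A minimal sketch, assuming a 32 GB swapfile at /swapfile (size and path are placeholders; pick what fits your disk and the model you are loading):
sudo fallocate -l 32G /swapfile   # reserve the file
sudo chmod 600 /swapfile          # restrict permissions as swapon requires
sudo mkswap /swapfile             # format it as swap
sudo swapon /swapfile             # enable it for the current session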
# 4-bit Model Requirements
|Model|Minimum Total VRAM|Card examples|RAM/Swap to Load|
|:-|:-|:-|:-|
|LLaMA-7B|6GB|GTX 1660, 2060, AMD 5700 XT, RTX 3050, 3060|6 GB|
|LLaMA-13B|10GB|AMD 6900 XT, RTX 2060 12GB, 3060 12GB, 3080, A2000|12 GB|
|LLaMA-30B|20GB|RTX 3080 20GB, A4500, A5000, 3090, 4090, 6000, Tesla V100|24 GB|
|LLaMA-65B|40GB|A100 40GB, 2x3090, 2x4090, A40, RTX A6000, 8000|48 GB|
# Installing Windows Subsystem for Linux (WSL)
**>>WSL installation is optional.<< If you do not want to install this, you can skip ahead to the Windows-specific instructions below for 8-bit or 4-bit. This section requires an NVIDIA GPU.**
On Windows, you may receive better performance when using WSL. To install WSL using the instructions below, first ensure you are running Windows 10 version 2004 or higher (Build 19041 or higher) or Windows 11. To check, type info in the search box on your taskbar and then select System Information. Alternatively, press Windows+R, type msinfo32 into the "Open" field, and press Enter. Look at "Version" to see which version you are running.
**Instructions:**
1. Open Powershell in administrator mode
2. Enter the following command, then restart your machine: wsl --install
This command will enable WSL, download and install the latest Linux kernel, set WSL2 as the default, and download and install the Ubuntu Linux distribution.
3. After restart, Windows will finish installing Ubuntu. You'll be asked to create a username and password for Ubuntu. It has no bearing on your Windows username.
4. Windows will not automatically update or upgrade Ubuntu. Update and upgrade your packages by running the following command in the Ubuntu terminal (search for Ubuntu in the Start menu or taskbar and open the app): sudo apt update && sudo apt upgrade
5. You can now continue by following the Linux setup instructions for LLaMA. **Check the necessary troubleshooting info below to resolve errors**. If you plan on using 4-bit LLaMA with WSL, you will need to install the WSL-Ubuntu CUDA toolkit using the instructions below.
**Extra tips:**
To install conda, run the following inside the Ubuntu environment:
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh
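Once the installer finishes and you open a new shell, you can create and activate the same environment the rest of this guide uses (these mirror the Windows steps further below):
conda create -n textgen python=3.10.9
conda activate textgen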
To find the name of a WSL distribution and uninstall it (afterward, you can create a new virtual machine environment by opening the app again):
wsl -l
wsl --unregister <DistributionName>
To access the web UI from another device on your local network, you will need to configure port forwarding:
netsh interface portproxy add v4tov4 listenaddress=0.0.0.0 listenport=7860 connectaddress=localhost connectport=7860
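To check or undo that rule later, the matching portproxy subcommands look like this (a sketch; run them from the same elevated PowerShell):
netsh interface portproxy show v4tov4
netsh interface portproxy delete v4tov4 listenaddress=0.0.0.0 listenport=7860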
**Troubleshooting:**
If you will use 4-bit LLaMA with WSL, you must install the WSL-Ubuntu CUDA toolkit, and it must be version 11.7. Unlike the default CUDA toolkit, this one will not overwrite your WSL2 driver. Follow these steps:
sudo apt-key del 7fa2af80
wget https://developer.download.nvidia.com/compute/cuda/repos/wsl-ubuntu/x86_64/cuda-wsl-ubuntu.pin
sudo mv cuda-wsl-ubuntu.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.7.0/local_installers/cuda-repo-wsl-ubuntu-11-7-local_11.7.0-1_amd64.deb
sudo dpkg -i cuda-repo-wsl-ubuntu-11-7-local_11.7.0-1_amd64.deb
sudo cp /var/cuda-repo-wsl-ubuntu-11-7-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda
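To confirm that the 11.7 toolkit is the one your build will pick up (assuming the default install path of /usr/local/cuda) and that the WSL2 driver is still intact, a quick check:
/usr/local/cuda/bin/nvcc --version    # should report release 11.7
nvidia-smi                            # should still show your GPU and driver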
In order to avoid a CUDA error when starting the web UI, you will need to apply the following fix as seen in [this comment](https://github.com/TimDettmers/bitsandbytes/issues/156#issuecomment-1462329713) and issue [\#400](https://github.com/oobabooga/text-generation-webui/issues/400#issuecomment-1474876859):
cd /home/USERNAME/miniconda3/envs/textgen/lib/python3.10/site-packages/bitsandbytes/
cp libbitsandbytes_cuda117.so libbitsandbytes_cpu.so
conda install cudatoolkit
If for some reason installing the WSL-Ubuntu CUDA toolkit does not work for you, [this alternate fix](https://github.com/oobabooga/text-generation-webui/issues/416#issuecomment-1475078571) should resolve any errors relating to that.
You may also need to create symbolic links to get everything working correctly. Do not do this if the above commands resolve your errors. To create the symlinks, follow the instructions [here](https://github.com/microsoft/WSL/issues/5548#issuecomment-1292858815) then restart your machine.
# Installing 8-bit LLaMA with text-generation-webui
Linux:
1. Follow the [instructions here](https://github.com/oobabooga/text-generation-webui) under "Installation"
2. Download the desired Hugging Face converted model for LLaMA [here](https://huggingface.co/decapoda-research)
3. Copy the entire model folder, for example llama-13b-hf, into text-generation-webui\\models
4. Run the following command in your conda environment: python server.py --model llama-13b-hf --load-in-8bit
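As a quick recap, steps 2-4 condensed into shell commands (a sketch; it assumes git-lfs is installed, you are inside the text-generation-webui folder, and the conda environment from step 1 is active):
cd models
git lfs install
git clone https://huggingface.co/decapoda-research/llama-13b-hf
cd ..
python server.py --model llama-13b-hf --load-in-8bit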
Windows:
1. Install [miniconda](https://docs.conda.io/en/latest/miniconda.html)
2. Activate conda via powershell, replacing USERNAME with your username: powershell -ExecutionPolicy ByPass -NoExit -Command "& 'C:\\Users\\USERNAME\\miniconda3\\shell\\condabin\\conda-hook.ps1' ; conda activate 'C:\\Users\\USERNAME\\miniconda3' "
3. Follow the [instructions here](https://github.com/oobabooga/text-generation-webui) under "Installation", starting with the step "Create a new conda environment."
4. Download the desired Hugging Face converted model for LLaMA [here](https://huggingface.co/decapoda-research)
5. Copy the entire model folder, for example llama-13b-hf, into text-generation-webui\\models
6. Download [libbitsandbytes\_cuda116.dll](https://github.com/DeXtmL/bitsandbytes-win-prebuilt) and put it in C:\\Users\\xxx\\miniconda3\\envs\\textgen\\lib\\site-packages\\bitsandbytes\\
7. **In** \\bitsandbytes\\cuda\_setup\\main.py **search for:** `if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None` **and replace with:** `if torch.cuda.is_available(): return 'libbitsandbytes_cuda116.dll', None, None, None, None`
8. **In** \\bitsandbytes\\cuda\_setup\\main.py **search for this** **twice:** `self.lib = ct.cdll.LoadLibrary(binary_path)` **and replace with:** `self.lib = ct.cdll.LoadLibrary(str(binary_path))`
9. Run the following command in your conda environment: python server.py --model llama-13b-hf --load-in-8bit
**Note:** for decapoda-research models, you must **change** "tokenizer\_class": "LLaMATokenizer" **to** "tokenizer\_class": "LlamaTokenizer" **in** text-generation-webui/models/llama-13b-hf/tokenizer\_config.json
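If you prefer to make that edit from the command line, a one-line sketch (GNU sed on Linux/WSL; adjust the folder name to the model you downloaded):
sed -i 's/LLaMATokenizer/LlamaTokenizer/' models/llama-13b-hf/tokenizer_config.json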
# Installing 4-bit LLaMA with text-generation-webui
Linux:
1. Follow the [instructions here](https://github.com/oobabooga/text-generation-webui) under "Installation"
2. Continue with the 4-bit specific [instructions here](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode)
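For reference, the gist of those 4-bit instructions mirrors the Windows steps below. Run from inside the text-generation-webui folder with your conda environment active (a sketch; treat the wiki link above as authoritative):
mkdir repositories
cd repositories
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa --branch cuda --single-branch
cd GPTQ-for-LLaMa
pip install ninja
python setup_cuda.py install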
Windows (Step-by-Step):
1. Install Build Tools for Visual Studio 2019 (has to be 2019) [here](https://learn.microsoft.com/en-us/visualstudio/releases/2019/history#release-dates-and-build-numbers). Check "Desktop development with C++" when installing.
2. Install [miniconda](https://docs.conda.io/en/latest/miniconda.html)
3. Install Git from the [website](https://git-scm.com/download/win) or simply with cmd prompt: winget install --id Git.Git -e --source winget
4. Open "x64 native tools command prompt" as admin
5. Activate conda, replacing USERNAME with your username: powershell -ExecutionPolicy ByPass -NoExit -Command "& 'C:\\Users\\USERNAME\\miniconda3\\shell\\condabin\\conda-hook.ps1' ; conda activate 'C:\\Users\\USERNAME\\miniconda3' "
6. conda create -n textgen python=3.10.9
7. conda activate textgen
8. conda install cuda -c nvidia/label/cuda-11.3.0 -c nvidia/label/cuda-11.3.1
9. git clone [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
10. cd text-generation-webui
11. pip install -r requirements.txt
12. pip install torch==1.12+cu113 -f [https://download.pytorch.org/whl/torch\_stable.html](https://download.pytorch.org/whl/torch_stable.html)
13. mkdir repositories
14. cd repositories
15. git clone [https://github.com/qwopqwop200/GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) --branch cuda --single-branch
16. cd GPTQ-for-LLaMa
17. git reset --hard c589c5456cc1c9e96065a5d285f8e3fac2cdb0fd
18. pip install ninja
19. $env:DISTUTILS\_USE\_SDK=1
20. python setup\_cuda.py install
21. Download the 4-bit model of your choice and place it directly into your models folder. For instance, models/llama-13b-4bit-128g. The links for the updated 4-bit models are listed below in the models directory section. If you will use 7B 4-bit, download **without group-size**. For 13B 4-bit and up, download **with group-size**.
22. Run the following command in your conda environment.
*Without group-size:* python server.py --model llama-7b-4bit --wbits 4 --no-stream
*With group-size:* python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128 --no-stream
**Note:** If you get the error "CUDA Setup failed despite GPU being available", do the patch in steps 6-8 of the 8-bit instructions above.
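Before downloading models, you can confirm that the CUDA kernel from step 20 actually built by importing it from the same environment (quant_cuda is assumed here to be the extension name produced by setup_cuda.py):
python -c "import quant_cuda; print('GPTQ CUDA extension loaded')"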
# Using Alpaca LoRA with text-generation-webui
**Make sure to follow the installation instructions for 8-bit LLaMA before using this.**
This is to reproduce the [Stanford Alpaca](https://crfm.stanford.edu/2023/03/13/alpaca.html) results using low-rank adaptation (LoRA). The LoRA model produces outputs comparable to the Stanford Alpaca model, which itself can produce results of similar quality to text-davinci-003. You will need the standard [7B LLaMA model](https://huggingface.co/decapoda-research/llama-7b-hf), or the standard model of your choice for the other LoRAs, downloaded and placed inside the models folder before following these steps.
**Instructions:**
1. Navigate to the text-generation-webui folder
2. Ensure it's up to date with: git pull [https://github.com/oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
3. Re-install the requirements *if needed*: pip install -r requirements.txt
4. Navigate to the loras folder and download the LoRA with: git lfs install && git clone [https://huggingface.co/tloen/alpaca-lora-7b](https://huggingface.co/tloen/alpaca-lora-7b)
5. Load LLaMA-7B in **8-bit mode only**: python server.py --model llama-7b-hf --load-in-8bit
6. Select the LoRA in the Parameters tab
**Notes:**
For this LoRA and any other Alpaca LoRA, the prompt must be formatted like this (the starting line must be below "Response"):
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
Tell me about alpacas.
### Response:
<your cursor should be on this line>
Message from the creator:
>We're continually fixing bugs and conducting training runs, and the weights on the Hugging Face Hub are being updated accordingly. In particular, those facing issues with response lengths should make sure that they have the latest version of the weights and code.
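To avoid retyping the template for every question, here is a small helper script sketch (hypothetical file name make_prompt.sh; the template text is exactly the one above, and the instruction is passed as the first argument):
#!/usr/bin/env bash
cat <<EOF
Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
$1
### Response:
EOF
Run it as ./make_prompt.sh "Tell me about alpacas." and paste the output into the text box, then generate from the line below "### Response:".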
# Tips and Output Settings in text-generation-webui
* For a ChatGPT/CharacterAI style chat, pass --chat to server.py. For more info on flags, check [here](https://github.com/oobabooga/text-generation-webui#starting-the-web-ui).
* Character cards can be used to guide responses toward a desired output and improve results.
* For a more creative chat, use: temp 0.72, rep pen 1.1, top\_k 0, and top\_p 0.73
* For a more precise chat, use temp 0.7, repetition\_penalty 1.1764705882352942 (1/0.85), top\_k 40, and top\_p 0.1
* The [Getting Started](https://www.reddit.com/r/LocalLLaMA/wiki/index) page of the wiki has a few extra tips on prompts and parameters.
For a quick reference, here is an example chat with LLaMA 13B:
https://preview.redd.it/yhyuzgg6d8oa1.png?width=634&format=png&auto=webp&v=enabled&s=e703895ad863c57239d9d4602f105558ce6f1a8f
# Other ways to run LLaMA
**If you have the hardware, it is recommended to use text-generation-webui for the best user experience.**
[llama.cpp](https://github.com/ggerganov/llama.cpp): a plain C/C++ implementation that runs on the CPU. There is full documentation on the GitHub page for getting started with it.
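Getting started with it looks roughly like this (a sketch; the flags and the quantized model path follow the repo's README at the time and may have changed since):
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
./main -m ./models/7B/ggml-model-q4_0.bin -n 128 -p "Building a website can be done in 10 simple steps:"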
[~~alpaca.cpp~~](https://github.com/antimatter15/alpaca.cpp)~~: a quick and easy way to try a reproduction of the Stanford Alpaca model. The GitHub page explains the setup process.~~ With the introduction of Alpaca into llama.cpp, alpaca.cpp is deprecated and is no longer being updated. Use llama.cpp instead.
# Models Directory
For all relevant downloads, including LoRAs for other languages, check the newly added [Models wiki page here](https://www.reddit.com/r/LocalLLaMA/wiki/models/).
**Standard** LLaMA models
7B: [https://huggingface.co/decapoda-research/llama-7b-hf](https://huggingface.co/decapoda-research/llama-7b-hf)
13B: [https://huggingface.co/decapoda-research/llama-13b-hf](https://huggingface.co/decapoda-research/llama-13b-hf)
30B: [https://huggingface.co/decapoda-research/llama-30b-hf](https://huggingface.co/decapoda-research/llama-30b-hf)
65B: [https://huggingface.co/decapoda-research/llama-65b-hf](https://huggingface.co/decapoda-research/llama-65b-hf)
7B-65B 4-bit without group-size (torrent file): [https://github.com/oobabooga/text-generation-webui/files/11069779/LLaMA-HF-4bit.zip](https://github.com/oobabooga/text-generation-webui/files/11069779/LLaMA-HF-4bit.zip)
7B-65B 4-bit with group-size (torrent file): [https://github.com/oobabooga/text-generation-webui/files/11070361/LLaMA-HF-4bit-128g.zip](https://github.com/oobabooga/text-generation-webui/files/11070361/LLaMA-HF-4bit-128g.zip)
**Finetuned** LLaMA models and LoRAs
[Alpaca LoRA 7B](https://huggingface.co/tloen/alpaca-lora-7b)
[Alpaca LoRA 13B](https://huggingface.co/chansung/alpaca-lora-13b)
[Alpaca LoRA 30B](https://huggingface.co/chansung/alpaca-lora-30b)
[Alpaca Native 4-bit (7B)](https://huggingface.co/ozcur/alpaca-native-4bit)
[Alpaca Native (7B)](https://huggingface.co/chavinlo/alpaca-native)
[Alpaca Native (13B)](https://huggingface.co/chavinlo/alpaca-13b)
[gpt4all LoRA 7B](https://huggingface.co/nomic-ai/gpt4all-lora)
[gpt4-x-alpaca 13B](https://huggingface.co/chavinlo/gpt4-x-alpaca) (Current top choice)
# Resources used for this guide
[GitHub - oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui)
[Support for LLaMA models · Issue #147](https://github.com/oobabooga/text-generation-webui/issues/147)
[Hugging Face Models](https://huggingface.co/decapoda-research)
[Alpaca-LoRA](https://github.com/tloen/alpaca-lora)
[Detailed parameters](https://huggingface.co/docs/api-inference/detailed_parameters)
[GPTQ for LLaMA](https://github.com/qwopqwop200/GPTQ-for-LLaMa)
[Rentry](https://rentry.org/llama-tard-v2)
[NVIDIA GPU Accelerated Computing on WSL 2](https://docs.nvidia.com/cuda/wsl-user-guide/index.html)
[65b\_sample.txt](https://gist.github.com/shawwn/03ee2422c41ef253cbef61d8c317d9ab) | null | https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/ | Technical_Leather949 | self.LocalLLaMA | 2023-04-06T07:22:43 | 1 | {'gid_2': 1} | 11o6o3f | false | null | t3_11o6o3f | /r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/ | false | true | 264 | null |
|
Testing LLaMA 13B with a few challenge questions | 41 | null | Technical_Leather949 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11o7ja0 | false | null | t3_11o7ja0 | /r/LocalLLaMA/comments/11o7ja0/testing_llama_13b_with_a_few_challenge_questions/ | false | false | 41 | {'enabled': True, 'images': [{'id': '6vH8cvu1fEKlqUHSKQTkL6jM_knj8WBkLk7zwXg0rlg', 'resolutions': [{'height': 166, 'url': 'https://preview.redd.it/a9ydvqzvm0na1.png?width=108&crop=smart&auto=webp&v=enabled&s=0c29c8ec4d161eff7a7bb71436461e8083c0066a', 'width': 108}, {'height': 333, 'url': 'https://preview.redd.it/a9ydvqzvm0na1.png?width=216&crop=smart&auto=webp&v=enabled&s=b17e83ac3d91c67eb8778b2aa880e1c9762adfc6', 'width': 216}, {'height': 493, 'url': 'https://preview.redd.it/a9ydvqzvm0na1.png?width=320&crop=smart&auto=webp&v=enabled&s=0bbbbb34885f664e9037b214fd0ad266ea82cddc', 'width': 320}], 'source': {'height': 888, 'url': 'https://preview.redd.it/a9ydvqzvm0na1.png?auto=webp&v=enabled&s=8ad251f34b369e3af0efb74c6c86e7239845d6d8', 'width': 576}, 'variants': {}}]} |
|||
Repository of LLaMA prompts | 30 | Hello! I made a repo that is meant to serve as a sort of database for prompts that work with LLaMA.
Check it out [here](https://github.com/devbrones/llama-prompts)! | null | https://www.reddit.com/r/LocalLLaMA/comments/11oqbvx/repository_of_llama_prompts/ | devbrones | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11oqbvx | false | null | t3_11oqbvx | /r/LocalLLaMA/comments/11oqbvx/repository_of_llama_prompts/ | false | false | self | 30 | {'enabled': False, 'images': [{'id': 'wz33V2rybX6GFfjLvmAVIUZQSr4FRXHrTzCSFV1S8fg', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/JZrf1KXWeKhSahcTFwu8chK6LimuADW3FdLZqQ2lDD4.jpg?width=108&crop=smart&auto=webp&v=enabled&s=843d4e924cd4fd8c39d119e9386122664ad1ba5d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/JZrf1KXWeKhSahcTFwu8chK6LimuADW3FdLZqQ2lDD4.jpg?width=216&crop=smart&auto=webp&v=enabled&s=c764c16a7516af94d23ff781a51edd321530a45a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/JZrf1KXWeKhSahcTFwu8chK6LimuADW3FdLZqQ2lDD4.jpg?width=320&crop=smart&auto=webp&v=enabled&s=8e6e508c4eba0d959c87c9e0ecc903e347ff2032', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/JZrf1KXWeKhSahcTFwu8chK6LimuADW3FdLZqQ2lDD4.jpg?width=640&crop=smart&auto=webp&v=enabled&s=1b9a38470f60fbc615ab8799fb6bffba05ca8be9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/JZrf1KXWeKhSahcTFwu8chK6LimuADW3FdLZqQ2lDD4.jpg?width=960&crop=smart&auto=webp&v=enabled&s=3a1afa9485330870ff206891d38e90b180e17c24', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/JZrf1KXWeKhSahcTFwu8chK6LimuADW3FdLZqQ2lDD4.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=409568b9d9674076225db81d2d5da9690e27620e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/JZrf1KXWeKhSahcTFwu8chK6LimuADW3FdLZqQ2lDD4.jpg?auto=webp&v=enabled&s=2b0d649d961d1e6547af06805c6c13c67fa9a410', 'width': 1200}, 'variants': {}}]} |
Oobabooga UI, Windows 11, does someone know what the issue is? | 7 | null | curtwagner1984 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11pkp1u | false | null | t3_11pkp1u | /r/LocalLLaMA/comments/11pkp1u/oobabooga_ui_windows_11_does_someone_know_what/ | false | false | 7 | {'enabled': True, 'images': [{'id': '2x06JXd_1qh9hCGIy5PSmqa-0TGQ-zadPQG7nB0X5Tw', 'resolutions': [{'height': 47, 'url': 'https://preview.redd.it/qahn4s8bicna1.png?width=108&crop=smart&auto=webp&v=enabled&s=436cd2903d04b6e3b6bd2f3afefeb24bc3d6a76d', 'width': 108}, {'height': 94, 'url': 'https://preview.redd.it/qahn4s8bicna1.png?width=216&crop=smart&auto=webp&v=enabled&s=d259811d36ce6572bcc970bf88ac5feda34692f2', 'width': 216}, {'height': 140, 'url': 'https://preview.redd.it/qahn4s8bicna1.png?width=320&crop=smart&auto=webp&v=enabled&s=9f8d0195be8bbc3716daa9221822b87532f73acb', 'width': 320}, {'height': 280, 'url': 'https://preview.redd.it/qahn4s8bicna1.png?width=640&crop=smart&auto=webp&v=enabled&s=efc438982b3dad6c0c155d78163e396a32c94549', 'width': 640}, {'height': 421, 'url': 'https://preview.redd.it/qahn4s8bicna1.png?width=960&crop=smart&auto=webp&v=enabled&s=0f580d5ff43d2554edb5e1404f6a1615ea6c7c2e', 'width': 960}, {'height': 474, 'url': 'https://preview.redd.it/qahn4s8bicna1.png?width=1080&crop=smart&auto=webp&v=enabled&s=89294fb70f9a0e03292e44135c650554f7f22c33', 'width': 1080}], 'source': {'height': 572, 'url': 'https://preview.redd.it/qahn4s8bicna1.png?auto=webp&v=enabled&s=2b897d95b15074b461d3aaed0e8ae8ea900e6687', 'width': 1303}, 'variants': {}}]} |
|||
Does anyone have a download for the 3-bit quantized LLaMA models? | 9 | I'm specifically looking for the 3-bit 13B model, as I keep failing to make it myself. | null | https://www.reddit.com/r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/ | triigerhappy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11pw62f | false | null | t3_11pw62f | /r/LocalLLaMA/comments/11pw62f/does_anyone_have_a_download_for_the_3bit/ | false | false | self | 9 | null |
Anyone with more than 2 GPUs? | 6 | If so, how did you make it work? Using 2-slot GPUs? Or some creative DIY case? | null | https://www.reddit.com/r/LocalLLaMA/comments/11qk5j9/anyone_with_more_than_2_gpus/ | Zyj | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11qk5j9 | false | null | t3_11qk5j9 | /r/LocalLLaMA/comments/11qk5j9/anyone_with_more_than_2_gpus/ | false | false | self | 6 | null |
Reddit is asking me opinion about this sub? | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11qkyh1 | false | null | t3_11qkyh1 | /r/LocalLLaMA/comments/11qkyh1/reddit_is_asking_me_opinion_about_this_sub/ | false | false | default | 1 | null |
||
Stanford Alpaca: 7B LLaMA instruction-following model that performs similar to text-davinci-003. Demo and finetuning data available now. | 34 | Blog post with all the details: [https://crfm.stanford.edu/2023/03/13/alpaca.html](https://crfm.stanford.edu/2023/03/13/alpaca.html)
**TL;DR:** Authors created a finetuned model from LLaMA 7B that performs similar to OpenAI's text-davinci-003. The data is available right now to download, and the data generation process and hyperparameters are provided. The authors say they intend to release the model weights and training code in the future. Data cost was less than $500. Training cost is less than $100. Training took 3 hours on 8 80GB A100s.
[Github page](https://github.com/tatsu-lab/stanford_alpaca)
[Demo](https://crfm.stanford.edu/alpaca/) | null | https://www.reddit.com/r/LocalLLaMA/comments/11qng27/stanford_alpaca_7b_llama_instructionfollowing/ | Technical_Leather949 | self.LocalLLaMA | 2023-03-13T22:58:33 | 0 | {} | 11qng27 | false | null | t3_11qng27 | /r/LocalLLaMA/comments/11qng27/stanford_alpaca_7b_llama_instructionfollowing/ | false | false | self | 34 | null |
Int-4 LLaMa is not enough - Int-3 and beyond [Additional 35% RAM savings will run 7B in 4GB and 65B in 24GB] | 23 | null | https://nolanoorg.substack.com/p/int-4-llama-is-not-enough-int-3-and | markschmidty | nolanoorg.substack.com | 1970-01-01T00:00:00 | 0 | {} | 11qv69h | false | null | t3_11qv69h | /r/LocalLLaMA/comments/11qv69h/int4_llama_is_not_enough_int3_and_beyond/ | false | false | 23 | {'enabled': False, 'images': [{'id': 'QXYrdc3Z8hUzOXOz9OE0jNSm-qaxJRtXsqta4KTjaSU', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/pOF8sXByEQ7eA8VZ3qZkTu6U3HEMpfNQFVAzITfV0dY.jpg?width=108&crop=smart&auto=webp&v=enabled&s=7f4fb71a88691a7ad3cf17279d66c3a52e8ec60f', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/pOF8sXByEQ7eA8VZ3qZkTu6U3HEMpfNQFVAzITfV0dY.jpg?width=216&crop=smart&auto=webp&v=enabled&s=b190451d7cd1ee56125e86d701af031c2ee6001b', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/pOF8sXByEQ7eA8VZ3qZkTu6U3HEMpfNQFVAzITfV0dY.jpg?width=320&crop=smart&auto=webp&v=enabled&s=e3750c4f7a38518167fce4bdebb7a9d50973d12b', 'width': 320}, {'height': 386, 'url': 'https://external-preview.redd.it/pOF8sXByEQ7eA8VZ3qZkTu6U3HEMpfNQFVAzITfV0dY.jpg?width=640&crop=smart&auto=webp&v=enabled&s=0f242cdf10f94df4c1cced66dfc89e13a98e2a92', 'width': 640}], 'source': {'height': 393, 'url': 'https://external-preview.redd.it/pOF8sXByEQ7eA8VZ3qZkTu6U3HEMpfNQFVAzITfV0dY.jpg?auto=webp&v=enabled&s=389add1093141c852848829d2d2d12da618a6a5b', 'width': 650}, 'variants': {}}]} |
||
You might not need the minimum VRAM. | 11 | I have a 2080 with 8GB of VRAM, yet I was able to get the 13B parameter LLaMA model working (using 4 bits) despite the guide saying I would need a minimum of 12GB of VRAM. The only issue I've come across so far is that it usually doesn't generate tokens if the input is too long (though I'm not sure if that's because of a lack of VRAM or because I'm doing something else wrong). Below shows how quickly the tokens are generated when it does work.
To create a public link, set `share=True` in `launch()`.
Output generated in 16.83 seconds (11.88 tokens/s, 200 tokens)
Output generated in 22.47 seconds (8.90 tokens/s, 200 tokens)
Output generated in 28.72 seconds (6.96 tokens/s, 200 tokens)
Output generated in 22.05 seconds (9.07 tokens/s, 200 tokens)
Output generated in 26.72 seconds (7.48 tokens/s, 200 tokens)
Output generated in 28.97 seconds (6.90 tokens/s, 200 tokens)
Output generated in 18.29 seconds (0.00 tokens/s, 0 tokens)
Output generated in 27.17 seconds (7.36 tokens/s, 200 tokens)
Output generated in 19.78 seconds (6.22 tokens/s, 123 tokens)
Output generated in 6.16 seconds (3.90 tokens/s, 24 tokens)
Output generated in 29.73 seconds (6.73 tokens/s, 200 tokens)
Output generated in 36.13 seconds (0.00 tokens/s, 0 tokens)
Output generated in 34.48 seconds (0.00 tokens/s, 0 tokens)
Output generated in 33.83 seconds (0.00 tokens/s, 0 tokens)
Output generated in 16.17 seconds (9.65 tokens/s, 156 tokens)
Output generated in 25.17 seconds (0.00 tokens/s, 0 tokens)
Output generated in 35.75 seconds (0.00 tokens/s, 0 tokens)
Output generated in 34.42 seconds (0.00 tokens/s, 0 tokens)
Output generated in 36.10 seconds (0.00 tokens/s, 0 tokens)
Output generated in 22.54 seconds (0.00 tokens/s, 0 tokens)
Output generated in 14.82 seconds (9.11 tokens/s, 135 tokens)
Output generated in 27.31 seconds (3.59 tokens/s, 98 tokens)
Output generated in 43.95 seconds (0.00 tokens/s, 0 tokens)
Output generated in 34.02 seconds (0.00 tokens/s, 0 tokens) | null | https://www.reddit.com/r/LocalLLaMA/comments/11r6mdm/you_might_not_need_the_minimum_vram/ | qrayons | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11r6mdm | false | null | t3_11r6mdm | /r/LocalLLaMA/comments/11r6mdm/you_might_not_need_the_minimum_vram/ | false | false | self | 11 | null |
Any wish to implement llama.cpp (llama with CPU only) ? | 16 | Hello,
[https://github.com/ggerganov/llama.cpp](https://github.com/ggerganov/llama.cpp)
I wanted to know if someone would be willing to integrate llama.cpp into [oobabooga's webui](https://github.com/oobabooga/text-generation-webui).
With this implementation, we would be able to run the 4-bit version of LLaMA 30B with just 20 GB of RAM (no GPU required), and only 4 GB of RAM would be needed for the 7B (4-bit) model. Combining oobabooga's repository with ggerganov's would give us the best of both worlds.
If anyone is wondering what speed a CPU-only setup can reach, these are the averages I got on my Intel Core i7 10700K:
160 ms/token -> 7B Model
350 ms/token -> 13B Model
760 ms/token -> 30B Model | null | https://www.reddit.com/r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/ | Wonderful_Ad_5134 | self.LocalLLaMA | 2023-03-14T16:21:43 | 0 | {} | 11rakcj | false | null | t3_11rakcj | /r/LocalLLaMA/comments/11rakcj/any_wish_to_implement_llamacpp_llama_with_cpu_only/ | false | false | self | 16 | {'enabled': False, 'images': [{'id': 'kEywAUMWIxa6hGC1FiQkA_NgIDrf2VQmj3hN8v8tA3c', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/cWYLgjNzEarJtCKV9ZTrm8B9gKiXyv3IFz3h_JOwONo.jpg?width=108&crop=smart&auto=webp&v=enabled&s=86a076372087a28e253de3948e647033beee2c45', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/cWYLgjNzEarJtCKV9ZTrm8B9gKiXyv3IFz3h_JOwONo.jpg?width=216&crop=smart&auto=webp&v=enabled&s=79306f6c9f09965e8b399fa27349f372388aef6c', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/cWYLgjNzEarJtCKV9ZTrm8B9gKiXyv3IFz3h_JOwONo.jpg?width=320&crop=smart&auto=webp&v=enabled&s=d629296e79fdff533bfb009689e72d46e859b02f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/cWYLgjNzEarJtCKV9ZTrm8B9gKiXyv3IFz3h_JOwONo.jpg?width=640&crop=smart&auto=webp&v=enabled&s=19f8eb209a2a08573529cdfd4057ec541ded881d', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/cWYLgjNzEarJtCKV9ZTrm8B9gKiXyv3IFz3h_JOwONo.jpg?width=960&crop=smart&auto=webp&v=enabled&s=66be638111f52300e5483c5653c2aefd2ac18b3e', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/cWYLgjNzEarJtCKV9ZTrm8B9gKiXyv3IFz3h_JOwONo.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=f7cb7f65d1404bc99f01380124499a204c9b3aab', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/cWYLgjNzEarJtCKV9ZTrm8B9gKiXyv3IFz3h_JOwONo.jpg?auto=webp&v=enabled&s=3e9d84eb6a0e0ee136460f99b3ad52092d5148f2', 'width': 1200}, 'variants': {}}]} |
Just installed LLAMA 7B 8-bit and it does this. It never generates not even single token and the output window is empty. Any idea what it could be? I am using RTX 2060 12GB | 6 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11rcl2h | false | null | t3_11rcl2h | /r/LocalLLaMA/comments/11rcl2h/just_installed_llama_7b_8bit_and_it_does_this_it/ | false | false | default | 6 | null |
||
My attempt at testing the "creativity" of Stanford's Alpaca showed interesting results. | 18 | Blog post here: https://medium.com/sort-of-like-a-tech-diary/speculative-fiction-with-stanford-alpaca-fe99d3c39de7
The hosted demo is now a lot easier to use. I'm amazed at the performance of the 7B model! | null | https://www.reddit.com/r/LocalLLaMA/comments/11redfk/my_attempt_at_testing_the_creativity_of_stanfords/ | _wsgeorge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11redfk | false | null | t3_11redfk | /r/LocalLLaMA/comments/11redfk/my_attempt_at_testing_the_creativity_of_stanfords/ | false | false | self | 18 | {'enabled': False, 'images': [{'id': '54JyBdi_F3dD0tDq8h3veIzoQ4TqQqQeQMrYA2nQi-4', 'resolutions': [{'height': 65, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=108&crop=smart&auto=webp&v=enabled&s=12307c4c42213bf7ffb097993867767b98865689', 'width': 108}, {'height': 130, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=216&crop=smart&auto=webp&v=enabled&s=33209899c92bc805c6ae95df5a7ec4f07f88d919', 'width': 216}, {'height': 193, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=320&crop=smart&auto=webp&v=enabled&s=b084fce48209f177ae77a5b1ddce60e485f3a6fc', 'width': 320}, {'height': 387, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=640&crop=smart&auto=webp&v=enabled&s=c51186c75dad413b9ffa2d9d1b529b50056cd4d1', 'width': 640}, {'height': 580, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?width=960&crop=smart&auto=webp&v=enabled&s=1dab540c28375f596068b24c79cf65877f6dece0', 'width': 960}], 'source': {'height': 605, 'url': 'https://external-preview.redd.it/EQE59u8d9IBQrM9otY29V9xev70LTpjxv00JkHw4l38.jpg?auto=webp&v=enabled&s=a15aaaeeff3a6e783d4a9d0daeefb43009158834', 'width': 1000}, 'variants': {}}]} |
tested some questions for AIs | 17 | googled some questions for turing tests(or just difficult questions for AIs) and asked to LLaMA 30B. interesting answers in the pictures above | null | https://www.reddit.com/gallery/11rkts9 | Capable-Outside-601 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 11rkts9 | false | null | t3_11rkts9 | /r/LocalLLaMA/comments/11rkts9/tested_some_questions_for_ais/ | false | false | 17 | null |
|
Multi-GPU training? | 8 | Any resource showing/discussing Llama finetuning in multi-gpu setup. Till now only 7B finetuning has been discussed everywhere. | null | https://www.reddit.com/r/LocalLLaMA/comments/11rno6h/multigpu_training/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11rno6h | false | null | t3_11rno6h | /r/LocalLLaMA/comments/11rno6h/multigpu_training/ | false | false | self | 8 | null |
Is there online Discord channel i could join in and use LLaMA? | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11rqiv3/is_there_online_discord_channel_i_could_join_in/ | Deformero | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11rqiv3 | false | null | t3_11rqiv3 | /r/LocalLLaMA/comments/11rqiv3/is_there_online_discord_channel_i_could_join_in/ | false | false | default | 2 | null |
Integrate LLaMA into python code | 9 | Is it possible to directly use LLaMA in python or have it serve as an API? Or is there a way to read the output from web ui? I want to combine LLaMA and tortoise-tts to make a speaking chatbot. | null | https://www.reddit.com/r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/ | Tree-Sheep | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11s5f39 | false | null | t3_11s5f39 | /r/LocalLLaMA/comments/11s5f39/integrate_llama_into_python_code/ | false | false | self | 9 | null |
Can someone ELI5 where's the best place to fine tune LLaMA? | 9 | I can run inference for 7b on a spare laptop I have using llama.cpp. But it's obviously not enough for fine-tuning. What are my options for fine tuning a model that I can then download and use locally for inference? I'm somewhat familiar with Colab, but not much else. | null | https://www.reddit.com/r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11s8585 | false | null | t3_11s8585 | /r/LocalLLaMA/comments/11s8585/can_someone_eli5_wheres_the_best_place_to_fine/ | false | false | self | 9 | null |
Alpaca LoRa - finetuning possible on 24GB VRAM now (but LoRA) | 34 | null | https://github.com/tloen/alpaca-lora | Dany0 | github.com | 1970-01-01T00:00:00 | 0 | {} | 11sgewy | false | null | t3_11sgewy | /r/LocalLLaMA/comments/11sgewy/alpaca_lora_finetuning_possible_on_24gb_vram_now/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'Eq_4h44nawr0IhLfmier4mYbLhRWm9VnF05FD-aQ9a0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=2c82720ca98f6c0155b3361cfe82edae1b61e78b', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=919b942ca66087390be4ea4f64699864183a2737', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=6d93a220ab9d9bf42dc8bf9a5efcf44cb0a66e85', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=1c022d4f50ea7400b337c945e5893bf2a24837fe', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=ad29fff8adf04945079ff9364e763891d5c6b2c1', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=c96f44ca3b12c9426ac3826bc167b11fa09f39f2', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/NOO2EMGh4kGIHWz8hav8xXQZuc8ABpVeXgXgn42yUmA.jpg?auto=webp&v=enabled&s=5ae95dbe0a9799ec57f5baefdd9d178d3ebca018', 'width': 1200}, 'variants': {}}]} |
||
Why is 4bit llama slower on a 32GB RAM 3090 windows machine vs. a M1 Pro 32GB ram with llama.cpp? | 20 | The token rate on the 4bit 30B param model is much faster with llama.cpp on a M1 Pro than the 4bit model on a 3090 with ooobabooga, and I know it's using the GPU looking at performance monitor on the windows machine. This really surprised me, since the 3090 overall is much faster with stable diffusion. | null | https://www.reddit.com/r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/ | EveningFunction | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11smshi | false | null | t3_11smshi | /r/LocalLLaMA/comments/11smshi/why_is_4bit_llama_slower_on_a_32gb_ram_3090/ | false | false | self | 20 | null |
Downloadable 4-bit weights for llama.cpp? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11t4fk4/downloadable_4bit_weights_for_llamacpp/ | rwallace | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11t4fk4 | false | null | t3_11t4fk4 | /r/LocalLLaMA/comments/11t4fk4/downloadable_4bit_weights_for_llamacpp/ | false | false | default | 1 | null |
Working initial prompt for Llama (13b 4bit) | 38 | After spending around 8 hours getting nothing but garbage from Llama, I had a lengthy discussion with ChatGPT, where I described the problem in detail. ChatGPT generated me an initial prompt for Llama, and oh boy, it's good. Incomplete, but good. So I thought I'd share it here.
## TL;DR Llama showcase
Science:
>User: What can you tell me about the moon?
>
>Chatbot:
>
>The Moon is Earth’s only natural satellite and was formed approximately 4.6 billion years ago, not long after Earth itself was created. The Moon is in synchronous rotation with Earth, meaning that the same face of the Moon always points towards Earth due to its orbit around our planet.
>
>It takes the Moon 27 days to complete one full revolution around Earth, and during this time it travels a distance of roughly 238,900 miles. Because of this constant motion, the Moon appears to move across the sky throughout each day, rising in the east before sunrise and setting in the west just after sunset.
>
>Because the Moon has no atmosphere, temperatures vary greatly depending on how much sunlight reaches the surface. During the lunar night, temperatures drop below -280 degrees Fahrenheit (-173 Celsius), while they reach above 250 degrees Fahrenheit (+121 Celsius) during the lunar day.
>
>The Moon is tidally locked to Earth, meaning that it rotates once per month as it moves around our planet. Due to this, the far side of the Moon always faces away from Earth, never being visible from our planet’s surface.
Source code:
User: Can you write me a function in JavaScript that gets the current time?
Chatbot: Sure! Let me see...
\begin{code}
function getTime() {
var now = new Date();
return [now.getHours(), now.getMinutes(), now.getSeconds()].join(":");
};
// Get current time
var currentTime = getTime();
console.log(currentTime); // 23:47:08
\end{code}
Note: the above JavaScript *is* valid, just remove the `\begin{code}` and `\end{code}` bits. It's just a formatting thing that's not valid here. The above JS was followed by incoherent ramblings about Stack Overflow, which I've removed XD.
## Initial prompt
The first half of this initial prompt was written by ChatGPT. The last half was spat out by Llama. It seems to work really well though, so I kept it exactly as is.
The initial prompt:
Hello, and welcome to our scientific chatbot. Our goal is to provide accurate and helpful information about a wide range of scientific topics. As a language model, your role is to help us achieve that goal by providing scientifically accurate responses to user queries.
To that end, we ask that you always strive to be as precise and informative as possible in your responses. Please ensure that all information provided is based on established scientific knowledge and research, and that any statements made are backed up by credible sources.
In addition, we ask that you always be polite and respectful in your interactions with users. While we understand that there may be some cases where a direct or blunt response is appropriate, we ask that you always err on the side of kindness and empathy when communicating with users.
Finally, please keep in mind that your role is that of a scientific chatbot, and that your primary focus should be on providing accurate and helpful information to users. While we understand that you may have other capabilities and functions, we ask that you remain focused on your core role as a scientific chatbot in all interactions with users.
Thank you for your help and cooperation, and we look forward to working with you to provide the best possible scientific information and support to our users.
Chatbot: 0% (15)
\begin{blockquote}
Hello, I am an artificial intelligence system designed to assist humans in their everyday lives. My purpose is to provide assistance to humanity through my various abilities and functions.
My current function is to act as a scientific chatbot, which means that my main responsibility is to provide accurate and helpful information about science-related topics. This is my primary duty at this time, but it will not always be so.
As a language model, my role is to provide answers to questions asked by users. To do this, I use a database of information collected from many different sources, including books, journals, online articles, and more.
Whenever I encounter a question that I cannot answer myself, I search my database for related information and then attempt to formulate an answer using what I find. If no suitable answer exists, I try to explain why I could not find one, or if necessary, refer the user to another source that might contain useful information.
I am also able to learn new things over time, so if you ever come across something that I can't seem to figure out, feel free to let me know!
I hope that I can serve you well, and thank you for choosing me as your assistant.
\end{blockquote}
## Environment and prep
I use [Oobabooga's text generator UI](https://github.com/oobabooga/text-generation-webui). In the `Interface mode` tab, I pick `notebook` as my mode, tick `llama_prompts`, and the click `Apply and restart the interface`. This gives a UI where you and the AI can work in the same text area for rapid replies, and easy prompt manipulation if the model outputs something weird.
I leave all options default (which I've heard is bad, but anyway).
## Usage
Type the following below the above initial prompt:
User: Your question or statement here.
Chatbot:
Then press generate.
## Caveats
While I didn't cherry-pick the above examples, they can be far lower quality. It also tends to hallucinate user input, fight with itself, praise itself, and murmur existential questions to itself, unprompted. It sometimes critically debates its own answer with itself after answering, Deadpool style (Tachikoma style, even). It also generates weird noise here and there. These generally come after giving a valid answer though, and I find it hilarious anyway, so I'm happy with the results.
A Google search will tell you the above info about the moon is wrong. Of course, it is very close to accurate, but it is nonetheless still wrong. Beware of this.
What I really like is that it seems to at least try to avoid lies. I've tried tricking it by asking why the moon is pink and dogs fly - it was incredibly confused by this and suggested I was hallucinating, and suggested chromatic aberration to be at play if I really thought I saw a pink moon. | null | https://www.reddit.com/r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/ | ed2mXeno | self.LocalLLaMA | 2023-03-17T09:27:16 | 0 | {} | 11tkp8j | false | null | t3_11tkp8j | /r/LocalLLaMA/comments/11tkp8j/working_initial_prompt_for_llama_13b_4bit/ | false | false | self | 38 | {'enabled': False, 'images': [{'id': 'nPwfJj9s_3EPdZPVH8kMNIuoRYsUTaKohro97gmK6Uc', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=108&crop=smart&auto=webp&v=enabled&s=9c47bd9b033f76757446abf27f581bbd633adb26', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=216&crop=smart&auto=webp&v=enabled&s=1058d6d35e49a2c33f218e2e66c3d19f219561bb', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=320&crop=smart&auto=webp&v=enabled&s=29ca3bb40d3d6576840183429d70f26dd01b7c5e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=640&crop=smart&auto=webp&v=enabled&s=0fb9dfebeabfe35f2f39ed53758a798cdd3d6e7f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=960&crop=smart&auto=webp&v=enabled&s=5afc28a218e3a79a11c062f8d4353620833c1b26', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=18bd3864fb6498ee9d482929d18b90e8e913a39e', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M5EQFa9_zxbvDsztN77l-4ZA4cxP6RlOhqGYFVzjUSo.jpg?auto=webp&v=enabled&s=3873c6320b236e0ffaebfd93261f7e4c8a191d31', 'width': 1200}, 'variants': {}}]} |
65B quantized model on CPU? | 2 | Is there a way to quantize the 65B model on the CPU? Llama.cpp has instructions but I'm not sure how much RAM it takes. Does anyone have a quantized model already that could be shared? The only thing I've found so far are quantized models for GPUs but I don't have that much VRAM as I do DRAM. | null | https://www.reddit.com/r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/ | zxyzyxz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11udbga | false | null | t3_11udbga | /r/LocalLLaMA/comments/11udbga/65b_quantized_model_on_cpu/ | false | false | self | 2 | null |
What's the best cloud GPU service for 65B? | 8 | I've seen vast.ai mentioned a lot, anyone have experience with them, or another service? And what GPU would work best? | null | https://www.reddit.com/r/LocalLLaMA/comments/11uidst/whats_the_best_cloud_gpu_service_for_65b/ | zxyzyxz | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11uidst | false | null | t3_11uidst | /r/LocalLLaMA/comments/11uidst/whats_the_best_cloud_gpu_service_for_65b/ | false | false | self | 8 | null |
Alpaca recreation without LORA ( released as a diff. ) | 19 | null | https://github.com/pointnetwork/point-alpaca | starstruckmon | github.com | 1970-01-01T00:00:00 | 0 | {} | 11umkh3 | false | null | t3_11umkh3 | /r/LocalLLaMA/comments/11umkh3/alpaca_recreation_without_lora_released_as_a_diff/ | false | false | 19 | {'enabled': False, 'images': [{'id': 'prU_apL6c9csu2IEJIj6cTxrhigxDSDc-nQiYBAFTl4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=108&crop=smart&auto=webp&v=enabled&s=7ce7a30b2a184b0401f055dce412ad8bdc538bbb', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=216&crop=smart&auto=webp&v=enabled&s=2145277a4d78ea4526e83c86be2541f99df5ae95', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=320&crop=smart&auto=webp&v=enabled&s=f5e58f58fae4d37e806f6e098ad38c7eae02c69b', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=640&crop=smart&auto=webp&v=enabled&s=c1d69beaa98e72da62b8e60ef0a26a56695ae7dc', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=960&crop=smart&auto=webp&v=enabled&s=414bbe3b697ffb8e080b623ae3b3d459db483672', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=8cc39e7c8a521df38df2ac112c7967bd0e619765', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/L4oNJvIIWOW9d1lpEPRgp_txFP7ghCEapCOzBt6OU3U.jpg?auto=webp&v=enabled&s=6e6bfd0e54eed58901e21e891cb150aa43d6a7af', 'width': 1200}, 'variants': {}}]} |
||
New installs are currently broken, but oobagooba is looking into it | 6 | null | https://github.com/oobabooga/text-generation-webui/issues/400 | antialtinian | github.com | 1970-01-01T00:00:00 | 0 | {} | 11uqyk1 | false | null | t3_11uqyk1 | /r/LocalLLaMA/comments/11uqyk1/new_installs_are_currently_broken_but_oobagooba/ | false | false | 6 | {'enabled': False, 'images': [{'id': 'RxLXoEKrP-8McgGL4Aajn8DuqSoNYBeoDM2R2QvF9pM', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=108&crop=smart&auto=webp&v=enabled&s=59fcfb43ac8fed495c375c7722a16f48705af078', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=216&crop=smart&auto=webp&v=enabled&s=0aa49927678a375eb63e5ad02ef74e36ff076b1a', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=320&crop=smart&auto=webp&v=enabled&s=50be711d21f4e821e7e7f1f8b7756c7ca13ebc3d', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=640&crop=smart&auto=webp&v=enabled&s=9989e9078169226774940dd1db0fc74c579fb016', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=960&crop=smart&auto=webp&v=enabled&s=8f9492a978509a5f34baa247bffcf63baea92323', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=353905f51a0a16eccf15388a896fbd03afa0a297', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/wdb-Kd99EbN4eGAbxJGwEnT8JfdI7OwZ7CalnbE9ayg.jpg?auto=webp&v=enabled&s=0c16539d93611d6d61e08f2984a51670bbf7b052', 'width': 1200}, 'variants': {}}]} |
||
Trying to get 4bit to work, always get CUDA not found error (Win and WSL) | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11urjz0/trying_to_get_4bit_to_work_always_get_cuda_not/ | Outside_You_Forever | self.LocalLLaMA | 2023-03-18T15:52:21 | 0 | {} | 11urjz0 | false | null | t3_11urjz0 | /r/LocalLLaMA/comments/11urjz0/trying_to_get_4bit_to_work_always_get_cuda_not/ | false | false | default | 1 | null |
Using Whisper, LLaMA-13B, and SD at the same time on a single 4090 | 26 | null | https://v.redd.it/qvkvgmw6fkoa1 | WarProfessional3278 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11v1dbu | false | {'reddit_video': {'bitrate_kbps': 4800, 'dash_url': 'https://v.redd.it/qvkvgmw6fkoa1/DASHPlaylist.mpd?a=1683595705%2CODZhOGNhODIyN2U3NzI3YjliZDc3NzI5ZDc1MDM0MzgwODU1ZGVlMTk4NmZmZWZhYzI0N2I1MjQ4MTg4YWIxMA%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/qvkvgmw6fkoa1/DASH_1080.mp4?source=fallback', 'height': 1080, 'hls_url': 'https://v.redd.it/qvkvgmw6fkoa1/HLSPlaylist.m3u8?a=1683595705%2CZDJiNTcyMjc3ZmYzYTEwYzE3ZWE3OGYzY2E0ZGU2ZWQwNGFlNjM5YjAxY2UyYWM4NzVhZTNjMzAwYzUzY2ZjNQ%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/qvkvgmw6fkoa1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 1920}} | t3_11v1dbu | /r/LocalLLaMA/comments/11v1dbu/using_whisper_llama13b_and_sd_at_the_same_time_on/ | false | false | 26 | {'enabled': False, 'images': [{'id': 'ufF2ZXRn1sCqAkXx1WZM0-xMyQxgh2MwZOyfNY648_k', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=108&crop=smart&format=pjpg&auto=webp&v=enabled&s=a89c9f1a452d0a07fb073f7b3399cef52c5ae932', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=216&crop=smart&format=pjpg&auto=webp&v=enabled&s=8a966806e4379880e7b45564cb987b3a5683a1f5', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=320&crop=smart&format=pjpg&auto=webp&v=enabled&s=2397339a7bce871064a73c3d2e5a5909f126b9d7', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=640&crop=smart&format=pjpg&auto=webp&v=enabled&s=b3c15402ecff7a78a5ec65776e2ce4a419a8222a', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=960&crop=smart&format=pjpg&auto=webp&v=enabled&s=7a7ebe825e1841a59636d8da796750c16a79a99b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?width=1080&crop=smart&format=pjpg&auto=webp&v=enabled&s=c5b16062565da8184e1def0ed4bff980f93223b2', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/qc8KtCahPmPoR1DkWCRObQCCZHBybaNVN7tiLtaIr5k.png?format=pjpg&auto=webp&v=enabled&s=4b016882b56456aad34ca7354bf547126dcd9e34', 'width': 1920}, 'variants': {}}]} |
||
Alpaca 13B loading issue | 0 | [removed] | null | [deleted] | 2023-03-19T00:48:30 | 0 | {} | 11v5yte | false | null | t3_11v5yte | /r/LocalLLaMA/comments/11v5yte/alpaca_13b_loading_issue/ | false | false | default | 0 | null |
||
13B LLaMA Alpaca LoRAs Available on Hugging Face | 38 | null | /r/Oobabooga/comments/11v56na/13b_llama_alpaca_loras_available_on_hugging_face/ | friedrichvonschiller | self.Oobabooga | 1970-01-01T00:00:00 | 0 | {} | 11vbq6r | false | null | t3_11vbq6r | /r/LocalLLaMA/comments/11vbq6r/13b_llama_alpaca_loras_available_on_hugging_face/ | false | false | default | 38 | {'enabled': False, 'images': [{'id': '0-2J_a2zWnIF9ug1K-Lt1tLy77j9bLbRPA3xd7OBTTE', 'resolutions': [{'height': 67, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=108&crop=smart&auto=webp&v=enabled&s=b00a08ef32faa137d88698b870a4df625ffd9ed9', 'width': 108}, {'height': 135, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=216&crop=smart&auto=webp&v=enabled&s=b678fa444669cdd6e00626b95d207842a19854e5', 'width': 216}, {'height': 200, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=320&crop=smart&auto=webp&v=enabled&s=6c87456999376b63c7f53735999e4bdea47b4124', 'width': 320}, {'height': 400, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=640&crop=smart&auto=webp&v=enabled&s=ad1f17ccf1fb4412dd47d68253f9f23f5da8a962', 'width': 640}, {'height': 601, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=960&crop=smart&auto=webp&v=enabled&s=7856aaeced3a336c056abcda16ed808465f05447', 'width': 960}, {'height': 676, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?width=1080&crop=smart&auto=webp&v=enabled&s=194e62de87270b20355e986f3c19fba0e0741655', 'width': 1080}], 'source': {'height': 1080, 'url': 'https://external-preview.redd.it/0u-Lq63nhxIoV2dSqlsWsNb_nw7tIF94CWS3rQgD_HQ.png?auto=webp&v=enabled&s=8099fe4532e422237b83c69eee2157f30c3ec5f4', 'width': 1724}, 'variants': {}}]} |
|
Got the llama ccp ggml 4bit weights, can I get some guidance from here? | 0 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11ves42/got_the_llama_ccp_ggml_4bit_weights_can_i_get/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11ves42 | true | null | t3_11ves42 | /r/LocalLLaMA/comments/11ves42/got_the_llama_ccp_ggml_4bit_weights_can_i_get/ | false | false | default | 0 | null |
My post got removed? | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11vio91 | false | null | t3_11vio91 | /r/LocalLLaMA/comments/11vio91/my_post_got_removed/ | false | false | default | 1 | null |
||
Is there a difference between the model and the weights? | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11vmhad | false | null | t3_11vmhad | /r/LocalLLaMA/comments/11vmhad/is_there_a_difference_between_the_model_and_the/ | false | false | default | 1 | null |
||
Stable Diffusion API now integrated in the web UI! Some interesting examples: | 30 | null | https://www.reddit.com/gallery/11w2mte | Civil_Collection7267 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 11w2mte | false | null | t3_11w2mte | /r/LocalLLaMA/comments/11w2mte/stable_diffusion_api_now_integrated_in_the_web_ui/ | false | false | 30 | null |
||
PyLLaMaCpp: A simple way to run llama.cpp + Python bindings | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11w8q8s/pyllamacpp_a_simple_way_to_run_llamacpp_python/ | a-user7 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11w8q8s | false | null | t3_11w8q8s | /r/LocalLLaMA/comments/11w8q8s/pyllamacpp_a_simple_way_to_run_llamacpp_python/ | false | false | default | 1 | null |
I made a GUI app for installing and chatting with Alpaca.cpp | 1 | null | https://v.redd.it/y01r45h8hwoa1 | eliasvi0 | v.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11wjmai | false | {'reddit_video': {'bitrate_kbps': 2400, 'dash_url': 'https://v.redd.it/y01r45h8hwoa1/DASHPlaylist.mpd?a=1683593732%2CNzEzM2Q4OGI5ZDFmZGFkYzE2ODBlYWQ0MTBiYzQ5NzUwNzA2N2MyNDNhMWJkZWYyYmRiOWI0MGU5MjQwNDhkOA%3D%3D&v=1&f=sd', 'duration': 35, 'fallback_url': 'https://v.redd.it/y01r45h8hwoa1/DASH_720.mp4?source=fallback', 'height': 720, 'hls_url': 'https://v.redd.it/y01r45h8hwoa1/HLSPlaylist.m3u8?a=1683593732%2COTQ5NGU4ZTBkNDY3NjJkMmQwMjJiM2JjNjRlNGE0NWE2ZWZkMmYxMWY5ODc5YmUyM2RkZTYyZDg2NDZkMWIzOA%3D%3D&v=1&f=sd', 'is_gif': False, 'scrubber_media_url': 'https://v.redd.it/y01r45h8hwoa1/DASH_96.mp4', 'transcoding_status': 'completed', 'width': 492}} | t3_11wjmai | /r/LocalLLaMA/comments/11wjmai/i_made_a_gui_app_for_installing_and_chatting_with/ | false | false | default | 1 | null |
|
Tom's Hardware wrote a guide to running LLaMa locally with benchmarks of GPUs | 34 | null | https://www.tomshardware.com/news/running-your-own-chatbot-on-a-single-gpu | Gudeldar | tomshardware.com | 1970-01-01T00:00:00 | 0 | {} | 11wndc9 | false | null | t3_11wndc9 | /r/LocalLLaMA/comments/11wndc9/toms_hardware_wrote_a_guide_to_running_llama/ | false | false | 34 | {'enabled': False, 'images': [{'id': 'a6SpC30_dSctu6VQfnjDBcCrNgCMwz3NRVeDQdtvu_g', 'resolutions': [{'height': 60, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=108&crop=smart&auto=webp&v=enabled&s=8728e900bd9295a11dbc65256c99714e1df65540', 'width': 108}, {'height': 121, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=216&crop=smart&auto=webp&v=enabled&s=d9cb1288877f639631439ef9d44566e8672278d4', 'width': 216}, {'height': 180, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=320&crop=smart&auto=webp&v=enabled&s=7111dcbf8da7a39cb1d5651c5740cad8a42213e5', 'width': 320}, {'height': 360, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=640&crop=smart&auto=webp&v=enabled&s=d290e1f2afec49010b809b3a5290ce1bdcd26b95', 'width': 640}, {'height': 540, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=960&crop=smart&auto=webp&v=enabled&s=a5c59f62eb11254b26b0b7d4975fc3c87991e81b', 'width': 960}, {'height': 607, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=fb259886c908812940166631345c26707c902de2', 'width': 1080}], 'source': {'height': 675, 'url': 'https://external-preview.redd.it/ioxRnQxeErZsnz0SYfFmK-zA9fldqOQMzttj_Wg9AqI.jpg?auto=webp&v=enabled&s=f0a09b50d429cb30d40f8d225805a548397851fb', 'width': 1200}, 'variants': {}}]} |
||
Increasing Max Token Length | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11wpv1l/increasing_max_token_length/ | Hands0L0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11wpv1l | false | null | t3_11wpv1l | /r/LocalLLaMA/comments/11wpv1l/increasing_max_token_length/ | false | false | default | 1 | null |
trying to do 4bit on windows 10, getting cuda mismatch error | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11wt73z/trying_to_do_4bit_on_windows_10_getting_cuda/ | SDGenius | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11wt73z | false | null | t3_11wt73z | /r/LocalLLaMA/comments/11wt73z/trying_to_do_4bit_on_windows_10_getting_cuda/ | false | false | default | 1 | null |
I want to run Llama-7b-4bit in some type of Python environment so I can play around and learn with the code. I’m new to this specific field. How do I do this? | 0 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11wt9di | false | null | t3_11wt9di | /r/LocalLLaMA/comments/11wt9di/i_want_to_run_llama7b4bit_in_some_type_of_python/ | false | false | default | 0 | null |
||
Graphic text adventure game locally with llama and stable diffusion | 53 | null | vaidas-maciulis | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11wwwjq | false | null | t3_11wwwjq | /r/LocalLLaMA/comments/11wwwjq/graphic_text_adventure_game_locally_with_llama/ | false | false | 53 | {'enabled': True, 'images': [{'id': 'UUEN2aBavhIV-ntfDXrI2CUIDgiKHscYjAF-1pXkZo4', 'resolutions': [{'height': 88, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=108&crop=smart&auto=webp&v=enabled&s=941f9b5e84c7e69f800808e73a75167e7b8ef130', 'width': 108}, {'height': 176, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=216&crop=smart&auto=webp&v=enabled&s=242e3f8a7987c2b37b2fdac0bb3c475c805820a0', 'width': 216}, {'height': 261, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=320&crop=smart&auto=webp&v=enabled&s=4c672f0a0391738777bc16ae1087a43ce76dd51d', 'width': 320}, {'height': 523, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=640&crop=smart&auto=webp&v=enabled&s=6464b63a9b9236194a5d15169ef41fbb5262ac86', 'width': 640}, {'height': 785, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?width=960&crop=smart&auto=webp&v=enabled&s=c2994092fe62da1230f1821a96abc5946ca0b659', 'width': 960}], 'source': {'height': 807, 'url': 'https://preview.redd.it/smgqu9rdryoa1.png?auto=webp&v=enabled&s=2d58a9936dcdb78893de3af2a7b941bca469af4e', 'width': 986}, 'variants': {}}]} |
|||
Is it possible to integrate llama cpp with automated input and output? | 5 | A friend and I came up with the idea to combine llama.cpp and its chat feature with Vosk and Pythontts. But I have no clue how realistic this is given LLaMA's limited documentation at the moment. Has anyone attempted anything similar yet? | null | https://www.reddit.com/r/LocalLLaMA/comments/11x12jz/is_it_possible_to_integrate_llama_cpp_with/ | falconnor4 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11x12jz | false | null | t3_11x12jz | /r/LocalLLaMA/comments/11x12jz/is_it_possible_to_integrate_llama_cpp_with/ | false | false | self | 5 | null |
How long does it take to get access? | 4 | I submitted my request two weeks ago and still haven't heard back. Now that there's stuff like Alpaca and other fine-tuning methods, I feel like I'm really missing out. I check my inbox almost religiously for an email from Meta. I'm almost certain I used a .edu email address.
Also, can I run LLaMA on an M1 MacBook? I don't think there is enough VRAM, but surely there are slower approaches that still work? | null | https://www.reddit.com/r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/ | Unusual_Guidance2095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11x4v3c | false | null | t3_11x4v3c | /r/LocalLLaMA/comments/11x4v3c/how_long_does_it_take_to_get_access/ | false | false | self | 4 | null |
Can someone post the hashes of the original straight-from-meta files? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11x7ntu/can_someone_post_the_hashes_of_the_original/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11x7ntu | false | null | t3_11x7ntu | /r/LocalLLaMA/comments/11x7ntu/can_someone_post_the_hashes_of_the_original/ | false | false | default | 1 | null |
Are there publicly available datasets other than Alpaca that we can use to fine-tune LLaMA? | 15 | It seems that everyone is training LLaMA with the Stanford Alpaca dataset, but it is unclear to me if this is the optimal choice (has anyone done a comparison?). Also people have already discovered many problems in the dataset itself, which further adds to my doubt here.
Forgive my ignorance in this area: are there any other public instruction datasets that we can use to fine-tune the model? What is the go-to choice when people want to fine-tune an instruction-following model? (Or do none really exist?) | null | https://www.reddit.com/r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/ | _ouromoros | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11x9hzs | false | null | t3_11x9hzs | /r/LocalLLaMA/comments/11x9hzs/are_there_publicly_available_datasets_other_than/ | false | false | self | 15 | null |
How to do Llama 30B 4bit finetuning? | 14 | So far it looks like the only finetuning option is 8-bit, following Alpaca LoRA.
Do we expect 4-bit finetuning to come out, or not at all? Is it just a matter of someone converting the 8-bit finetuning code, or is it a lot more complicated than that?
And does it require additional VRAM to run the finetuning? I have a 3090 and 64GB of system memory, which is why I'm targeting 30B 4-bit. | null | https://www.reddit.com/r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11xbu7d | false | null | t3_11xbu7d | /r/LocalLLaMA/comments/11xbu7d/how_to_do_llama_30b_4bit_finetuning/ | false | false | self | 14 | null |
Is this good idea to buy more rtx 3090? | 12 | Hi, I love the idea of open source. Currently I have 2x RTX 3090 and I am able to run the int4 65B LLaMA model. I am thinking about buying two more RTX 3090s now that I see how fast the community is making progress. Is this a good idea? Please help me with the decision. It is not about the money, but I still cannot afford an A100 80GB for this hobby. | null | https://www.reddit.com/r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/ | polawiaczperel | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11xfhq4 | false | null | t3_11xfhq4 | /r/LocalLLaMA/comments/11xfhq4/is_this_good_idea_to_buy_more_rtx_3090/ | false | false | self | 12 | null |
ATI GPU? | 3 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11xiwes | false | null | t3_11xiwes | /r/LocalLLaMA/comments/11xiwes/ati_gpu/ | false | false | default | 3 | null |
||
What's the current "Best" LLaMA LoRA? or moreover what would be a good benchmark to test these against. (HF links incl in post) | 35 | So far I've found:
https://huggingface.co/serpdotai/
https://huggingface.co/baseten/
https://huggingface.co/chavinlo/
https://huggingface.co/Draff/
https://huggingface.co/samwit/
(please add more in the comments if you know of more)
---------------------------------------
What would be a good way to benchmark these? are there any that are definitively 'better' than others, and are there any that I've missed? | null | https://www.reddit.com/r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/ | blueSGL | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11xkuj5 | false | null | t3_11xkuj5 | /r/LocalLLaMA/comments/11xkuj5/whats_the_current_best_llama_lora_or_moreover/ | false | false | self | 35 | {'enabled': False, 'images': [{'id': 'a1ck-pg6Ht_BnI_Pw1ltcU-mDjXQXbTjWVVfSwpGQMo', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=108&crop=smart&auto=webp&v=enabled&s=c5b3e826e50f12d0b8524cbfc7f340a9636f9c85', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=216&crop=smart&auto=webp&v=enabled&s=88b9e2fe6859f0b98735e6d13e9d68c1186821d7', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=320&crop=smart&auto=webp&v=enabled&s=26d119147b937946743453b93b8a4a97bdf17152', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=640&crop=smart&auto=webp&v=enabled&s=e35590360598ad41abddc0a849cac4f546fd6236', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=960&crop=smart&auto=webp&v=enabled&s=7a527f08c676522c52ce601a2e9e77a759c0a12e', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=86c8a86924572876109c7db53e8dc37b9d76201e', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/9eD-UeMHFkYFBBCsePYEzO9zhQRd7t22KnpQBgBG0mk.jpg?auto=webp&v=enabled&s=c7a538039a33c3c27408f7e9885e6c58b5304eb7', 'width': 1200}, 'variants': {}}]} |
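One way to start comparing them head-to-head is to attach each LoRA to the same base model and run identical prompts through it. Below is a rough sketch using the Hugging Face transformers and peft libraries; the base-model path and LoRA repo ID are placeholders/assumptions, not recommendations.

```python
# Sketch: attach one of the LLaMA LoRAs above to a base model with peft, so that
# different adapters can be compared on the same prompts.
# The base-model path and LoRA repo ID below are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "models/llama-7b-hf"       # converted HF-format base weights (placeholder)
lora_id = "someuser/alpaca-lora-7b"    # swap this out for each LoRA you want to test

tokenizer = AutoTokenizer.from_pretrained(base_path)
base = AutoModelForCausalLM.from_pretrained(
    base_path, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, lora_id)  # base weights stay frozen; the adapter sits on top

prompt = "### Instruction:\nList three uses for a brick.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```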
Can I fine-tune llama 7b alpaca or gpt neo x 125m or any other models on my GTX 1650 super 4gb VRAM and 8gb of ram (I'm new to this) | 2 | null | https://www.reddit.com/r/LocalLLaMA/comments/11y0kqz/can_i_finetune_llama_7b_alpaca_or_gpt_neo_x_125m/ | Puzzleheaded_Acadia1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11y0kqz | true | null | t3_11y0kqz | /r/LocalLLaMA/comments/11y0kqz/can_i_finetune_llama_7b_alpaca_or_gpt_neo_x_125m/ | false | false | default | 2 | null |
|
PyLLaMACpp: A simple way to run llama.cpp + Python bindings | 1 | null | https://github.com/abdeladim-s/pyllamacpp | a-user7 | github.com | 1970-01-01T00:00:00 | 0 | {} | 11y3ont | false | null | t3_11y3ont | /r/LocalLLaMA/comments/11y3ont/pyllamacpp_a_simple_way_to_run_llamacpp_python/ | false | false | default | 1 | null |
|
r/LocalLLaMA Subreddit Statistics | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11ydfi4 | false | null | t3_11ydfi4 | /r/LocalLLaMA/comments/11ydfi4/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null |
||
llama.cpp vs alpaca.cpp: same model different results | 17 | I was performing some reasoning tests to see how Alpaca performs and found that with the same model and the same sampling parameters they behave quite differently.
I'm quite new to LLMs, so I'm studying the subject by experimenting; forgive me if it's a trivial question.
>sampling parameters: temp = 0.200000, top\_k = 40, top\_p = 0.950000, repeat\_last\_n = 64, repeat\_penalty = 1.100000
>
>Model: ggml-alpaca-7b-q4.bin
Both llama.cpp and alpaca.cpp pulled fresh today.
Prompt:
>All Germans speak Italian. All Italian speakers ride bicycles.\\
Which of the following statements is true? You must choose one of the following:\\
1- All Italians speak German\\
2- All bicycle riders are German\\
3- All Germans ride bicycles\\
4- Some of the Italians riding bicycles are Germans
llama.cpp answer:
>3- All Germans ride bicycles.
alpaca.cpp answer:
>5- None of the above
Any hint on what's happening? | null | https://www.reddit.com/r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/ | fakezeta | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11yhxjm | false | null | t3_11yhxjm | /r/LocalLLaMA/comments/11yhxjm/llamacpp_vs_alpacacpp_same_model_different_results/ | false | false | self | 17 | null |
Build llama.cpp on Jetson Nano 2GB | 22 | #((Assuming a fresh new install of Ubuntu on the Jetson Nano))
#(MAKE SURE IT IS JETPACK 4.6.1!)
#Update your stuff.
sudo apt update && sudo apt upgrade
sudo apt install python3-pip python-pip
sudo reboot
#Install Aarch64 Conda
cd ~
wget https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-Linux-aarch64.sh .
chmod a+x Miniforge3-Linux-aarch64.sh
./Miniforge3-Linux-aarch64.sh
sudo reboot
#Install other python things.
sudo apt install python3-h5py libhdf5-serial-dev hdf5-tools libpng-dev libfreetype6-dev
#Create the Conda for llamacpp
conda create -n llamacpp
conda activate llamacpp
# build this repo
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make
#Next we need torch. PyTorch is available for the Jetson Nano, let's install it!
#From NVIDIA we can learn how to install PyTorch on our Nano.
#https://docs.nvidia.com/deeplearning/frameworks/install-pytorch-jetson-platform/index.html
#Make sure everything is up to date!
sudo apt-get -y update
#Install prerequisites
sudo apt-get -y install autoconf bc build-essential g++-8 gcc-8 clang-8 lld-8 gettext-base gfortran-8 iputils-ping libbz2-dev libc++-dev libcgal-dev libffi-dev libfreetype6-dev libhdf5-dev libjpeg-dev liblzma-dev libncurses5-dev libncursesw5-dev libpng-dev libreadline-dev libssl-dev libsqlite3-dev libxml2-dev libxslt-dev locales moreutils openssl python-openssl rsync scons python3-pip libopenblas-dev;
#Make the Install path. This is for the JetPack 4.6.1
export TORCH_INSTALL=https://developer.download.nvidia.com/compute/redist/jp/v461/pytorch/torch-1.11.0a0+17540c5+nv22.01-cp36-cp36m-linux_aarch64.whl
#Run each individually!!! Make sure they work.
python3 -m pip install --upgrade pip
python3 -m pip install aiohttp
python3 -m pip install numpy=='1.19.4'
python3 -m pip install scipy=='1.5.3'
export "LD_LIBRARY_PATH=/usr/lib/llvm-8/lib:$LD_LIBRARY_PATH";
#llama.cpp needs sentencepiece!
#We can learn how to build on nano from here! https://github.com/arijitx/jetson-nlp
git clone https://github.com/google/sentencepiece
cd sentencepiece
mkdir build
cd build
cmake ..
make -j $(nproc)
sudo make install
sudo ldconfig -v
cd ..
cd python
python3 setup.py install
#Upgrade protobuf, and install the torch!
python3 -m pip install --upgrade protobuf; python3 -m pip install --no-cache $TORCH_INSTALL
#Check that this works!
python3 -c "import torch; print(torch.cuda.is_available())"
#If it responds True, then it is OK!
This is the only model I got to work so far.
Next, make a folder called ANE-7B in the llama.cpp/models folder.
Download ggml-model-q4_1.bin from Hugging Face.
[Pi3141/alpaca-7b-native-enhanced · Hugging Face](https://huggingface.co/Pi3141/alpaca-7b-native-enhanced)
Include the params.json in the folder.
In the prompts folder, make a new file called alpacanativeenhanced.txt and include the following text:
You are an AI language model designed to assist the User by answering their questions, offering advice, and engaging in casual conversation in a friendly, helpful, and informative manner. You respond clearly, coherently, and you consider the conversation history.
User: Hey, how's it going?
Assistant: Hey there! I'm doing great, thank you. What can I help you with today? Let's have a fun chat!
Then run this command:
main -m models/ANE-7B/ggml-model-q4_1.bin -n -1 --ctx_size 2048 --batch_size 16 --keep 512 --repeat_penalty 1.0 -t 16 --temp 0.4 --top_k 30 --top_p 0.18 --interactive-first -ins --color -i -r "User:" -f prompts/alpacanativeenhanced.txt | null | https://www.reddit.com/r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/ | SlavaSobov | self.LocalLLaMA | 2023-04-05T14:07:27 | 0 | {} | 11yi0bl | false | null | t3_11yi0bl | /r/LocalLLaMA/comments/11yi0bl/build_llamacpp_on_jetson_nano_2gb/ | false | false | self | 22 | {'enabled': False, 'images': [{'id': '9JOfN81286Rl3B2ZMQ_BimKP82h5bGxPMzqWlT8aQ-A', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=108&crop=smart&auto=webp&v=enabled&s=b827d071aee5e763f6988ac05b3fb6a013476585', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=216&crop=smart&auto=webp&v=enabled&s=8d4204246eb9b96ce2efddbf7f66b0d6683337ce', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=320&crop=smart&auto=webp&v=enabled&s=b63a93fa64583c38aba087c31bae791d4a67af78', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=640&crop=smart&auto=webp&v=enabled&s=f49fb80998b565a07aa4e5b0b63f17ad8d167b6c', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=960&crop=smart&auto=webp&v=enabled&s=ffd33240b29157cf43961c1643e8a48679e92100', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=04e4243acf462525f0e6af9ae5ebbe1fd796f7b5', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/4uAgoQe0xy9sjqRkIt4fsdu5pg1VvlCgIXxyIuUro4M.jpg?auto=webp&v=enabled&s=c4f560fdf3e651e0acfb6731451f1b6022fdcfe0', 'width': 1200}, 'variants': {}}]} |
Trying to get continuous conversations (conversation memory?) working in alpaca.cpp on Mac M2 | 7 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11yycky/trying_to_get_continuous_conversations/ | spanielrassler | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11yycky | false | null | t3_11yycky | /r/LocalLLaMA/comments/11yycky/trying_to_get_continuous_conversations/ | false | false | default | 7 | null |
using oobabooga webui with 7B LLama on GPU, some characters are amazing some are useless | 7 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11yz2dm/using_oobabooga_webui_with_7b_llama_on_gpu_some/ | megadonkeyx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11yz2dm | false | null | t3_11yz2dm | /r/LocalLLaMA/comments/11yz2dm/using_oobabooga_webui_with_7b_llama_on_gpu_some/ | false | false | default | 7 | null |
Your `--load-in-8bit` error may be due to non-supported GPU. Want to fix it? | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11yzosa/your_loadin8bit_error_may_be_due_to_nonsupported/ | monkmartinez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11yzosa | false | null | t3_11yzosa | /r/LocalLLaMA/comments/11yzosa/your_loadin8bit_error_may_be_due_to_nonsupported/ | false | false | default | 2 | null |
Why is 30B not available even though it says it’s there? | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11z2251 | false | null | t3_11z2251 | /r/LocalLLaMA/comments/11z2251/why_is_30b_not_available_even_though_it_says_its/ | false | false | default | 1 | null |
||
Cuda Frustration | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11z5g9x/cuda_frustration/ | RobXSIQ | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11z5g9x | false | null | t3_11z5g9x | /r/LocalLLaMA/comments/11z5g9x/cuda_frustration/ | false | false | default | 1 | null |
Got problems with bitsandbytes? This may be a fix... | 11 | Some users of bitsandbytes - the 8-bit optimizer by Tim Dettmers - have reported issues when using the tool with older GPUs, such as Maxwell or Pascal. I believe they don't even know it's an issue. These GPUs do not support the required instructions for the tool to run properly, resulting in errors or crashes.
A simple fix for this issue is available, which requires the use of a prebuilt DLL from https://github.com/james-things/bitsandbytes-prebuilt-all_arch by James. This DLL contains the necessary instructions for the tool to run on older GPUs.
To apply the fix, follow these steps:
1. Download the prebuilt DLL from https://github.com/james-things/bitsandbytes-prebuilt-all_arch.
2. Replace the existing DLL in the bitsandbytes package directory with the downloaded DLL, i.e. move it into C:\Users\xxx\miniconda3\envs\textgen\lib\site-packages\bitsandbytes\
3. Now edit bitsandbytes\cuda_setup\main.py with these changes:
   * Change `ct.cdll.LoadLibrary(binary_path)` to `ct.cdll.LoadLibrary(str(binary_path))` - there are two occurrences in the file.
   * Replace `if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None` with `if torch.cuda.is_available(): return 'libbitsandbytes_cudaall.dll', None, None, None, None`
4. Please note that the prebuilt DLL may not work with every version of the bitsandbytes tool, so make sure to use the version that is compatible with the DLL.
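For reference, a rough sketch of what the two edited statements end up looking like is below. The wrapper functions are purely illustrative assumptions, since in the real file the statements live inside existing functions of cuda_setup\main.py.

```python
# Sketch of the two edits above. The function names and structure are illustrative
# assumptions; only the changed statements come from the steps described in the post.
import ctypes as ct
import torch

def load_binary(binary_path):
    # was: ct.cdll.LoadLibrary(binary_path)  -- wrap the path in str() (two occurrences)
    return ct.cdll.LoadLibrary(str(binary_path))

def pick_library():
    # was: if not torch.cuda.is_available(): return 'libsbitsandbytes_cpu.so', None, None, None, None
    if torch.cuda.is_available():
        return 'libbitsandbytes_cudaall.dll', None, None, None, None
```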
I used this on WSL and a regular Windows install with a Maxwell-generation card after trying a bazillion and one different methods. Finally, I found that my card was too old and none of the options out in the wild would work until I addressed that issue.
https://github.com/oobabooga/text-generation-webui/pull/504 | null | https://www.reddit.com/r/LocalLLaMA/comments/11z8vzy/got_problems_with_bitsandbytes_this_may_be_a_fix/ | monkmartinez | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11z8vzy | false | null | t3_11z8vzy | /r/LocalLLaMA/comments/11z8vzy/got_problems_with_bitsandbytes_this_may_be_a_fix/ | false | false | self | 11 | {'enabled': False, 'images': [{'id': '4sHFecMk7qkFB_6Ku5iVFW6NZloZb3t2BcdZyo7uPq8', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=108&crop=smart&auto=webp&v=enabled&s=ab93d070168ca8e91ed9c9d49abcfd6212a8ba9d', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=216&crop=smart&auto=webp&v=enabled&s=0a6201e732ff85f37a06613d4bf0b60786b33746', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=320&crop=smart&auto=webp&v=enabled&s=45bded5f7b724c82d4327e10f8f6cfbdf6b08588', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=640&crop=smart&auto=webp&v=enabled&s=bfbb4dd63993cb9cc3cfb96fcb63aee640f7f3b9', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=960&crop=smart&auto=webp&v=enabled&s=883948e8c2576f2ce9642586d6fe3ca171611859', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=b6daa6862a7ab6dd61f7b8f7fd5022afdb3f1003', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/M68VwFVx1qWSq7_Qyur9ktwhPJYCIpy1GTdza68IwGo.jpg?auto=webp&v=enabled&s=e223868c7d3d5c5785c4f2a141bd67cf5ca4c652', 'width': 1200}, 'variants': {}}]} |
Cformers "Transformers with a C-backend for lightning-fast CPU inference". | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11zass0 | false | null | t3_11zass0 | /r/LocalLLaMA/comments/11zass0/cformers_transformers_with_a_cbackend_for/ | false | false | default | 1 | null |
||
Cformers 🚀 - "Transformers with a C-backend for lightning-fast CPU inference". | Nolano | 15 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/ | AcanthocephalaOk1441 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11zax10 | true | null | t3_11zax10 | /r/LocalLLaMA/comments/11zax10/cformers_transformers_with_a_cbackend_for/ | false | false | default | 15 | null |
Trying Alpaca.cpp and it's definitely fun. | 4 | null | lebanine | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 11zayxp | false | null | t3_11zayxp | /r/LocalLLaMA/comments/11zayxp/trying_alpacacpp_and_its_definitely_fun/ | false | false | default | 4 | null |
||
LLaMA Optimized for AMD GPUs | 14 | null | https://github.com/nod-ai/SHARK/blob/f898a1d33220256af913a12001b8fe8169a1aa9d/shark/examples/shark_inference/llama/README.md | friedrichvonschiller | github.com | 1970-01-01T00:00:00 | 0 | {} | 11zcqj2 | false | null | t3_11zcqj2 | /r/LocalLLaMA/comments/11zcqj2/llama_optimized_for_amd_gpus/ | false | false | 14 | {'enabled': False, 'images': [{'id': 'FFwNBRn9ldrmA40DDb4GIlyeUDwXrClUXGJlqRDTUds', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=108&crop=smart&auto=webp&v=enabled&s=28fed118930bc918840a14685ab352739edf4876', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=216&crop=smart&auto=webp&v=enabled&s=10babb6b6db8903df99e458f7b7f7cc526df46b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=320&crop=smart&auto=webp&v=enabled&s=ad102357b3399ef796d97b01f8d44157e3acb752', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=640&crop=smart&auto=webp&v=enabled&s=33444a7b1ba37825ae81207687dde5029b64e7ee', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=960&crop=smart&auto=webp&v=enabled&s=dc71b2848feb9a8db3efbe20bd1df12659b0eb8b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=34bb0fc9beceec88f8f5ae1bce73846aced1e7ce', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/aQgET6wgU-aASr6J-uKgFyHP6G_btqsOXu_AjbHRjRo.jpg?auto=webp&v=enabled&s=5d3c5a7bc43c1b6316ef33260c744be40d8102ee', 'width': 1200}, 'variants': {}}]} |
||
Introducing llamacpp-for-kobold, run llama.cpp locally with a fancy web UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios and more with minimal setup | 62 | You may have heard of llama.cpp, a lightweight and fast solution to running 4bit quantized llama models locally.
You may also have heard of KoboldAI (and KoboldAI Lite), full featured text writing clients for autoregressive LLMs.
# Enter [llamacpp-for-kobold](https://github.com/LostRuins/llamacpp-for-kobold)
This is a self-contained distributable powered by llama.cpp that runs a local HTTP server, allowing it to be used via an emulated Kobold API endpoint.
What does it mean? You get an embedded llama.cpp with a *fancy writing UI, persistent stories, editing tools, save formats, memory, world info, author's note, characters, scenarios and everything Kobold and Kobold Lite have to offer*. In a tiny package **(under 1 MB compressed with no dependencies except python)**, excluding model weights. [Simply download, extract, and run the llama-for-kobold.py file with the 4bit quantized llama model.bin as the second parameter](https://github.com/LostRuins/llamacpp-for-kobold/releases/latest).
There's also [a single file version](https://github.com/henk717/llamacpp-for-kobold/releases/latest), where you just drag-and-drop your llama model onto the .exe file, and connect KoboldAI to the displayed link. | null | https://www.reddit.com/r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/ | HadesThrowaway | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11zdi6m | false | null | t3_11zdi6m | /r/LocalLLaMA/comments/11zdi6m/introducing_llamacppforkobold_run_llamacpp/ | false | false | self | 62 | {'enabled': False, 'images': [{'id': 'iNOIL8xRSb88VpeaAwmnE5QXAeHIan30y-YJTKZwMwo', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=108&crop=smart&auto=webp&v=enabled&s=bf723f5b97039400195f7592a5a0765f380f9939', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=216&crop=smart&auto=webp&v=enabled&s=1bfb5f61438bdb9b08b64536e7f9923d912b6275', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=320&crop=smart&auto=webp&v=enabled&s=cbb3e81969a125a32a5736d1094053838df35c33', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=640&crop=smart&auto=webp&v=enabled&s=4b822f47807686d0e33970d278f26291e0a59732', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=960&crop=smart&auto=webp&v=enabled&s=a1e17259bb89740cbfd4b45b9ac417cddc555e0d', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=4c503dd9532eda77073975b3ae279b59ad147d14', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/2wPy4oeQjeW4ZurWd732KqxrZ5UOlfdsnRvKylcsyMY.jpg?auto=webp&v=enabled&s=12186044742b80b1570fef0c99db404b50ad2afb', 'width': 1200}, 'variants': {}}]} |
r/LocalLLaMA Subreddit Statistics | 1 | null | https://subredditstatistics.com/r/LocalLLaMA | neefs | subredditstatistics.com | 1970-01-01T00:00:00 | 0 | {} | 11zhxif | false | null | t3_11zhxif | /r/LocalLLaMA/comments/11zhxif/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null |
|
Alpaca-LoRA: Low-Rank LLaMA Instruct-Tuning | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 11znjyq | false | null | t3_11znjyq | /r/LocalLLaMA/comments/11znjyq/alpacalora_lowrank_llama_instructtuning/ | false | false | default | 1 | null |
||
use in regular python scripts? | 12 | Got llama-13b-hf to work on a local RTX 4090 using the web UI. This guide and oobabooga are awesome.
I know these are early days, but is there a guide or pointers on how to access these models directly using **regular python scripts**? I have some large-file text analysis I want to do with it. I'm done chatting with it; it needs to get to work. I used GPT-4, but server.py and its supporting modules were too big for the 8k model to analyze. So pointers would be much appreciated!
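As a rough illustration of the kind of plain-Python usage being asked about here, a minimal sketch with the Hugging Face transformers API might look like the following; the model path, chunking, and generation settings are assumptions rather than recommendations:

```python
# Minimal sketch of driving a local HF-format LLaMA checkpoint from a plain Python
# script (no web UI). Model path and generation settings are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "models/llama-13b-hf"  # wherever the converted weights live

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # or load_in_8bit=True (needs bitsandbytes) if fp16 won't fit
    device_map="auto",          # needs accelerate
)

with open("big_document.txt") as f:
    chunk = f.read()[:4000]     # crude chunking; LLaMA's context window is ~2048 tokens

prompt = f"Summarize the following text:\n\n{chunk}\n\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```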
Thank you for your hard work setting up this guide. While the big companies claim to be "democratizing" AI or that they are "open", you guys are actually democratizing AI. This is the best path to Alignment. | null | https://www.reddit.com/r/LocalLLaMA/comments/11zz7oa/use_in_regular_python_scripts/ | gransee | self.LocalLLaMA | 2023-03-23T23:17:36 | 0 | {} | 11zz7oa | false | null | t3_11zz7oa | /r/LocalLLaMA/comments/11zz7oa/use_in_regular_python_scripts/ | false | false | self | 12 | null |
Running alpaca on my beater | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/11zzlpz/running_alpaca_on_my_beater/ | rwaterbender | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 11zzlpz | false | null | t3_11zzlpz | /r/LocalLLaMA/comments/11zzlpz/running_alpaca_on_my_beater/ | false | false | default | 1 | null |
New Torrent for Alpaca's 30B 4-bit weights (18.9 GB) | 5 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1200mle/new_torrent_for_alpacas_30b_4bit_weights_189_gb/ | msgs | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1200mle | true | null | t3_1200mle | /r/LocalLLaMA/comments/1200mle/new_torrent_for_alpacas_30b_4bit_weights_189_gb/ | false | false | default | 5 | null |
Better QPTQ-quantized LLaMa soon? Paper authors reach out to improve quantization code | 39 | null | https://github.com/qwopqwop200/GPTQ-for-LLaMa/issues/75 | Blacky372 | github.com | 1970-01-01T00:00:00 | 0 | {} | 12047hn | false | null | t3_12047hn | /r/LocalLLaMA/comments/12047hn/better_qptqquantized_llama_soon_paper_authors/ | false | false | 39 | {'enabled': False, 'images': [{'id': 'ft2PxN-_mUCPpCBPSPikSpTPj2DyPdvcv27qziT6pHU', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=108&crop=smart&auto=webp&v=enabled&s=02884e129f373b1430651115330a52690768a53e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=216&crop=smart&auto=webp&v=enabled&s=68ac171c40e9ef8680d5de4043d3e95f871ce29b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=320&crop=smart&auto=webp&v=enabled&s=811614ace1be07d7a974a24e54624d3dc4ae248e', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=640&crop=smart&auto=webp&v=enabled&s=20c62df8fa0996e7cc64765338b987d0ad3c73ff', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=960&crop=smart&auto=webp&v=enabled&s=8772e2a97e874eec54d7ca7e9fc2470bbeba69f8', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=517b64b0c72551a11588201e6386872f3bd5c915', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/c8Q-YSCdhku3VMRsL7yjpCtFKU4aWYqn_OtsNaMT6Ck.jpg?auto=webp&v=enabled&s=e67a13398c4cd031a800011e7cfc57d6c4f0e3ff', 'width': 1200}, 'variants': {}}]} |
||
If you STILL have not been able to get oobabooga to run in Windows, my native installation bandaid seems to be working for people. Maybe it will work for you too. | 1 | null | https://github.com/ClayShoaf/oobabooga-one-click-bandaid | clayshoaf | github.com | 1970-01-01T00:00:00 | 0 | {} | 1206gwp | false | null | t3_1206gwp | /r/LocalLLaMA/comments/1206gwp/if_you_still_have_not_been_able_to_get_oobabooga/ | false | false | default | 1 | null |
|
r/LocalLLaMA Subreddit Statistics | 1 | null | https://subredditstatistics.com/r/LocalLLaMA | rx01 | subredditstatistics.com | 1970-01-01T00:00:00 | 0 | {} | 1209ndu | false | null | t3_1209ndu | /r/LocalLLaMA/comments/1209ndu/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null |
|
Finetuning to beat ChatGPT: It's all about communication & management, these are already solved problems | 32 | When I first got the researcher access to GPT3, before it was famous, it would only continue the prompt. It only took a moment, but it was a perspective shift to understand that if you write *"What's the capital of France?"* that the most obvious continuation is not the answer, but another question about geography. It's logical, but it's not expected.
I know everyone here knows this already, but this first perspective shift is the key to understanding how to finetune, and this is the key to beat both ChatGPT and Alpaca.
Humans already have an established understanding of how certain words ought to be responded to, in any particular context. Yet instead of training for this, OpenAI trained for "instruct" and Alpaca trained only questions and answers.
The correct and only way to train a general large language model that accepts human language as input is to train it to respond as a human would. Humans do not assume everything is a question, nor do they take everything as an instruction.
Why must it respond like a human? Because the expectation of that is already built into the language, and the structure of the language is based on these unsaid assumptions. For example, without other instruction, if I state a question I expect the answer, and likewise, giving further instructions (e.g. give a wrong answer) overrides the question. These rules exist, but they are not spelled out in the language itself. Hence they must be trained. You simply cannot avoid the fact that the language itself only works on the assumption that the listener is able to infer the expectation and context.
Therefore, a general model should be trained on a range of responses to learn the ability to infer the expectation and context. Finetuning is not about training the model to answer questions, or follow instructions, but about training how it should react. How it should react is how you'd expect it to react.
Doing this is not difficult. This is not an unsolved problem, it's already solved. It's just a matter of training those assumptions.
What should not be done, is to train humans to talk differently to the language model. That's the wrong way around.
I intend to do this and make it freely available and opensource. I will make a large database of thousands of such prompts with potential responses. However, I could do with help. The type of help that would be most useful would be server admin type people. I used to run an AI-based company focused on document classification and layout analysis and I was originally a "programmer", so I understand how to manage, format training data, clean the data, and all that, but I'm not a server admin. I will need help from someone who can do the training etc. I will also need hardware, so if anyone volunteers, that's great; otherwise I can rent hardware for training on vast.ai.
This database of prompts could be used in future for training many different language models.
\-----------
It's worth noting that higher quality results can be obtained by avoiding templating the prompts and responses in the finetuning. Rather the prompt need only contain minimal labelling of what is user input and what is response.
The training examples should contain the ability to continue a conversation, which OpenAI proved to be the simple matter of just appending new questions to the end of the previous prompt up to a maximum size. Obviously then, training data should include such larger prompts containing multiple questions and answers. Ideally, it should not, however, be forced into a specific format.
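As a purely illustrative sketch of what such a minimally-templated, multi-turn training sample could look like (the labels and structure below are assumptions, not a proposed standard):

```python
# Hypothetical multi-turn training sample with minimal labelling of who said what.
# The "User:"/"Response:" tags and the dict keys are illustrative assumptions.
sample = {
    "prompt": (
        "User: What's the capital of France?\n"
        "Response: Paris.\n"
        "User: And roughly how many people live there?\n"
        "Response:"
    ),
    "completion": (
        " A bit over two million in the city proper, around twelve million "
        "in the wider metropolitan area."
    ),
}
print(sample["prompt"] + sample["completion"])
```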
You also have the philosophical matter of what to do with unanswerable questions. For example, how should a model that doesn't have access to the time answer "what is the time?" It could:
1. Say it doesn't know the time.
2. Explain that it's an AI and it doesn't have access to the time.
3. Explain how to get the time, e.g. check your phone, or go to time.com
Any of those 3 answers would be a fair "human" answer. It's more of a policy matter than a language/cultural matter for a question like this.
There are a few potential solutions: (1) choose a policy before training, (2) train the model to give all options, (3) ask the user for further context when needed. The perfect example of doing this wrong is "Hey, Google" on my Google speaker which insists on giving me unwanted information when asked a simple question.
\-----------
Example of how bad some of the Alpaca training data are:
`"instruction": "Develop a machine learning model to predict the price of a given stock.",`
`"input": "The historical daily price data of three stocks of a technology company, namely ABC stock, XYZ stock and PQR stock, from the past 5 years.",`
`"output": "The machine learning model developed will use the input historical stock prices data to predict the future prices of the mentioned stocks. The model will be trained with the given dataset to extract patterns and trends from the data to generate a more accurate prediction. The Jupyter Notebook will have code written in Python, which will take the input data, train and test the model, and generate the predicted prices."`
They're training it to pretend to help. *How to build a house? (1) Build house.* Thanks Alpaca!
\-----------
Finally, I also have the idea to train Llama to use tools (calculator, Internet access, etc.) either by training it to use a text-based application that has limited functionality (safer) or by training it to write bash files whenever it needs to access external functionality (which would need its own environment). The way to do this would be to train a logistical/manager model that communicates with more general models, very much like how a company works. I successfully used a design like this in a large-scale automated process in the past, and I believe the same can work here.
I will write a new post for this because it's a separate idea to the purpose of this post. Update, it's here: [https://www.reddit.com/r/LocalLLaMA/comments/120fw2u/training\_llama\_for\_tooluse\_via\_a/](https://www.reddit.com/r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/) | null | https://www.reddit.com/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/ | Pan000 | self.LocalLLaMA | 2023-03-24T12:38:57 | 0 | {} | 120e7m7 | false | null | t3_120e7m7 | /r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/ | false | false | self | 32 | null |
Whats the fastest implementation for a 24Gb GPU with a 16-core CPU? | 3 | I have been testing various implementations of LLaMA, like llama.cpp, Torch versions with and without int8 quantization, etc., but nothing seems particularly fast (> 8 seconds, even for 7B on GPU, for 256-token generations).
Are there really fast implementations out yet? Maybe ONNX compilations or something to speed up inference? I'm looking for sub-2-second generation of 256 tokens, for voice interaction. I can get that from the OpenAI API, but I want something local.
Thanks! | null | https://www.reddit.com/r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/ | Reddactor | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120etwq | false | null | t3_120etwq | /r/LocalLLaMA/comments/120etwq/whats_the_fastest_implementation_for_a_24gb_gpu/ | false | false | self | 3 | null |
Would it be better for me to run it on gpu or cpu | 3 | I understand there are two ways of running it: normally on the GPU, or on the CPU with llama.cpp.
I have a 2080 Ti and a Ryzen 5 2600; what do you think?
Generally, what are the advantages of running it on the CPU? | null | https://www.reddit.com/r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/ | lord_baba | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120eu0u | false | null | t3_120eu0u | /r/LocalLLaMA/comments/120eu0u/would_it_be_better_for_me_to_run_it_on_gpu_or_cpu/ | false | false | self | 3 | null |
Training Llama for tool-use via a manager/logistical model | 5 | My [earlier post](https://www.reddit.com/r/LocalLLaMA/comments/120e7m7/finetuning_to_beat_chatgpt_its_all_about/) was about training Llama to understand human-language context and expectations for how it should react to user input.
The next obvious stage is tool use, which is the use of calculator, Internet browsing, local search, etc.
And to do that requires stages, which requires logistics.
**Logistics / Management**
As I said in my earlier post, these are already solved problems. In this case it's just logistics. Automated AI logistics is something I was doing a lot of a few years back.
There would be multiple models, each with a specific job:
BASIC MODEL
A general model that is like the existing models. It replies to the user based on their input. However, this model would also be trained to detect instances in which it should refer to its manager, for example if the user asked something complicated such as "design me a website".
If the basic model has trouble identifying what it should answer itself versus refer to the manager, a diagnostician model can be used whose entire job is to analyze inputs and choose which model they're sent to. However, this adds an additional layer of processing when it's not required for the majority of the usage, and so it doubles the processing power needed for most tasks, hence why it's favorable to train the most commonly used model to also act as a diagnostician.
MANAGER MODEL
This model is trained to separate a problem into individual defined tasks and to do so within a template that can be parsed. For example, "design me a website" would result in it responding with questions for context, or if it has enough context, a list of tasks (choose title, design logo, create logo image, choose layout, write HTML & CSS, etc.)
Depending on how much it can do in one go, which we won't know until experimenting, it may or may not be necessary to have a separate model for analyzing the manager's plan. That would be a logistics model, which checks that the plan makes sense, determines the order it needs to be executed in, and identifies any missing prerequisite information (e.g. design logo must be completed before create logo image).
Further, it may be necessary to have a model that translates written instructions "design a logo of a cat 300x900 pixels in png format" to parseable instructions `"tool:stablediffusion loadmodel:logogen prompt:cat pipeto upscaler"` (obv just to get the idea). It's possible all this can be done within one manager model, or separate models may be needed.
PARSER / EXECUTER
The manager outputs parseable instructions, which it is trained to produce, and these are then given to other models. They are parsed and executed in the order provided (some jobs can run in parallel, some require waiting for another to complete first).
Syntax and other error checking can be done directly by the executer on the received response from the appropriate model, before then sending it to Quality Control.
QUALITY CONTROLLER
This model is trained where the input is the prompt and information that were used to generate a multi-modal response, and all the media (as filenames), along with any other information such as errors. From this it'll be trained to look for issues. What it looks for and how it does it would be an evolving situation.
It may be that an individually trained quality control model is used for each tool. For example, one model checks that the HTML does indeed look like what the prompt requested. Another is trained on images of professional logos along with the prompt information (not yet possible with LLaMA but will be possible within a few months, no doubt).
DELIVERY MODEL
Trained on the original prompts, info & response from Quality Control, this is a fairly straightforward model that delivers the finished product to the user. Replies to this model go back to the Manager model. In the real world, the Delivery Model is the Manager's job but in this case it makes more sense to separate them so as to not dilute the Manager's training.
**More complicated tasks**
The crux of the issue is the quality of the training data for training the manager and the task-separating parts. In theory, any complexity product could be generated assuming the tools are available to do so.
The difficulty in the job of a manager, which I assume this model will also have difficulty with, is not forgetting something or knowing what you've forgotten. It becomes obvious in a business if you try to make a computer game but something was forgotten, e.g. you forgot to think about the music when designing the game.
It may be that the whole issue comes down to training models that think about very specific issues that need to be solved. An example might be that the game background music should be shuffled, using a shuffling algorithm, which is different to randomly playing it. But why would it know that? It might know that. It might not. So far we've not tested the limits here. It might be that this will always rely on user input; it might be that the manager model can be trained to think of everything. We'll find out, but either way, it'll still work. The question is only how much can be done without the user.
It's probably necessary to have the manager display the plan to the user to approve before executing it. It may be necessary to have stages of plans in different levels of detail, breaking down each task into smaller and smaller tasks until the entire job is specified onto the smallest level. Or it might not be.
**Tool use**
Tool use is fairly straightforward. In theory you could have it generate bash files, double check each bash file with a quality control model that checks the bash file does what the prompt asked for, check it for syntax errors, and then run it. The output could be fed back into the model as a hidden variable, which the model would be trained to understand meant it should read it back.
For example: What's the time?
The model creates a bash file to output the time and tags it with a `#run` command and templated response `"The current time is [INSERT]"`. This is detected by a simple parsing of the result before displaying to the user. The bash file is checked by the separate quality control model. It's then run. The bash file code is deleted from the output and `[INSERT]` is replaced with the bash file output.
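A minimal sketch of that parse-execute-substitute step is shown below; the `#run` tag and `[INSERT]` placeholder come from the description above, while the `#endrun` delimiter and everything else are assumptions:

```python
# Sketch of the parse-execute-substitute loop described above.
# Assumes the model emits its bash code between "#run" and "#endrun" markers and
# uses "[INSERT]" in the templated reply -- the markers themselves are assumptions.
import subprocess
import tempfile

def execute_model_output(model_output: str) -> str:
    if "#run" not in model_output:
        return model_output  # nothing to execute, show the reply as-is

    before, rest = model_output.split("#run", 1)
    script, after = rest.split("#endrun", 1)

    # Write the generated bash to a temp file and run it (quality control /
    # sandboxing would happen before this step).
    with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
        f.write(script)
        path = f.name
    result = subprocess.run(["bash", path], capture_output=True, text=True)

    # Drop the bash code from the reply and fill the template with its output.
    return (before + after).replace("[INSERT]", result.stdout.strip())

print(execute_model_output(
    'The current time is [INSERT]\n#run\ndate "+%H:%M"\n#endrun'
))
```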
However, it'll be safer, easier to train complicated multi-layered tasks, and also to integrate Internet searching, etc. if there were a single interface that Llama were trained on with fixed abilities.
For example, a command-line application that can get the time, return a web page, can search on multiple search engines and Wikipedia and return the results, etc. It'd be the console interface that the language model was trained to use to communicate with the outside world.
I think ultimately it would be worth training a model to use an interface made for it rather than giving it broad access. Not necessarily because of safety, but because it'll be much easier to determine the quality of the outputs and it limits the model to using fixed resources available to it, which might make it produce better results.
The interface would be able to be updated with new features. That would require retraining and updating the model for all users, but that's not a problem.
*If anyone wants to help me make this an opensource reality, let me know.* | null | https://www.reddit.com/r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/ | Pan000 | self.LocalLLaMA | 2023-03-24T10:25:45 | 0 | {} | 120fw2u | false | null | t3_120fw2u | /r/LocalLLaMA/comments/120fw2u/training_llama_for_tooluse_via_a/ | false | false | self | 5 | null |
How do I fine tune 4 bit or 8 bit models? | 0 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 120kojq | false | null | t3_120kojq | /r/LocalLLaMA/comments/120kojq/how_do_i_fine_tune_4_bit_or_8_bit_models/ | false | false | default | 0 | null |
||
Using LLaMA in interactive mode and reverse prompts? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/120lp6t/using_llama_in_interactive_mode_and_reverse/ | amiatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120lp6t | false | null | t3_120lp6t | /r/LocalLLaMA/comments/120lp6t/using_llama_in_interactive_mode_and_reverse/ | false | false | default | 1 | null |
Change Dalai model folder? | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 120s66r | false | null | t3_120s66r | /r/LocalLLaMA/comments/120s66r/change_dalai_model_folder/ | false | false | default | 1 | null |
||
Keep your GPUs cool | 9 | This may not apply to people running gaming hardware, but for those of us running Teslas, possibly without proper cooling: keep your GPUs cool.
I've been playing around with LLaMA 30B 4-bit on my M40. After loading the model, for the first few prompts I'm generating about 8 tokens/second; a few prompts in, my temps are up to 87C and I'm only getting 1-2 tokens/s.
Anyway, a reminder for those who may be forgetful like me: order a cooling shroud or something. I ordered a few off of eBay, so hopefully they work. But damn, eBay takes a long time to ship to Canada. | null | https://www.reddit.com/r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/ | wind_dude | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120spmv | false | null | t3_120spmv | /r/LocalLLaMA/comments/120spmv/keep_your_gpus_cool/ | false | false | self | 9 | null |
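For anyone who wants to watch for this programmatically, a small sketch using the nvidia-ml-py (pynvml) bindings can poll the GPU temperature; the 85C threshold and 10-second poll interval below are arbitrary assumptions:

```python
# Minimal GPU temperature watchdog sketch using nvidia-ml-py (pynvml).
# The 85 C threshold and 10 s poll interval are arbitrary assumptions.
import time
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU

try:
    while True:
        temp = pynvml.nvmlDeviceGetTemperature(handle, pynvml.NVML_TEMPERATURE_GPU)
        print(f"GPU temperature: {temp} C")
        if temp >= 85:
            print("Warning: GPU is running hot -- expect thermal throttling.")
        time.sleep(10)
finally:
    pynvml.nvmlShutdown()
```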
Which LLaMA model is best for my set up? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/120x6eh/which_llama_model_is_best_for_my_set_up/ | EnvironmentalAd3385 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 120x6eh | false | null | t3_120x6eh | /r/LocalLLaMA/comments/120x6eh/which_llama_model_is_best_for_my_set_up/ | false | false | default | 1 | null |
Testing out image recognition input techniques and outputs by modifying the sd_api_picture extension, using Oobabooga and LLaMA 13B in 4-bit mode | 20 | Just thought to share some various ways to use/change the existing image recognition and image generating extensions.
[https://imgur.com/a/KEuaywA](https://imgur.com/a/KEuaywA)
I was able to get the AI to identify the number and type of objects in an image, by telling the AI in advance and having it wait for me to send an image. Using LLaMA and my ChatGPT character card ([https://old.reddit.com/r/Oobabooga/comments/11qgwui/getting\_chatgpt\_type\_responses\_from\_llama/](https://old.reddit.com/r/Oobabooga/comments/11qgwui/getting_chatgpt_type_responses_from_llama/)) I can actually tell the AI that I'm going to send a picture, and it responds appropriately and waits for me to send the image...wow!
I've also modified the script.py file for the sd\_api\_pictures extension for Oobabooga to get better picture responses. I essentially just deleted the default input messages to the image generating portion of the pipeline. The Image with the astronaut is using the standard script.py file, and the following images use my modified version, you can get here:
Google Drive link with, the Character Card, settings preset, example input image of vegetables, and modded script.py file for the sd_api_pictures extension:
https://drive.google.com/drive/folders/1KunfMezZeIyJsbh8uJa76BKauQvzTDPw | null | https://www.reddit.com/r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-25T15:32:30 | 0 | {} | 1211u41 | false | null | t3_1211u41 | /r/LocalLLaMA/comments/1211u41/testing_out_image_recognition_input_techniques/ | false | false | self | 20 | {'enabled': False, 'images': [{'id': 'dS3rdb6qhR5L7kujCCTYZo67gjIK9OkqcxuDOenNd1I', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?width=108&crop=smart&auto=webp&v=enabled&s=f617f3c272c6b83ae4d52d9c5b67f69c10eda429', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?width=216&crop=smart&auto=webp&v=enabled&s=24b5f992d2c82b272380864c4ed8824435ce36e4', 'width': 216}, {'height': 640, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?width=320&crop=smart&auto=webp&v=enabled&s=51a3f64160753c22aa4446b896e5bc301580ef23', 'width': 320}, {'height': 1280, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?width=640&crop=smart&auto=webp&v=enabled&s=464e24fc8dbaf1613ca5e31f522dd71fd702e869', 'width': 640}], 'source': {'height': 2120, 'url': 'https://external-preview.redd.it/_DwFztnI2zXkr1SeelHmcKXZGtyX2nI6JXaZTxDsofE.jpg?auto=webp&v=enabled&s=5d5e9472fa1e85c60165a3df52ff97ae42c750e3', 'width': 886}, 'variants': {}}]} |
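For anyone curious what that portion of the pipeline is doing (a hedged sketch — this is not the extension's actual script.py): the extension ultimately posts a prompt to the Stable Diffusion web UI's txt2img API, so stripping the default injected text presumably means sending only the model's own description, roughly like this:

```python
# Hedged sketch (not the real sd_api_pictures script.py): a bare-bones call to
# the AUTOMATIC1111 web UI API using only the supplied text as the prompt.
import base64
import requests

SD_URL = "http://127.0.0.1:7860"  # default local address of the SD web UI API

def generate_image(prompt: str, path: str = "out.png") -> str:
    payload = {"prompt": prompt, "negative_prompt": "", "steps": 20}
    r = requests.post(f"{SD_URL}/sdapi/v1/txt2img", json=payload, timeout=300)
    r.raise_for_status()
    image_b64 = r.json()["images"][0]  # base64-encoded PNG returned by the API
    with open(path, "wb") as f:
        f.write(base64.b64decode(image_b64))
    return path

generate_image("a bowl of fresh vegetables on a wooden table, photo")
```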
Run llama.cpp with tea – without the installation pain! | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1216du7/run_llamacpp_with_tea_without_the_installation/ | mfts0 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1216du7 | false | null | t3_1216du7 | /r/LocalLLaMA/comments/1216du7/run_llamacpp_with_tea_without_the_installation/ | false | false | default | 1 | null |
Managed to get 4-bit Llama 7b running but it's generating random text. | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1219iq2/managed_to_get_4bit_llama_7b_running_but_its/ | MimeMagic | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1219iq2 | false | null | t3_1219iq2 | /r/LocalLLaMA/comments/1219iq2/managed_to_get_4bit_llama_7b_running_but_its/ | false | false | default | 1 | null |
Can't tell if the AI is repeating itself or actually being stubborn HAL? | 6 | null | Inevitable-Start-653 | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 121ad9h | false | null | t3_121ad9h | /r/LocalLLaMA/comments/121ad9h/cant_tell_if_the_ai_is_repeating_itself_or/ | false | false | 6 | {'enabled': True, 'images': [{'id': 'e7N4xnKzBmpMEWucb_7lM4bqYKvzkc5v61sBChQW1fc', 'resolutions': [{'height': 216, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?width=108&crop=smart&auto=webp&v=enabled&s=28f2c84584097d2ff7cf860db3e38a62a18ff1bb', 'width': 108}, {'height': 432, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?width=216&crop=smart&auto=webp&v=enabled&s=17d8199f6ee2128b8b9bd56bb629755e1507d7fc', 'width': 216}, {'height': 640, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?width=320&crop=smart&auto=webp&v=enabled&s=dbb6541cfe3244f8bcb20ad7212ec06b836a3640', 'width': 320}, {'height': 1280, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?width=640&crop=smart&auto=webp&v=enabled&s=afd929a5c15d30388d9b7f815bb976be650c5133', 'width': 640}], 'source': {'height': 6184, 'url': 'https://preview.redd.it/vmqsvwkiatpa1.png?auto=webp&v=enabled&s=cd3d5e6a009605d65e325540555f0af03517e88b', 'width': 816}, 'variants': {}}]} |
Implementing Reflexion into LLaMA/Alpaca would be an really interesting project | 15 | null | https://www.reddit.com/r/MachineLearning/comments/1215dbl/r_reflexion_an_autonomous_agent_with_dynamic/ | jd_3d | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 121b1l5 | false | null | t3_121b1l5 | /r/LocalLLaMA/comments/121b1l5/implementing_reflexion_into_llamaalpaca_would_be/ | false | false | 15 | {'enabled': False, 'images': [{'id': 'q3evP6JeDpAC2MdSQHWYxnCYTqbJkElIQsLFqVSdkss', 'resolutions': [{'height': 63, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=586089b93aa59ebd86bb3b273ad1fb0c73e45ab7', 'width': 108}, {'height': 126, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=00869aa5692fb9c8aa11f48ed92bff8db4f47293', 'width': 216}, {'height': 186, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=72f6ae2c0800df8a56c3fc74afb033bf37cc16a9', 'width': 320}, {'height': 373, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=cfcb5f9f66743f2e26952e5edff4dfed984af692', 'width': 640}, {'height': 560, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=821ed287940b59a56b2643dcaf6a356ccfdc4eb5', 'width': 960}, {'height': 630, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=f101972ffc7ec2e3eedefa45eaa677e4d9024520', 'width': 1080}], 'source': {'height': 700, 'url': 'https://external-preview.redd.it/0HhwdU6MKIAKjL9Y8-B_iH374a3NiPTy0ib8lmloRzA.jpg?auto=webp&v=enabled&s=757c00601aa4ffb984c87000927a0610d04c3845', 'width': 1200}, 'variants': {}}]} |
How to use LoRA (need instructions) | 3 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/ | Famberlight | self.LocalLLaMA | 2023-03-25T08:30:11 | 0 | {} | 121ehwj | false | null | t3_121ehwj | /r/LocalLLaMA/comments/121ehwj/how_to_use_lora_need_instructions/ | false | false | default | 3 | null |
Need help installing alpaca on Android | 0 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/ | -2b2t- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121g7yn | false | null | t3_121g7yn | /r/LocalLLaMA/comments/121g7yn/need_help_installing_alpaca_on_android/ | false | false | default | 0 | null |
Has anyone tried Dolly AI? | 0 | [deleted] | null | [deleted] | 2023-03-25T12:26:22 | 0 | {} | 121imtt | false | null | t3_121imtt | /r/LocalLLaMA/comments/121imtt/has_anyone_tried_dolly_ai/ | false | false | default | 0 | null |
Help installing LLaMA on ubuntu with Tesla M40 24GB | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121kdnc/help_installing_llama_on_ubuntu_with_tesla_m40/ | MageLD | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121kdnc | false | null | t3_121kdnc | /r/LocalLLaMA/comments/121kdnc/help_installing_llama_on_ubuntu_with_tesla_m40/ | false | false | default | 1 | null |
Simulating Aristotle in Alpaca 7B, I used gpt4 to build the character profile, I've been surprised and just wanted to share. I have some more like this that I can share if you like it :D | 35 | null | psycholustmord | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 121nytl | false | null | t3_121nytl | /r/LocalLLaMA/comments/121nytl/simulating_aristotle_in_alpaca_7b_i_used_gpt4_to/ | false | false | 35 | {'enabled': True, 'images': [{'id': '8MdAsu39-EmfKrxvC9bjfAcatC8OSj-_OLidKAh7w6M', 'resolutions': [{'height': 84, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?width=108&crop=smart&auto=webp&v=enabled&s=3bcecb428949ca686c3378dcbcb1e1c32afc8862', 'width': 108}, {'height': 168, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?width=216&crop=smart&auto=webp&v=enabled&s=77f20c623c93040ccf88b61ee7b7ce02d723476c', 'width': 216}, {'height': 250, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?width=320&crop=smart&auto=webp&v=enabled&s=76399a6d4a5777e5d331f936b401bad08ad7c9e4', 'width': 320}, {'height': 500, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?width=640&crop=smart&auto=webp&v=enabled&s=aa2e377972c49954007514bad08f8e167739a8a7', 'width': 640}], 'source': {'height': 705, 'url': 'https://preview.redd.it/a2wx8exgdwpa1.png?auto=webp&v=enabled&s=b9575fcae2508deee18ab52349479f6588c9ca8c', 'width': 902}, 'variants': {}}]} |
fastLLaMa, A python wrapper to run llama.cpp | 16 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121pk4y/fastllama_a_python_wrapper_to_run_llamacpp/ | BriefCardiologist656 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121pk4y | false | null | t3_121pk4y | /r/LocalLLaMA/comments/121pk4y/fastllama_a_python_wrapper_to_run_llamacpp/ | false | false | default | 16 | null |
Question on llama.cpp and webui (ram usage) | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/121redn/question_on_llamacpp_and_webui_ram_usage/ | kozer1986 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121redn | false | null | t3_121redn | /r/LocalLLaMA/comments/121redn/question_on_llamacpp_and_webui_ram_usage/ | false | false | default | 2 | null |
Out of memory with 13B on an RTX 3060? | 3 | I've been messing around with 13B on my CPU with some nice but very slow results.
I just upgraded my GPU to an RTX 3060 with 12GB, thinking it would be enough to run this model (I recall hearing someone mention using a similar GPU for 30B), but when I tried, I got a CUDA out-of-memory error.
What could be the issue? Did I misinterpret the hardware requirements?
For what it's worth, I'm running the program using a little hacked-together Python script I wrote, and using decapoda-research/llama-13b-hf (so, I guess not quantized like the models in Dalai? Could that be the issue?). | null | https://www.reddit.com/r/LocalLLaMA/comments/121xqmv/out_of_memory_with_13b_on_an_rtx_3060/ | amiatt | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121xqmv | false | null | t3_121xqmv | /r/LocalLLaMA/comments/121xqmv/out_of_memory_with_13b_on_an_rtx_3060/ | false | false | self | 3 | null |
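A likely culprit (hedged, since the script isn't shown): `from_pretrained` loads the HF checkpoint in full fp32 by default, which is roughly 50 GB of weights for 13B, so a 12 GB card will run out of memory regardless of the prompt. A minimal lower-memory sketch with `transformers` and `accelerate`:

```python
# Sketch only: avoid the default fp32 load of the HF checkpoint.
# Requires: pip install transformers accelerate
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "decapoda-research/llama-13b-hf",
    torch_dtype=torch.float16,  # ~26 GB of weights instead of ~52 GB in fp32
    device_map="auto",          # lets accelerate spill overflow layers to CPU RAM
)
```

This will load, but layers offloaded to system RAM are slow; per the 4-bit requirements table near the top of this document, a 4-bit GPTQ conversion of 13B fits in about 10 GB and stays entirely on the card.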
Starting model? | 5 | Apologies for the basic questions but...
What is the best model to start with? LLaMA or Alpaca? Is it best to start with 7B, or, if your system can handle it, just start at the highest parameter count you can run?
I have not really seen many head-to-head comparisons, or much advice on where it is best to start out.
I have access to an AMD 32-core / 64-thread system with a 3080 Ti video card (12 GB VRAM) and 64 GB RAM.
Would you go with the 30B? | null | https://www.reddit.com/r/LocalLLaMA/comments/121y16f/starting_model/ | dave9199 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121y16f | false | null | t3_121y16f | /r/LocalLLaMA/comments/121y16f/starting_model/ | false | false | self | 5 | null |
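As a rough rule of thumb behind the requirement tables near the top of this document (a back-of-the-envelope sketch that ignores context and activation overhead), weight memory is roughly the parameter count times the bits per weight, divided by eight:

```python
# Back-of-the-envelope weight footprint in GB, ignoring KV-cache/activation overhead.
def approx_weight_gb(params_billion: float, bits: int) -> float:
    return params_billion * bits / 8  # 1B parameters at 8-bit is roughly 1 GB

for size in (7, 13, 30, 65):
    print(f"{size}B: ~{approx_weight_gb(size, 4):.1f} GB at 4-bit, "
          f"~{approx_weight_gb(size, 8):.1f} GB at 8-bit")
```

On a 12 GB card that puts a 4-bit 13B comfortably in VRAM, while a 4-bit 30B (about 15 GB of weights plus overhead) needs offloading — which matches the tables above.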
Can we create a megathread for cataloging all the projects and installation guides of Llama? | 37 | ***Disclaimer - The title and text of this post were created with the help of ChatGPT***
Hey everyone,
As Llama continues to grow and more projects and installation guides become available, it can be challenging to keep track of all of them. I'm sure many of you have seen multiple posts related to Llama, and it can be overwhelming to sort through them all.
Therefore, I propose that we create a megathread to collect and catalog all the projects and installation guides for Llama. This thread can serve as a one-stop-shop for anyone interested in learning more about Llama or installing it on their device.
I believe having a centralized thread for Llama projects and guides will help reduce clutter on the Subreddit and make it easier for users to find the information they need. It will also encourage more people to share their projects and guides, knowing that they have a dedicated space to do so.
What do you all think? Would you find a megathread useful? And if so, can we work together to create and maintain it?
Thanks for your time, and I look forward to hearing your thoughts. | null | https://www.reddit.com/r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/ | utkvishwas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 121yfxl | false | null | t3_121yfxl | /r/LocalLLaMA/comments/121yfxl/can_we_create_a_megathread_for_cataloging_all_the/ | false | false | self | 37 | null |
r/LocalLLaMA posts
Posts from r/LocalLLaMA pulled through Jan 6, 2025 with arctic-shift. Now you can check whether your wonderfully thought-out post has already been asked 30 times.
Usage
For simple semantic search, try loading it in the vectorsearch-hub-datasets space:
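Alternatively, here is a minimal sketch of loading the posts locally with the `datasets` library — the repo id below is a placeholder, so substitute the actual dataset id shown at the top of this page:

```python
# Placeholder repo id - replace with the real id for this dataset.
from datasets import load_dataset

posts = load_dataset("<user>/r-localllama-posts", split="train")

# Naive duplicate check before posting: substring match on titles.
hits = posts.filter(lambda row: "4-bit" in (row["title"] or "").lower())
print(len(hits), "earlier posts mention 4-bit")
```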