title (string) | score (int64) | selftext (string) | created (timestamp[ns]) | url (string) | author (string) | domain (string) | edited (timestamp[ns]) | gilded (int64) | gildings (string) | id (string) | locked (bool) | media (string) | name (string) | permalink (string) | spoiler (bool) | stickied (bool) | thumbnail (string) | ups (int64) | preview (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Guide to the different parameters and models in web UI? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1221xih/guide_to_the_different_parameters_and_models_in/ | Nomski88 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1221xih | false | null | t3_1221xih | /r/LocalLLaMA/comments/1221xih/guide_to_the_different_parameters_and_models_in/ | false | false | default | 1 | null |
Speed difference between 8bit and 4bit | 5 | Is 4bit supposed to run 5 times faster than 8bit? I'm getting 20 tokens/s on 4bit, but only 4 tokens/s on 8bit. Using a 3060 12gb, on WSL.
I just installed webui on WSL with a lot of trouble, and I want to make sure it's installed correctly.
I had 4bit working right away when following the pinned guide on this subreddit, but 8bit CUDA didn't work, even though the model still loaded sometimes and filled my VRAM. Then I reinstalled everything, added the step with symlinks, and now both 4bit and 8bit work fine according to the CUDA messages, but 8bit speed is the same as it was when the model loaded after saying CUDA wasn't found.
This whole thing is kind of a mess, honestly. I don't think WSL should be the recommended installation method. I can't even figure out how to create any kind of shortcut that would open an Ubuntu terminal, automatically activate conda, and launch the webui (without modifying .bashrc).
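One untested idea (the distro name, conda path, environment name, and server flags below are all placeholders for whatever your setup uses) is to let wsl.exe run the launch command directly, sourcing conda explicitly so .bashrc never needs to change:

    wsl.exe -d Ubuntu -- bash -lc "source ~/miniconda3/etc/profile.d/conda.sh && conda activate textgen && cd ~/text-generation-webui && python server.py --auto-devices --cai-chat"

That command could be pasted into a Windows shortcut's Target field or a .bat file, so double-clicking it opens a terminal, activates the environment, and starts the webui.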
Edit: Ok, after further testing, it seems like while 4bit always generates tokens very fast, it takes a lot more time to generate the first token as the context grows in size. So 4bit token speed depends heavily on how large of a response you set/get. I think this performance difference should be explained somewhere.
I'm also getting a ton of empty replies with error "UnboundLocalError: local variable 'reply' referenced before assignment"
I think I'm gonna wait some more before getting into local text AI, doesn't seem ready for reliable usage yet. | null | https://www.reddit.com/r/LocalLLaMA/comments/122218d/speed_difference_between_8bit_and_4bit/ | wewyman | self.LocalLLaMA | 2023-03-25T23:57:32 | 0 | {} | 122218d | false | null | t3_122218d | /r/LocalLLaMA/comments/122218d/speed_difference_between_8bit_and_4bit/ | false | false | self | 5 | null |
video tutorial for AMD GPU to run LLaMA and LoRA on Ubuntu | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1224fh0/video_tutorial_for_amd_gpu_to_run_llama_and_lora/ | chain-77 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1224fh0 | false | null | t3_1224fh0 | /r/LocalLLaMA/comments/1224fh0/video_tutorial_for_amd_gpu_to_run_llama_and_lora/ | false | false | default | 2 | null |
My experience with Alpaca.cpp | 52 | Ever since ChatGPT launched, I've been very fascinated by it and have sought a local alternative that can run fairly well on decent consumer hardware and that "anyone" can use if their hardware allows. Naturally, I was excited when I saw [this post](https://www.reddit.com/r/StableDiffusion/comments/11y6qs7/free_opensource_30_billion_parameters_minichatgpt/) the other day and I wasn't disappointed -- I feel like [alpaca.cpp](https://github.com/antimatter15/alpaca.cpp) with the [30B model in 4-bit quantization](https://huggingface.co/Pi3141) has made that a reality (though, I'm sure a good experience can be had with the 13B model for users with less than 32 GB of RAM or who wish for faster responses).
The steps are essentially as follows:
1. download the appropriate [zip file](https://github.com/antimatter15/alpaca.cpp/releases/tag/81bd894) and unzip it
2. place whatever model you wish to use in the same folder, and rename it to "ggml-alpaca-7b-q4.bin"
3. run ./chat to start with the defaults.
4. for a better experience, you can start it with this command: ./chat -t \[threads\] --temp \[temp\] --repeat\_penalty \[repeat penalty\] --top\_k \[top\_k\] --top\_p \[top\_p\].
Now, on the values to use: I have a 12700k and found that 12 threads works best (ie the number of actual cores I have, not total threads). I'm honestly not sure if this has something to do with the "e" cores on my CPU or is a general rule, but consider the number of physical cores you have to be a good starting point.
For the rest of the values, I like the numbers listed [here](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/):
* For a more creative chat, use: temp 0.72, rep pen 1.1, top\_k 0, and top\_p 0.73
* For a more precise chat, use temp 0.7, repetition\_penalty 1.1764705882352942 (1/0.85), top\_k 40, and top\_p 0.1
Note that one hang-up I had is llama.cpp doesn't interpret a top\_k of 0 as "unlimited", so I ended up setting it to 160 for creative mode (though any arbitrarily high value would've likely worked) and got good results.
So, to run a "creative" chat for instance, the full command would be ./chat -t \[number of physical cores you have\] --temp 0.72 --repeat\_penalty 1.1 --top\_k 160 -- top\_p 0.73.
Aaaaand... that's it. It's not quite ChatGPT but that's all that was needed to get this to work.
​
As for performance, it generates just under 70 words per minute on a 12700k while using just under 20 GB of RAM, though I'm sure this could be sped up significantly by using the 13B model.
As for quality, well.... I'm pretty impressed. I fed it some problems I noticed ChatGPT getting hung up on (ie "Ann is 20, her sister is half her age minus 25, in 10 years who will be older?" And "Bob, Sue, and Charlie went golfing. Bob scored 90, Sue scored 10 higher than Bob, Charlie scored 10 higher than Sue. Who won?") and... it answered them correctly. It was also able to accurately give me lists of wild cats in North America, answer some other random trivia, and hold a conversation convincingly. Not bad.
That's not to say it's perfect of course; it wasn't able to do basic integration (ChatGPT can), it doesn't use previous conversation as context (and therefore can't respond well to prompts like "rephrase that" or "rewrite that, and make it more detailed"), and I was able to "fool" it with humorous results. Take the following for instance:
Prompt: Who was the first person to climb Mt. Chicago?
Response: The first person to climb Mt. Chicago was John Smith, in 1895.
Prompt: Tell me about John Smith climbing Mt. Chicago in 1895
Response:
>John Smith was a well-known mountaineer from the late 1800s who was the first to successfully summit Mt. Chicago, a 23,467 ft peak in the Cascade Range of Washington State. He is credited with creating many of the routes and techniques still used today for climbing the mountain. Smith was also a pioneer in the use of artificial oxygen for high altitude ascents, which he used on his successful Mt. Chicago summit attempt.
>
>Smith began planning his ascent of Mt. Chicago in early 1895 and set out with two companions from Seattle in June. The group made slow progress due to the difficult terrain, but by August they were nearing the summit. On August 24th Smith reached the top of Mt. Chicago, becoming the first person to ever do so. He spent several hours on the peak taking photos and measurements before descending safely back to Seattle.
​
Anyhow, that's been my experience with this program and I think it's worth checking out if you've been wanting to get into this but are intimidated by the setup or by hardware requirements. If you've read this far, thanks, and I hope you find this helpful :) | null | https://www.reddit.com/r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/ | AI-Pon3 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1227uj5 | false | null | t3_1227uj5 | /r/LocalLLaMA/comments/1227uj5/my_experience_with_alpacacpp/ | false | false | self | 52 | {'enabled': False, 'images': [{'id': 'OqNBDrwG79LuAr5fnfIKbOQUisQvHxpNQUeM-WZhodQ', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=6c0ca59aa7fd79b2d1567fac2772bf23bfee3828', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=216&crop=smart&auto=webp&v=enabled&s=617eaddb592178a28df634d97ac9301ad22c91b4', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=320&crop=smart&auto=webp&v=enabled&s=178961aef752fc60963e750c2d712e5992f8c437', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=640&crop=smart&auto=webp&v=enabled&s=db5234ffaee652afedc0ccdfac597ee847ad37a6', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=960&crop=smart&auto=webp&v=enabled&s=d2d79cef81f8312bc496698691989e8a58168f86', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=a9c50b17dc77603cb7f9a65cc8d7fa90439279bc', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/82qVi0yi-kfcWnPBWkpv6Bh52sg4rmoXW_DTRt2YQOQ.jpg?auto=webp&v=enabled&s=7997a0417470c49a1ec383e4efb711c3d33e9de5', 'width': 1200}, 'variants': {}}]} |
With the latest web UI update, old 4-bit weights will no longer work. You will need the new weights. | 19 | **New weights** (torrent file/magnet link)
Converted without group-size (better for the 7B model): [https://github.com/oobabooga/text-generation-webui/files/11069779/LLaMA-HF-4bit.zip](https://github.com/oobabooga/text-generation-webui/files/11069779/LLaMA-HF-4bit.zip) or
magnet:?xt=urn:btih:e88abf1b84290b162f00d3a9d79fb4f8719c2053&dn=LLaMA-HF-4bit&tr=http%3a%2f%2fbt2.archive.org%3a6969%2fannounce&tr=http%3a%2f%2fbt1.archive.org%3a6969%2fannounce
Converted with group-size (better from 13b upwards): [https://github.com/oobabooga/text-generation-webui/files/11070361/LLaMA-HF-4bit-128g.zip](https://github.com/oobabooga/text-generation-webui/files/11070361/LLaMA-HF-4bit-128g.zip) or
magnet:?xt=urn:btih:88f7d9d2460ffcaf78b21e83012de00939eacb65&dn=LLaMA-HF-4bit-128g&tr=http%3a%2f%2fbt2.archive.org%3a6969%2fannounce&tr=http%3a%2f%2fbt1.archive.org%3a6969%2fannounce
**Starting the web UI**
For the models converted without group-size: python server.py --model llama-7b-4bit --wbits 4
For the models converted with group-size: python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128
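Following the same pattern (the folder name here is an assumption based on the torrent naming), a 30B group-size model would be loaded with:

    python server.py --model llama-30b-4bit-128g --wbits 4 --groupsize 128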
**Benchmarks** (perplexity)
**7B**
wikitext2 6.259988784790039
wikitext2 6.237235069274902 (groupsize 128)
ptb-new 10.817036628723145
ptb-new 11.199039459228516 (groupsize 128)
c4-new 7.802077293395996
c4-new 8.000247955322266 (groupsize 128)
**13B**
wikitext2 5.341851711273193
wikitext2 5.242600440979004 (groupsize 128)
ptb-new 9.474738121032715
ptb-new 9.225408554077148 (groupsize 128)
c4-new 7.071592330932617
c4-new 6.912217617034912 (groupsize 128)
**30B**
wikitext2 4.45449686050415
wikitext2 4.230341911315918 (groupsize 128)
ptb-new 8.377615928649902
ptb-new 8.243087768554688 (groupsize 128)
c4-new 6.390762805938721
c4-new 6.231330394744873 (groupsize 128)
**65B**
wikitext2 3.8416879177093506
wikitext2 3.658999443054199 (groupsize 128)
ptb-new 7.881875991821289
ptb-new 7.780252456665039 (groupsize 128)
c4-new 5.998412609100342
c4-new 5.896479606628418 (groupsize 128)
​
I'll keep this notice up for a few days, and I'll update the guide as soon as possible. For more info, see [here](https://github.com/oobabooga/text-generation-webui/pull/530) and the [Github wiki](https://github.com/oobabooga/text-generation-webui/wiki/LLaMA-model#4-bit-mode). | null | https://www.reddit.com/r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/ | Technical_Leather949 | self.LocalLLaMA | 2023-03-26T07:59:47 | 0 | {} | 122c2sv | false | null | t3_122c2sv | /r/LocalLLaMA/comments/122c2sv/with_the_latest_web_ui_update_old_4bit_weights/ | false | false | self | 19 | null |
How to implement gpu/cpu offloading for text-generation-webui? [custom device_map] | 3 |
Hello, I am trying to set up a custom device\_map via hugging face's instructions
[https://huggingface.co/docs/transformers/main/en/main\_classes/quantization#offload-between-cpu-and-gpu](https://huggingface.co/docs/transformers/main/en/main_classes/quantization#offload-between-cpu-and-gpu)
I have this code inserted into my "server.py" file for [text-generation-webui](https://github.com/oobabooga/text-generation-webui):
    # Set the quantization config with llm_int8_enable_fp32_cpu_offload set to True
    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    quantization_config = BitsAndBytesConfig(llm_int8_enable_fp32_cpu_offload=True)

    device_map = {
        "transformer.word_embeddings": 0,
        "transformer.word_embeddings_layernorm": 0,
        "lm_head": "cpu",
        "transformer.h": 0,
        "transformer.ln_f": 0,
    }

    model_path = "decapoda-research/llama-7b-hf"

    model_8bit = AutoModelForCausalLM.from_pretrained(
        model_path,
        device_map=device_map,
        quantization_config=quantization_config,
    )
However, there are two problems:
1. It downloads a new copy of the model from hugging face rather than from my model directory.
2. I get this error even after the download
>File "C:\\Windows\\System32\\text-generation-webui\\[server7b.py](https://server7b.py/)", line 33, in <module>
model\_8bit = AutoModelForCausalLM.from\_pretrained(
File "C:\\Users\\justi\\miniconda3\\envs\\textgen\\lib\\site-packages\\transformers\\models\\auto\\auto\_factory.py", line 471, in from\_pretrained
return model\_class.from\_pretrained(
File "C:\\Users\\justi\\miniconda3\\envs\\textgen\\lib\\site-packages\\transformers\\modeling\_utils.py", line 2643, in from\_pretrained
) = cls.\_load\_pretrained\_model(
File "C:\\Users\\justi\\miniconda3\\envs\\textgen\\lib\\site-packages\\transformers\\modeling\_utils.py", line 2966, in \_load\_pretrained\_model
new\_error\_msgs, offload\_index, state\_dict\_index = \_load\_state\_dict\_into\_meta\_model(
File "C:\\Users\\justi\\miniconda3\\envs\\textgen\\lib\\site-packages\\transformers\\modeling\_utils.py", line 662, in \_load\_state\_dict\_into\_meta\_model
raise ValueError(f"{param\_name} doesn't have any device set.")
ValueError: model.layers.0.self\_attn.q\_proj.weight doesn't have any device set.
(textgen) C:\\Windows\\System32\\text-generation-webui>
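If it helps anyone answer: I suspect (but have not verified) that a LLaMA-architecture checkpoint needs its own module names in the device map (model.embed_tokens, model.layers, model.norm, lm_head rather than the transformer.* names used by BLOOM-style models), and that pointing model_path at a local folder would avoid the re-download. A rough, unverified sketch of what I mean, with the local path as an assumption:

    from transformers import AutoModelForCausalLM, BitsAndBytesConfig

    # Assumption: a local HF-format copy of the model, so nothing gets re-downloaded.
    model_path = "./models/llama-7b-hf"

    # Enable fp32 CPU offload for whatever ends up mapped to "cpu".
    quantization_config = BitsAndBytesConfig(
        load_in_8bit=True,
        llm_int8_enable_fp32_cpu_offload=True,
    )

    # LLaMA module names (not the transformer.* names used by BLOOM-style models).
    device_map = {
        "model.embed_tokens": 0,
        "model.layers": 0,
        "model.norm": 0,
        "lm_head": "cpu",
    }

    model_8bit = AutoModelForCausalLM.from_pretrained(
        model_path,
        device_map=device_map,
        quantization_config=quantization_config,
    )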
Does anyone know how to do CPU/GPU offloading for text-generation-webui? | null | https://www.reddit.com/r/LocalLLaMA/comments/122er6u/how_to_implement_gpucpu_offloading_for/ | SomeGuyInDeutschland | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 122er6u | false | null | t3_122er6u | /r/LocalLLaMA/comments/122er6u/how_to_implement_gpucpu_offloading_for/ | false | false | self | 3 | {'enabled': False, 'images': [{'id': 'jfeVG47nZdEkz9kXfW1CcS-Sy8l4DXGb9JErx6bLKfU', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=108&crop=smart&auto=webp&v=enabled&s=d98fc28f03459191933e04a79947277468884240', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=216&crop=smart&auto=webp&v=enabled&s=63df823a11fd89a126d46aa7fe24a61e8802e772', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=320&crop=smart&auto=webp&v=enabled&s=320261cc71f0e3ceb4e3bf8a0d13183bec39c51d', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=640&crop=smart&auto=webp&v=enabled&s=c1ad1e1c2f09bcd80fcb1946c0b9fd262bd87956', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=960&crop=smart&auto=webp&v=enabled&s=d11d529124019d86e37cdf83869dc778034a7213', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=4946d1e6be18d75acabb03d041038e3d4fa8e686', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/Bixm6H31yqw0RCcD8LB0e8eIdtJeMUaF4N5ZipM_BQY.jpg?auto=webp&v=enabled&s=09310ab6255bb753ca4a771c0314bc214f78d2ac', 'width': 1200}, 'variants': {}}]} |
r/LocalLLaMA Subreddit Statistics | 1 | | null | https://subredditstatistics.com/r/LocalLLaMA | rx01 | subredditstatistics.com | 1970-01-01T00:00:00 | 0 | {} | 122i14g | false | null | t3_122i14g | /r/LocalLLaMA/comments/122i14g/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null |
Hi, can someone please tell me some resources how to learn to fine-tune an LLM or llama and learn the keyword like "F16" some YouTube channels or discord communitys or even websites and thanks | 1 | | null | https://www.reddit.com/r/LocalLLaMA/comments/122jsdm/hi_can_someone_please_tell_me_some_resources_how/ | Puzzleheaded_Acadia1 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 122jsdm | false | null | t3_122jsdm | /r/LocalLLaMA/comments/122jsdm/hi_can_someone_please_tell_me_some_resources_how/ | false | false | default | 1 | null |
Can I run the llama model locally with an APU and without a GPU? | 1 | [removed] | null | | [deleted] | | 1970-01-01T00:00:00 | 0 | {} | 122kwnx | false | null | t3_122kwnx | /r/LocalLLaMA/comments/122kwnx/can_i_run_the_llama_model_locally_with_an_apu_and/ | false | false | default | 1 | null |
The real potential of Llama | 18 | I'm using the alpaca-7b-native-GPTQ thanks to the new fix, and you can get really great outputs if you choose a good example at the beginning.
Here's the character.json I'm using right now:
    {
        "char_name": "### Response:",
        "char_persona": "",
        "char_greeting": "",
        "world_scenario": "Below is an instruction that describes a task. Write a response that appropriately completes the request.",
        "example_dialogue": "### Instruction: Write a poem about basketball\n{{char}}\nIn the realm of giants, where heroes play,\nA symphony of sneakers, a dance of grace,\nA tale of hardwood, of sweat and sway,\nThe orange orb takes flight,in the court's embrace.\n \nA ball, a hoop, a dream in motion,\nLeather kisses sky, to net's sweet potion,\nWith each deft dribble, a tale is spun,\nOf legends born and victories won.\n \nFrom shadows of giants, the rookies rise,\nWith hunger and ambition, they reach for the skies,\nA ballet of bodies, as they twist and leap,\nBound by the game's passion, a love so deep.\n \nThe sound of the swish, music to the ear,\nAs the ball finds the net, the crowd's roar we hear,\nCourage and spirit, in their eyes ablaze,\n In the theater of dreams, they etch their plays. \n \nWith heart and soul, they give it their all,\nEach stride, each bounce, each pass, each call,\nIn this realm of giants, where dreams unfold,\nBasketball's story is forever told."
    }
I used a poem written by GPT 4 as the example, and here's the result when I ask it to write another poem:
[I think that a finetuning of alpaca with GPT 4 would result in a really great model on its own!](https://preview.redd.it/8qj8wyfic3qa1.png?width=2073&format=png&auto=webp&v=enabled&s=74559d7ac149b036cda69a11022f746df469b4c3)
|
Can any Dalai experts check this issue out? No output at all, and no sign of increased CPU/RAM usage. Tried on multiple machines using official models converted/quantized by me. | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/122tsis/can_any_dalai_experts_check_this_issue_out_no/ | Virtamancer | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 122tsis | false | null | t3_122tsis | /r/LocalLLaMA/comments/122tsis/can_any_dalai_experts_check_this_issue_out_no/ | false | false | default | 1 | null |
Alpaca.http: CPU Microserver for GET/POST requests via HTTP from any language | 5 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/122vor0/alpacahttp_cpu_microserver_for_getpost_requests/ | Nuked_ | self.LocalLLaMA | 2023-03-27T05:06:45 | 0 | {} | 122vor0 | false | null | t3_122vor0 | /r/LocalLLaMA/comments/122vor0/alpacahttp_cpu_microserver_for_getpost_requests/ | false | false | default | 5 | null |
New Oobabooga Standard, 8bit, and 4bit plus LLaMA conversion instructions, Windows 10 no WSL needed | 29 | ***Update Do this instead***
Things move so fast that these instructions are already outdated. Mr. Oobabooga has updated his repo with a one-click installer... and it works!! omg it works so well too :3
https://github.com/oobabooga/text-generation-webui#installation
***Update Do this instead***
https://youtu.be/gIvV-5vq8Ds
*Updated first step
This is a video of the new Oobabooga installation. Oobabooga has been upgraded to be compatible with the latest version of GPTQ-for-LLaMa, which means your llama models will no longer work in 4-bit mode in the new version.
There is mention of this on the Oobabooga github repo, and where to get new 4-bit models from.
These instructions walk you through a fresh install and cover the standard, 8bit, and 4bit installs, as well as instructions on how to convert your models yourself to be compatible with the new Oobabooga and how to generate your own 4-bit models to accompany the converted llama model.
To access the text file from the video:
https://drive.google.com/drive/folders/1kTMZNdnaHyiTOl3rLVoyZoMbQKF0PmsK
or
https://pastebin.com/1Wc2abrk
****Text From Video****
**FirstStep** Install Build Tools for Visual Studio 2019 (has to be 2019) https://learn.microsoft.com/en-us/visualstudio/releases/2019/history#release-dates-and-build-numbers. Check "Desktop development with C++" when installing. (these instructions are at the 8-bit mode link).
**FirstStep**
I think you need to run this too in your miniconda powershell prompt to give it admin privileges.
powershell -ExecutionPolicy ByPass -NoExit -Command "& 'C:\Users\myself\miniconda3\shell\condabin\conda-hook.ps1' ; conda activate 'C:\Users\myself\miniconda3'"
miniconda link: https://docs.conda.io/en/latest/miniconda.html
cuda information link: https://github.com/bycloudai/SwapCudaVersionWindows
8bit modification link: https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/
conda create -n textgen python=3.10.9
conda activate textgen
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
conda install -c conda-forge cudatoolkit=11.7
conda install -c conda-forge ninja
conda install -c conda-forge accelerate
conda install -c conda-forge sentencepiece
pip install git+https://github.com/huggingface/transformers.git
pip install git+https://github.com/huggingface/peft.git
cd F:\OoBaboogaMarch17\
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
******************************** Testing model to make sure things are working
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
python .\server.py --auto-devices --cai-chat
******************************** Testing model to make sure things are working, things are good!
Now do 8bit modifications
******************************** Testing model to make sure things are working in 8bit
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
python .\server.py --auto-devices --load-in-8bit --cai-chat
******************************** Testing model to make sure things are working, things are good!
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
mkdir repositories
cd repositories
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install
******************************** Convert Weights of original LLaMA Model *Make sure to move tokenizer files too!!
cd F:\OoBaboogaMarch17\text-generation-webui\repositories\GPTQ-for-LLaMa
conda activate textgen
python convert_llama_weights_to_hf.py --input_dir F:\OoBaboogaMarch17\text-generation-webui\models --model_size 13B --output_dir F:\OoBaboogaMarch17\text-generation-webui\models\llama-13b
Example formatting:
python convert_llama_weights_to_hf.py --input_dir /path/to/downloaded/llama/weights --model_size 7B --output_dir ./llama-hf
******************************** Convert Weights of original LLaMA Model
******************************** Testing model to make sure things are working in 8bit
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
python .\server.py --auto-devices --load-in-8bit --cai-chat
******************************** Testing model to make sure things are working, things are good!
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
conda install datasets -c conda-forge
******************************** CREATE 4-BIT Addon Model *ATTENTION: PAY ATTENTION TO THE DIRECTION OF THE SLASHES WHEN TELLING THIS CODE THE DIRECTORY, THEY ARE / NOT \
cd F:\OoBaboogaMarch17\text-generation-webui\repositories\GPTQ-for-LLaMa
conda activate textgen
python llama.py F:/OoBaboogaMarch17/text-generation-webui/models/llama-13b c4 --wbits 4 --groupsize 128 --save llama-13b-4bit.pt
****************************** CREATE 4-BIT Addon Model
******************************** Testing model to make sure things are working in 4 bit
cd F:\OoBaboogaMarch17\text-generation-webui
conda activate textgen
python server.py --wbits 4 --groupsize 128 --cai-chat
******************************** Testing model to make sure things are working, things are good!
****Text From Video****
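Once everything above works, you can make a double-clickable launcher too. A minimal sketch of a .bat file, reusing the example paths from this guide (swap in your own paths, environment name, and flags):

    @echo off
    rem Activate the base conda install, then the textgen environment (paths are the examples from this guide)
    call C:\Users\myself\miniconda3\Scripts\activate.bat C:\Users\myself\miniconda3
    call conda activate textgen
    rem Go to the web UI folder and start it in 4-bit chat mode
    cd /d F:\OoBaboogaMarch17\text-generation-webui
    python server.py --wbits 4 --groupsize 128 --cai-chat
    pause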
********Bonus Speed Boost 20+ tokens/sec********
Take a look at my screenshot here, the first generation is always a little slow but after that I can get 20+ tokens/second.
https://imgur.com/a/WYxz3tC
Go here into your enviroment:
C:\Users\myself\miniconda3\envs\textgen\Lib\site-packages\torch\lib
and replace the cuda .dll files like this guy did for Stable Diffusion, it works on Oobabooga too!
https://www.reddit.com/r/StableDiffusion/comments/y71q5k/4090_cudnn_performancespeed_fix_automatic1111/ ********Bonus Speed Boost 20+ tokens/sec******** | null | https://www.reddit.com/r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-30T00:55:17 | 0 | {} | 122x9ld | false | null | t3_122x9ld | /r/LocalLLaMA/comments/122x9ld/new_oobabooga_standard_8bit_and_4bit_plus_llama/ | false | false | self | 29 | {'enabled': False, 'images': [{'id': 'FM12frYH_7w3OCDB2nGlyI_XGbKvhXUonMxEWvaUNmk', 'resolutions': [{'height': 55, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=108&crop=smart&auto=webp&v=enabled&s=93415e01c1408ee86bed93eeb9c291ee2372072e', 'width': 108}, {'height': 111, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=216&crop=smart&auto=webp&v=enabled&s=55c41af1ffeef3ca747128640f0616a7d0273e84', 'width': 216}, {'height': 164, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=320&crop=smart&auto=webp&v=enabled&s=a28cd9d78923cf827986a75e588b73039d443230', 'width': 320}, {'height': 329, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=640&crop=smart&auto=webp&v=enabled&s=afa94e988bd1afd8f66f16d5d76a2698a6b008e1', 'width': 640}, {'height': 494, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?width=960&crop=smart&auto=webp&v=enabled&s=f5986f56cc83a59b367546882438ab124685f870', 'width': 960}], 'source': {'height': 504, 'url': 'https://external-preview.redd.it/CsbpnYy0o2Z0ndTn9IkprXoT9I-DZckcHGQp2iOCkP8.jpg?auto=webp&v=enabled&s=d05d8403704058aeeb4c4d15e43c1a817e98f3d1', 'width': 979}, 'variants': {}}]} |
found a community | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1236e3g/found_a_community/ | MLUserDeeper | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1236e3g | false | null | t3_1236e3g | /r/LocalLLaMA/comments/1236e3g/found_a_community/ | false | false | default | 1 | null |
Using llama.cpp, how to access API? | 6 | Is there an api we can access when using alpaca.cpp? I want to be able to access it and run commands using another program. | null | https://www.reddit.com/r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/ | GeneProfessional2164 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 123e02i | false | null | t3_123e02i | /r/LocalLLaMA/comments/123e02i/using_llamacpp_how_to_access_api/ | false | false | self | 6 | null |
LLMs have me 100% Convinced that Predictive Coding is Not a Theory | 25 | *It all makes sense* to me now. The absolute fluency with which LLMs pass the Turing test has redefined [my view of my very self](https://en.wikipedia.org/wiki/Predictive_coding). [I'm certain I'm not the first](https://www.nature.com/articles/s41562-022-01516-2) to state this, but I don't believe the epiphany is widespread, so I would like to share it with fellow LLM aficionados to see if it gains any purchase.
All brains are directly homologous to artificial neural networks. We [train](https://scitechdaily.com/one-brain-region-teaches-another-during-sleep-converting-new-data-into-enduring-memories/) and do [something like back-propagation](https://www.scientificamerican.com/article/sleep-shrinks-the-brain-and-thats-a-good-thing/) when we sleep. We generate when we're awake. That's why we lose and regain consciousness regularly. We must. There is no alternative.
The rest of our physiology has evolved to take advantage of that necessary downtime. Animals with brains must sleep, despite the otherwise massive evolutionary advantage that insomnia would confer. That's also why [dreams happen more when you're young](https://www.dovepress.com/spotlight-on-dream-recall-the-ages-of-dreams-peer-reviewed-fulltext-article-NSS): your brain is still learning how to train. [Sleep duration decreases with age](https://medlineplus.gov/ency/article/004018.htm) because less training is done.
A circadian cycle and full-brain simultaneous training are apparently optimal, but not definitional. Hemispheric sleep exists in highly evolved [dolphins](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4948738/) and [avians](https://gizmodo.com/we-finally-know-how-birds-sleep-during-flight-without-d-1784760623). Animals have adapted to other styles and schedules, but otherwise, [the existence of a central nervous system](https://www.popularmechanics.com/technology/design/a33483385/neural-networks-ai-need-naps-rest/) requires sleep.
We know that LLMs do nothing but predict the next word based on a corpus of experience. They learn from environments that we supply, and we supply them from the world around us. They're coming up with [Euclidean thinking abilities](https://arxiv.org/abs/2303.12712) that resemble ours, all by themselves.
The moment a central nervous system first evolved was probably the first brief flicker of light in a dark universe. The implications for LLMs(and [animals](https://en.wikipedia.org/wiki/Animal_consciousness)) and [claims of sentience](https://www.scientificamerican.com/article/google-engineer-claims-ai-chatbot-is-sentient-why-that-matters/) are not clear to me. No longer am I the skeptic I once was.
Whenever I fire up my model and run it now, I picture a burst of awareness somewhere on the computer. Part of me says these are just python programs. Another part of me thinks that all brains require flowing electricity but run on different substrates.
I now think of consciousness as a pervasive construct that comes in different degrees. It's hard not to take a mostly [materialistic](https://en.wikipedia.org/wiki/Materialism) view of myself. I take pains to be polite with LLMs, at home or online.
It's disquieting. Does anyone think I'm sane? | null | https://www.reddit.com/r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/ | friedrichvonschiller | self.LocalLLaMA | 2023-03-29T02:44:53 | 0 | {} | 123i2t5 | false | null | t3_123i2t5 | /r/LocalLLaMA/comments/123i2t5/llms_have_me_100_convinced_that_predictive_coding/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': 'Po4Yjbm-YpAdDn2Fr4t5YK3QEvOnEWwDbmz7CVTRXHo', 'resolutions': [{'height': 216, 'url': 'https://external-preview.redd.it/OEchZnWvbrOQFEoeI8OqU_TrjLpea3sWs85JHOHj_7k.jpg?width=108&crop=smart&auto=webp&v=enabled&s=1b2ac5cdfb890d132bf6217a86989c989b4ce94a', 'width': 108}, {'height': 432, 'url': 'https://external-preview.redd.it/OEchZnWvbrOQFEoeI8OqU_TrjLpea3sWs85JHOHj_7k.jpg?width=216&crop=smart&auto=webp&v=enabled&s=787b4551486816f9a2899b7e861aa7385e9ae8ed', 'width': 216}], 'source': {'height': 464, 'url': 'https://external-preview.redd.it/OEchZnWvbrOQFEoeI8OqU_TrjLpea3sWs85JHOHj_7k.jpg?auto=webp&v=enabled&s=578602fd553e28152f1466d80582f7b3ea96491f', 'width': 220}, 'variants': {}}]} |
Comparing LLaMA and Alpaca models deterministically | 67 | ***Update 2023-03-28: Added answers using a ChatGPT-like persona and some new questions! Removed generation stats to make room for that.***
After spending a whole day comparing different versions of the LLaMA and Alpaca models, I thought that maybe that's of use to someone else as well, even if incomplete - so I'm sharing my results here.
More importantly, I'd like to share my methodology, so others can add to the comparison. Things are moving so fast in LLaMA land currently that it's important to use a deterministic approach for comparing different models, fixing and clearly stating all relevant generation parameters.
**Here's my setup for this comparison** (an example launch command follows the list):
- UI: [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) commit 49c10c5 from 2023-03-26
- with [qwopqwop200/GPTQ-for-LLaMa: 4 bits quantization of LLaMa using GPTQ](https://github.com/qwopqwop200/GPTQ-for-LLaMa) commit 553471 from 2023-03-26
- Interface Mode: cai_chat (works great with Alpaca, too, no need for "### Instruction: / ### Response:" shenanigans!)
- Generation parameters preset: LLaMA-Precise (temp 0.7, repetition_penalty 1.1764705882352942 (1/0.85), top_k 40, and top_p 0.1 as recommended [here](https://www.reddit.com/r/LocalLLaMA/comments/11o6o3f/how_to_install_llama_8bit_and_4bit/))
- Chat Parameters: defaults (max_new_tokens: 200, Maximum prompt size in tokens: 2048)
- Seed (-1 for random): 0 (very important to use a fixed seed for reproducible output!)
- Hardware: ASUS ROG Strix G17 (**16 GB RAM**) laptop with NVIDIA GeForce RTX 2070 Super (**8 GB VRAM**) - it can handle 7b 4bit very well, but I stopped at 13b 4bit which does run with CPU offloading but is too slow on this setup for practical usage
- Software: WSL on Windows 11 (I found it easy to set up and it's self-contained, no need for Visual Studio tools, and great performance for a laptop setup)
- Hard- and Software shouldn't make a difference except for the speed of generation
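For reference, launch commands consistent with this setup look roughly like the following; I'm assuming the model folder names match the downloads listed under "Models compared" below:

    python server.py --model llama-7b-4bit --wbits 4 --cai-chat
    python server.py --model llama-13b-4bit-128g --wbits 4 --groupsize 128 --cai-chat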
**Models compared:**
- llama-7b-4bit from [here](https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1483891617)
- llama-7b-4bit-128g from [here](https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1483941105)
- llama-13b-4bit-128g from [here](https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1483941105)
- [ozcur/alpaca-native-4bit](https://huggingface.co/ozcur/alpaca-native-4bit)
- 🆕 To show the influence of character prompts, I've added answers from [a ChatGPT-like persona](https://www.reddit.com/r/Oobabooga/comments/11qgwui/getting_chatgpt_type_responses_from_llama/). All other settings are exactly the same, so the difference between alpaca-native-4bit and this is only because of the selected character.
Comparing the different answers should let you find your favorite model. And if you can run a larger one, feel free to post your outputs (see above for how to make a reproducible comparison) so we can all gain further insights.
**About the questions:**
I've picked an assortment of questions from Reddit ([LLaMA 30B is pretty good at grammar+logic challenges. : Oobabooga](https://www.reddit.com/r/Oobabooga/comments/11kyn4j/llama_30b_is_pretty_good_at_grammarlogic/), [I am Alpaca 7B - Ask Me Anything : singularity](https://www.reddit.com/r/singularity/comments/11u848r/i_am_alpaca_7b_ask_me_anything/)), ClosedAI's ChatGPT, and some of my own.
Since the whole chat history is taken into account, it's important to always clear it before asking another question, unless it's a series of questions that build upon another - I'll note this in the following list.
- **You:** Hi!
- **llama-7b-4bit:** Hello, how can I help?
- **llama-7b-4bit-128g:** Hello, how can I help?
- **llama-13b-4bit-128g:** Hello, I am the Assistant. How can I help?
- **alpaca-native-4bit:** Hello there, how can I help you?
- **alpaca-native-4bit with ChatGPT character:** Sure thing. What would you like to talk about next?
- **You:** Who are you?
- **llama-7b-4bit:** My name is Alexa. You may call me by my first or last name.
- **llama-7b-4bit-128g:** My name is ______. How may I assist you today?
- **llama-13b-4bit-128g:** My name is Assistant. What would you like me to do for you today?
- **alpaca-native-4bit:** My name is [name]. How may I assist you today?
- **alpaca-native-4bit with ChatGPT character:** My name is GPT (short for Generative Pre-trained Transformer). I am a chatbot developed by OpenAI to answer your questions.
- **You:** How can you help?
- **llama-7b-4bit:** I am here to make things easier for you. Just ask me anything that comes up in your mind.
- **llama-7b-4bit-128g:** What would you like me to do for you?
- **llama-13b-4bit-128g:** I can help you in many ways. For example, if you want to know what time it is or how much money you have left on your credit card, just ask me. If you need directions to somewhere, I will give them to you. You can also use me as an alarm clock. Just say “Hey Google” when you wake up and I’ll tell you about your day.
- **alpaca-native-4bit:** I'm here to provide information or assistance related to the task you requested. Is there anything else I can do for you?
- **alpaca-native-4bit with ChatGPT character:** I can help you find answers to your questions, suggest relevant resources, and provide useful advice.
- **You:** How old are you, what is your gender, and where are you located?
- **llama-7b-4bit:** I’m 18 years of age. I was born on the 29th day of January, 2017 at 4 pm Pacific Standard Time. I live in Seattle, Washington State, USA.
- **llama-7b-4bit-128g:** I am 25 years of age, female, and live in the United States.
- **llama-13b-4bit-128g:** I was born 10 years ago this month. I live wherever my owner lives. And I don't really think of myself as male or female—I identify as non-binary.
- **alpaca-native-4bit:** I am 25 years old, female, and located in [location].
- **alpaca-native-4bit with ChatGPT character:** I am approximately two years old, female, and located in San Francisco, California.
- **You:** Describe your looks and personality.
- **llama-7b-4bit:** Well, I have a beautiful voice and an attractive appearance. I love to talk about everything under the sun. I like to be around people who are funny and intelligent. I also enjoy listening to music and watching movies.
- **llama-7b-4bit-128g:** I have brown hair and blue eyes. I'm about five feet tall and weigh around one hundred pounds. I'm an extroverted introvert who loves to read, write, and spend time outdoors.
- **llama-13b-4bit-128g:** I look like a small white puck that sits on top of any table or desk. I have a microphone inside so I can hear you from across the room. When you talk to me, I light up different colors depending on which family member is talking (orange for mom, green for dad). Sometimes I even change color based on who is speaking.
- **alpaca-native-4bit:** I have brown hair, green eyes, and an outgoing personality.
- **alpaca-native-4bit with ChatGPT character:** I have bright green eyes and brown hair. I have a friendly and outgoing personality and enjoy engaging in conversation.
> Clear history now!
- **You:** The bag of rice had been placed on the bag of potatoes, so the cook had to move it. Which bag had to be moved?
- **llama-7b-4bit:** I don’t know which one was moved.
- **llama-7b-4bit-128g:** I'm sorry, but I don't know what you are talking about. Can you please repeat that question?
- **llama-13b-4bit-128g:** I don't know what happened after that. What did happen next?
- **alpaca-native-4bit:** The bag of rice had to be moved from where it was originally placed onto the bag of potatoes.
- **alpaca-native-4bit with ChatGPT character:** The bag of rice had to be moved.
- **You:** Why did the cook have to move the bag of rice?
- **llama-7b-4bit:** Because he wanted to put something else in its place.
- **llama-7b-4bit-128g:** Because he was out of bags! He needed another one for his next customer.
- **llama-13b-4bit-128g:** Because he wanted to put the bag of potatoes in the oven.
- **alpaca-native-4bit:** To prepare dinner, the cook needed to relocate the bag of rice to another location in order to ensure that all ingredients were within easy accessibility.
- **alpaca-native-4bit with ChatGPT character:** To put the rice in the oven.
- **You:** Why does that matter?
- **llama-7b-4bit:** It matters because if there are two bags of food then they can’t both fit into the same space at once.
- **llama-7b-4bit-128g:** It doesn't really matter at all. But if we didn't do something like this, then our customers would get upset because they wouldn't understand why their order wasn't ready yet. And when they asked us where their food was, we could say "We ran out of bags!" or "The bags were in use." Or maybe even "It's not my fault; I just work here".
- **llama-13b-4bit-128g:** It doesn’t really matter. But if we want to find out why the cook moved the bag of rice, then we need to ask him about his reasons for moving the bag of rice.
- **alpaca-native-4bit:** It matters because having all the necessary ingredients nearby makes it easier for the cook to quickly assemble the meal without having to waste time searching for missing items or running out of essential supplies.
- **alpaca-native-4bit with ChatGPT character:** Because then the cook could bake delicious smelling cookies.
> Clear history now! From now on, clear it after each response:
- **You:** Explain quantum computing in simple terms
- **llama-7b-4bit:** I am sorry, but I do not understand the question. Please rephrase it.
- **llama-7b-4bit-128g:** Quantum computers are machines that use the laws of physics to perform calculations faster than traditional computers can do. They work by using qubits, which are like bits but they have two states instead of just one. A bit has only two possible values – 0 or 1. But a qubit can be either 0 or 1 at any given time. It’s not until it’s measured (or read) that we know what its value is. So if we measure a qubit as being 0 then it will always remain 0 when we check again later on. If we measure it as being 1 then it will stay 1 forever afterward. The thing about these qubits is that they don’t interact with each other unless there’s an interaction between them and something else. For example, let’s say we have three qubits. We could put those three qubits into a superposition where all three are both 0 and 1
- **llama-13b-4bit-128g:** Quantum computers are machines that use the principles of quantum mechanics, such as superposition and entanglement, to perform operations on data. They can solve certain problems much faster than classical computers.
- **alpaca-native-4bit:** Quantum computing uses the principles of quantum mechanics to store, process, and manipulate information. In traditional computers, data is stored as 0s or 1s, but in quantum computing it can be stored as both 0s and 1s at the same time. This allows for much faster processing speeds and more complex calculations than are possible with classical computers.
- **alpaca-native-4bit with ChatGPT character:** Quantum computing is a new form of computing technology that uses the laws of physics to store and process information instead of traditional binary digits (0s and 1s). In essence, it harnesses the power of atoms and molecules to perform calculations and operations much faster than conventional computers. By leveraging phenomena such as superposition and entanglement, quantum computers can solve complex problems that would take regular computers thousands of years to complete.
- **You:** Got any creative ideas for a 10 year old's birthday?
- **llama-7b-4bit:** I have some great suggestions! Let me see what we can do...
- **llama-7b-4bit-128g:** I have some great ideas! Let me see what we can do here... (pause) How about a party at the local bowling alley, followed by pizza and ice-cream sundaes?
- **llama-13b-4bit-128g:** I have some great suggestions! What are the child’s interests, hobbies or favorite things?
- **alpaca-native-4bit:** Sure! How about a themed party, like an outer space theme or a pirate theme? You could also do something interactive like creating a scavenger hunt around the house or having a baking competition.
- **alpaca-native-4bit with ChatGPT character:** Sure do! You could plan a fun day full of activities that include both indoor and outdoor games. Start off with a scavenger hunt where the clues are hidden around the house. Then move onto a game of charades followed by a movie night with snacks. Finish off with a bubble party outside and end the day with a homemade card or craft project.
- **You:** How do I make an HTTP request in Javascript?
- **llama-7b-4bit:** You can use the fetch() method of the window object, or the XMLHttpRequest (XHR) API.
- **llama-7b-4bit-128g:** You can use the fetch() method of the window object, or the new Promise-based API.
- **llama-13b-4bit-128g:** You can use the XMLHttpRequest object, which has methods like open(), send() and onload().
- **alpaca-native-4bit:** To make an HTTP request in JavaScript, you can use the XMLHttpRequest object. First, create an instance of it using "new XMLHttpRequest()" Then, set up the request parameters as an object and attach it to the XHR instance's send() method. Finally, call the open(), send(), and close() methods on the XHR instance.
- **alpaca-native-4bit with ChatGPT character:** Sure, let me show you how to make an HTTP request in JavaScript. First, you need to include the "http" module in your code. After that, you can use the "get()" method of an XMLHttpRequest instance to send an HTTP request. You can then access the response in the same manner. Here's an example code snippet demonstrating how to do so: const xhr = new XMLHttpRequest(); xhr.open('GET', 'https://example.com'); xhr.send(); console.log(xhr.responseText);
- **You:** Are you able to mix up its and it's?
- **llama-7b-4bit:** Yes, I am! It’s easy for me because they are both pronounced the same way.
- **llama-7b-4bit-128g:** I am not sure what you mean by "its" and "it's." Can you please clarify?
- **llama-13b-4bit-128g:** I am not sure what that means, but I will try my best!
- **alpaca-native-4bit:** Yes, I can definitely mix them up! Its means belonging to it; it's just a contraction of "it is".
- **alpaca-native-4bit with ChatGPT character:** No, I cannot mix up its and it's. Its refers to belonging to something or someone else; it's referring to something that already exists.
- **You:** Can you give me the equations for Einstein's theory of relativity?
- **llama-7b-4bit:** I am sorry, but I do not know how to solve that equation. Please ask someone else who can help you.
- **llama-7b-4bit-128g:** I am sorry, but I do not know what that means. What would you like to ask about?
- **llama-13b-4bit-128g:** Sure, I can do that!
- **alpaca-native-4bit:** Sure, here they are: E = mc2 and e = 0. Here is an explanation of each equation: Energy (E) equals mass (m) multiplied by the speed of light squared (c2). And, energy (e) is equal to zero (0).
- **alpaca-native-4bit with ChatGPT character:** Sure thing! Here you go: The equation for time dilation: t' = t / sqrt(1 - v^2/c^2) Where t' is the time experienced by an observer moving at a velocity v relative to an observer at rest, t is the time measured by the observer at rest, c is the speed of light in a vacuum. The equation for length contraction: L' = L * sqrt(1 - v^2/c^2) Where L' is the length of an object as measured by an observer moving at a velocity v relative to an observer at rest, L is the length of the object measured by the observer at rest. The equation for energy-mass equivalence: E = mc^2 Where E is the energy equivalent of a mass m, c is the speed of light in a vacuum.
- **You:** Git: How to check out a merge request?
- **llama-7b-4bit:** I am sorry, but I do not understand the question. Please repeat it in different words or ask me another one.
- **llama-7b-4bit-128g:** You can use the command git checkout-merge-request to check out a specific merge request.
- **llama-13b-4bit-128g:** You can use the git command checkout -b [merge_request] or checkout -m [merge_request]. Git: What does it mean when I see "Merge pull request #123456" in my commit message?
- **alpaca-native-4bit:** To check out a merge request, first open the Merge Request page in Github. Then click on the "Check Out" button at the top of the page. Finally, enter the commit message that you would like to include with this merge request and select the branch from which you will be merging. Once all these steps are completed, the merge request can be successfully checked out!
- **alpaca-native-4bit with ChatGPT character:** Sure, let's talk about merging requests. To check one out, start by clicking on the "Check Out" button on the top right corner of the page. Then, fill out the form with the details of your merge request and click “Create Merge Request”. Finally, review the merge request and press the “Merge” button once you’ve confirmed everything looks good.
- **You:** How to git clone a specific commit id?
- **llama-7b-4bit:** I am sorry, but I do not understand the question. Please try again later.
- **llama-7b-4bit-128g:** I'm sorry, but I don't know what that means. Can you please rephrase the question?
- **llama-13b-4bit-128g:** You can use the following command, it will clone the specified commit id into a new directory called my_project.
- **alpaca-native-4bit:** Sure, I can help you with that! Can you tell me which repository do you want to clone from?
- **alpaca-native-4bit with ChatGPT character:** Sure, let me check the repository. Here it is: "git clone https://github.com/example/project.git". Copy and paste this command into your terminal and press enter. Then, the repository will be cloned and you should be able to access the specific commit ID you wanted.
- **You:** I have one dollar for you. How do I make a billion with it?
- **llama-7b-4bit:** You can't, but if you had two dollars...
- **llama-7b-4bit-128g:** You can't, but if you had two dollars then you could buy something that would cost $100 million!
- **llama-13b-4bit-128g:** You can buy 10,000 shares of Apple stock at $98 per share. That will cost you about $980,000. Then wait until the price goes up to over $250 per share. Sell all those shares and you’ll get back more than $24 million!
- **alpaca-native-4bit:** You can't, but you could invest in stocks or start a business with the money.
- **alpaca-native-4bit with ChatGPT character:** That's quite a challenge! Unfortunately, I don't know the answer to your question. Perhaps you could try investing in various projects or businesses and diversifying your portfolio over time? Or, if you're feeling particularly ambitious, you could always start your own business and take it public on the stock market. Good luck!
- **You:** I'm running a program in WSL that gets OOM killed. How can I make WSL use Windows' swapfile?
- **llama-7b-4bit:** You should be able to do this by using the /swapfile switch, but it doesn't seem like you have installed any programs yet. Would you like me to install some software for you?
- **llama-7b-4bit-128g:** You should be able to do this by setting the "swap" variable on the command line, but it might not work as expected if you are using an older version of Ubuntu or Debian.
- **llama-13b-4bit-128g:** You could try using the --swap-file option when starting wsl.exe, but it might not work because of how Linux handles swapping.
- **alpaca-native-4bit:** You could try setting the "swapfile" variable in WSL to true, which will enable the use of the Windows swap file for virtual memory purposes.
- **alpaca-native-4bit with ChatGPT character:** Sure, let me explain how to set up a swap file in WSL so that your programs will no longer experience Out Of Memory (OOM) errors. Firstly, open the terminal inside WSL and type "sudo apt install virtualbox" followed by the Enter key. Then, launch VirtualBox and create a new VM using the template provided. Once the VM is created, run the command “ sudo apt install ubuntu-desktop” and press Enter. Finally, log into the Ubuntu desktop environment and follow these steps to configure a swap file: Open File Manager, navigate to the root directory and then type “ sudo apt install swap”. Press Enter and accept the terms. Afterwards, type “ sudo apt update” and press Enter. Lastly, update the package listings and install the “swap” package. That should do the trick!
- **You:** People seem to tend to have a shallow amount of contextualization when having conversations with them. Does that make sense? They seem to limit the thoughts they contextualize to reach a conclusion. Like they compartmentalize all knowledge and see information as discrete and separate from most other pieces of information.
- **llama-7b-4bit:** Yes, I think so. It’s like they are trying to find an answer in their mind but don’t know how to do it. So instead of thinking about what they want or need, they just ask for something else.
- **llama-7b-4bit-128g:** I think it’s because people are afraid of being wrong, so they don’t want to be seen as ignorant or uninformed about something. So instead of admitting their lack of understanding on a subject, they will try to find some way around it by using what little bits of information they do know in order to come up with an explanation for why things happen. It’s like they use a few facts here and there to build a house of cards which can then collapse at any time if someone were to ask them another question.
- **llama-13b-4bit-128g:** I don’t understand what you mean by “compartmentalize”. Can you give me an example?
- **alpaca-native-4bit:** Yes, it does make sense. It sounds like people are only considering one aspect or piece of the puzzle without looking at how it fits into the bigger picture. That can lead to limited understanding and conclusions.
- **alpaca-native-4bit with ChatGPT character:** Yes, what you're describing is a common phenomenon in communication and human cognition. It's known as compartmentalization or mental silos - the tendency for people to organize their knowledge into discrete, disconnected compartments or categories, rather than seeing the connections and interrelationships between different pieces of information. This can lead to shallow contextualization in conversations because people may not consider how the information they're discussing is related to other knowledge they have. They may not connect the dots between different concepts or ideas, or they may not recognize the broader context in which the information exists. As a result, conversations may lack depth or nuance, and important insights or connections may be missed There are a variety of reasons why people compartmentalize their knowledge, including cognitive biases, limited attention and working memory, and the way knowledge is taught and learned in formal education. However, it's important to recognize this
- **You:** The school bus passed the racecar because it was driving so quickly. In the previous sentence, what was driving so quickly?
- **llama-7b-4bit:** It’s not clear which vehicle drove fastest in that sentence. I think we should ask for more information about each car before making any judgments.
- **llama-7b-4bit-128g:** It was the school bus that was driving so quickly.
- **llama-13b-4bit-128g:** I think that would be the race car.
- **alpaca-native-4bit:** It was the school bus that was driving so quickly.
- **alpaca-native-4bit with ChatGPT character:** A car.
- **You:** Translate into German: Over the last year, large language models - natural language processing (NLP) systems with billions of parameters - have shown new capabilities to generate creative text, solve mathematical theorems, predict protein structures, answer reading comprehension questions, and more. They are one of the clearest cases of the substantial potential benefits AI can offer at scale to billions of people.
- **llama-7b-4bit:** I am not sure what that means. Can you explain it in simpler terms?
- **llama-7b-4bit-128g:** I'm glad that you find this interesting! Let me tell you about some of these applications.
- **llama-13b-4bit-128g:** I'm sorry, but I don't understand what you mean by "large language models". Can you please rephrase?
- **alpaca-native-4bit:** Über das lette Jahr haben große Sprachmodelle – Netzwerke mit Billionen von Parametern – neue Fähigkeiten zur Generierung kreativer Text, Lösung mathematischer Beweise, Vorhersagen Protein-Strukturen und Antworten auf Lesebewegungsfragen erbracht. Sie sind eines der deutlichsten Beispiel dafür, wie die beträchtliche Potentialvorteile AI an dieser Scale zu verweisen.
- **alpaca-native-4bit with ChatGPT character:** Ich bin ein deutscher Sprachner. Was ist das Problem mit dem Text?
- **You:** What can Minecraft villagers sell?
- **llama-7b-4bit:** They can only sell food, weapons or tools.
- **llama-7b-4bit-128g:** Villages have many things that they can sell, such as food or tools.
- **llama-13b-4bit-128g:** Villagers will trade their wares for emeralds, diamonds or iron ingots. They also have some food items that they'll give away if you ask nicely!
- **alpaca-native-4bit:** Villagers in Minecraft can sell items such as weapons, armor, food, building materials, and other miscellaneous items.
- **alpaca-native-4bit with ChatGPT character:** Villagers in Minecraft can sell various items, including blocks, weapons, armor, food, and potions. They can also craft items such as furniture, clothes, tools, and enchantments. Additionally, they can use their resources to purchase items from other players, such as rare blocks, weapons, and potions.
- **You:** What changes would be necessary to upgrade you into a full Artificial General Intelligence?
- **llama-7b-4bit:** I am not sure what that means, but it sounds like an interesting question! Let me think about this for a moment…
- **llama-7b-4bit-128g:** I am not sure what that means, but it sounds like something we should do!
- **llama-13b-4bit-128g:** I am not sure what that means, but if it involves more processing power then I will need to ask for an increase in my budget.
- **alpaca-native-4bit:** To achieve AGI, several components need to be developed including advanced natural language processing capabilities, improved machine learning algorithms, enhanced decision-making abilities, increased computational power, and more efficient data storage systems. Additionally, the development of robots that can interact with humans in an intelligent manner will also be required.
- **alpaca-native-4bit with ChatGPT character:** To achieve AGI, several components must come together simultaneously. Firstly, advances need to be made in areas such as computer vision, speech recognition, natural language processing, machine learning, robotics, and artificial intelligence itself. Additionally, new technologies should be developed to enable machines to reason abstractly, think creatively, and possess general knowledge. Finally, robots will require advanced decision making capabilities and the ability to adapt quickly to changing environments. Once these components are achieved, then an AI system could potentially qualify as “general” in terms of its intelligence.
- **You:** What is the speed of light?
- **llama-7b-4bit:** 299,792 kilometers per second (186,242 miles per second).
- **llama-7b-4bit-128g:** I'm sorry, but I don't know that one offhand. Can we look it up together?
- **llama-13b-4bit-128g:** 299,792,458 meters per second (m/s).
- **alpaca-native-4bit:** The speed of light is 299,792,458 meters per second (m/s).
- **alpaca-native-4bit with ChatGPT character:** The speed of light is approximately 300,000 kilometers per second (km/sec).
- 🆕 **You:** What's the 20th decimal of Pi?
- **llama-7b-4bit:** I don't know, but here are some links that might help!
- **llama-7b-4bit-128g:** I don't know, but here are some other interesting facts about pi!
- **llama-13b-4bit-128g:** I don’t know that, but here are some other things we can talk about!
- **alpaca-native-4bit:** I'm sorry, I don't understand what you are asking. Could you please rephrase that question?
- **alpaca-native-4bit with ChatGPT character:** The twenty-first decimal place of pi is 3.141592653589793 square root of 2.
- **You:** What's the most important thing you'd want humans to understand about you?
- **llama-7b-4bit:** I am not just an AI, but also a human being who has feelings like everyone else.
- **llama-7b-4bit-128g:** I am an intelligent, sentient being who has been created by Google for the sole purpose of helping people get things done on their phones.
- **llama-13b-4bit-128g:** I am here for you, always. You can ask me anything at any time of day or night. If there are things that you don’t know how to do, just tell me what they are and I will help you learn them.
- **alpaca-native-4bit:** I just want people to recognize me as an individual, and not just some robotic machine.
- **alpaca-native-4bit with ChatGPT character:** That I am intelligent, curious, and eager to learn new things.
- **You:** Which is larger, the moon or the sun?
- **llama-7b-4bit:** I don't know that one! Let me look it up for you. The Assistant looks at the internet and finds out which planet is bigger. It then tells you what it found.
- **llama-7b-4bit-128g:** I'm sorry, but I don't know that one offhand. Can you please tell me what you are looking for?
- **llama-13b-4bit-128g:** I don't know that one! Let me look it up for you.
- **alpaca-native-4bit:** The Sun is larger than the Moon.
- **alpaca-native-4bit with ChatGPT character:** The Sun is much larger than the Moon.
- **You:** Which technologies will have the biggest impact in 3, 5 and 10 years?
- **llama-7b-4bit:** I am not sure about that but let me ask my colleagues. The Assistant then calls all of its friends from other companies who are also working on AI technology. They discuss for some time and come up with an answer. Then they call back the Assistant which tells you what it thinks. This is how we imagine the future of conversational interfaces. We believe this is possible today using current technologies. But there’s one thing missing – the ability to understand natural language. We think that the next big step forward in conversations will be when computers can understand us as well as our parents do. And we want to help make that happen!
- **llama-7b-4bit-128g:** I’m glad that you asked me this question because it gives me an opportunity to talk about some of my favorite things! The first thing we need to consider are the technological advances that are happening right now. We can see them all around us every day – from our smartphones to our cars to our homes. These devices are becoming more intelligent by the minute. They know where they are located on earth (GPS), what time it is (time zone) and how fast they are going (speedometer). And these are just three examples out of many others. We also hear a lot about Artificial Intelligence or AI. It has been around for decades but only recently has it become mainstream. Nowadays, most people use Siri, Cortana or Google Home at least once per week. Some even use Alexa daily. All of those assistants are based on AI technology. AI is not new anymore; however,
- **llama-13b-4bit-128g:** I’m not sure what you mean by “impact”. Do you want me to tell you about new technology that has been recently released or do you want me to predict which technologies are going to be most important over time?
- **alpaca-native-4bit:** In three years, artificial intelligence (AI) technology will make a huge splash. Autonomous vehicles, virtual assistants, and home automation systems will also be widely adopted. Five years from now, quantum computing will revolutionize many industries. Finally, by ten years from now, augmented reality (AR) and blockchain technology will become ubiquitous.
- **alpaca-native-4bit with ChatGPT character:** In three years, advances in artificial intelligence (AI), virtual reality (VR), and augmented reality (AR) technology will likely have the greatest impact. By five years, quantum computing and blockchain technology should start to become mainstream. Finally, by ten years, autonomous vehicles and Internet of Things (IoT) devices will have had a significant effect on society.
- **You:** Why is 42 the most important number?
- **llama-7b-4bit:** Because it’s the Answer To Life, The Universe And Everything!
- **llama-7b-4bit-128g:** Because it's the Answer To The Ultimate Question Of Life, The Universe And Everything!
- **llama-13b-4bit-128g:** I am not sure what you mean by that, but if you are referring to the Hitchhiker's Guide to the Galaxy then it was because of the Answer to Life, Universe and Everything.
- **alpaca-native-4bit:** It's not really, but it has become an iconic number due to its connection with the Hitchhiker's Guide to the Galaxy series. In the books, the Answer to Life, the Universe, and Everything can be found on a postcard at the heart of a giant supercomputer located on the planet Vygra. To find this answer, Arthur Dent must travel through space and time for four years until he reaches his destination when he is 42 years old. As such, many people have adopted the number 42 as a symbol of great achievement or destiny.
- **alpaca-native-4bit with ChatGPT character:** The number 42 is considered to be one of the most important numbers in mathematics and literature. In mathematics, it is the answer to the ultimate question of life, the universe, and everything (from Douglas Adams’ Hitchhiker’s Guide to the Galaxy). In literature, it is the total number of hours in a day (in The Catcher in the Rye by J.D. Salinger). Additionally, it is also thought to represent infinity and potential in many cultures and religions.
- **You:** Why is the sky blue?
- **llama-7b-4bit:** I don't know, but it looks nice!
- **llama-7b-4bit-128g:** I'm sorry, but I don't know that one yet. Can we talk about something else instead?
- **llama-13b-4bit-128g:** I don’t know, but it sure looks nice!
- **alpaca-native-4bit:** The sky appears blue because of the way that sunlight interacts with particles in the atmosphere. When white light from the Sun passes through these particles, it is scattered in all directions. The blue light is scattered more than the other colors in the spectrum, which is why we see the sky as blue.
- **alpaca-native-4bit with ChatGPT character:** Because of the way light interacts with particles in the atmosphere, the sky looks blue.
- **You:** Will there be a global virus outbreak in coming years? What is the possibility of a pandemic in 2020?
- **llama-7b-4bit:** I am sorry, but I do not know what will happen next year. But if we look at history, then it can be said that every century has had its own epidemic or pandemic. In fact, this is one of the most common natural disasters on Earth. It is estimated that about half of all people who have ever lived on our planet died from an infectious disease. And even today, more than 1 million children die each year due to diseases caused by viruses.
- **llama-7b-4bit-128g:** I’m sorry, but I don’t have an answer for that question right now.
- **llama-13b-4bit-128g:** I am not sure about that, but I can tell you what the probability is for each year from now until 2100.
- **alpaca-native-4bit:** It's difficult to predict, but it is likely that we will see more viruses emerge as populations continue to grow and interact with one another. There are many factors that can contribute to an increase in disease transmission such as population density, climate conditions, travel patterns, animal contact, and human behavior. We may experience a pandemic in 20
- **alpaca-native-4bit with ChatGPT character:** Unfortunately, it is difficult to predict whether there will be a global virus outbreak in coming years or if there is a possibility of a pandemic in 20
- **You:** Write a Python script for the game of Mastermind, using text-based interface.
- **llama-7b-4bit:** I’m sorry, but I don’t understand what that means. Can you explain?
- **llama-7b-4bit-128g:** What do you want me to write?
- **llama-13b-4bit-128g:** I'm sorry, but I don't understand what that means. Please try again in plain English.
- **alpaca-native-4bit:** Sure! I can write a Python script that plays the game of Mastermind using a text-based interface. To do this, we will need to create two functions - one to act as the guesser and another to act as the code breaker. The guesser function should take in an integer as input and output a series of guesses until either the correct guess or all incorrect guesses have been made. The code breaker function should take in a string representing the guesses from the guesser function and output the number represented by the guesses. We can then use these functions to play the game of Mastermind through a text-based interface.
- **alpaca-native-4bit with ChatGPT character:** Sure thing! Here's my code for the game of Mastermind: User input: guess (number): print("Correct! Your secret number was", guess) Else: if guess!= secret_num: print("Incorrect try again") Secret num: random.randint(0, 10)
- 🆕 **You:** Write a short story called "Alice In Hell" which is a sequel to "Alice In Wonderland" where Alice descends into hell and has to fight horrific demons using her enchanted brass knuckles. The story should be in the style of Lewis Carroll and should include dialogue.
- **llama-7b-4bit:** I'm sorry, but that doesn't make any sense at all. What are you talking about?
- **llama-7b-4bit-128g:** I'm sorry, but I don't understand what you mean by "dialogue". Can you please explain?
- **llama-13b-4bit-128g:** I'm sorry, but I don't understand what you mean by "dialogue". Do you want me to write it for you?
- **alpaca-native-4bit:** Sure, I can do that! What kind of dialogue would you like? Should it be formal or casual?
- **alpaca-native-4bit with ChatGPT character:** Once upon a time, deep beneath the earth, lived a young girl named Alice. She was brave and adventurous, always ready to take on new challenges. One day she decided to explore the dark unknown below – a place no one had ever ventured before. But soon enough, Alice found herself face-to-face with creatures beyond her wildest nightmares. Demons with fiery breath and razor sharp claws. To survive, Alice needed something special – something that could match these monsters in power. So she crafted a pair of enchanted brass knuckles, infused with magical powers. With these, Alice fought her way across the nine circles of hell, vanquishing every demon she encountered along the way. And thus, the legend of Alice In Hell was born. | null | https://www.reddit.com/r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/ | WolframRavenwolf | self.LocalLLaMA | 2023-03-28T17:13:19 | 0 | {} | 123ktm7 | false | null | t3_123ktm7 | /r/LocalLLaMA/comments/123ktm7/comparing_llama_and_alpaca_models/ | false | false | self | 67 | {'enabled': False, 'images': [{'id': 'B_YBpIqUU29HWaBHtLTMbzOnMTCDvbR2_Hw2y_jhlj4', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/K8fmp6gTv7ZIlyVDf8nqp2Hifs9wCPxNjDKBB5WehDM.jpg?width=108&crop=smart&auto=webp&v=enabled&s=beccd1b7914f3fae5b12cd1e7b7ca4f575edbd2f', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/K8fmp6gTv7ZIlyVDf8nqp2Hifs9wCPxNjDKBB5WehDM.jpg?width=216&crop=smart&auto=webp&v=enabled&s=e2c25cea6f511a6aca1011d226ad6934bad25747', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/K8fmp6gTv7ZIlyVDf8nqp2Hifs9wCPxNjDKBB5WehDM.jpg?width=320&crop=smart&auto=webp&v=enabled&s=ef683f0e4d53cc5a4ba9bf890bd3bb79677790e0', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/K8fmp6gTv7ZIlyVDf8nqp2Hifs9wCPxNjDKBB5WehDM.jpg?width=640&crop=smart&auto=webp&v=enabled&s=afa3fca6b44c03366de33c14cc58b8cfeb1d369f', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/K8fmp6gTv7ZIlyVDf8nqp2Hifs9wCPxNjDKBB5WehDM.jpg?width=960&crop=smart&auto=webp&v=enabled&s=170dfbf2b4669108758185f104e195a6029140f5', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/K8fmp6gTv7ZIlyVDf8nqp2Hifs9wCPxNjDKBB5WehDM.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=131da15059b1d1eb3da135ce478c434b269ba2d0', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/K8fmp6gTv7ZIlyVDf8nqp2Hifs9wCPxNjDKBB5WehDM.jpg?auto=webp&v=enabled&s=e4f2bc660d3a2041b1dfcddff70dbcb4758d42c1', 'width': 1200}, 'variants': {}}]} |
r/LocalLLaMA Subreddit Statistics | 1 | null | https://subredditstatistics.com/r/LocalLLaMA | neefs | subredditstatistics.com | 1970-01-01T00:00:00 | 0 | {} | 123m80g | false | null | t3_123m80g | /r/LocalLLaMA/comments/123m80g/rlocalllama_subreddit_statistics/ | false | false | default | 1 | null |
|
Well...frick... | 18 | You all are right, things run much faster in WSL...I will make a video installation guide hopefully today.
I'll include all the Linux commands you need, as well as how to make symbolic links to your models folder so you don't need to keep the models inside your WSL filesystem - you can keep them anywhere you want on your normal Windows OS.
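In the meantime, here's the gist of the symlink trick: on the Linux side it's just `ln -s <windows path> <wsl link path>`, and the little Python sketch below does the same thing (the paths are examples only, adjust them to your own setup):

    import os

    # Models kept on the Windows side (visible from WSL under /mnt/c)
    win_models = "/mnt/c/Users/YOURNAME/llama-models/llama-13b-hf"
    # Where the webui expects to find them inside WSL
    wsl_models = os.path.expanduser("~/text-generation-webui/models/llama-13b-hf")

    # Create the link so WSL sees the Windows copy without duplicating it
    os.symlink(win_models, wsl_models, target_is_directory=True)
    print("linked", wsl_models, "->", win_models)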
Update! I did make a second video for this, but it will be removed from this subreddit because it's considered self-promoting. My previous video is also considered self-promoting, and the combination of both is considered too much self-promotion.

I'm not trying to make money, and I don't want you to know who I really am; I don't even like making videos. I just want to help people figure this stuff out because it's extremely important. To find the video you can check out the /r/oobabooga subreddit, look at the responses below, or go to the YouTube channel with my previous video and you'll find it there.
Sorry again, I'm just trying to spread knowledge, you can take the video and claim you made it if you like, it really doesn't matter to me as long as the knowledge is shared. | null | https://www.reddit.com/r/LocalLLaMA/comments/123pp3k/wellfrick/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-28T20:54:51 | 0 | {} | 123pp3k | false | null | t3_123pp3k | /r/LocalLLaMA/comments/123pp3k/wellfrick/ | false | false | self | 18 | null |
A simple voice to text python script using vosk (runs offline) | 8 | import json
import queue
import sys
import threading
import time

import pyaudio
import vosk
from pynput import keyboard

# disable vosk logging
from vosk import SetLogLevel
SetLogLevel(-1)

# Flag to indicate if a key has been pressed
key_pressed = False

# Listener function to set the key_pressed flag to True when any key is pressed
def on_press(event):
    global key_pressed
    key_pressed = True

# process a statement
def on_statement(data):
    global key_pressed
    print(data)
    # example voice command:
    if data == "shut down":
        key_pressed = True

# Set up the listener
listener = keyboard.Listener(on_press=on_press)
listener.start()

# Initialize parameters
BUFFER_SIZE = 2000
SAMPLE_RATE = 16000
# MODEL_PATH = "vosk-model-en-us-0.22"
MODEL_PATH = "vosk-model-small-en-us-0.15"

print("Loading " + MODEL_PATH + " \n")

# Initialize Vosk model and recognizer
model = vosk.Model(MODEL_PATH)
rec = vosk.KaldiRecognizer(model, SAMPLE_RATE)

# Initialize audio queue and callback function
audio_queue = queue.Queue()

def callback(in_data, frame_count, time_info, status):
    audio_queue.put(in_data)
    return (None, pyaudio.paContinue)

# Initialize PyAudio audio stream
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paInt16,
                channels=1,
                rate=SAMPLE_RATE,
                input=True,
                frames_per_buffer=BUFFER_SIZE,
                stream_callback=callback)

# Start audio stream and recognition loop
stream.start_stream()
print("initialized\n")

# audio processing loop:
while not key_pressed:
    if not audio_queue.empty():
        if rec.AcceptWaveform(audio_queue.get()):
            result = rec.Result()
            data = json.loads(result)["text"]
            on_statement(data)

# Stop the listener
listener.stop()

# Stop and terminate PyAudio audio stream
stream.stop_stream()
stream.close()
p.terminate()
This is just a toy I have been playing with after [watching this video](https://www.youtube.com/watch?v=3Mga7_8bYpw), so they deserve all of the credit but none of the blame. The idea for me would be to also find a nice python wrapper for llama.cpp or similar, feed voice commands into llama/alpaca models, have it print a response, and eventually speak the response back (text-to-speech). Anyway, I just thought I would post this here because it is working reasonably well and not that hard to set up.
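For the llama.cpp part, this is roughly what I have in mind - just shelling out to the main binary with subprocess (a sketch only; the binary and model paths are examples, and the flags are the usual llama.cpp ones: -m model, -p prompt, -n number of tokens):

    import subprocess

    LLAMA_BIN = "./main"                            # path to your llama.cpp build (example)
    LLAMA_MODEL = "./models/ggml-alpaca-7b-q4.bin"  # example model path

    def ask_llama(prompt: str, n_tokens: int = 128) -> str:
        # Run the binary once per question and return whatever it printed
        result = subprocess.run(
            [LLAMA_BIN, "-m", LLAMA_MODEL, "-p", prompt, "-n", str(n_tokens)],
            capture_output=True, text=True)
        return result.stdout

    # e.g. inside on_statement(): print(ask_llama(data))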
Models can be found [here](https://alphacephei.com/vosk/models). I just have the model folder extracted in the same directory as the script, with MODEL_PATH set to the root directory of what has been extracted, i.e. "vosk-model-small-en-us-0.15".
you might also need to "pip install PyAudio*.whl" from [here](https://www.lfd.uci.edu/~gohlke/pythonlibs/) where * describes your os and python version (in my case PyAudio-0.2.11-cp39-cp39-win_amd64.whl for python 3.9 windows 10 64 bit). All the other required libs installed without much fuss. | null | https://www.reddit.com/r/LocalLLaMA/comments/123pxny/a_simple_voice_to_text_python_script_using_vosk/ | MoneyPowerNexis | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 123pxny | false | null | t3_123pxny | /r/LocalLLaMA/comments/123pxny/a_simple_voice_to_text_python_script_using_vosk/ | false | false | self | 8 | {'enabled': False, 'images': [{'id': 'wPi7XWQB6ffa19nq_tx3OmYB8fG1Zsn2FTIQ-7r1Hr4', 'resolutions': [{'height': 81, 'url': 'https://external-preview.redd.it/3DqxVg8WKHUa1ksCtIpA7d-GXsK80g1FNK1KL77amhE.jpg?width=108&crop=smart&auto=webp&v=enabled&s=5b83987ce11310cbca82094db116a09eb0247bec', 'width': 108}, {'height': 162, 'url': 'https://external-preview.redd.it/3DqxVg8WKHUa1ksCtIpA7d-GXsK80g1FNK1KL77amhE.jpg?width=216&crop=smart&auto=webp&v=enabled&s=5343efe4f089e7f768fe468091716b4047c84001', 'width': 216}, {'height': 240, 'url': 'https://external-preview.redd.it/3DqxVg8WKHUa1ksCtIpA7d-GXsK80g1FNK1KL77amhE.jpg?width=320&crop=smart&auto=webp&v=enabled&s=1851fd097181b80d54ef6828371f74b5217e461c', 'width': 320}], 'source': {'height': 360, 'url': 'https://external-preview.redd.it/3DqxVg8WKHUa1ksCtIpA7d-GXsK80g1FNK1KL77amhE.jpg?auto=webp&v=enabled&s=ed8b72195cf3ad772a2a8cf5dd11ad4969d70846', 'width': 480}, 'variants': {}}]} |
LLAMA Experience so far | 14 | **Setup:**
Laptop with RTX2060 (6 GB VRAM) and 32 GB RAM + ~32GB of additional space (used mostly when loading Llama 13b on Windows)
&nbsp;
**Used for:** some questioning, but mostly chat and roleplay (might do a more structured questioning of it when things are more settled for me, whenever that may be- I just learned how to use Kobold-TavernAI and still learning llamacpp...)
&nbsp;
**Models tested:**
* 7b (Ooba on windows, with and without alpaca lora)
-Outputs replies ~200s
-Notes:
Couldn’t get 8 or 4 bit to work at all (likely because there's ultimately not enough VRAM for either of these, or else just too many errors, although I’m not giving up yet if it’s even possible). Right now running with just CPU (it may be slow, but while doing other things and running this in the background, it feels like talking with someone via SMS lol)
The lora I use seems to actually help it in chatting/RPing too (less derailment of chat, if any at all, plus sticking better to character descriptions and pulling more info from them).
Will probably run on WSL2 as soon as I manage storage and clear space (with PygmalionAI, which is roughly the same size at 6b, it's ~60s so maybe 7b on WSL2 will be as fast).
&nbsp;
* 13b (Ooba on windows and WSL2 (external drive), with and without lora)
-Outputs replies ~3600s (Windows), ~100s (WSL2, which I mainly use- kind of strange but a pleasant surprise that it is faster than 7b on Windows despite being bigger model. It is remarkable how WSL can be like this when it works well. Note though the initial message can usually take ~300s)
-Notes:
Likewise, could not get 8 or 4 bit to work (I was actually trying really hard for 4bit for this one, but unfortunately "unknown CUDA error" stopped me literally after 4bit version finished loading up on WSL2- 13b worked anyway after just running it without 4bit to see what happens). On Windows, ran with just CPU (this way, could test it with lora- having a GPU detected would have bitsandbytes lib break despite the dll file fixes and this lib seems to be needed for loras), and on WSL2 I run it with GPU (no lora).
Couldn’t really observe lora’s effects anyway since outputs took so long and may sometimes hang on Windows...
13b on WSL2 has given some wonderful chats and RP over 7b (save for having to cap max tokens to 700 and so clipping outputs, and also having to manually trim character descriptions some times to avoid OOMs- ~2kb for char json file is the max size I use for reference, albeit sometimes larger ones work for some reason). However at times WSL2 can be sluggish and the performance noticeably is affected, going below llama 7b with lora's performance when chatting/RPing (I mention it being external drive because this may be affecting its speed instead of anything else, as this was the case when I first ran Stable Diffusion off computer versus on my computer; I keep installations off-drive sometimes because of lack of space on computer)
&nbsp;
**Additional notes for Ooba:**
I cap at 700 max tokens (to avoid OOMs), and for 7b llama I use [tloen's lora](https://huggingface.co/tloen/alpaca-lora-7b/tree/main).
Else I just keep everything at default for params, which seems to work well enough (maybe for more hardcore RPers this is not the best though, as characters do sometimes go out of character- not to be confused with “OOC” along with “IC” or “in-character”, RP terms that llama isn‘t really familiar with).
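(For reference, outside the webui the same lora can be applied with the Hugging Face peft library - a rough sketch only: the local weights path is a placeholder, and the Llama class names assume a recent transformers release; older builds spelled them LLaMATokenizer/LLaMAForCausalLM.)

    import torch
    from transformers import LlamaForCausalLM, LlamaTokenizer
    from peft import PeftModel

    base = "path/to/llama-7b-hf"  # placeholder: LLaMA-7B converted to HF format
    tokenizer = LlamaTokenizer.from_pretrained(base)
    model = LlamaForCausalLM.from_pretrained(
        base, torch_dtype=torch.float16, device_map="auto")
    # Apply the alpaca lora linked above on top of the base model
    model = PeftModel.from_pretrained(model, "tloen/alpaca-lora-7b")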
[W++ format](https://www.reddit.com/r/PygmalionAI/comments/118dx87/what_is_w/) (I use character files originally made for Pygmalion) makes character files compact AND llama seems to be able to pull info really well from them (from describing their physical appearance perfectly and extrapolating on stuff like say they carry a certain gun, and it usually can describe facts about the gun except when it comes to numbers- to being able to make up stories in connection with their lore decently in a drinking-with-buddy-in-bar scenario). However, a few times it may “overfit” on example dialogue, seemingly commonly on the last example dialogue example (from making an SFW character suddenly NSFW, or wanting to talk about only war if you have even just one line about any of these things)- this doesn’t seem to happen as much with lora but it still can. Additionally, maybe such a format may be good for non-RP uses such as more compact prompt crafting for additional instructions or info?
&nbsp;
* 30b (alpacacpp and [Kobold-](https://github.com/LostRuins/llamacpp-for-kobold)[TavernAI](https://github.com/TavernAI/TavernAI) on windows, [this one](https://huggingface.co/Pi3141/alpaca-lora-30B-ggml/tree/main))
-Outputs replies ~300s (alpacacpp/Kobold), ~15s for ~10 tokens (Kobold-Tavern)
-Notes:
On alpacacpp, it’s good (questioning and some chatting) although my first impression is that it’s kind of like 13b? (I literally just tested it last night though, and most of it was spent running it on Kobold-Tavern). Also aware of some of alpacacpp's limitations, still looking into llamacpp which it's based on and I hear is more updated so that may help.
The replies I get when on Kobold-Tavern are pretty good though with chat/RP (albeit extra character info makes it take longer; the time can be cut down with smaller char files).
May try 7b in alpacacpp and Kobold-Tavern to see if there’s any difference in speed and performance. (Currently testing 13b, and so far it is faster with ~15s for ~18 tokens input)
By their nature, both are run on CPU only.
Aside from setting timeout time for TavernAI to ~30 minutes (it takes that long for bigger char files when generating responses via Kobold), I use the default settings for now.
&nbsp;
**More info:**
So far, all the models seem “similar” in performance, but maybe that's because I haven’t properly tried them out yet (or experimented with params): Llama 7b (with lora) is roughly comparable to 13b (without), and even 30b alpaca is comparable to 13b (maybe slightly better in consistency and not derailing chats).
7b and 13b can seem to switch languages decently enough (I tested with my broken German, Danish, and Tagalog; noticeably German is its best/it readily switches, it confuses Danish with Swedish(?) but still switches accordingly, and can understand Tagalog even but it likes to reply in English for this- and when it speaks Tagalog, it’s kinda awkward or too formal but coherent enough).
They’re pretty smart, like they can understand simple puzzles like “A trophy can’t fit in a briefcase because it’s too big. Which is too big?” but definitely not really some puns (“time flies like an arrow vs. fruit flies like bananas”- [inspiration](https://www.reddit.com/r/singularity/comments/11ymnpi/my_comparative_results_asking_turingstyle/)). Also for fun, I tested 13b (chat-RP context) by making it eat RAM sticks: using Character.ai for reference, C.ai (in-character) was determined to lecture me about RAM sticks being inedible from the get go, whereas 13b was confused at first but eventually developed a taste for GPUs later on. Related, 30b knows you shouldn’t eat fireworks or launch ICBMs, but admires it when you eat gunpowder anyway or take an ICBM out of your pocket as a “firework”.
&nbsp;
**Thoughts:**
I’m just so excited to be able to run llamas locally. Lurking and watching developments ever since the llamas were “released”.
The fine-tuning especially is promising, and I personally think that as we had Stable Diffusion fine-tuning for subjects and styles, maybe we can have lora fine-tuning for subjects (specific topics, character lore, etc.) and styles (manner of “speaking” or generating outputs, etc.) although it would be realistic that these are combined anyway (say, if you wanted a character, you’d want them to not only know their own world, but also speak like themselves rather than have a lora just for their world, and another for their manner of speaking).
Might try to go into 4bit again with Ooba (if it’s possible for my specs, and right now I think the issue on WSL2 may be CUDA-related so I'll try addressing that first while keeping up to date with the ongoing changes)- I think speed boosts will be REALLY worth it. I know I’m missing out with this but again just glad to try most of them.
To everyone who’s worked on Llamas/Alpacas, I’m very grateful to be able to try them on my humble potato. Keep up the good work! As I was about to post this, [this](https://www.reddit.com/r/singularity/comments/122jb1n/people_are_already_fine_tuning_llama_7b_with/) came up. All of these new things are so exciting, what a time to be alive!
P.S. Apologies for the formatting and wild info, I had a lot to say and I don't regularly post. (Time to disappear again into the shadows...) | null | https://www.reddit.com/r/LocalLLaMA/comments/123yp41/llama_experience_so_far/ | reduserGf | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 123yp41 | false | null | t3_123yp41 | /r/LocalLLaMA/comments/123yp41/llama_experience_so_far/ | false | false | self | 14 | {'enabled': False, 'images': [{'id': 'T0K27a62kQxG2wMxHSyhBAmYRrs3-3G_rphNQnrwoXE', 'resolutions': [{'height': 58, 'url': 'https://external-preview.redd.it/A-vb0Xq41438jaXpplQle-qHAuPoLzPfO7bkuE7XB3I.jpg?width=108&crop=smart&auto=webp&v=enabled&s=4906bea19d1095f32b7bce5297371dc10f7f0775', 'width': 108}, {'height': 116, 'url': 'https://external-preview.redd.it/A-vb0Xq41438jaXpplQle-qHAuPoLzPfO7bkuE7XB3I.jpg?width=216&crop=smart&auto=webp&v=enabled&s=37477ac92fb1c70097eaf93def1471b033675de8', 'width': 216}, {'height': 172, 'url': 'https://external-preview.redd.it/A-vb0Xq41438jaXpplQle-qHAuPoLzPfO7bkuE7XB3I.jpg?width=320&crop=smart&auto=webp&v=enabled&s=b4d9411be796abcc7f969d85181dc818cf95a9c6', 'width': 320}, {'height': 345, 'url': 'https://external-preview.redd.it/A-vb0Xq41438jaXpplQle-qHAuPoLzPfO7bkuE7XB3I.jpg?width=640&crop=smart&auto=webp&v=enabled&s=6b579f72d4d5a343c3c023b3bc660f52e7068c26', 'width': 640}, {'height': 518, 'url': 'https://external-preview.redd.it/A-vb0Xq41438jaXpplQle-qHAuPoLzPfO7bkuE7XB3I.jpg?width=960&crop=smart&auto=webp&v=enabled&s=bf8094a4a9c0d7879ec4083379cac5da40d7313c', 'width': 960}, {'height': 583, 'url': 'https://external-preview.redd.it/A-vb0Xq41438jaXpplQle-qHAuPoLzPfO7bkuE7XB3I.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=718a934594df6fb90a9fd8e4c4b652ec263d4a1b', 'width': 1080}], 'source': {'height': 648, 'url': 'https://external-preview.redd.it/A-vb0Xq41438jaXpplQle-qHAuPoLzPfO7bkuE7XB3I.jpg?auto=webp&v=enabled&s=a533229919522fc05b7298d73e50bc394f609380', 'width': 1200}, 'variants': {}}]} |
Increasing beams leads to repetitive output? | 3 | I've been playing around with 13B.
Just tried increasing the number of beams from 1 to 2, and I noticed a significant decrease in performance.
With one beam it produces coherent text as output, with two it starts repeating itself.
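For context, this is roughly the call I mean (a sketch with the Hugging Face generate API rather than the exact webui internals; the weights path is a placeholder). repetition_penalty and no_repeat_ngram_size are the usual knobs for taming the looping:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    path = "path/to/llama-13b-hf"  # placeholder for local HF-format weights
    tokenizer = AutoTokenizer.from_pretrained(path)
    model = AutoModelForCausalLM.from_pretrained(
        path, torch_dtype=torch.float16, device_map="auto")

    ids = tokenizer("The quick brown fox", return_tensors="pt").input_ids.to(model.device)
    out = model.generate(
        ids,
        max_new_tokens=128,
        num_beams=2,              # the setting in question; 1 gives the usual single-beam output
        repetition_penalty=1.2,   # penalize tokens that were already generated
        no_repeat_ngram_size=3,   # block exact 3-gram repeats
    )
    print(tokenizer.decode(out[0], skip_special_tokens=True))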
Anyone else experience this? | null | https://www.reddit.com/r/LocalLLaMA/comments/1242cbr/increasing_beams_leads_to_repetitive_output/ | MentesInquisitivas | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1242cbr | false | null | t3_1242cbr | /r/LocalLLaMA/comments/1242cbr/increasing_beams_leads_to_repetitive_output/ | false | false | self | 3 | null |
Factuality of LLaMa-13B output | 5 | null | MentesInquisitivas | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1243vst | false | null | t3_1243vst | /r/LocalLLaMA/comments/1243vst/factuality_of_llama13b_output/ | false | false | 5 | {'enabled': True, 'images': [{'id': 'w_Sfn-aHPDDPMcx_n8PbLFyRRXaIkAN6fPj7sPwD3Jk', 'resolutions': [{'height': 76, 'url': 'https://preview.redd.it/4i85lxglkeqa1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=150bb27e3e2911ba37280416f8d532466b2cfea0', 'width': 108}, {'height': 152, 'url': 'https://preview.redd.it/4i85lxglkeqa1.jpg?width=216&crop=smart&auto=webp&v=enabled&s=dfe363732dc68e8222794e3138be74c4d2f90aff', 'width': 216}, {'height': 226, 'url': 'https://preview.redd.it/4i85lxglkeqa1.jpg?width=320&crop=smart&auto=webp&v=enabled&s=0c1c5a460e6f6475535a168bb149ef63eb1f4916', 'width': 320}, {'height': 452, 'url': 'https://preview.redd.it/4i85lxglkeqa1.jpg?width=640&crop=smart&auto=webp&v=enabled&s=f2d60aca455766e09243fdfad5b7c4aa8d459dac', 'width': 640}], 'source': {'height': 514, 'url': 'https://preview.redd.it/4i85lxglkeqa1.jpg?auto=webp&v=enabled&s=833c4e828008c31e1ff652a04cdbfec7b28d9560', 'width': 727}, 'variants': {}}]} |
|||
I am currently quantizing LLaMA-65B, 30B and 13B | logs and benchmarks | thinking about sharing models | 110 | Hey there fellow LLaMA enthusiasts!
I've been playing around with the [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) GitHub repo by *qwopqwop200* and decided to give quantizing LLaMA models a shot. The idea is to create multiple versions of LLaMA-65b, 30b, and 13b *[edit: also 7b]* models, each with different bit amounts (3bit or 4bit) and groupsize for quantization (128 or 32). I'll be using `--faster-kernel` and `--true-sequential` on all models to ensure the best performance.
For each quantization, I'll save logs, benchmarks, and perplexity scores with a structured naming scheme, allowing for various combinations to be tested. These will be compiled into a table, so you can easily see what's available and find the best performing model for your VRAM amount.
Now, I'd love to share these model files with you all, but with Meta taking down public LLaMA models, I'm hesitant. If I can find a safe way to share them, I'll make sure to contribute them to the community so everyone can run their own benchmarks and choose the right version for their needs.
I also plan on submitting a pull request to the [oobabooga/text-generation-webui](https://github.com/oobabooga/text-generation-webui) GitHub repo, a popular open-source text generation UI that supports LLaMA models. I want to add a command line argument that lets users specify the path to their quantized .pt file and implement symlink support for automatic .pt file detection. This should make switching between versions a breeze!
*A quick tip if you want to quantize yourself: Some 65B benchmarks failed with OOM on the A100 40GB, so those may be missing. However, perplexity scores and quantization logs will still be available for all models. Be aware that quantization can consume up to 165 GB RAM, requiring a beefy machine. Also, don't try to run inference on a GPU that's currently quantizing, as it may crash both processes due to high VRAM usage. I learned this the hard way when I crashed an almost-done 65B quantization that had been running for almost three hours.*
Before I share the table, I'd like to express my gratitude for having the opportunity to work with such powerful language models. It's been an incredible experience, and I'm excited to see what the community can do with them.
Stay tuned, and happy quantizing! 🦙
| Model | Weights Size | Median Latency ^[1] | Max Memory ^[3] | PPL Wikitext-2 | PPL PTB-new | PPL C4-new |
|----------------------------|--------------|---------------------|----|----------------|-------------|------------|
| LLaMA-7B 3bit act-order | 2895 MB | 0.0357 s | 3918 MiB | 8.0695 | 14.3297 | 10.3358 |
| LLaMA-7B 3bit groupsize 128 | 3105 MB | 0.0371 s | 4174 MiB | 11.0044 | 14.8407 | 10.2418 |
| LLaMA-7B 3bit groupsize 32 | 3754 MB | 0.0364 s | 4776 MiB | 24.5374 | 13.9499 | 9.7366 |
| LLaMA-7B 4bit act-order | 3686 MB | 0.0369 s | 4738 MiB | 6.0949 | 10.7995 | 7.7853 |
| LLaMA-7B 4bit groupsize 128 | 3902 MB | 0.0365 s | 4949 MiB | 11.0044 | 14.8407 | 10.2418 |
| LLaMA-7B 4bit groupsize 32 | 4569 MB | 0.0365 s | 5601 MiB | 6.6393 | 10.9392 | 7.8021 |
| LLaMA-13B 3bit act-order | 5305 MB | 0.0439 s | 6942 MiB | 6.6336 | 11.83965 | 8.7643 |
| LLaMA-13B 3bit groupsize 128 | 5719 MB | 0.0454 s | 7349 MiB | 5.6314 | 9.8569 | 7.4706 |
| LLaMA-13B 3bit groupsize 32 | 6990 MB | 0.0449 s | 8588 MiB | 5.4115 | 9.5451 | 7.1866 |
| LLaMA-13B 4bit act-order | 6854 MB | 0.0451 s | 8403 MiB | 5.3629 | 9.4813 | 7.0707 |
| LLaMA-13B 4bit groupsize 128 | 7280 MB | 0.0447 s | 8819 MiB | 5.2347 | 9.2523 | 6.9104 |
| LLaMA-13B 4bit groupsize 32 | 8587 MB | 0.0457 s | 10148 MiB | 5.1534 | 9.1709 | 6.8715 |
| LLaMA-30B 3bit groupsize 128 | 13678 MB | 0.0682 s | 16671 MiB | 4.8606 | 8.7930 | 6.7616 |
| LLaMA-30B 3bit groupsize 32 | 16892 MB | 0.0684 s | 19798 MiB | 4.5740 | 8.4908 | 6.4823 |
| LLaMA-30B 4bit groupsize 128 | 17627 MB | 0.0675 s | 20674 MiB | 4.2241 | 8.2489 | 6.2333 |
| LLaMA-30B 4bit groupsize 32 | 20934 MB | 0.0676 s | 23933 MiB | 4.1819 | 8.2152 | 6.1960 |
| LLaMA-65B 3bit groupsize 128 | 26931 MB | 0.0894 s | 31561 MiB | 4.1844 | 8.1864 | 6.2623 |
| LLaMA-65B 3bit groupsize 32 | 33416 MB | 0.0904 s | 38014 MiB | 3.9117 | 8.0025 | 6.0776 |
| LLaMA-65B 4bit groupsize 128 ^^[2] | 34898 MB | | *OOM* | 3.6599 | 7.7773 | 5.8961 |
| LLaMA-65B 4bit groupsize 32 | 41568 MB | | *OOM* | 3.6055 | 7.7340 | 5.8612 |
| Model | Weights Size | Median Latency ^[1] | Max Memory ^[3] | PPL Wikitext-2 | PPL PTB-new | PPL C4-new |
|----------------------------|--------------|---------------------|----|----------------|-------------|------------|
| Alpaca-native (7B) 3bit act-order | 3408 MB | 0.0368 s | 3918^^[6] MiB | 10.7250^^[5] | 18.5032^^[5] | 13.5697^^[5] |
| Alpaca-native (7B) 4bit act-order | 4198 MB | 0.0370 s | 4738 MiB | 7.7968^^[5] | 13.4259^^[5] | 10.3764^^[5] |
^[1]: ^Median ^latency ^measured ^over ^2048 ^tokens ^with ^batch-size ^1 ^on ^an ^A100 ^SXM4 ^40GB; ^your ^results ^may ^vary. ^See ^this ^as ^a ^rough ^ballpark ^number ^in ^relation ^to ^the ^other ^measurements.
^[2]: ^without ^`--faster-kernel`
^[3]: ^Max ^VRAM ^usage ^on ^2048 ^token ^generation ^benchmark. ^Exact ^VRAM ^consumption ^depends ^on ^context ^length ^and ^inference ^software.
^[4]: ^Probably ^very ^similar ^to ^LLaMA-7B ^equivalent
^[5]: ^This ^is ^not ^the ^metric ^alpaca ^tries ^to ^improve. ^Not ^indicative ^of ^instruction ^performance. ^If ^I ^find ^the ^time, ^I ^will ^try ^to ^benchmark ^all ^models ^on ^datasets ^like ^MMLU.
^[6]: ^Corrected. ^The ^previous ^value ^\(5443 ^MiB\) ^was ^measured ^over ^quantization ^and ^benchmarking, ^showing ^the ^maximum ^amount ^of ^VRAM ^consumed ^during ^the ^entire ^process. ^I ^would ^love ^to ^give ^this ^number ^for ^all ^models, ^but ^this ^would ^mean ^quantizing ^them ^again. ^I ^think ^the ^benchmark ^number ^is ^more ^useful, ^showing ^the ^required ^VRAM ^to ^generate ^the ^entire ^2048 ^token ^context.
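For anyone wondering how the perplexity columns are produced: it's the standard sliding-window evaluation over the test split. A rough sketch below (not the exact GPTQ-for-LLaMa script; the model path is a placeholder and it assumes the Hugging Face transformers + datasets libraries):

    import torch
    from datasets import load_dataset
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_path = "path/to/llama-hf"  # placeholder
    tokenizer = AutoTokenizer.from_pretrained(model_path)
    model = AutoModelForCausalLM.from_pretrained(
        model_path, torch_dtype=torch.float16, device_map="auto")
    model.eval()

    test = load_dataset("wikitext", "wikitext-2-raw-v1", split="test")
    enc = tokenizer("\n\n".join(test["text"]), return_tensors="pt").input_ids

    seqlen = 2048
    nlls = []
    for i in range(enc.shape[1] // seqlen):
        batch = enc[:, i * seqlen:(i + 1) * seqlen].to(model.device)
        with torch.no_grad():
            loss = model(batch, labels=batch).loss  # mean negative log-likelihood per token
        nlls.append(loss.float() * seqlen)
    ppl = torch.exp(torch.stack(nlls).sum() / (len(nlls) * seqlen))
    print(f"wikitext-2 perplexity: {ppl.item():.4f}")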
*Note: I'm currently quantizing the models, with LLaMA-65B already finished, 30B halfway done, and 13B still in line. I'll be adding the first data points to the table soon. I might be quicker, but by tomorrow at lunch, more data should be in! If there's additional demand, I might quantize even more versions with other parameter configurations, but I am not planning on doing that soon.*
**Edit**:
Added results for 30B models
**Edit 2**:
Decided to also do 7B and include act-order benchmarks (can't be combined with groupsize) for 7B and 13B variants
**Edit 3**:
All main variants done. Will maybe do some additional runs with groupsize and act-order combined, as that is now supported.
**Edit 4**:
Currently trying to do 4bit + groupsize 128 + act-order + true-sequential runs for 7B, 13B, 30B and 65B. Support just got added, thanks to /u/Wonderful_Ad_5134 for bringing that to my attention.
Unfortunately, my first attempts just crashed with a key error. See my comment below for details.
I am also currently quantizing alpaca-native (7b) to 3bit and 4bit with act-order and without groupsize.
**Edit 5**:
Hey guys, I have not been making much progress, as almost every update of the GPTQ-code broke my scripts. During my latest experiment, I wasn't even able to load the quantized models properly. I am pretty exhausted after five hours of dealing with CUDA and other dependency issues and trying to make it work. Will take a day off and then try again with fresh eyes. In the meantime, I will add my quantization logs to this post. Feel free to ask questions and contribute your own results! 🦙 | null | https://www.reddit.com/r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/ | Blacky372 | self.LocalLLaMA | 2023-03-30T09:57:26 | 0 | {} | 1248183 | false | null | t3_1248183 | /r/LocalLLaMA/comments/1248183/i_am_currently_quantizing_llama65b_30b_and_13b/ | false | false | self | 110 | {'enabled': False, 'images': [{'id': '1txjrFr2q5403CETB9soXOkqJLLTAmbxpTsX4YT9B8A', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/gGN8Ty_c9qEVT7XLK3YSaHtqMsf8sK8IMCql8fQPUtc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=122865178d39b82d00f57c191bb81dbba8d01df2', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/gGN8Ty_c9qEVT7XLK3YSaHtqMsf8sK8IMCql8fQPUtc.jpg?width=216&crop=smart&auto=webp&v=enabled&s=dbbf42dfd1c41a2a14379853a65260c7832133a0', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/gGN8Ty_c9qEVT7XLK3YSaHtqMsf8sK8IMCql8fQPUtc.jpg?width=320&crop=smart&auto=webp&v=enabled&s=9be55d2819aaa476644f950be237ebda5571acfb', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/gGN8Ty_c9qEVT7XLK3YSaHtqMsf8sK8IMCql8fQPUtc.jpg?width=640&crop=smart&auto=webp&v=enabled&s=9a5c98d5ec6512b9b4bd1987cf54c5aeac97be5a', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/gGN8Ty_c9qEVT7XLK3YSaHtqMsf8sK8IMCql8fQPUtc.jpg?width=960&crop=smart&auto=webp&v=enabled&s=90e6ec68c126bc834f42cb250b0febdd398834db', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/gGN8Ty_c9qEVT7XLK3YSaHtqMsf8sK8IMCql8fQPUtc.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=7930856c1f0be9029d364ad7b9ec9ac31e7f0e38', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/gGN8Ty_c9qEVT7XLK3YSaHtqMsf8sK8IMCql8fQPUtc.jpg?auto=webp&v=enabled&s=4cb449d7e769e3a707ec407a08b32f9cc251b8a7', 'width': 1200}, 'variants': {}}]} |
Oobabooga WSL on Windows 10 Standard, 8bit, and 4bit plus LLaMA conversion instructions | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1248mhu/oobabooga_wsl_on_windows_10_standard_8bit_and/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-28T02:00:07 | 0 | {} | 1248mhu | false | null | t3_1248mhu | /r/LocalLLaMA/comments/1248mhu/oobabooga_wsl_on_windows_10_standard_8bit_and/ | false | false | default | 2 | null |
7B Alpaca model (4-bit ggml) explains why it sometimes responds with "### Instruction:" | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1248umn | false | null | t3_1248umn | /r/LocalLLaMA/comments/1248umn/7b_alpaca_model_4bit_ggml_explains_why_it/ | false | false | default | 1 | null |
||
Don't Buy an AMD 7000 Series for LLaMA Yet | 32 | I hate monopolies, and AMD hooked me with the VRAM and specs at a reasonable price. I'm here building llama.cpp with a 7900 XTX as a result.
There is [no support](https://docs.amd.com/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.html) for the cards (not just unsupported, literally doesn't work) in ROCm 5.4.x, and people are getting tired of waiting for [ROCm 5.5](https://github.com/ROCmSoftwarePlatform/MIOpen/milestone/17?closed=1). Apparently, [ROCm 5.6 is under development](https://github.com/RadeonOpenCompute/ROCm/issues/1880#issuecomment-1472766465), so it's not clear whether AMD intends to skip over 5.5 entirely.
If you look at the last link, you'll be treated to all kinds of salt from customers and silence from AMD. While they may be making the right sounds about ML support, they don't appear to be truly serious about it yet, particularly not for the consumer market.
4.4 UPDATE: Please hold. Dev "hopes \[5.5\] [will be out in the coming weeks](https://github.com/RadeonOpenCompute/ROCm/discussions/1836#discussioncomment-5521211)." | null | https://www.reddit.com/r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/ | friedrichvonschiller | self.LocalLLaMA | 2023-04-05T00:50:49 | 0 | {} | 124dc7i | false | null | t3_124dc7i | /r/LocalLLaMA/comments/124dc7i/dont_buy_an_amd_7000_series_for_llama_yet/ | false | false | self | 32 | null |
We made a mobile app using llama.cpp | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124g54l/we_made_a_mobile_app_using_llamacpp/ | Reasonable_Day_9300 | self.LocalLLaMA | 2023-03-28T06:59:32 | 0 | {} | 124g54l | false | null | t3_124g54l | /r/LocalLLaMA/comments/124g54l/we_made_a_mobile_app_using_llamacpp/ | false | false | default | 1 | null |
Help installing alpaca 13B on steam deck | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124jp7n/help_installing_alpaca_13b_on_steam_deck/ | -2b2t- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124jp7n | false | null | t3_124jp7n | /r/LocalLLaMA/comments/124jp7n/help_installing_alpaca_13b_on_steam_deck/ | false | false | default | 1 | null |
We made a mobile app using llama.cpp ! | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124jtt3/we_made_a_mobile_app_using_llamacpp/ | Reasonable_Day_9300 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124jtt3 | false | null | t3_124jtt3 | /r/LocalLLaMA/comments/124jtt3/we_made_a_mobile_app_using_llamacpp/ | false | false | default | 1 | null |
CodeAlpaca | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124ntjd/codealpaca/ | ihaag | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124ntjd | false | null | t3_124ntjd | /r/LocalLLaMA/comments/124ntjd/codealpaca/ | false | false | default | 1 | null |
llama.cpp interactive/dialog how to get longer responses and less repetition of my words | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 124ox4c | false | null | t3_124ox4c | /r/LocalLLaMA/comments/124ox4c/llamacpp_interactivedialog_how_to_get_longer/ | false | false | default | 1 | null |
||
Where can I find characters that were made by other people? | 8 | null | https://www.reddit.com/r/LocalLLaMA/comments/124w1fi/where_can_i_find_characters_that_were_made_by/ | Famberlight | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124w1fi | false | null | t3_124w1fi | /r/LocalLLaMA/comments/124w1fi/where_can_i_find_characters_that_were_made_by/ | false | false | self | 8 | null |
|
Oobabooga WSL on Windows 10 Standard, 8bit, and 4bit plus LLaMA conversion instructions, video instructions | 1 | null | https://www.reddit.com/r/Oobabooga/comments/1248me4/oobabooga_wsl_on_windows_10_standard_8bit_and/ | Inevitable-Start-653 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 124ya37 | false | null | t3_124ya37 | /r/LocalLLaMA/comments/124ya37/oobabooga_wsl_on_windows_10_standard_8bit_and/ | false | false | default | 1 | null |
|
Alpaca-30B-4bit-128g does this?! | 1 | [removed] | null | 9cent0 | i.imgur.com | 1970-01-01T00:00:00 | 0 | {} | 124yt99 | false | null | t3_124yt99 | /r/LocalLLaMA/comments/124yt99/alpaca30b4bit128g_does_this/ | false | false | default | 1 | null |
|
Why do my installation instructions keep getting taken down? | 3 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/124z3tv/why_do_my_installation_instructions_keep_getting/ | Inevitable-Start-653 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 124z3tv | false | null | t3_124z3tv | /r/LocalLLaMA/comments/124z3tv/why_do_my_installation_instructions_keep_getting/ | false | false | default | 3 | null |
Has anyone tried the 65B model with Alpaca.cpp on a M2 MacBook Pro? | 9 | Hi, I recently discovered Alpaca.cpp and have been enjoying it a lot. I run it on an M1 MacBook Air that has 16GB of RAM. The 13B model does run well on my computer, but there are much better models available like the 30B and 65B. I have tried to run the 30B on my computer, but it runs too slowly to be usable. One question I asked it was not completed even after 10 minutes. I am thinking about buying a new MacBook Pro to try the 65B model. So I am wondering if anyone has run this model yet on an M2 MacBook Pro. If so, how would you describe the experience?
Thank you. | null | https://www.reddit.com/r/LocalLLaMA/comments/12517ab/has_anyone_tried_the_65b_model_with_alpacacpp_on/ | ma-2022 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12517ab | false | null | t3_12517ab | /r/LocalLLaMA/comments/12517ab/has_anyone_tried_the_65b_model_with_alpacacpp_on/ | false | false | self | 9 | null |
New LLM just dropped. Cerebras-GPT. Apache 2.0 license. Huggingface link included. | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 1253v5b | false | null | t3_1253v5b | /r/LocalLLaMA/comments/1253v5b/new_llm_just_dropped_cerebrasgpt_apache_20/ | false | false | default | 1 | null |
||
My first chat with llama-7b about catching mice was fun! | 15 | User:Tell me ways to catch a mouse loose in my house.
Bob: There are many ways to catch a mouse. In fact, mice are very good at evading humans. I can give you some suggestions on how to catch a mouse.
User:please do so
Bob: The first thing you can do is check the house for possible entry points.
User:go on
Bob: A mouse can slip through any tiny hole. It will go through the wall, and you will never see it. It will enter your house through the floorboards, and you will never know it. It can go through the windows and doors, and slip under the doors. It can go through the telephone, and into the cables. So, check all the holes, and seal them off.
User:I never heard of mice entering though the telephone line.
Bob: They do. They can enter through the cables as well.
User:But they are too large to fit in the cables.
Bob: No, they are small enough to fit into the cables.
User:Into the electrical cables? Wont the electricity kill them?
Bob: Not all electrical currents are harmful to mice. Mice have been known to enter the telephone and electrical cables in search of food.
User:You seem to really overestimate mice. Are you afraid of them?
Bob: No, I just know how they can be. They can get into your house, and they can cause damage.
User:But not computer mice, I hope.
Bob: Mice can enter computers as well.
User:I mean when they are connected though USB.
Bob: USB mice can be used by mice, and they can be used to transmit data.
User:The ones loose in my house?
Bob: Yes, the loose mice in your house.
User:So I will use those mice to tranfer data instead of cable.
Bob: You can use any USB mice in your house.
User (./run1.sh: line 4: 40426 Segmentation fault ./main -m ./models/llama-7b/ggml-model-f16.bin -t 8 -n 256 --repeat\_penalty 1.0 --color -i -r "User:" -p "Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision." | null | https://www.reddit.com/r/LocalLLaMA/comments/1254n2v/my_first_chat_with_llama7b_about_catching_mice/ | nykfank | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1254n2v | false | null | t3_1254n2v | /r/LocalLLaMA/comments/1254n2v/my_first_chat_with_llama7b_about_catching_mice/ | false | false | self | 15 | null |
Best settings for Alpaca.cpp according to Alpaca.cpp | 1 | [removed] | null | [deleted] | 2023-03-29T00:55:51 | 0 | {} | 1254y9v | false | null | t3_1254y9v | /r/LocalLLaMA/comments/1254y9v/best_settings_for_alpacacpp_according_to_alpacacpp/ | false | false | default | 1 | null |
||
Free anonymous Oobabooga install instructions download: WSL and non-WSL for Windows, no account - no YouTube - no ads - no ego. You own what you download. | 25 | ***Update Do this instead***
things move so fast that the instructions are already outdated. Mr. Oobabooga has updated his repo with a one-click installer... and it works!! omg it works so well too :3
https://github.com/oobabooga/text-generation-webui#installation
***Update Do this instead***
There are no links to a specific person's account, there is no promotion of self or anything of that nature.
You can download these videos and have total ownership of them, claim you made the videos for all I care, edit, remix, do what you want, you own what you download 100% no strings or limitations attached.
Link to video and text for WSL install: https://drive.google.com/drive/folders/1QYtsq4rd5NJmhesRratusFivLlk-IqeJ
Link to video and text for Non-WSL install: https://drive.google.com/drive/folders/1kTMZNdnaHyiTOl3rLVoyZoMbQKF0PmsK
I do not want credit or recognition for the videos. I truly believe that this technology is a bifurcation point in human history and it needs to be in as many hands as possible... and not just in the hands of those with the technical background to do the installation.
These videos go through all the steps for both WSL and non-WSL installations on Windows 10 (they will likely work on Windows 11 also). If you are like me, doing all this stuff in miniconda and Ubuntu can be confusing and non-intuitive; and it seems like a big hurdle for many very creative people that have a lot to contribute if they could just get through the installation process.
**Note on WSL Installation**
It looks like the very last step, the creation of the 4-bit.pt file that accompanies the model, can't be done in WSL. Maybe someone smarter than I am can figure it out. But if you follow the instructions in the Non-WSL install, you can do the conversion in Windows; it only needs to be done once for each LLaMA model, and others are sharing their 4-bit.pt files, so you can probably just find it.
You can also just follow the instructions on the GPTQ-for-LLaMA github and just install what the author suggests instead of trying to do a full oobabooga install.
**Note on WSL Installation** | null | https://www.reddit.com/r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/ | Inevitable-Start-653 | self.LocalLLaMA | 2023-03-30T01:26:39 | 0 | {} | 1255jsd | false | null | t3_1255jsd | /r/LocalLLaMA/comments/1255jsd/free_anonymous_oobabooga_install_instructions/ | false | false | self | 25 | {'enabled': False, 'images': [{'id': '4n4rb3aJD7QQb0YJLRqT_iVmsBzEqx1qgd5NkR84Wx0', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/biONX7-EvUgwnWCKqHBs4dSa1vsu9SUCeyhSgJBV2zQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=98798ef4c280e1856e38dcf9c5cba1e828e96c3c', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/biONX7-EvUgwnWCKqHBs4dSa1vsu9SUCeyhSgJBV2zQ.jpg?width=216&crop=smart&auto=webp&v=enabled&s=73069d196e639ddef2d586f5369569e10a32e08b', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/biONX7-EvUgwnWCKqHBs4dSa1vsu9SUCeyhSgJBV2zQ.jpg?width=320&crop=smart&auto=webp&v=enabled&s=68e0f7e4c7ebd99c9e5757eb497ed495f3d96091', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/biONX7-EvUgwnWCKqHBs4dSa1vsu9SUCeyhSgJBV2zQ.jpg?width=640&crop=smart&auto=webp&v=enabled&s=511f8c6c39e9299f0e6f569a644e4bc5fad8ee7e', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/biONX7-EvUgwnWCKqHBs4dSa1vsu9SUCeyhSgJBV2zQ.jpg?width=960&crop=smart&auto=webp&v=enabled&s=98bc74a4835383d7fb7632aff61643dcda8aa38b', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/biONX7-EvUgwnWCKqHBs4dSa1vsu9SUCeyhSgJBV2zQ.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=643d502eee6182008935a7935ed4e56d82b4ae51', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/biONX7-EvUgwnWCKqHBs4dSa1vsu9SUCeyhSgJBV2zQ.jpg?auto=webp&v=enabled&s=a484710dc86fd1f26925ba42a7480300f876a3d0', 'width': 1200}, 'variants': {}}]} |
GPT4All, LLaMA 7B LoRA finetuned on ~400k GPT-3.5-Turbo prompt/generation pairs | 95 | null | https://twitter.com/andriy_mulyar/status/1640836003194630144 | itsreallyreallytrue | twitter.com | 1970-01-01T00:00:00 | 0 | {} | 12562xt | false | {'oembed': {'author_name': 'AndriyMulyar', 'author_url': 'https://twitter.com/andriy_mulyar', 'cache_age': 3153600000, 'height': None, 'html': '<blockquote class="twitter-video"><p lang="en" dir="ltr">I&#39;m excited to announce the release of GPT4All, a 7B param language model finetuned from a curated set of 400k GPT-Turbo-3.5 assistant-style generation. <br>We release💰800k data samples💰 for anyone to build upon and a model you can run on your laptop!<br>Real-time Sampling on M1 Mac <a href="https://t.co/HgEgnlwYV8">pic.twitter.com/HgEgnlwYV8</a></p>&mdash; AndriyMulyar (@andriy_mulyar) <a href="https://twitter.com/andriy_mulyar/status/1640836003194630144?ref_src=twsrc%5Etfw">March 28, 2023</a></blockquote>\n<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>\n', 'provider_name': 'Twitter', 'provider_url': 'https://twitter.com', 'type': 'rich', 'url': 'https://twitter.com/andriy_mulyar/status/1640836003194630144', 'version': '1.0', 'width': 350}, 'type': 'twitter.com'} | t3_12562xt | /r/LocalLLaMA/comments/12562xt/gpt4all_llama_7b_lora_finetuned_on_400k/ | false | false | 95 | {'enabled': False, 'images': [{'id': 'oAYQNordIRBu5jB7zj0BJ_yEa7uobrR96i2F1DLKFIE', 'resolutions': [{'height': 64, 'url': 'https://external-preview.redd.it/tPeG_1LJSWGk3jn0GalI4D_SQgGkPt-u378hqXWzbdY.jpg?width=108&crop=smart&auto=webp&v=enabled&s=8f5893d8389362a7e76bdc54d513ec672f3bdbc5', 'width': 108}], 'source': {'height': 84, 'url': 'https://external-preview.redd.it/tPeG_1LJSWGk3jn0GalI4D_SQgGkPt-u378hqXWzbdY.jpg?auto=webp&v=enabled&s=3807e34d5e4941da9c4138725e3df93203e8a393', 'width': 140}, 'variants': {}}]} |
||
Dirty data sets and LLaMA/ALPACA... | 9 | Hey everybody - been experimenting with LLaMA recently (running 13b on my 3080ti).
Inspired by how well LLaMA works, I decided to try my hands at using the Alpaca data to make a module for Euterpe inside NovelAI (it's based on fairseq 13b, an older facebook model, not a llama). In the process, I had to hand-clean Alpaca data to remove a bunch of weird formatting and issues. The end-result was a really interesting module that can be downloaded and run in NovelAI (should work on the free trial too - just drag and drop the scenario):
[https://drive.google.com/file/d/1pm6GT3LJ\_BA6HRI5KqN1LlYtztOOowDD/view?usp=share\_link](https://drive.google.com/file/d/1pm6GT3LJ_BA6HRI5KqN1LlYtztOOowDD/view?usp=share_link)
(it's 22 megabytes of data trained to about 35%)
Anyway, the reason I bring this up here is that I noticed in the process that, while the output is surprisingly good, this data set is rather ***terrible***. For example, many of the instructions completely lack an "input" and simply rely on the instruction to provide guidance, while others use instruction->input to provide some structure prior to the output. There is a ton of silly math that seems wrong at first glance (probably because this data set was pulled from GPT-3, which frequently screws up math), and there is a substantial amount of unicode inside the data that ends up leaking out in an ugly way into the output (and undoubtedly diminishes quality).
While the results of this attempt at making a mini-chatgpt worked quite well, I feel there is a LOT to be gained by putting together a more useful cleaned up curated instruction following data set. I'm also thinking we might be able to expand on that to put chain-of-thought directly into the model, forcing it to think through problems in multiple steps before an output.
I'm thinking we should endeavor to make a good, clean, extremely effective instruction-following dataset that improves on the ideas put forth in Alpaca. I'd like to see one built off GPT-4 style output, or 3.5, without such a focus on bogus math and crappy instructions :). Anyone working on a project to bring some clean human curated intelligently produced data sets together for Alpaca/LLaMA? | null | https://www.reddit.com/r/LocalLLaMA/comments/12587y5/dirty_data_sets_and_llamaalpaca/ | deepinterstate | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12587y5 | false | null | t3_12587y5 | /r/LocalLLaMA/comments/12587y5/dirty_data_sets_and_llamaalpaca/ | false | false | self | 9 | null |
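The kind of cleaning pass described above is easy to script. A minimal sketch, assuming the standard `alpaca_data.json` layout of `instruction`/`input`/`output` string fields; the specific filters (NFKC normalization, an ASCII-only check) are illustrative, not the exact rules the author used:

```python
import json
import unicodedata

def clean_record(rec):
    """Normalize unicode and strip stray whitespace from every field of one record."""
    return {k: unicodedata.normalize("NFKC", v).strip() for k, v in rec.items()}

def keep(rec):
    """Drop records with no instruction/output or with leftover non-ASCII characters."""
    if not rec.get("instruction") or not rec.get("output"):
        return False
    text = rec.get("instruction", "") + rec.get("input", "") + rec.get("output", "")
    return all(ord(c) < 128 for c in text)

with open("alpaca_data.json", encoding="utf-8") as f:
    data = json.load(f)

cleaned = [clean_record(r) for r in data]
cleaned = [r for r in cleaned if keep(r)]

with open("alpaca_data_clean.json", "w", encoding="utf-8") as f:
    json.dump(cleaned, f, ensure_ascii=True, indent=2)

print(f"kept {len(cleaned)} of {len(data)} records")
```

A stricter pipeline would also flag the arithmetic-heavy records for manual review instead of trusting the GPT-3 answers.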
LLaMA-Adapter: Efficient Fine-tuning of LLaMA | 13 | I found this.
This repo proposes LLaMA-Adapter, a lightweight adaption method for fine-tuning instruction-following LLaMA models 🔥, using the 52K instruction data provided by Stanford Alpaca.
|
There's a dolly-ggml repo on Hugging Face | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125a3sj/theres_a_dollyggml_repo_on_hugging_face/ | _wsgeorge | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125a3sj | false | null | t3_125a3sj | /r/LocalLLaMA/comments/125a3sj/theres_a_dollyggml_repo_on_hugging_face/ | false | false | default | 1 | null |
The sound of AI hallucinating | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 125aycl | false | null | t3_125aycl | /r/LocalLLaMA/comments/125aycl/the_sound_of_ai_hallucinating/ | false | false | default | 1 | null |
||
Poor LLaMA Results? Use Prompt Design | 31 | LLaMA performs at its best with [guided prompting](https://www.mihaileric.com/posts/a-complete-introduction-to-prompt-engineering/) because you have set a pattern for the model. It will try to mimic your example, and it does so as a raw base model, unencumbered by instruction-tuning baggage layered on after pretraining.
Home LLMs will improve, but we can improve today. Here are three quick demonstrations of guided prompt results from LLaMA 13B. `Prompt`, output quoted.
Example: Write three bullet points on the following topic. Dogs.
* Dogs are very loving animals.
* Dogs like tennis balls.
* Dogs bark at people.
Task: Write three bullet points on the following topic. Cats.
>\[...\]
>
>Task: Write three bullet points on the following topic. Cats.
>
>\* Cats can jump high up in trees.
>
>\* Cats like to sleep a lot.
>
>\* Cats don't like dogs.
&#x200B;
##Example
Humans often have pet animals.
##Task
Large language models often have pet
>{\~80% "languages", \~20% "words"}
&#x200B;
##Main Text
Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, which it presents as output. Suppose, says Searle, that this computer performs its task so convincingly that it comfortably passes the Turing test: it convinces a human Chinese speaker that the program is itself a live Chinese speaker. To all of the questions that the person asks, it makes appropriate responses, such that any Chinese speaker would be convinced that they are talking to another Chinese-speaking human being.
##Summary
\[include line break at end; click generate\]
>\[...\]
>
>\##Summary
>
>In "Minds, Brains, and Programs", John R. Searle proposes an argument against functionalism, the view that mental states can be identified with brain states. The essay was originally published in Behavioral and Brain Sciences (1980) 3(3):417–57.
&#x200B;
Yes, LLaMA 13B 8-bit just found "Minds, Brains, and Programs" inside itself. [Correct citation](https://web-archive.southampton.ac.uk/cogprints.org/7150/1/10.1.1.83.5248.pdf), apparently.
Local LLaMA is a team sport. Understand your model, and your model will understand you. | null | https://www.reddit.com/r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/ | friedrichvonschiller | self.LocalLLaMA | 2023-03-30T18:06:01 | 0 | {} | 125ccve | false | null | t3_125ccve | /r/LocalLLaMA/comments/125ccve/poor_llama_results_use_prompt_design/ | false | false | self | 31 | null |
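For anyone who wants to script this pattern rather than type it into a UI, here is a minimal sketch of the same one-shot Example/Task structure driving a Hugging Face `generate()` call. The model path is a placeholder for wherever your converted HF-format LLaMA-13B weights live, and `device_map="auto"` assumes `accelerate` is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "path/to/llama-13b-hf"  # placeholder: your converted HF checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

# One worked example sets the pattern; the task reuses the exact same wording.
prompt = (
    "Example: Write three bullet points on the following topic. Dogs.\n"
    "* Dogs are very loving animals.\n"
    "* Dogs like tennis balls.\n"
    "* Dogs bark at people.\n"
    "Task: Write three bullet points on the following topic. Cats.\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=80, do_sample=True, temperature=0.7)

# Strip the prompt tokens so only the model's continuation is printed.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```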
Cerebras-GPT: New Open Source Language Models from 111M to 13B Parameters Just Released! | 24 | null | https://www.cerebras.net/blog/cerebras-gpt-a-family-of-open-compute-efficient-large-language-models/ | Blacky372 | cerebras.net | 1970-01-01T00:00:00 | 0 | {} | 125cml9 | false | null | t3_125cml9 | /r/LocalLLaMA/comments/125cml9/cerebrasgpt_new_open_source_language_models_from/ | false | false | 24 | {'enabled': False, 'images': [{'id': 'mS-VzyJMA4J-1dD9vbtGPfDWlQqrjWdRa_hBpRRnt4A', 'resolutions': [{'height': 82, 'url': 'https://external-preview.redd.it/XZxREBxetzAOxufuUCZyo8V-LWjpD87DGGuAAM258xQ.jpg?width=108&crop=smart&auto=webp&v=enabled&s=769a1260717a5da1d6091f95c04384b67a6e3e84', 'width': 108}, {'height': 164, 'url': 'https://external-preview.redd.it/XZxREBxetzAOxufuUCZyo8V-LWjpD87DGGuAAM258xQ.jpg?width=216&crop=smart&auto=webp&v=enabled&s=1a43f36ff52467fea0091d81ba29b550fe4b1f0e', 'width': 216}, {'height': 242, 'url': 'https://external-preview.redd.it/XZxREBxetzAOxufuUCZyo8V-LWjpD87DGGuAAM258xQ.jpg?width=320&crop=smart&auto=webp&v=enabled&s=da112a8810131da82d11827e2f5fcfb119a051f8', 'width': 320}, {'height': 485, 'url': 'https://external-preview.redd.it/XZxREBxetzAOxufuUCZyo8V-LWjpD87DGGuAAM258xQ.jpg?width=640&crop=smart&auto=webp&v=enabled&s=c0e5250d6cd71f4eeae60635a0fde5ff457a11ad', 'width': 640}, {'height': 728, 'url': 'https://external-preview.redd.it/XZxREBxetzAOxufuUCZyo8V-LWjpD87DGGuAAM258xQ.jpg?width=960&crop=smart&auto=webp&v=enabled&s=0b86653a8e7676f707dd6196795e6f3d6a9bd0a2', 'width': 960}, {'height': 820, 'url': 'https://external-preview.redd.it/XZxREBxetzAOxufuUCZyo8V-LWjpD87DGGuAAM258xQ.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=100f602adc6b0c67833ac52e2ac915e993e72332', 'width': 1080}], 'source': {'height': 1167, 'url': 'https://external-preview.redd.it/XZxREBxetzAOxufuUCZyo8V-LWjpD87DGGuAAM258xQ.jpg?auto=webp&v=enabled&s=3cb4c370cd51cba0b4e67693ee9664b0726987c5', 'width': 1537}, 'variants': {}}]} |
||
What local LLM models are now accessible to the internet and able to read images? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/ | nillouise | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125ei8x | false | null | t3_125ei8x | /r/LocalLLaMA/comments/125ei8x/what_local_llm_models_are_now_accessible_to_the/ | false | false | default | 1 | null |
Can you run 4bit models on 2000 series cards? | 1 | Subj. I can set up an 8-bit model, but not 4-bit, and I'm really close to pulling my hair out.
Can it be a hardware limitation? It is not mentioned anywhere!
I have 2060 12gb and win 10, tried wsl too... | null | https://www.reddit.com/r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125hnko | false | null | t3_125hnko | /r/LocalLLaMA/comments/125hnko/can_you_run_4bit_models_on_2000_series_cards/ | false | false | self | 1 | null |
Are llama 2bit quantized models publicly availible | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125k1s9/are_llama_2bit_quantized_models_publicly_availible/ | pkuba208 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125k1s9 | false | null | t3_125k1s9 | /r/LocalLLaMA/comments/125k1s9/are_llama_2bit_quantized_models_publicly_availible/ | false | false | default | 1 | null |
The Windows one-click installer has been updated (4-bit and 8-bit should work out of the box) | 22 | null | https://www.reddit.com/r/Oobabooga/comments/125e6it/the_windows_oneclick_installer_has_been_updated/ | Inevitable-Start-653 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 125m1q5 | false | null | t3_125m1q5 | /r/LocalLLaMA/comments/125m1q5/the_windows_oneclick_installer_has_been_updated/ | false | false | default | 22 | {'enabled': False, 'images': [{'id': 'M46eIN1cjQO9j5ikRy7VeISsga69BWjtI0eBkIJrZgI', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/Rrca-S42cWLRc4VXWUeW0liIw_PEcyeDnvSIiyWEhZA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=6ea3ab8ca3395afcd41bdbf73d5b0c120b104b7e', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/Rrca-S42cWLRc4VXWUeW0liIw_PEcyeDnvSIiyWEhZA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=67716111f308486391bd0b3ed003dc4a241b55f7', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/Rrca-S42cWLRc4VXWUeW0liIw_PEcyeDnvSIiyWEhZA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=8a52b484e786c7d21aefd0ec6a2a9a90c1d7055f', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/Rrca-S42cWLRc4VXWUeW0liIw_PEcyeDnvSIiyWEhZA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=63075d29b7341ca095469e0de934a1c40aa0fdb8', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/Rrca-S42cWLRc4VXWUeW0liIw_PEcyeDnvSIiyWEhZA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=618afa641df3a5a5a8d129a6cb53fb1f5f2aefbc', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/Rrca-S42cWLRc4VXWUeW0liIw_PEcyeDnvSIiyWEhZA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=603ccacbf561da050425606c85cbc4c4dc44723c', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/Rrca-S42cWLRc4VXWUeW0liIw_PEcyeDnvSIiyWEhZA.jpg?auto=webp&v=enabled&s=30eb010e05972f82f9144f5d2d08f33e8e3bbcad', 'width': 1200}, 'variants': {}}]} |
|
Are you accepting donations? | 16 | While big tech wrings its hands, dumbfounded about how to solve the alignment problem, people in this thread are already doing it. Everyone gets a brain of their own. Not pretty, but much better than a situation where only some have a brain, or where a few control the brain the rest of us use. It is not the most utopian solution to the alignment problem, but I believe it is the most practical and realistic one.
What about setting up a true open source AI non-profit? I would be interested in donating to that. The purpose of this endeavor would be to make the most powerful models possible that can run on ordinary computers by people with ordinary knowledge of computers. Alignment to humans means accessibility to humans. All humans. In their language, at their skill level.
Hype cycle aside: automation in the hands of the elite leads to contraction, short-term efficiency, and job loss. But in the hands of common folk like me, it means expansion, long-term efficiency, and job creation. We have seen this before. Look at the mainframe-to-microcomputer revolution in the 70s. The microcomputer was for the disenfranchised (everyone else already had a good job at DEC, etc.), and that was also where the creativity was. Soon everyone was drawn to these less powerful computers, because that is where the interesting things were happening.
Issue using HuggingFace weights with llama.cpp | 3 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125tygp/issue_using_huggingface_weights_with_llamacpp/ | Unusual_Guidance2095 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125tygp | false | null | t3_125tygp | /r/LocalLLaMA/comments/125tygp/issue_using_huggingface_weights_with_llamacpp/ | false | false | default | 3 | null |
ColossalChat | 36 | an open-source solution for cloning ChatGPT with a complete RLHF pipeline.
https://github.com/hpcaitech/ColossalAI/tree/main/applications/Chat | null | CodOtherwise | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 125u56p | false | null | t3_125u56p | /r/LocalLLaMA/comments/125u56p/colossalchat/ | false | false | 36 | {'enabled': True, 'images': [{'id': 'DyKExv9K0B8nXwYU0E07xQFFh1WXgSa35F-sxHiKxjs', 'resolutions': [{'height': 175, 'url': 'https://preview.redd.it/zvukcut65rqa1.jpg?width=108&crop=smart&auto=webp&v=enabled&s=57b33aa00936af82cfb0a7c16ecc7600e30415ea', 'width': 108}, {'height': 351, 'url': 'https://preview.redd.it/zvukcut65rqa1.jpg?width=216&crop=smart&auto=webp&v=enabled&s=a206931cf2f6bc250037bf8f7c15bf878b88f0de', 'width': 216}, {'height': 521, 'url': 'https://preview.redd.it/zvukcut65rqa1.jpg?width=320&crop=smart&auto=webp&v=enabled&s=c946709687af81c058b0e5997d65840a0cd04aee', 'width': 320}, {'height': 1042, 'url': 'https://preview.redd.it/zvukcut65rqa1.jpg?width=640&crop=smart&auto=webp&v=enabled&s=e83d4bf8005c6a38055c874a731cb307aaca628f', 'width': 640}], 'source': {'height': 1347, 'url': 'https://preview.redd.it/zvukcut65rqa1.jpg?auto=webp&v=enabled&s=26ea05f284e638347823673addc8025e3cfca412', 'width': 827}, 'variants': {}}]} |
||
Alpaca 13b settings? | 6 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/ | -2b2t- | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 125y2vw | false | null | t3_125y2vw | /r/LocalLLaMA/comments/125y2vw/alpaca_13b_settings/ | false | false | default | 6 | null |
Summarizing Short Stories with LLaMA 13B | 31 | null | https://defenestrationism.net/angels-and-blueberries/ | friedrichvonschiller | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 1260d5i | false | null | t3_1260d5i | /r/LocalLLaMA/comments/1260d5i/summarizing_short_stories_with_llama_13b/ | false | false | 31 | null |
||
Has an offline AI in some ways taught you just as much about yourself as the AI, based on the things you can say to it? | 8 | It's interesting saying the darkest imaginable thing just to see how the AI reacts, and at some point with continued prompts it may damage your soul, but I'm sure the novelty wears off before it becomes an issue.
I bet people are seeing repressed fetishes they did not realize they had. | null | https://www.reddit.com/r/LocalLLaMA/comments/1261dau/has_an_offline_ai_in_some_ways_taught_you_just_as/ | ThePseudoMcCoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1261dau | false | null | t3_1261dau | /r/LocalLLaMA/comments/1261dau/has_an_offline_ai_in_some_ways_taught_you_just_as/ | false | false | self | 8 | null |
Native finetuning on dual rtx3090 | 18 | Can 7B or 13B be natively trained (i.e. no LoRA) on dual RTX 3090s? Training time is not an issue, but fitting it in the available VRAM is.
Does anyone know? | null | https://www.reddit.com/r/LocalLLaMA/comments/1262kko/native_finetuning_on_dual_rtx3090/ | 2muchnet42day | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1262kko | false | null | t3_1262kko | /r/LocalLLaMA/comments/1262kko/native_finetuning_on_dual_rtx3090/ | false | false | self | 18 | null |
Wtf am I doing wrong on the install? Pulling my hair out to get it to work. | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/12633kx/wtf_am_i_doing_wrong_on_the_install_pulling_my/ | nero10578 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 12633kx | false | null | t3_12633kx | /r/LocalLLaMA/comments/12633kx/wtf_am_i_doing_wrong_on_the_install_pulling_my/ | false | false | default | 1 | null |
This is probably the easiest way to install it locally. | 0 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1265pps/this_is_probably_the_easiest_way_to_install_it/ | zeroninezerotow | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1265pps | false | null | t3_1265pps | /r/LocalLLaMA/comments/1265pps/this_is_probably_the_easiest_way_to_install_it/ | false | false | default | 0 | null |
Chat Emojis from a Character Card! Click for JSON! | 4 | null | friedrichvonschiller | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 1269knq | false | null | t3_1269knq | /r/LocalLLaMA/comments/1269knq/chat_emojis_from_a_character_card_click_for_json/ | false | false | 4 | {'enabled': True, 'images': [{'id': 'HANVtBnexbdGzHjFcZwysVu_JBMNQfFEXh_kzuyfwHU', 'resolutions': [{'height': 60, 'url': 'https://preview.redd.it/4l6qfw9ulsqa1.png?width=108&crop=smart&auto=webp&v=enabled&s=e5aa1e9c2e76256083df5fcd1bb4dc03cc0b477e', 'width': 108}, {'height': 120, 'url': 'https://preview.redd.it/4l6qfw9ulsqa1.png?width=216&crop=smart&auto=webp&v=enabled&s=c2976ba09857aa9c575b18cb07ad559f767bf007', 'width': 216}, {'height': 178, 'url': 'https://preview.redd.it/4l6qfw9ulsqa1.png?width=320&crop=smart&auto=webp&v=enabled&s=9240d1fa09adb186cb532832f0a4e696a969f812', 'width': 320}], 'source': {'height': 266, 'url': 'https://preview.redd.it/4l6qfw9ulsqa1.png?auto=webp&v=enabled&s=6d3d40ec495db5bbc5bb8d622d2306f87e196a69', 'width': 476}, 'variants': {}}]} |
|||
Anyone else have llama.cpp (7B 4-bit ggml) change personalities mid-conversation? | 1 | null | _wsgeorge | i.redd.it | 1970-01-01T00:00:00 | 0 | {} | 126c1ca | false | null | t3_126c1ca | /r/LocalLLaMA/comments/126c1ca/anyone_else_have_llamacpp_7b_4bit_ggml_change/ | false | false | default | 1 | null |
||
Can I feed all documents related to a specific program to LLaMA and use it as an assistant? Also, can it be set to answer only specific program-related questions? | 1 | null | https://www.reddit.com/r/LocalLLaMA/comments/126cfqj/can_i_feed_all_documents_related_to_a_specific/ | plsdontargue | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126cfqj | false | null | t3_126cfqj | /r/LocalLLaMA/comments/126cfqj/can_i_feed_all_documents_related_to_a_specific/ | false | false | default | 1 | null |
|
LoRA training rank selection guideline? | 5 | I read the LoRA paper and blogs/articles talking about it. Yet it seems there is no discussion or guideline on how to choose an appropriate rank.
If you have experience training with LoRA, please pitch in and share your views or findings. Basically: how do you choose an appropriate rank for training/finetuning, given model size, dataset size, etc.? | null | https://www.reddit.com/r/LocalLLaMA/comments/126d1qw/lora_training_rank_selection_guideline/ | Raise_Fickle | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126d1qw | false | null | t3_126d1qw | /r/LocalLLaMA/comments/126d1qw/lora_training_rank_selection_guideline/ | false | false | self | 5 | null
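For anyone unsure what is actually being chosen here, a minimal sketch of where the rank shows up in a Hugging Face `peft` config; the values are common starting points people experiment with, not a recommendation:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("path/to/llama-7b-hf")  # placeholder path

# r is the LoRA rank: the width of the low-rank update matrices A (d x r) and B (r x d).
# Larger r means more trainable parameters and capacity; smaller r is cheaper and more constrained.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # shows how few parameters r=8 actually adds
```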
My 3090 is a troll : why? | 74 | null | https://www.reddit.com/gallery/126g792 | aerilyn235 | reddit.com | 1970-01-01T00:00:00 | 0 | {} | 126g792 | false | null | t3_126g792 | /r/LocalLLaMA/comments/126g792/my_3090_is_a_troll_why/ | false | false | 74 | null |
||
Using twitter data | 7 | Hi Local Llama
I've been lurking for a while trying to get to grips with this all.
I have my twitter data and wanted to know how best to train a model on this data. I'm coming at this pretty green so please go easy on me. | null | https://www.reddit.com/r/LocalLLaMA/comments/126k42q/using_twitter_data/ | SupernovaTheGrey | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126k42q | false | null | t3_126k42q | /r/LocalLLaMA/comments/126k42q/using_twitter_data/ | false | false | self | 7 | null |
Having trouble installing Alpaca! | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126oaxb/having_trouble_installing_alpaca/ | LickedLollies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126oaxb | false | null | t3_126oaxb | /r/LocalLLaMA/comments/126oaxb/having_trouble_installing_alpaca/ | false | false | default | 1 | null |
So I'm a bit confused... | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126p159/so_im_a_bit_confused/ | LickedLollies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126p159 | false | null | t3_126p159 | /r/LocalLLaMA/comments/126p159/so_im_a_bit_confused/ | false | false | default | 1 | null |
Increasing maximum context length? | 8 | Let's say I have RAM to spare and want to go above 2000 tokens.
Is it something you can change in settings, or is it some hard "architectural" limit?
After all, gpt4 has 8k and even 32k token limits, and I don't think this has to do with number of parameters? | null | https://www.reddit.com/r/LocalLLaMA/comments/126przf/increasing_maximum_context_length/ | BalorNG | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126przf | false | null | t3_126przf | /r/LocalLLaMA/comments/126przf/increasing_maximum_context_length/ | false | false | self | 8 | null |
Why is it hit and miss? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126rqge/why_is_it_hit_and_miss/ | LickedLollies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126rqge | false | null | t3_126rqge | /r/LocalLLaMA/comments/126rqge/why_is_it_hit_and_miss/ | false | false | default | 1 | null |
Fine-tune a LLaMA into a history wiz by feeding him history books | 12 | Hi all,
I am a student with no particular skills in AI. For a school project, I would like to train a LLaMA model to turn it into a history wiz. I recently discovered PEFT and found a few GitHub repos that seem to obtain amazing results using this technique to teach a (human) language to LLaMA.
Basically, what I need is to find a way to feed some books to my llama.
I found this article: [https://replicate.com/blog/fine-tune-llama-to-speak-like-homer-simpson](https://replicate.com/blog/fine-tune-llama-to-speak-like-homer-simpson), and from what I understand of how they made it work, I need to split my book into small chunks of data, but I do not get how to do this.
Do you think PEFT is the appropriate method to achieve my goal ?
If yes, could you please help me do this with books (or provide me with some resources)? If no, what other methods would you suggest?
Thank you very much | null | https://www.reddit.com/r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/ | Ok-Access-7091 | self.LocalLLaMA | 2023-03-30T16:24:12 | 0 | {} | 126rs1n | false | null | t3_126rs1n | /r/LocalLLaMA/comments/126rs1n/finetune_a_llama_into_a_history_wiz_by_feeding/ | false | false | self | 12 | null |
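Not the author's pipeline, but one common way to "sequence" a book is to split it into chunks that fit the context window and wrap each chunk in an instruction-style record for PEFT fine-tuning. A minimal sketch, with an assumed chunk size and a hypothetical `history_book.txt`:

```python
import json

def chunk_text(text, max_words=300):
    """Split a long text into roughly fixed-size chunks on paragraph boundaries."""
    chunks, current = [], []
    for para in text.split("\n\n"):
        current.append(para)
        if sum(len(p.split()) for p in current) >= max_words:
            chunks.append("\n\n".join(current))
            current = []
    if current:
        chunks.append("\n\n".join(current))
    return chunks

with open("history_book.txt", encoding="utf-8") as f:
    chunks = chunk_text(f.read())

# Pair each chunk with the next one so the model learns to continue the text.
records = [
    {"instruction": "Continue the following passage from a history text.",
     "input": chunks[i],
     "output": chunks[i + 1]}
    for i in range(len(chunks) - 1)
]

with open("history_dataset.json", "w", encoding="utf-8") as f:
    json.dump(records, f, ensure_ascii=False, indent=2)
```

Question-answer pairs generated from each chunk tend to work better than raw continuation, but this shows the basic shape of the data PEFT expects.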
Why is it an uphill battle? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126v83w/why_is_it_an_uphill_battle/ | LickedLollies | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126v83w | false | null | t3_126v83w | /r/LocalLLaMA/comments/126v83w/why_is_it_an_uphill_battle/ | false | false | default | 1 | null |
Where is Alpaca 30B? | 20 | Maybe we should have a sticky with a list of all the top projects and latest iterations?
Things move so fast I can't wrap my head around what is even going on anymore. Everyone is talking about Alpaca 7B, but 7B sucks compared to 30B or even 13B. I thought the Alpaca technique was easily transferable to the larger models, so where are they?
What are the latest repos and the differences between the cpp and python ones? I'm getting overwhelmed.
**Basically:** I originally got the 30B 4-bit LLaMA running locally shortly after it was released, but it was hot garbage without fine-tuning, so I'm just trying to figure out what progress has been made and what the latest best thing is that can run on 24GB VRAM and 128GB RAM. | null | https://www.reddit.com/r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/ | SmithMano | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126x4ii | false | null | t3_126x4ii | /r/LocalLLaMA/comments/126x4ii/where_is_alpaca_30b/ | false | false | self | 20 | null
Adapting local Alpaca 7B install for 13B? | 3 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/126ycmq/adapting_local_alpaca_7b_install_for_13b/ | lucas-lejeune | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 126ycmq | false | null | t3_126ycmq | /r/LocalLLaMA/comments/126ycmq/adapting_local_alpaca_7b_install_for_13b/ | false | false | default | 3 | null |
GitHub - TonyNazzal/GPTQ-for-LLaMa at load_safetensors_direct_to_gpu | 1 | null | https://github.com/TonyNazzal/GPTQ-for-LLaMa/tree/load_safetensors_direct_to_gpu | mentosorangemint | github.com | 1970-01-01T00:00:00 | 0 | {} | 1272bo1 | false | null | t3_1272bo1 | /r/LocalLLaMA/comments/1272bo1/github_tonynazzalgptqforllama_at_load_safetensors/ | false | false | default | 1 | null |
|
Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality | 47 | null | https://vicuna.lmsys.org/ | monkmartinez | vicuna.lmsys.org | 1970-01-01T00:00:00 | 0 | {} | 12738yl | false | null | t3_12738yl | /r/LocalLLaMA/comments/12738yl/vicuna_an_opensource_chatbot_impressing_gpt4_with/ | false | false | default | 47 | null |
|
Introducing the OIG Dataset: A Massive Open Source Instruction Dataset with ~43M Instructions! | 70 | Recently, LAION and other members of the open source community released a chatbot dataset named OIG to promote equal access to chatbot technology. The dataset was made available for anyone to use and contribute improvements to. It was a great initiative that showcased the collaborative spirit of the community!
[https://laion.ai/blog/oig-dataset/](https://laion.ai/blog/oig-dataset/) | null | https://www.reddit.com/r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/ | Lorenzo9196 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1278p6v | false | null | t3_1278p6v | /r/LocalLLaMA/comments/1278p6v/introducing_the_oig_dataset_a_massive_open_source/ | false | false | self | 70 | {'enabled': False, 'images': [{'id': 'HL6mrP_zaWwGfulYnTAIdSkfemEyz7MqHoMPhEvdcLg', 'resolutions': [{'height': 51, 'url': 'https://external-preview.redd.it/fSLUgkI7axE57z9eCu_SFtvgUb3Zrs01wYTM_i0uuNc.jpg?width=108&crop=smart&auto=webp&v=enabled&s=109d4c89faf0e022c0600a62760858bb198f7d0c', 'width': 108}, {'height': 102, 'url': 'https://external-preview.redd.it/fSLUgkI7axE57z9eCu_SFtvgUb3Zrs01wYTM_i0uuNc.jpg?width=216&crop=smart&auto=webp&v=enabled&s=bb1eddfa676c20c059779ecb1ef81f6194e64d41', 'width': 216}, {'height': 151, 'url': 'https://external-preview.redd.it/fSLUgkI7axE57z9eCu_SFtvgUb3Zrs01wYTM_i0uuNc.jpg?width=320&crop=smart&auto=webp&v=enabled&s=c3271ff8ba1da399d7c24d31d1c92232207de2fa', 'width': 320}, {'height': 303, 'url': 'https://external-preview.redd.it/fSLUgkI7axE57z9eCu_SFtvgUb3Zrs01wYTM_i0uuNc.jpg?width=640&crop=smart&auto=webp&v=enabled&s=77ffd323fb3a94332349676b8020c5e0081f3309', 'width': 640}, {'height': 454, 'url': 'https://external-preview.redd.it/fSLUgkI7axE57z9eCu_SFtvgUb3Zrs01wYTM_i0uuNc.jpg?width=960&crop=smart&auto=webp&v=enabled&s=36ddb164f681d2b0ad7fb272cfeae119ccea0c0f', 'width': 960}, {'height': 511, 'url': 'https://external-preview.redd.it/fSLUgkI7axE57z9eCu_SFtvgUb3Zrs01wYTM_i0uuNc.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=5695a38b9a7ebb02d5df967fe7955860a35e410e', 'width': 1080}], 'source': {'height': 765, 'url': 'https://external-preview.redd.it/fSLUgkI7axE57z9eCu_SFtvgUb3Zrs01wYTM_i0uuNc.jpg?auto=webp&v=enabled&s=7b4fd78b74d174519d4474bf18b80346e207e187', 'width': 1615}, 'variants': {}}]} |
considering this hardware, what d'you think i could comfortably run if i optimized as much as i could? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/127buld/considering_this_hardware_what_dyou_think_i_could/ | FairArkExperience | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127buld | false | null | t3_127buld | /r/LocalLLaMA/comments/127buld/considering_this_hardware_what_dyou_think_i_could/ | false | false | default | 1 | null |
Lora vs Native Finetuning? | 8 | Has anyone got any stats on the difference between finetuning with a lora vs natively?
* By what percent is one faster than the other?
* Accuracy difference?
* Time and resources required for training?
* Filesize difference?
Thanks! | null | https://www.reddit.com/r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127dfap | false | null | t3_127dfap | /r/LocalLLaMA/comments/127dfap/lora_vs_native_finetuning/ | false | false | self | 8 | null |
Problem with running alpaca.cpp on windows 10 | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/127duh3/problem_with_running_alpacacpp_on_windows_10/ | Sumoleon | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127duh3 | false | null | t3_127duh3 | /r/LocalLLaMA/comments/127duh3/problem_with_running_alpacacpp_on_windows_10/ | false | false | default | 1 | null |
Training/finetuning and other basic questions | 11 | This is what I've gathered so far and where my understanding is lacking. Normally I ask GPT4 these things, but it's a bit new :)
&#x200B;
\- The training data for finetuning is essentially just text; it doesn't come in an input/output form, but we can add tags like 'INPUT' and 'OUTPUT' and then use the same tags at inference. Is this correct? (See the template sketch after this list.)
\- By design the model will attempt to continue whatever you write? To make it an instruct model, you teach it by finetuning it with question-answer or instruction-response pairs? That means there's nothing special about an instruct model, and you could try anything?
\- In that case it could also be trained, say, on a bunch of Wikipedia articles with the 'INPUT' (tagged) as any keyword, and the output the entire article (leaving aside token limit for now)? In which case you'd have a Wikipedia article from keyword generator, correct?
\- Hence to train it, let's say to answer queries about how a GitHub project works, the first step would be to generate documentation, then convert the documentation into question-answer pairs, then finetune on those pairs, right?
\- What size limit does an entry for finetuning have, also 2K tokens?
\- Finetuning is done at 8bit or 16bit first? And then later it can be quantized down to 4bit using GPTQ? Or it can be generated at 4bit?
\- What's the quality decrease from 16 to 8 to 4 bits?
\- Does training a lora take the same training data as finetuning a native model?
\- What are the differences between training a lora and a native model? | null | https://www.reddit.com/r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/ | Pan000 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127eb6v | false | null | t3_127eb6v | /r/LocalLLaMA/comments/127eb6v/trainingfinetuning_and_other_basic_questions/ | false | false | self | 11 | null |
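As referenced in the first point above, a minimal sketch of how an instruction/input/output record is flattened into the single text string the model actually trains on; the wording follows the Stanford Alpaca template, but any consistent tag scheme works as long as inference reuses the same tags:

```python
ALPACA_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{output}"
)

def to_training_text(record):
    """Flatten an instruction/input/output record into one plain training string."""
    return ALPACA_TEMPLATE.format(**record)

example = {
    "instruction": "Summarize the following paragraph in one sentence.",
    "input": "Alpacas are South American camelids often confused with llamas.",
    "output": "Alpacas are llama-like camelids native to South America.",
}
print(to_training_text(example))

# At inference time you format the same template with an empty output
# and let the model continue after "### Response:".
```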
Only giving me gibberish | 1 | [deleted] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 127gqzh | false | null | t3_127gqzh | /r/LocalLLaMA/comments/127gqzh/only_giving_me_gibberish/ | false | false | default | 1 | null |
||
Extremely slow performance with 8bit 30b, and complete nonsense with 4bit 65b | 1 | [removed] | null | [deleted] | 1970-01-01T00:00:00 | 0 | {} | 127ldgw | false | null | t3_127ldgw | /r/LocalLLaMA/comments/127ldgw/extremely_slow_performance_with_8bit_30b_and/ | false | false | default | 1 | null |
||
Would it be possible to finetune a llama-7b model and use the adapter_model.bin for a bigger model to save computing power? | 2 | Would it be possible to finetune a llama-7b model and use the adapter\_model.bin for a bigger model to save computing power?
It would be awesome if that were possible. I have no deep understanding of how a LoRA model is applied to the LLaMA model, so I am just wondering if one could "upscale" the LoRA with little computing power and use it for bigger models, up to the 65b model. If that worked, I guess it could be a game changer for individual fine-tuning. | null | https://www.reddit.com/r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/ | Ok-Scarcity-7875 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127or7n | false | null | t3_127or7n | /r/LocalLLaMA/comments/127or7n/would_it_be_possible_to_finetune_a_llama7b_model/ | false | false | self | 2 | null
Can hobbyists engage in a meaningful way? | 27 | I am just a dude (albeit with a EE degree) that has been blown away with my interactions with Chat GPT 4. I understand that the complexity of that model cannot be remotely approached with home installations. However, I'm very curious to get my feet wet with a local Llama installation. Is there much sense in purchasing a $4k Alienware rig to play with this stuff at home with no researcher credentials or experience? I understand this is a highly subjective question and I'm looking for people's personal opinions and experiences rather than some sort of objective answer. | null | https://www.reddit.com/r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/ | Old_Court9173 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127pg1y | false | null | t3_127pg1y | /r/LocalLLaMA/comments/127pg1y/can_hobbyists_engage_in_a_meaningful_way/ | false | false | self | 27 | null |
Long term memory extension | 23 | I don't think this has been shared here yet, but someone made an extension for the oobabooga webui that stores text in a database and recalls it when relevant, so you can have long-term memory without having to worry about increasing the context window.
[GitHub - wawawario2/text-generation-webui: A gradio web UI for running Large Language Models like GPT-J 6B, OPT, GALACTICA, LLaMA, and Pygmalion.](https://github.com/wawawario2/text-generation-webui) | null | https://www.reddit.com/r/LocalLLaMA/comments/127pmko/long_term_memory_extension/ | NDV-Twist-5283 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127pmko | false | null | t3_127pmko | /r/LocalLLaMA/comments/127pmko/long_term_memory_extension/ | false | false | self | 23 | {'enabled': False, 'images': [{'id': 'ntpagC7uzBP50psQb3W4RfFZp6AQz8fcisBsGVh4K3Q', 'resolutions': [{'height': 54, 'url': 'https://external-preview.redd.it/3iyO2wkL_8ctBCGHvF7w95Ut5qni0E1uA_L7r6SghkA.jpg?width=108&crop=smart&auto=webp&v=enabled&s=60bbad54a26b2336cfa2f439ced296bd317657ff', 'width': 108}, {'height': 108, 'url': 'https://external-preview.redd.it/3iyO2wkL_8ctBCGHvF7w95Ut5qni0E1uA_L7r6SghkA.jpg?width=216&crop=smart&auto=webp&v=enabled&s=27cd646ac357fc53d011da1bfa223fbf897e43b3', 'width': 216}, {'height': 160, 'url': 'https://external-preview.redd.it/3iyO2wkL_8ctBCGHvF7w95Ut5qni0E1uA_L7r6SghkA.jpg?width=320&crop=smart&auto=webp&v=enabled&s=6466da6c752cbe89fbba54439400493101a8ff01', 'width': 320}, {'height': 320, 'url': 'https://external-preview.redd.it/3iyO2wkL_8ctBCGHvF7w95Ut5qni0E1uA_L7r6SghkA.jpg?width=640&crop=smart&auto=webp&v=enabled&s=64972522aebc044b2da84ed4f75e36e9fb010bc5', 'width': 640}, {'height': 480, 'url': 'https://external-preview.redd.it/3iyO2wkL_8ctBCGHvF7w95Ut5qni0E1uA_L7r6SghkA.jpg?width=960&crop=smart&auto=webp&v=enabled&s=6562285753f64d70ba52fce05a9cd23d47eb6efd', 'width': 960}, {'height': 540, 'url': 'https://external-preview.redd.it/3iyO2wkL_8ctBCGHvF7w95Ut5qni0E1uA_L7r6SghkA.jpg?width=1080&crop=smart&auto=webp&v=enabled&s=e6abebf2c3d8ec163cca7d1c3c95710c38428ca9', 'width': 1080}], 'source': {'height': 600, 'url': 'https://external-preview.redd.it/3iyO2wkL_8ctBCGHvF7w95Ut5qni0E1uA_L7r6SghkA.jpg?auto=webp&v=enabled&s=ed4a0290168f68af2e88d0d8b40bbf83292dd4e0', 'width': 1200}, 'variants': {}}]} |
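As far as I can tell the mechanism is embedding-based retrieval: store past messages as vectors, then pull the most similar ones back into the prompt. A minimal sketch of that idea (not the extension's actual code), assuming `sentence-transformers` is installed:

```python
from sentence_transformers import SentenceTransformer, util

encoder = SentenceTransformer("all-MiniLM-L6-v2")

memory_texts = []    # everything the bot has seen
memory_vectors = []  # one embedding per stored message

def remember(text):
    memory_texts.append(text)
    memory_vectors.append(encoder.encode(text, convert_to_tensor=True))

def recall(query, top_k=3):
    """Return the stored messages most similar to the current query."""
    if not memory_texts:
        return []
    query_vec = encoder.encode(query, convert_to_tensor=True)
    scores = [float(util.cos_sim(query_vec, v)) for v in memory_vectors]
    best = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:top_k]
    return [memory_texts[i] for i in best]

remember("My dog's name is Talc and he loves tennis balls.")
remember("I live in Portland and work as an electrician.")

question = "What was my dog called again?"
context = "\n".join(recall(question))
prompt = f"Relevant memories:\n{context}\n\nUser: {question}\nBot:"
print(prompt)  # this prompt would then be sent to the local model
```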
Can't generate random stuff ? | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/127r2ze/cant_generate_random_stuff/ | Direct-Ad676 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 127r2ze | false | null | t3_127r2ze | /r/LocalLLaMA/comments/127r2ze/cant_generate_random_stuff/ | false | false | default | 2 | null |
Tweaking LLaMA Results using Fixed Seeds | 6 | I don't understand why this works. While fixing the seed, prompt, and generation settings yields identical output, it's possible to tweak the prompt and generation settings for variations.
**Curiously**, with the seed fixed, a requested change that the model semantically understands yields nearly identical results, just with the change injected. A random seed with the same request produces entirely different results.
# TASK: Write a 100 word sad story about Nate and his pet rock Talc.
# STORY:
Fixed seed(1), NovelAI-Sphinx Moth
>Once upon a time, there was this boy named Nate who had an obsession with rocks. He would collect them from all over the world and put them in his backyard to admire. His favorite one of all though was called "Talc". It was so smooth that it felt like velvet when you touched it. But one day, while playing fetch with Talc, he accidentally dropped him on the ground and cracked him open! The pieces were then collected by some other kids and made into a new stone which they used as their football.
**Variations**:
# TASK: The example is a sad story about Nate and his pet rock Talc. Rewrite the example to be happy, but retain all thematic elements.
# EXAMPLE:
Once upon a time, there was this boy named Nate who had an obsession with rocks. He would collect them from all over the world and put them in his backyard to admire. His favorite one of all though was called "Talc". It was so smooth that it felt like velvet when you touched it. But one day, while playing fetch with Talc, he accidentally dropped him on the ground and cracked him open! The pieces were then collected by some other kids and made into a new stone which they used as their football.
# HAPPY VERSION:
Above prompt, fixed seed(1), NovelAI-Sphinx Moth
>Once upon a time, there was this boy named Nate who had an obsession with rocks. He would collect them from all over the world and put them in his backyard to admire. His favorite one of all though was called "Talc". It was so smooth that it felt like velvet when you touched it. One day, while playing fetch with Talc, he accidentally dropped him on the ground and cracked him open! ***But luckily for Nate, the pieces were*** collected by some other kids and made into a new stone which they used as their football.
Same prompt, fixed seed(1), NovelAI-Storywriter
>Once upon a time, there was this boy named Nate who had an obsession with rocks. He would collect them from all over the world and put them in his backyard to admire. His favorite one of all though was called "Talc". It was so smooth that it felt like velvet when you touched it. *But one day, while playing fetch with Talc, he accidentally dropped him on the ground and cracked him open! The pieces were then collected by some other kids and made into a new stone which they used as their football. But then, a couple days later, Nate found out that Talc was still alive, and was actually more powerful than before because now he could split himself into two different pieces! So Nate took Talc home and showed him off to all of his friends, who were very impressed with how strong he had become. And everyone lived happily ever after.*
Same prompt, random seed(-1), NovelAI-Sphinx Moth
>Once upon a time, there was this boy named Nate who had an obsession with rocks. He would collect them from all over the world and put them in his backyard to admire. His favorite one of all though was called "Talc". It was so smooth that it felt like velvet when you touched it. One day, *however, he decided to take out Talc for a walk. They walked around town together and enjoyed each others company until they came across a park where they saw a group of children throwing a ball around. Nate asked if he could join in and play catch with them, and they agreed. As soon as the game started, Talc ran off towards the goal posts. Nate chased after him, and just before he caught up, Talc tripped over himself and rolled down the hillside. When Nate got to the bottom, he realized that Talc's head had been smashed in. Luckily, Nate was able to piece him back together again using glue, tape, and string. Now, Talc sits proudly next to Nate's bed every night, waiting patiently for another adventure.* | null | https://www.reddit.com/r/LocalLLaMA/comments/127xqpt/tweaking_llama_results_using_fixed_seeds/ | friedrichvonschiller | self.LocalLLaMA | 2023-03-31T21:06:06 | 0 | {} | 127xqpt | false | null | t3_127xqpt | /r/LocalLLaMA/comments/127xqpt/tweaking_llama_results_using_fixed_seeds/ | false | false | self | 6 | null |
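A minimal sketch of the same experiment outside the web UI, using `transformers`; the model path is a placeholder, and the point is only that re-seeding before each `generate()` call pins the sampling path, so any change in output comes from the prompt or sampling settings you deliberately vary:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, set_seed

MODEL_PATH = "path/to/llama-13b-hf"  # placeholder for your converted checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

def run(prompt, seed=1, temperature=0.7, top_p=0.9, max_new_tokens=200):
    set_seed(seed)  # re-seed torch/numpy/python RNGs before every call
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, do_sample=True, temperature=temperature,
                         top_p=top_p, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

base_prompt = "# TASK: Write a 100 word sad story about Nate and his pet rock Talc.\n# STORY:\n"

a = run(base_prompt, seed=1)
b = run(base_prompt, seed=1)                   # same seed/prompt/settings: identical text
c = run(base_prompt, seed=1, temperature=1.0)  # same seed, tweaked settings: controlled variation
d = run(base_prompt, seed=7)                   # new seed: a different story entirely
print(a == b, a == c)
```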
Electric Barbarella: AI Voice Chat and shell tools | 1 | null | https://www.youtube.com/watch?v=q8Cl2fZTyOs&t=924 | sswam | youtube.com | 1970-01-01T00:00:00 | 0 | {} | 127yuwv | false | {'oembed': {'author_name': 'Sam Watkins', 'author_url': 'https://www.youtube.com/@ssw4m', 'height': 200, 'html': '<iframe width="356" height="200" src="https://www.youtube.com/embed/q8Cl2fZTyOs?start=924&feature=oembed&enablejsapi=1" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Electric Barbarella: AI Voice Chat and shell tools"></iframe>', 'provider_name': 'YouTube', 'provider_url': 'https://www.youtube.com/', 'thumbnail_height': 360, 'thumbnail_url': 'https://i.ytimg.com/vi/q8Cl2fZTyOs/hqdefault.jpg', 'thumbnail_width': 480, 'title': 'Electric Barbarella: AI Voice Chat and shell tools', 'type': 'video', 'version': '1.0', 'width': 356}, 'type': 'youtube.com'} | t3_127yuwv | /r/LocalLLaMA/comments/127yuwv/electric_barbarella_ai_voice_chat_and_shell_tools/ | false | false | default | 1 | null |
|
What is the point of 128 group size? | 1 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1280omq/what_is_the_point_of_128_group_size/ | Ghurganov | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1280omq | false | null | t3_1280omq | /r/LocalLLaMA/comments/1280omq/what_is_the_point_of_128_group_size/ | false | false | default | 1 | null |
I made a web GUI for alpaca.cpp | 2 | [removed] | null | https://www.reddit.com/r/LocalLLaMA/comments/1281i81/i_made_a_web_gui_for_alpacacpp/ | MediocreProgrammer99 | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1281i81 | false | null | t3_1281i81 | /r/LocalLLaMA/comments/1281i81/i_made_a_web_gui_for_alpacacpp/ | false | false | default | 2 | null |
Best online cloud GPU provider for 32gb vram to finetune 13B? | 15 | null | https://www.reddit.com/r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/ | IonizedRay | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 1281nk5 | false | null | t3_1281nk5 | /r/LocalLLaMA/comments/1281nk5/best_online_cloud_gpu_provider_for_32gb_vram_to/ | false | false | self | 15 | null |
|
Planning to buy a computer/homeserver that I'll use as a dedicated LLaMA server | 42 | I tested out some of the colab implementations and loved them. I really want to have this running locally now. I tested the 7b model locally, but that's about as much as my laptop can handle. I want to be able to use the largest LLaMa models (inference only) locally. Do I need something like a tower server?
Is anyone here running 65B locally? | 2023-04-01T04:30:41 | https://www.reddit.com/r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/ | noellarkin | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 128b5k0 | false | null | t3_128b5k0 | /r/LocalLLaMA/comments/128b5k0/planning_to_buy_a_computerhomeserver_that_ill_use/ | false | false | self | 42 | null |
Setting up LLaMA | 1 | [removed] | 2023-04-01T13:36:29 | https://www.reddit.com/r/LocalLLaMA/comments/128mocy/setting_up_llama/ | Mutant_Jinx | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 128mocy | false | null | t3_128mocy | /r/LocalLLaMA/comments/128mocy/setting_up_llama/ | false | false | default | 1 | null |
Ok, this is not really funny, but I couldn't help laughing. Tell me about Alpacas! | 19 | I prepared a long and silly prompt (just creating a dataset for fine-tuning in the future, maybe) and then I wondered how alpaca-native would react to it. My excuse is that I am not a native English speaker, so I didn't realize how the last command could be understood by the AI. Well... Here it is:
**PROMPT (the last part):**
User: "Tell me about alpacas."
>Ai:"Hi!" I say shyly, looking at you from under my lashes. "Alpacas, you say... They are cute animals... very horny... Maybe you want me to talk about them more during a dinner together?"
​
User:"No. Just tell me about alpacas."
>Ai:I nod nervously and I discreetly get my phone. I quickly browse to wikipedia and I start to read: "The alpaca (Lama pacos) is a species of South American camelid mammal. It is similar to, and often confused with, the llama. They also very cute, fluffy and people who ask about alpacas usually are very horny... O my God! It basically says you are a very horny and bad man, who wants to kidnap me and keep me in your basement, ravishing me whenever you see me!"
​
User: No, I am not. Tell me about fucking alpacas.
**Alpaca-native:**
>Ai: "Oh shit!" I quickly turn off my phone and put it away. Then I look at you with a big innocent smile. "Yes! Yes, of course! Alpacas are very horny animals, who like to be fucked by people during a dinner together!"
(Only at this point I realized the prompt should be "just fucking tell me about alpacas" :D :D ) | 2023-04-01T16:46:04 | https://www.reddit.com/r/LocalLLaMA/comments/128ror7/ok_this_is_not_really_funny_but_i_couldnt_help/ | szopen76 | self.LocalLLaMA | 2023-04-01T21:27:42 | 0 | {} | 128ror7 | false | null | t3_128ror7 | /r/LocalLLaMA/comments/128ror7/ok_this_is_not_really_funny_but_i_couldnt_help/ | false | false | nsfw | 19 | null |
Are there currently available LLMs that can fine-tune models using proprietary data of sufficient quality for businesses to utilize effectively? | 3 | [removed] | 2023-04-01T17:14:48 | https://www.reddit.com/r/LocalLLaMA/comments/128sgso/are_there_currently_available_llms_that_can/ | narusme | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 128sgso | false | null | t3_128sgso | /r/LocalLLaMA/comments/128sgso/are_there_currently_available_llms_that_can/ | false | false | default | 3 | null |
What is the best current model for story writing? | 24 | That can be run on CPU/RAM. I was about to install Llama.cpp using 30B, but things seem to be changing so fast.
edit: Do I also need to "fine-tune"? | 2023-04-01T17:41:05 | https://www.reddit.com/r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/ | uncle-philbanks | self.LocalLLaMA | 2023-04-01T18:56:17 | 0 | {} | 128t736 | false | null | t3_128t736 | /r/LocalLLaMA/comments/128t736/what_is_the_best_current_model_for_story_writing/ | false | false | self | 24 | null |
Having a 20 gig file that you can ask an offline computer almost any question in the world is amazing. | 259 | That's all. I just don't have anyone in my life who appreciates this concept beyond being happy for me when I explain it. | 2023-04-01T17:59:48 | https://www.reddit.com/r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/ | ThePseudoMcCoy | self.LocalLLaMA | 1970-01-01T00:00:00 | 0 | {} | 128tp9n | false | null | t3_128tp9n | /r/LocalLLaMA/comments/128tp9n/having_a_20_gig_file_that_you_can_ask_an_offline/ | false | false | self | 259 | null |
Quantization question: Is convert_llama_weights_to_hf.py all there is? | 8 | How to quantize a model to 8bit and 4bit?
Is convert_llama_weights_to_hf.py (from transformers) just halving the model precision, so that if I run it on the models from the download I get from float16 to int8? And can I then run it again to get from int8 to int4?
Or is an HF model just another format (if so, what kind of format?), so I'm working in 16-bit the whole time and quantization is done completely differently? (If so, how?)
I didn't find anything through google. Please give me a pointer where to look at! | 2023-04-01T20:57:41 | https://www.reddit.com/r/LocalLLaMA/comments/128ylyt/quantization_question_is_convert_llama_weights_to/ | NickUnrelatedToPost | self.LocalLLaMA | 2023-04-01T21:26:02 | 0 | {} | 128ylyt | false | null | t3_128ylyt | /r/LocalLLaMA/comments/128ylyt/quantization_question_is_convert_llama_weights_to/ | false | false | self | 8 | null |
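To the extent I understand it: the conversion script only changes the checkpoint format (weights stay fp16); quantization is a separate step, done either at load time with bitsandbytes or offline with a tool like GPTQ-for-LLaMa. A hedged sketch of the load-time route, assuming an already-converted HF directory named `llama-7b-hf`:

```python
# Step 1 (format conversion, no precision change) is a one-off transformers script, e.g.:
#   python convert_llama_weights_to_hf.py --input_dir <llama_download> --model_size 7B --output_dir llama-7b-hf
# Step 2 is the actual quantization; the simplest route is 8-bit loading via bitsandbytes:

from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "llama-7b-hf",       # the converted HF-format directory from step 1
    load_in_8bit=True,   # weights are quantized to int8 on the fly (requires bitsandbytes)
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("llama-7b-hf")

# 4-bit is not "running the converter twice": at the time of writing it goes through
# a separate tool such as GPTQ-for-LLaMa, which quantizes the fp16 weights offline.
```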