---
license: llama2
language:
- en
pipeline_tag: conversational
tags:
- Platypus2
- WinterGoddess
- frankenmerge
- dare
- ties
- 90b
---
# BigWeave v8 90B

The BigWeave models aim to identify merge settings that equal or surpass the performance of Goliath-120b. The version number merely tracks the various attempts; it is not a quality indicator. Only results demonstrating good performance are retained and shared.

This version is a passthrough merge of Platypus2-70b-instruct and WinterGoddess-1.4x-70b. The 90b size allows 4-bit quants to fit into 48GB of VRAM.

# Prompting Format

Vicuna and Alpaca.

# Merge process

The models used in the merge are [Platypus2-70b-instruct](https://huggingface.co/garage-bAInd/Platypus2-70B-instruct) and [WinterGoddess-1.4x-70b](https://huggingface.co/Sao10K/WinterGoddess-1.4x-70B-L2).

# Acknowledgements

[@garage-bAInd](https://huggingface.co/garage-bAInd) for creating Platypus2

[@Sao10K](https://huggingface.co/Sao10K) for creating WinterGoddess

[@alpindale](https://huggingface.co/alpindale) for creating the original Goliath

[@chargoddard](https://huggingface.co/chargoddard) for developing [mergekit](https://github.com/cg123/mergekit)
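
The exact slice configuration of this merge is not published here. Purely as an illustration, a passthrough merge of two 70b models into a ~90b model can be expressed in a mergekit YAML config along the following lines. The layer ranges below are hypothetical placeholders, not the actual BigWeave v8 settings:

```yaml
# Hypothetical mergekit passthrough config.
# The layer_range values are illustrative only and do NOT
# reproduce the actual BigWeave v8 90B recipe.
slices:
  - sources:
      - model: garage-bAInd/Platypus2-70B-instruct
        layer_range: [0, 50]
  - sources:
      - model: Sao10K/WinterGoddess-1.4x-70B-L2
        layer_range: [30, 80]
merge_method: passthrough
dtype: float16
```

In a passthrough merge, the listed slices are stacked verbatim with no weight averaging, which is how two 70b parents can yield a larger frankenmerge such as this 90b model.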