---
base_model:
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- SicariusSicariiStuff/Negative_LLAMA_70B
- TheDrummer/Anubis-70B-v1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- deepseek-ai/DeepSeek-R1-Distill-Llama-70B
- Sao10K/L3.3-70B-Euryale-v2.3
library_name: transformers
tags:
- mergekit
- merge
---
This model builds upon the original Nevoria foundation, incorporating the DeepSeek-R1 reasoning architecture to enhance dialogue interaction and scene comprehension. While maintaining Nevoria's core strengths in storytelling and scene description (derived from EVA, EURYALE, and Anubis), this iteration aims to improve prompt adherence and creative reasoning. The model also retains the balanced perspective introduced by the Negative_LLAMA and Nemotron components. In addition, the model follows the character card almost to a fault: it will pick up on minor details and run with them. Users have reported it calling them out, in character, for misspelling a word.
Note: Nevoria-R1 represents a significant architectural change. Rather than a direct successor to Nevoria, it is a distinct model with its own characteristics.
The choice of the lorablated model as the merge base was intentional, creating unique weight interactions similar to those in the original Astoria and Astoria V2 models. This "weight twisting" effect, achieved by subtracting the lorablated base model during the merge, produces an interesting balance in the model's behavior. While unconventional compared to applying components sequentially, this approach was chosen for its distinctive response characteristics.
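The actual merge was produced with mergekit, and its real method and per-model weights are not reproduced here. As a rough, self-contained illustration of the idea, the Python sketch below performs a simple task-arithmetic-style merge in which each component's delta is computed by subtracting the lorablated base before the weighted deltas are added back on top of it. The function name, the example weights, and the use of plain state dicts are illustrative assumptions, not the model's actual recipe.

```python
# Illustrative sketch of "weight twisting": deltas are taken relative to the
# lorablated base (i.e. the lorablated weights are subtracted from each
# component) and then recombined. Placeholder logic only, not the real merge.
import torch


def twist_merge(lorablated_base, components, weights):
    """Merge parameter dicts via task arithmetic over the lorablated base.

    lorablated_base and each entry of components are dicts mapping parameter
    names to tensors of identical shape (e.g. model.state_dict()).
    weights is one scalar per component model.
    """
    merged = {}
    for name, base_param in lorablated_base.items():
        delta = torch.zeros_like(base_param)
        for comp, w in zip(components, weights):
            # subtracting the lorablated base gives each component's delta
            delta += w * (comp[name] - base_param)
        merged[name] = base_param + delta
    return merged
```

A real merge of 70B models would not hold full state dicts in memory like this; tools such as mergekit process tensors incrementally, but the underlying arithmetic is the same.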