She's back!
Stheno's Sister Model, designed to impress.
- Same Dataset used as Stheno v3.2 -> See notes there.
- LoRA Fine-Tune -> FFT is simply too expensive.
- Trained on 8x H100 SXMs, with some additional training afterwards.
Testing Notes:
- Better prompt adherence.
- Better anatomy / spatial awareness.
- Adapts much better to unique and custom formatting / reply formats.
- Very creative, lots of unique swipes.
- Is not restrictive during roleplays.
- Feels like a big brained version of Stheno.
Likely due to it being a 70B model instead of an 8B. Similar vibes to the Llama 2 era, where 70B models were simply much more 'aware' of the subtler areas and contexts that smaller models like a 7B or 13B were simply not able to handle.
Recommended Sampler Settings:
- Temperature - 1.17
- min_p - 0.075
- Repetition Penalty - 1.10
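As a convenience, the recommended samplers above can be bundled into a request payload for an OpenAI-compatible completions endpoint (which many local backends expose). This is a minimal sketch; the field names follow common backend conventions and the `build_payload` helper is hypothetical, not part of this card:

```python
# Recommended sampler settings from the model card above.
SAMPLER_SETTINGS = {
    "temperature": 1.17,          # Temperature - 1.17
    "min_p": 0.075,               # min_p - 0.075
    "repetition_penalty": 1.10,   # Repetition Penalty - 1.10
}

def build_payload(prompt: str, max_tokens: int = 256) -> dict:
    """Merge the recommended sampler settings into a completion request.

    Hypothetical helper: field names assume an OpenAI-compatible
    completions API as exposed by common local inference backends.
    """
    return {"prompt": prompt, "max_tokens": max_tokens, **SAMPLER_SETTINGS}

payload = build_payload("Hello")
print(payload["temperature"])  # 1.17
```

If your backend does not support `min_p` or `repetition_penalty` under these names, check its API reference for the equivalent fields before sending the request.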
SillyTavern Instruct Settings:
Context Template: Llama-3-Instruct-Names
Instruct Presets: Euryale-v2.1-Llama-3-Instruct
As per usual, support me here:
Ko-fi: https://ko-fi.com/sao10k
Art by wada_kazu (pixiv page private?)