Some unsafe political output for mainland China network use
#64 opened about 2 months ago by Sakura12546

My duplicate model
#63 opened 4 months ago by leolaish

Model is not generating an answer, or it takes a really long time
1 · #62 opened 7 months ago by polycaman

Update README.md
#61 opened 7 months ago by MasonDixon2711

🚩 Still receiving 'Fetch Failed' error
2 · #60 opened 7 months ago by Johnbigginsman

🚩 Report: Chat Not Working
4 · #59 opened 8 months ago by redformurder

Internal Server Error
#58 opened 8 months ago by Rhea0000

Demo output differs from the API
1 · #57 opened 8 months ago by businesspig1

Node.js / Next.js streaming
#56 opened 8 months ago by dreyyy

Zephyr is off
#55 opened 9 months ago by yinbtologie

System prompts and settings from HF's Zephyr 7b-beta?
1 · #53 opened 9 months ago by ParanoidPosition

Truncating Response
2 · #52 opened 10 months ago by Mostafaadel174

Getting CUDA out of memory
5 · #51 opened 10 months ago by allpunks

Weird Responses?
#50 opened 10 months ago by TheAGames10

zephyr-7b-beta with vLLM
1 · #49 opened 10 months ago by D3v

100k is converted into $100,00
#48 opened 10 months ago by wehapi

Response is weird
1 · #47 opened 11 months ago by wehapi

Taking way too long to generate a response
2 · #46 opened 11 months ago by Idkkitsune

Answering in Spanish
2 · #45 opened 11 months ago by whoami02

Update Ruined Inference
#44 opened 11 months ago by orick96

Error message when the number of input tokens exceeds 2000. I am using an ml.g4dn.8xlarge instance (128 GiB).
#43 opened 11 months ago by YWDallas

What EC2 configuration/instance should I use?
#41 opened 11 months ago by rikomi7571

Zephyr hallucinations with conversational memory
2 · #39 opened 12 months ago by lfoppiano

Context length?
2 · #38 opened 12 months ago by austinmw

[AUTOMATED] Model Memory Requirements
#37 opened 12 months ago by model-sizer-bot

What's the difference between zephyr-7b-beta and zephyr-7b-alpha?
1 · #36 opened 12 months ago by haha-point

[AUTOMATED] Model Memory Requirements
3 · #35 opened 12 months ago by model-sizer-bot

Can zephyr-7b support a YaRN 128K context window?
#33 opened 12 months ago by tim9510019

Why isn't the `model_max_length` set to 2048?
1 · #32 opened 12 months ago by alvarobartt

Did the LoRA fine-tuned model end up performing the same as full fine-tuning?
1 · #30 opened 12 months ago by timlim123

How do I achieve streaming output?
2 · #29 opened 12 months ago by wengnews

BFloat16 is not supported on MPS
2 · #27 opened 12 months ago by mhelmy

Optimize Response Length and Quality
#26 opened 12 months ago by stargazer09

Add widget examples
#25 opened about 1 year ago by mishig

Understanding reward metrics
1 · #22 opened about 1 year ago by NhatHoang2002

Question on license given use of UltraChat
1 · #21 opened about 1 year ago by RonanMcGovern

Very Nice Work, But It Can't Be Prompted To Tell Stories
#19 opened about 1 year ago by deleted

Why not use the Plackett-Luce Model version of DPO when K=4 ranked responses are present?
#18 opened about 1 year ago by MasterGodzilla

Long Context Successor?
#17 opened about 1 year ago by brucethemoose

Load chat model directly
2 · #16 opened about 1 year ago by uuguuguu

I just wanna thank everyone that worked on this
2 · #15 opened about 1 year ago by Ryann

Error in fine-tuning
1 · #14 opened about 1 year ago by EviIgenius

Context Length Issue
#13 opened about 1 year ago by sekhar14

System prompting and generation config used for better prompting?
#12 opened about 1 year ago by Saugatkafley

Is anyone trying to make an uncensored version of zephyr at the moment?
8 · #10 opened about 1 year ago by austincap

zephyr-7b-omicron ?!?!
3 · #9 opened about 1 year ago by pszemraj

Dataset format for fine-tuning
7 · #7 opened about 1 year ago by andreaKIM

Deprecation Warning on Config File
2 · #5 opened about 1 year ago by j4ckr4bbit

Sample code gives error `KeyError: 'mistral'`
3 · #4 opened about 1 year ago by SteveC

Free and ready-to-use zephyr-7B-beta-GGUF model as an OpenAI-API-compatible endpoint
9 · #3 opened about 1 year ago by limcheekin