Inference Speed (1) — #61 opened 2 months ago by khaled-hesham
Data Provenance — #60 opened 3 months ago by exdysa
Learning Rate during pretraining (1) — #58 opened 4 months ago by shuyuej
Truly great model for text-based operations like analysing and researching (4) — #56 opened 5 months ago by bkieser
"triu_tril_cuda_template" not implemented for 'BFloat16' (4) — #52 opened 7 months ago by Ashmal
Prompt format for fine-tuning — #51 opened 7 months ago by skevja
Request: DOI (1) — #50 opened 7 months ago by gagan3012
Please document pretraining datasets — #49 opened 7 months ago by markding
Instruct-finetuning dataset (5) — #43 opened 7 months ago by Andriy
Context length is not 128k (2) — #41 opened 8 months ago by pseudotensor
Is there a best way to infer this model from multiple small memory GPUs? (1) — #39 opened 8 months ago by hongdouzi
Configuring command-r-gptq — #33 opened 8 months ago by Cyleux
Any recommended frontend to run this model? (2) — #30 opened 8 months ago by DrNicefellow
[AUTOMATED] Model Memory Requirements — #26 opened 8 months ago by model-sizer-bot
[AUTOMATED] Model Memory Requirements — #25 opened 8 months ago by model-sizer-bot
Error "sharded is not supported for AutoModel" when deploying on sagemaker endpoint — #22 opened 8 months ago by LorenzoCevolaniAXA
gguf is required :) (12) — #11 opened 8 months ago by flymonk