tminh/SeaLLM-7B-v2.5-vi-pubmed-GPTQ
Tags: Text Generation · Transformers · gemma · conversational · Inference Endpoints · 4-bit precision · gptq · arxiv:1910.09700
Branch: main · 1 contributor · History: 5 commits
Latest commit: a48730a (verified) by tminh, "add tokenizer", 10 months ago
File                              Size           Last commit                                                               Updated
.gitattributes                    1.57 kB        add tokenizer                                                             10 months ago
README.md                         5.17 kB        add tokenizer                                                             10 months ago
config.json                       1.07 kB        AutoGPTQ model for SeaLLMs/SeaLLM-7B-v2.5: 4bits, gr128, desc_act=False   10 months ago
gptq_model-4bit-128g.safetensors  7.18 GB (LFS)  AutoGPTQ model for SeaLLMs/SeaLLM-7B-v2.5: 4bits, gr128, desc_act=False   10 months ago
quantize_config.json              314 Bytes      AutoGPTQ model for SeaLLMs/SeaLLM-7B-v2.5: 4bits, gr128, desc_act=False   10 months ago
special_tokens_map.json           555 Bytes      add tokenizer                                                             10 months ago
tokenizer.json                    17.5 MB (LFS)  add tokenizer                                                             10 months ago
tokenizer.model                   4.24 MB (LFS)  add tokenizer                                                             10 months ago
tokenizer_config.json             1.44 kB        add tokenizer                                                             10 months ago
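The commit message for quantize_config.json records the AutoGPTQ settings used: 4 bits, group size 128, desc_act=False. A plausible sketch of that file's contents follows, based on the standard AutoGPTQ quantize_config.json layout; the damp_percent, sym, and true_sequential values shown are AutoGPTQ defaults, not values read from this repo:

```json
{
  "bits": 4,
  "group_size": 128,
  "damp_percent": 0.01,
  "desc_act": false,
  "sym": true,
  "true_sequential": true,
  "model_name_or_path": null,
  "model_file_base_name": "gptq_model-4bit-128g"
}
```

The `bits`, `group_size`, and `desc_act` fields correspond directly to the "4bits, gr128, desc_act=False" noted in the commit message; `desc_act=False` trades a small amount of accuracy for faster inference.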
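As a usage sketch (not taken from this repo's README): a GPTQ checkpoint like this one can typically be loaded through the `transformers` library, which dispatches to GPTQ kernels when `optimum` and `auto-gptq` are installed. The repo id below is the one shown on this page; a CUDA GPU with roughly 8 GB of free memory is assumed, and the helper names are illustrative.

```python
MODEL_ID = "tminh/SeaLLM-7B-v2.5-vi-pubmed-GPTQ"

def load_model(model_id: str = MODEL_ID):
    """Download and load the 4-bit GPTQ checkpoint (~7 GB download; CUDA GPU assumed).

    Imports are kept inside the function so merely importing this module
    does not require transformers to be installed.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" places the quantized weights on the available GPU(s).
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model

def generate(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a reply using the chat template shipped in tokenizer_config.json."""
    tokenizer, model = load_model()
    messages = [{"role": "user", "content": prompt}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    output = model.generate(inputs, max_new_tokens=max_new_tokens)
    # Decode only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True)
```

Because the checkpoint is gemma-based and conversational (per the tags above), going through `apply_chat_template` rather than raw prompts keeps the input format consistent with how the model was fine-tuned.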