Björn Plüster
bjoernp
AI & ML interests
None yet
Recent Activity
liked a dataset 2 days ago: amphion/Emilia-Dataset
liked a dataset 21 days ago: galileo-ai/agent-leaderboard
liked a dataset 24 days ago: jinqij/VFF
Organizations
bjoernp's activity
Can you share how you converted this?
7 · #1 opened 9 months ago by bjoernp
Hf safetensors version
9 · #3 opened 9 months ago by ehartford
use_flash_attention_2=True
3 · #9 opened 10 months ago by TillFetzer
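(A note on the thread title above: use_flash_attention_2 was the load-time flag older transformers releases accepted for enabling FlashAttention 2; newer releases replace it with attn_implementation. A minimal sketch, assuming an NVIDIA GPU with the flash-attn package installed; the model id is a placeholder, not the repository this discussion belongs to.)

    # Sketch only: enable FlashAttention 2 when loading a causal LM.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "org/model"  # placeholder, not the actual repository
    tokenizer = AutoTokenizer.from_pretrained(model_id)

    # Older transformers releases used the boolean flag from the thread title:
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        use_flash_attention_2=True,  # deprecated in newer transformers releases
    )

    # Newer releases express the same choice via attn_implementation:
    # model = AutoModelForCausalLM.from_pretrained(
    #     model_id, torch_dtype=torch.float16, attn_implementation="flash_attention_2"
    # )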
leo-mistral-hessianai-7b-chat for privateGPT
3 · #8 opened 11 months ago by Dodo124
Update tokenizer_config.json
#1 opened 11 months ago by bjoernp
Problems with flash-attention2
1 · #13 opened 12 months ago by omaer0
Loss function?
1 · #10 opened about 1 year ago by narvind2003
No multi GPU inference support?
8 · #4 opened about 1 year ago by dataautogpt3
Llama2 vs Mistral
1 · #2 opened about 1 year ago by lightningRalf
Add languages
#8 opened about 1 year ago by lbourdois
Missing module/classes: from transformers.cache_utils import Cache, DynamicCache
1 · #7 opened about 1 year ago by panopstor
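(A note on the thread title above: transformers.cache_utils, which provides Cache and DynamicCache, only exists in relatively recent transformers releases (4.36.0 and later, to the best of my knowledge), so custom modeling code that imports it fails on older installs. A minimal sketch of guarding that import, purely as an illustration:)

    # Sketch only: fail with a clear message when transformers is too old.
    try:
        from transformers.cache_utils import Cache, DynamicCache
    except ImportError as err:
        raise ImportError(
            "This modeling code requires a transformers release that ships "
            "transformers.cache_utils (4.36.0 or newer); upgrade with "
            "`pip install -U transformers`."
        ) from err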
changed "tokenizer" typo to be the one we create.
#4 opened about 1 year ago
by
dyngnosis
Which transformers version is being used here?
2
#6 opened about 1 year ago
by
Promptengineering
Flash dependency (locks out non-NVIDIA GPUs)
3
#4 opened about 1 year ago
by
Thalesian
Update modeling_moe_mistral.py
#5 opened about 1 year ago
by
bjoernp
Really appreciate the work put into this! I have noticed a change in the model output since first release.
2
#3 opened about 1 year ago
by
AARon99
Trying to quantize. Running into the issue below. Any suggestions?
1
#5 opened about 1 year ago
by
BigDeeper
small readme fix
#1 opened about 1 year ago
by
jphme

Update modeling_moe_mistral.py
2
#1 opened about 1 year ago
by
bjoernp
AWQ variant
4 · #2 opened over 1 year ago by SebastianBodza