chatglm-6b / modeling_chatglm.py

Commit History

Change mask positions to batch
4de8efe

zxdu20 committed

Add empty_init option
eb55ff0

zxdu20 committed

Fix attention score on mps
cde457b

zxdu20 committed

Fix LogitsProcessor using slim checkpoint (#29)
61eee50

zxdu20 and bcol committed

Use gmask in first place
9324de7

zxdu20 committed

Update code for slim
63ce1ba

zxdu20 committed

Fix position ids expand
f82b180

zxdu20 committed

Fix generate
fb23542

zxdu20 committed

Fix attention mask for prefix prompt
08bc851

zxdu20 committed

No padding for chat function
4b7ffbf

zxdu20 committed

Implement batch generation
cc96a22

zxdu20 committed

Fix position id for training
11c270c

zxdu20 committed

Add support for loading quantized model
2e1be30

zxdu20 committed

Use dynamic dtype for prompts
c949d03

zxdu20 committed

Fix backward for quantization
0cfae21

zxdu20 committed

Implement gradient checkpointing
aea6cef

zxdu20 committed
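
The "Implement gradient checkpointing" entry above refers to a general memory-saving technique: each transformer layer's activations are recomputed during the backward pass instead of being stored after the forward pass. A minimal, generic sketch using torch.utils.checkpoint follows; the function and variable names are illustrative assumptions, not this file's actual code.

    import torch
    from torch.utils.checkpoint import checkpoint

    def run_layers(layers, hidden_states, gradient_checkpointing=False):
        # Sketch: with checkpointing enabled, activations are recomputed in
        # the backward pass rather than kept in memory after the forward pass.
        for layer in layers:
            if gradient_checkpointing and hidden_states.requires_grad:
                hidden_states = checkpoint(layer, hidden_states)
            else:
                hidden_states = layer(hidden_states)
        return hidden_states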

Fix bugs
0564795

zxdu20 committed

Add pad_token_id in config.json
2200e2b

zxdu20 committed

Set ignore_index for CrossEntropyLoss
5c64357

zxdu20 committed
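
The "Set ignore_index for CrossEntropyLoss" entry above refers to the standard PyTorch mechanism for excluding positions, such as padding or prompt tokens, from the language-modeling loss. A generic sketch follows; the -100 value and the dummy tensor shapes are illustrative assumptions, not details taken from the commit.

    import torch
    import torch.nn as nn

    # Sketch: label positions set to the ignore_index (-100 here) contribute
    # nothing to the loss, so masked positions do not affect training.
    loss_fct = nn.CrossEntropyLoss(ignore_index=-100)

    logits = torch.randn(2, 5, 100)           # (batch, seq_len, vocab) -- dummy values
    labels = torch.randint(0, 100, (2, 5))    # dummy targets
    labels[:, :2] = -100                      # mask the first two positions of each sequence

    loss = loss_fct(logits.view(-1, logits.size(-1)), labels.view(-1))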

Support batch training
8127ab6

zxdu20 committed

Merge branch 'main' into dev_pt
fbda120

zxdu20 committed

Add p-tuning v2
812f43f

zxdu20 committed

Fix context length in get_position_ids
096f3de

zxdu20 committed

Close CPU fusion on Mac
4a9b711

zxdu20 committed

Fix Chinese punctuation
d2bbc82

zxdu20 committed

Remove hardcode bos_token_id
2460dc2

zxdu20 committed

Add support for streaming output
42095d4

zxdu20 committed

Fix overflow in FP16
220f772

zxdu20 committed

Set is_parallelizable to False
f9f74fd

zxdu20 committed

Add logit processor for NaN or Inf scores
c3dece3

zxdu20 committed
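
The "Add logit processor for NaN or Inf scores" entry above points at the transformers LogitsProcessor hook, which lets generation rewrite invalid scores before sampling. Below is a hedged sketch of one way such a guard can look; the class name and the fallback token choice are illustrative assumptions rather than the commit's exact code.

    import torch
    from transformers import LogitsProcessor

    class InvalidScoreLogitsProcessor(LogitsProcessor):
        # Sketch: if any logit is NaN or Inf, zero the scores and put all the
        # probability mass on one arbitrary token id so sampling cannot fail.
        def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor) -> torch.FloatTensor:
            if torch.isnan(scores).any() or torch.isinf(scores).any():
                scores.zero_()
                scores[..., 0] = 5e4
            return scores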

Fix default history argument
9d1509a

zxdu20 committed

Add support for float32
d4832e8

zxdu20 committed

Fix past_key_values
cd8041e

zxdu20 committed

Add chatglm-6b
d11c6aa

Sengxian committed