Update README.md
README.md
CHANGED
@@ -9,8 +9,7 @@ tags:
 
 A 231M parameter base model trained on lichess games from January 2023 that ended in checkmate (games won on time were filtered out).
 
-
-
+## Inference
 ```py
 from transformers import GPT2LMHeadModel, AutoTokenizer
 
@@ -26,4 +25,18 @@ next_move = tokenizer.decode(gen_tokens[-1])
 print(next_move)  # d4e5
 ```
 
+### End-of-game detection
+
+The model also has three special tokens for end-of-game detection: `<BLACK_WIN>`, `<WHITE_WIN>`, and `<DRAW>`. These can be useful for implementing beam search strategies.
+
+```py
+moves = " ".join(["f2f3", "e7e5", "g2g4", "d8h4"])  # fool's mate: black mates with d8h4
+
+model_inputs = tokenizer(moves, return_tensors="pt")
+gen_tokens = model.generate(**model_inputs, max_new_tokens=1)[0]
+next_move = tokenizer.decode(gen_tokens[-1])
+
+print(next_move)  # <BLACK_WIN>
+```
+
 
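The beam-search remark above can be made concrete. Below is a minimal sketch, assuming the `model` and `tokenizer` from the inference snippet: it greedily rolls each candidate move out until one of the three end-of-game tokens appears and reports which one. The names `END_TOKENS`, `end_ids`, `rollout_outcome`, the candidate moves, and the `max_plies` cutoff are illustrative, not part of the model card, and passing a list as `eos_token_id` assumes a reasonably recent `transformers` release.

```py
# Minimal sketch: rank candidate moves by the outcome a greedy rollout reaches.
# Assumes `model` and `tokenizer` from the inference snippet above.
END_TOKENS = ["<WHITE_WIN>", "<BLACK_WIN>", "<DRAW>"]
end_ids = tokenizer.convert_tokens_to_ids(END_TOKENS)

def rollout_outcome(moves, max_plies=60):
    """Greedily extend `moves`; return the end-of-game token reached, or None."""
    inputs = tokenizer(moves, return_tensors="pt")
    out = model.generate(
        **inputs,
        max_new_tokens=max_plies,
        do_sample=False,       # deterministic, greedy rollout
        eos_token_id=end_ids,  # stop at any of the three end-of-game tokens
    )
    last = tokenizer.decode(out[0, -1])
    return last if last in END_TOKENS else None

# Fool's-mate position, black to move: the mating move should roll out
# straight to <BLACK_WIN>, while a quiet move plays on.
prefix = " ".join(["f2f3", "e7e5", "g2g4"])
for candidate in ["d8h4", "b8c6"]:  # hypothetical candidates to compare
    print(candidate, rollout_outcome(prefix + " " + candidate))
```

A fuller beam search would keep the top-k continuations by cumulative log-probability and prefer lines terminating in the desired end-of-game token, rather than running a single greedy rollout per candidate move.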