I tokenized 6.2 million Danbooru captions using my tokenizer; this file records how long each one was, as one big 1D tensor.
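For reference, a minimal sketch of how a per-caption length tensor like this could be built. The `captions` iterable and the `tokenizer` object (anything exposing an `.encode()` method) are placeholders, since the actual tokenizer isn't part of this commit:

```python
# Hypothetical sketch: count tokens per caption and pack the counts into one 1D tensor.
# `captions` (an iterable of strings) and `tokenizer` are stand-ins, not the real objects.
import torch

def build_length_tensor(captions, tokenizer):
    # One integer per caption: how many tokens its text encodes to.
    lengths = [len(tokenizer.encode(text)) for text in captions]
    return torch.tensor(lengths, dtype=torch.int32)

# Example usage:
# token_lengths = build_length_tensor(captions, my_tokenizer)
# torch.save(token_lengths, "caption_token_lengths.pt")
```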
caption_token_lengths.pt ADDED (+3 -0)
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8a02d9379887f63abb987056f0a559afdffc603430fbe2553b88ec9733efe9d2
+size 23081109
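The diff above is only the Git LFS pointer; the ~23 MB tensor itself lives in LFS storage. After fetching it (e.g. with `git lfs pull`), a quick sanity check might look like this, assuming the file loads as a plain 1D integer tensor as described in the commit message:

```python
# Load the 1D tensor of per-caption token lengths and print a few summary stats.
import torch

lengths = torch.load("caption_token_lengths.pt")  # expected shape: (num_captions,)

print("captions:", lengths.numel())
print("min / mean / max tokens:",
      lengths.min().item(), lengths.float().mean().item(), lengths.max().item())
# Length percentiles are handy for picking a max sequence length / padding budget.
print("p50 / p95 / p99:",
      torch.quantile(lengths.float(), torch.tensor([0.5, 0.95, 0.99])).tolist())
```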