---
license: apache-2.0
---

This is a Small (112M-parameter) Transformer trained for 100k steps on interarrival-time encoded music from the [Lakh MIDI dataset](https://colinraffel.com/projects/lmd/).

# References for the Anticipatory Music Transformer

The full model card is available [here](https://johnthickstun.com/assets/pdf/music-modelcard.pdf).

Code for using this model is available on [GitHub](https://github.com/jthickstun/anticipation/).

See the accompanying [blog post](https://crfm.stanford.edu/2023/06/14/anticipatory-music-transformer.html?idx=1#demo-example) for additional discussion of this model.
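Interarrival-time encoding represents each musical event by the time elapsed since the previous event rather than by its absolute onset time. The sketch below illustrates the general idea only; the function names and the exact tokenization used by this model are assumptions, not the project's actual implementation (see the GitHub repository for that).

```python
def to_interarrival(onsets):
    """Illustrative sketch: convert absolute onset times (sorted, in ticks)
    to interarrival times, i.e. the gap between consecutive events.
    Not the project's actual encoding -- see the anticipation repo."""
    deltas = []
    prev = 0
    for t in onsets:
        deltas.append(t - prev)  # time elapsed since the previous event
        prev = t
    return deltas


def from_interarrival(deltas):
    """Inverse mapping: recover absolute onset times from the gaps."""
    onsets = []
    t = 0
    for d in deltas:
        t += d
        onsets.append(t)
    return onsets
```

Because simultaneous events have an interarrival time of zero, chords survive the round trip: `from_interarrival(to_interarrival(onsets))` reproduces the original onset sequence.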