---
license: apache-2.0
pipeline_tag: text-generation
---

<p align="center">
  <b><font size="6">SongComposer</font></b>
</p>

<div align="center">

[💻Github Repo](https://github.com/pjlab-songcomposer/songcomposer)

[📖Paper](https://arxiv.org/abs/2402.17645)

</div>

**SongComposer** is a large language model (LLM) based on [InternLM2](https://github.com/InternLM/InternLM) for lyric and melody composition in song generation.

We release the SongComposer series in two versions:

- SongComposer_pretrain: the pretrained SongComposer, initialized from InternLM2, which acquires basic knowledge of lyrics and melody.
- SongComposer_sft: the finetuned SongComposer for *instruction-following song generation*, including lyric-to-melody, melody-to-lyric, song continuation, and text-to-song (a hedged usage sketch follows the loading example below).

### Import from Transformers

To load the SongComposer_pretrain model using Transformers, use the following code:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

ckpt_path = "Mar2Ding/songcomposer_pretrain"
tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
# Load the custom remote-code model in half precision on the GPU.
model = AutoModelForCausalLM.from_pretrained(ckpt_path, trust_remote_code=True).cuda().half()
# Each '|'-separated unit pairs one lyric character with its note tokens (pitch, duration, rest).
prompt = '<bop> Total 7 lines. The first line:可,<D4>,<137>,<79>|惜,<D#4>,<137>,<79>|这,<F4>,<137>,<88>|是,<F4>,<121>,<79>|属,<F4>,<121>,<79>|于,<D#4>,<214>,<88>|你,<D#4>,<141>,<79>|的,<D4>,<130>,<79>|风,<C4>,<151>,<79>|景,<A#3> <F3>,<181><137>,<79>\n'
model.inference_pretrain(prompt, tokenizer, model)
```
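
To make the note format above easier to inspect, here is a minimal parsing sketch for the tuple layout used in the prompt (after stripping the `<bop> Total ... The first line:` header): each `|`-separated unit appears to pair one lyric character with pitch, duration, and rest tokens. The `parse_song_line` helper is a hypothetical illustration, not part of the released API.

```python
import re

def parse_song_line(line: str):
    """Split one '|'-separated line of the tuple format into per-character records.

    NOTE: a hypothetical inspection helper, not part of the SongComposer API.
    """
    records = []
    for unit in line.strip().split('|'):
        lyric, pitch, duration, rest = unit.split(',')
        records.append({
            'lyric': lyric,
            # A character sung over several notes carries several pitch/duration tokens.
            'pitches': re.findall(r'<[^>]+>', pitch),
            'durations': re.findall(r'<[^>]+>', duration),
            'rest': rest,
        })
    return records

# Example: the first and last units of the prompt above.
line = '可,<D4>,<137>,<79>|景,<A#3> <F3>,<181><137>,<79>'
for record in parse_song_line(line):
    print(record)
```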
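
The finetuned model should load the same way. The sketch below assumes the SFT weights are published as `Mar2Ding/songcomposer_sft` and expose an analogous remote-code entry point named `inference`; both the checkpoint id and the method name are assumptions to verify against the SongComposer_sft model card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# NOTE: the checkpoint id and the `inference` method below are assumptions,
# not confirmed API; check the SongComposer_sft model card before use.
ckpt_path = "Mar2Ding/songcomposer_sft"
tokenizer = AutoTokenizer.from_pretrained(ckpt_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(ckpt_path, trust_remote_code=True).cuda().half()

# A free-form instruction for text-to-song generation.
prompt = 'Write a song with lyrics and melody about a quiet night.'
model.inference(prompt, tokenizer, model)
```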

### Open Source License

The code is licensed under Apache-2.0, while the model weights are fully open for academic research and also free for commercial use.