nenekochan committed
Commit 1bb0419 · 1 Parent(s): a7d2126
pipeline for ja scripts and voice parallel corpus
Files changed:
- .gitignore          +1  -0
- README.md           +23 -9
- ks-parse-all.py     +22 -13
- script/transcode.sh +12 -2
- voice-assemble.py   +85 -0
.gitignore
CHANGED
@@ -3,3 +3,4 @@ __pycache__/
 
 /conversation/
 /scenario*/
+/sound*/
README.md
CHANGED
@@ -1,8 +1,9 @@
 ---
-pretty_name: 夜羊L
+pretty_name: 夜羊L系列脚本
 language:
 - zh
-
+- ja
+language_details: zho_Hans, jpn
 license: cc-by-nc-4.0
 annotations_creators:
 - expert-generated
@@ -19,26 +20,39 @@ tags:
 ## ⚠️ Notice
 
 - **Please note that the data comes from R18 visual novels and contains themes that may be considered inappropriate, shocking, disturbing, offensive, or extreme. If you are unsure of the legal consequences of possessing fictional written content of any kind in your country, do not download it.**
-- **All data in this project, and any derivative works based on it, must not be used for commercial purposes.** I do not own the krkr2 script source files in `scenario-raw`; the remaining data-processing methods are released under CC BY-NC 4.0.
-- In order of preprocessing: `scenario-raw` holds the krkr2 script source files, `scenario` the cleaned structured scripts, and `conversation` the conversation-format data segmented by my own judgment.
-- As for the subjective segmentation, part of it is done by hand and the rest by a not-so-reliable automatic method based on text similarity (for the parts I haven't played yet; I don't want spoilers!). Manual segmentation is a long road, so I'm taking it slowly; progress is tracked in [manual_seg-progress.md](manual_seg-progress.md).
-- The first four titles (2015-2017) have a single heroine; the later ones all have two, and their script format differs slightly.
+- **All data in this project, and any derivative works based on it, must not be used for commercial purposes.** I do not own the krkr2 script source files in `scenario-raw` and `scenario_ja-raw`; the remaining data-processing methods are released under CC BY-NC 4.0.
 - 🔑 The archives are encrypted; the extraction password is yorunohitsuji
 
+## File structure
+
+```
+yoruno-vn.7z          # (zh)
+├── scenario-raw/     # krkr2 script source files
+├── scenario/         # cleaned structured scripts
+└── conversation/     # conversation-format data segmented by my own judgment
+yoruno_ja-vn.7z       # (ja)
+├── scenario_ja-raw/  # krkr2 script source files
+├── scenario_ja/      # cleaned structured scripts
+└── sound_ja/         # (the voice files, which are not included, plus) my hand-labeled classification metadata
+```
 
+- As for the subjective segmentation, part of it is done by hand and the rest by a not-so-reliable automatic method based on text similarity (for the parts I haven't played yet; I don't want spoilers!). Manual segmentation is a long road, so I'm taking it slowly; progress is tracked in [manual_seg-progress.md](manual_seg-progress.md).
+- The first four titles (2015-2017) have a single heroine; the later ones all have two, and their script format differs slightly.
+- The subjectively segmented data drops some NPC dialogue and fixes typos, so it does not match the original scripts exactly.
+- The voice files are, after all, not included, but I did label metadata that can be used to separate out lines containing panting or mouth sounds.
 
 ## Preprocessing workflow (notes to myself)
 
-0. Extract each volume's scripts into `scenario-raw/` and convert them to UTF-8 with `script/transcode.sh`; `2015-sssa` additionally needs `script/dos2unix.sh` to convert line endings to LF
+0. Extract each volume's scripts into `scenario[_ja]-raw/` and convert them to UTF-8 with `script/transcode.sh`; `2015-sssa` additionally needs `script/dos2unix.sh` to convert line endings to LF
 1. Fix minor formatting issues: `cd scenario-raw && bash patch.sh`
-2. Run `python ks-parse-all.py`
+2. Run `python ks-parse-all.py --voice scenario[_ja]-raw/ scenario[_ja]/` to get `scenario[_ja]/`
 3. Segment, then convert into `conversation/`
    a. Automatic segmentation: `python -m segment.auto path/to/scenario.jsonl`
    b. After manual segmentation: `python -m segment.manual path/to/scenario-manual_seg.jsonl`
 
 To add a new volume:
 
-0. Put the scripts in `scenario-raw/`
+0. Put the scripts in `scenario[_ja]-raw/`
 1. Add the new volume's metadata in `ks-parse-all.py`
 
 ## Acknowledgements
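For orientation, the cleaned `scenario[_ja]/` files referenced above are line-delimited JSON, and the voice-assembly step later in this commit only relies on records carrying a `"text"` field plus, for voiced lines, a `"voice"` field. Below is a minimal sketch for peeking at such a file; it assumes nothing beyond that schema, and the path in the example is hypothetical.

```python
import json
from pathlib import Path


def iter_voiced_lines(jsonl_path: Path):
    """Yield (voice_file, text) pairs from a cleaned scenario .jsonl file."""
    with jsonl_path.open("r", encoding="utf-8") as fi:
        for line in fi:
            record = json.loads(line)
            if "voice" in record:  # skip records that carry no voice file
                yield record["voice"], record["text"]


if __name__ == "__main__":
    # Hypothetical volume/script name; substitute a real one.
    for voice, text in iter_voiced_lines(Path("scenario_ja/2019-aiyoku/01.jsonl")):
        print(voice, text)
```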
ks-parse-all.py
CHANGED
@@ -1,7 +1,8 @@
 from ks_parse.__main__ import parse_file
 
-from attrs import define, Factory
+from attrs import define
 
+import argparse
 from pathlib import Path
 
 
@@ -9,28 +10,36 @@ from pathlib import Path
 class Volume:
     name: str
     stype: str
-    # ロリ妊娠はダメ、下江コハル.jpg
-    ignore_list: list[str] = Factory(list)
 
 
 ALL_VOLUMES = [
     Volume(name="2015-sssa", stype="novel"),
-    Volume(name="2016a-yubikiri", stype="novel"
+    Volume(name="2016a-yubikiri", stype="novel"),
     Volume(name="2016b-sssa2", stype="novel"),
-    Volume(name="2017-otomari", stype="novel"
-    Volume(name="2018-harem", stype="adv"
-    Volume(name="2019-aiyoku", stype="adv"
-    Volume(name="2020-yuuwaku", stype="adv"
-    Volume(name="2022-mainichi", stype="adv"
+    Volume(name="2017-otomari", stype="novel"),
+    Volume(name="2018-harem", stype="adv"),
+    Volume(name="2019-aiyoku", stype="adv"),
+    Volume(name="2020-yuuwaku", stype="adv"),
+    Volume(name="2022-mainichi", stype="adv"),
 ]
 BASE = Path("./scenario-raw")
 
 if __name__ == "__main__":
+    argp = argparse.ArgumentParser()
+    argp.add_argument(
+        "ks_base", type=Path, help="Base directory of .ks files, e.g. ./scenario-raw"
+    )
+    argp.add_argument(
+        "output_base",
+        type=Path,
+        help="Base directory of output .jsonl files, e.g. ./scenario",
+    )
+    argp.add_argument("--voice", action="store_true", help="Include voice file name")
+    args = argp.parse_args()
+
     for vol in ALL_VOLUMES:
         print(f"Processing {vol.name}...", end="")
-        for ks_path in sorted((BASE / vol.name).glob("*.ks")):
-            if ks_path.stem in vol.ignore_list:
-                continue
+        for ks_path in sorted((args.ks_base / vol.name).glob("*.ks")):
            print(f" {ks_path.stem}", end="")
-            parse_file(ks_path, vol.stype)
+            parse_file(ks_path, args.output_base, vol.stype, args.voice)
         print()
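The real `parse_file` lives in `ks_parse/__main__.py`, which this commit does not touch, so its updated signature can only be inferred from the call site above. The stub below is a hypothetical stand-in, purely to illustrate what the two new positional arguments and the `--voice` flag imply; the output layout `output_base/<volume>/<name>.jsonl` is my assumption, not confirmed by the diff.

```python
import json
from pathlib import Path


def parse_file(ks_path: Path, output_base: Path, stype: str, include_voice: bool) -> None:
    """Hypothetical stand-in mirroring parse_file(ks_path, args.output_base, vol.stype, args.voice)."""
    records: list[dict] = []  # would be produced by the real .ks parser ("novel" or "adv" layout)
    out_path = output_base / ks_path.parent.name / f"{ks_path.stem}.jsonl"
    out_path.parent.mkdir(parents=True, exist_ok=True)
    with out_path.open("w", encoding="utf-8") as fo:
        for rec in records:
            if not include_voice:
                rec.pop("voice", None)  # only keep voice file names when --voice is given
            fo.write(json.dumps(rec, ensure_ascii=False) + "\n")
```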
script/transcode.sh
CHANGED
@@ -1,10 +1,20 @@
 #!/usr/bin/env bash
 
 BASE=$1
+if [[ $BASE == *ja* ]]; then
+    ENCODING=shift-jis
+    if [[ $BASE == *2022-mainichi* ]]; then
+        ENCODING=unicode
+    fi
+else
+    ENCODING=unicode
+    if [[ $BASE == *2016a-yubikiri-old* ]]; then
+        ENCODING=gbk
+    fi
+fi
 
 for f in $BASE/*.ks; do
-    iconv -f unicode -t utf-8 < $f > $f.tmp
-    #iconv -f gbk -t utf-8 < $f > $f.tmp # for 2016a-yubikiri-old
+    iconv -f $ENCODING -t utf-8 < $f > $f.tmp
     mv $f.tmp $f
 done
 
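The same per-volume encoding choice, written out in Python for reference. This is only a sketch of the logic, not part of the repository's pipeline, and it assumes that iconv's `unicode` source encoding corresponds to BOM-prefixed UTF-16 here.

```python
from pathlib import Path


def pick_encoding(base: str) -> str:
    """Mirror the encoding selection in script/transcode.sh."""
    if "ja" in base:
        # Japanese scripts are Shift-JIS, except 2022-mainichi (UTF-16).
        return "utf-16" if "2022-mainichi" in base else "shift_jis"
    # Chinese scripts are UTF-16, except the old 2016a-yubikiri dump (GBK).
    return "gbk" if "2016a-yubikiri-old" in base else "utf-16"


def transcode_dir(base: str) -> None:
    enc = pick_encoding(base)
    for f in Path(base).glob("*.ks"):
        f.write_text(f.read_text(encoding=enc), encoding="utf-8")
```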
voice-assemble.py
ADDED
@@ -0,0 +1,85 @@
+"""Compile zh/ja parallel voice list
+
+label legend:
+n - normal
+h - h
+i - interjection
+o - oral
+"""
+from pathlib import Path
+import re
+import json
+import argparse
+import csv
+from typing import Iterable
+
+
+RE_PARANTHESIS = re.compile(r"(.+?)|【.+?】")
+RE_SINGLE_RM = re.compile(r"「|」|“|”|〜|—|♪")
+RE_QPREFIX = re.compile(r"^(…*?)")
+SET_INTJ = set("呜唔嗯哈啊呃呣呼咕咚唉嘿嘻噗啾噜")
+SET_INTJP = {*SET_INTJ, *"…―~。、,!?"}
+
+
+def gather_voice(fi: Iterable[str], d: dict[str, str], lang: str):
+    for line in fi:
+        if '"voice"' not in line:
+            continue
+        jl = json.loads(line)
+        d[jl["voice"]] = clean_text(jl["text"], lang)
+
+
+def clean_text(s: str, lang: str) -> str:
+    s = RE_PARANTHESIS.sub(r"", s)
+    s = RE_SINGLE_RM.sub(r"", s)
+    if lang == "zh":
+        s = RE_QPREFIX.sub(r"嗯\1", s)
+    else:
+        s = RE_QPREFIX.sub(r"ん\1", s)
+    return s
+
+
+def label_voice(
+    voice_map_zh: dict[str, str], voice_map_ja: dict[str, str]
+) -> Iterable[list[str]]:
+    for k, vja in sorted(voice_map_ja.items()):
+        if (vzh := voice_map_zh.get(k)) is None:
+            raise ValueError(f"Voice entry {k} not found in zh")
+        label = label_rule(vzh)
+        yield [k, vja, vzh, label]
+
+
+def label_rule(s: str) -> str:
+    """Simple heuristic to label voice entry;
+    needs manual labeling for "h" and "o" and manual check
+    """
+    if not (set(s) - SET_INTJP):
+        return "i"
+    i_stat = sum(c in SET_INTJ for c in s)
+    if i_stat >= 3 or (i_stat / len(s) > 0.1):
+        return "ni"
+    return "n"
+
+
+if __name__ == "__main__":
+    argp = argparse.ArgumentParser(description="Compile zh/ja parallel voice list")
+    argp.add_argument("volume", type=str, help="Volume name (e.g. 2019-aiyoku)")
+    args = argp.parse_args()
+
+    scenario_zh: Path = Path("scenario") / args.volume
+    scenario_ja: Path = Path("scenario_ja") / args.volume
+
+    voice_map_zh = {}
+    voice_map_ja = {}
+    for scenario, voice_map, lang in (
+        (scenario_zh, voice_map_zh, "zh"),
+        (scenario_ja, voice_map_ja, "ja"),
+    ):
+        for sc in scenario.glob("*.jsonl"):
+            with sc.open("r") as fi:
+                gather_voice(fi, voice_map, lang)
+
+    with open(f"{args.volume}.csv", "w", newline="") as csvfile:
+        cw = csv.writer(csvfile)
+        for row in label_voice(voice_map_zh, voice_map_ja):
+            cw.writerow(row)
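`label_voice` writes four headerless CSV columns per row: voice file key, Japanese text, Chinese text, label, where `label_rule` only assigns `n`, `ni`, or `i` and the `h`/`o` labels come from later manual annotation. A small sketch of how that `<volume>.csv` could then be filtered to pull out the panting/mouth-sound lines mentioned in the README; treating labels as possible letter combinations (like `ni`) is an assumption carried over from `label_rule`.

```python
import csv
import sys


def filter_by_label(csv_path: str, keep: set[str]) -> list[list[str]]:
    """Return rows of <volume>.csv whose label shares a letter with `keep`."""
    kept = []
    with open(csv_path, newline="") as fi:
        for voice, ja, zh, label in csv.reader(fi):
            if set(label) & keep:
                kept.append([voice, ja, zh, label])
    return kept


if __name__ == "__main__":
    rows = filter_by_label(sys.argv[1], keep={"h", "o"})
    print(f"{len(rows)} voice lines labeled h/o")
```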