Aureliano committed on
Commit
f775f8b
1 Parent(s): 77e4d25
Files changed (4)
  1. README.md +21 -1
  2. config.json +2 -2
  3. pytorch_model.bin +2 -2
  4. tf_model.h5 +2 -2
README.md CHANGED
@@ -1,6 +1,9 @@
 ---
 license: apache-2.0
 ---
+## How to use the discriminator in `transformers`
+```python
+```
 
 ## How to use the discriminator in `transformers` on a custom dataset
 (Heavily based on: https://github.com/huggingface/notebooks/blob/master/examples/text_classification-tf.ipynb)
@@ -54,7 +57,7 @@ batches_per_epoch = math.ceil(len(train_dataset) / batch_size)
 total_train_steps = int(batches_per_epoch * num_epochs)
 
 optimizer, schedule = create_optimizer(
-    init_lr=1e-5, num_warmup_steps=1, num_train_steps=total_train_steps
+    init_lr=1e-6, num_warmup_steps=1, num_train_steps=total_train_steps
 )
 
 discriminator.compile(optimizer=optimizer, loss=loss)
@@ -72,3 +75,20 @@ print(text, ":", label)
 # ideally [v01214265 -> take.v.04 -> "get into one's hands, take physically"], but probably only with a better dataset
 
 ```
+
+## How to use in a Rasa pipeline
+The model can be integrated in a Rasa pipeline through a [`LanguageModelFeaturizer`](https://rasa.com/docs/rasa/components#languagemodelfeaturizer)
+```yaml
+recipe: default.v1
+language: en
+
+pipeline:
+# See https://rasa.com/docs/rasa/tuning-your-model for more information.
+...
+  - name: "WhitespaceTokenizer"
+...
+  - name: LanguageModelFeaturizer
+    model_name: "distilbert"
+    model_weights: "Aureliano/distilbert-base-uncased-if"
+...
+```
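The `@@ -54,7 +57,7 @@` hunk changes only `init_lr` (1e-5 to 1e-6); the schedule length passed to `create_optimizer` is computed as shown in the hunk header. A minimal sketch of that computation, using hypothetical dataset sizes (the real values depend on the user's dataset):

```python
import math

# Hypothetical sizes, for illustration only; the README derives the
# schedule length the same way before calling create_optimizer.
train_examples = 10_000
batch_size = 32
num_epochs = 3

batches_per_epoch = math.ceil(train_examples / batch_size)  # ceil(312.5) = 313
total_train_steps = int(batches_per_epoch * num_epochs)     # 313 * 3 = 939

print(total_train_steps)
```

`total_train_steps` is what the linear decay schedule is stretched over, so a wrong batch size or epoch count here silently changes the effective learning-rate curve.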
config.json CHANGED
@@ -1,8 +1,8 @@
 {
-  "_name_or_path": "wn_full_classifier-trainer",
+  "_name_or_path": "wn_classifier-trainer",
   "activation": "gelu",
   "architectures": [
-    "DistilBertModel"
+    "DistilBertForSequenceClassification"
   ],
   "attention_dropout": 0.1,
   "dim": 768,
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:12eaf2d9db88163a36fbbc5e45331658bc35d26421cb061f05c473c7708a30d4
-size 265487161
+oid sha256:90d57abc8b44939285b112eb10a677035d03d16c6441e3dd2537db9335c1f53e
+size 268111281
tf_model.h5 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:d789b0b93a1504985b62a35e46433ea14548042e4bdb2e9710619198d6a7b62e
-size 265571752
+oid sha256:92875108e5ce200e5c4bf672bc691f1bffe63ffe18cad7fb6ad4d773e5a99251
+size 268203800