---
license: apache-2.0
base_model: sentence-transformers/all-MiniLM-L6-v2
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: new_classifier_model
results: []
language: en
widget:
- text: "In the case of (ioii) and (1 lii), the passive transformation will apply to the embedded sentence, and in all four cases other operations will give the final surface forms of (8) and (g)."
- text: "(10) (i) Noun Phrase — Verb — Noun Phrase — Sentence (/ — persuaded — a specialist — a specialist will examine John) (ii) Noun Phrase — Verb — Noun Phrase — Sentence (/ — persuaded — John — a specialist will examine John)"
- text: "184 SOME RESIDUAL PROBLEMS"
- text: "Peshkovskii, A. M. (1956). Russkii Sintaksis v Nauchnom Osveshchenii. Moscow."
- text: "S -» NP^Aux^VP"
- text: "(sincerity, [+N, —Count, +Abstract]) (boy, [+N, —Count, +Common, +Animate, +Human]) (may, [+M])"
---
# Classifier for Academic Text Contents
This model is a fine-tuned version of [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2) on a collection of Linguistics publications.
It achieves the following results on the evaluation set:
- Loss: 0.4181
- Accuracy: 0.9193
## Model description
The model is fine-tuned on academic publications in Linguistics to classify text segments from publications into 4 classes, serving as a filter for other tasks.
Sentence-based data obtained from OCR-processed PDF files was annotated manually with the following classes (the label mapping can also be inspected programmatically, as sketched below):
- 0: out of scope - material of low significance, e.g. page numbers and page headers, or noise from OCR/PDF-to-text conversion
- 1: main text - the main body text of the publication, to be used for downstream tasks
- 2: examples - figure captions, quotes, or excerpts
- 3: references - the publication's reference list, excluding in-text citations
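The pipeline returns label names rather than the numeric ids above. As a quick check, the id-to-label mapping can be read from the hosted model config; this is a minimal sketch, assuming the repo's `config.json` defines the standard `id2label` field:
```python
from transformers import AutoConfig

# Read the label mapping from the hosted config
# (assumption: config.json defines id2label for the four classes)
config = AutoConfig.from_pretrained("howanching-clara/classifier_for_academic_texts")
print(config.id2label)
# Expected, based on the class list above:
# {0: 'OUT OF SCOPE', 1: 'MAIN TEXT', 2: 'EXAMPLE', 3: 'REFERENCE'}
```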
## Intended uses & limitations
Intended uses:
- extracting the main text of academic publications for downstream tasks (see the filtering sketch below)
Limitations:
- the training and evaluation data are limited to English academic texts in Linguistics (though the model remains usable to a fair extent for German texts)
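As a concrete illustration of the intended filtering use, here is a minimal sketch that keeps only sentences classified as main text. The sample sentences are taken from the examples further down this card, and the exact label strings are assumptions based on the pipeline outputs shown in the next section:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="howanching-clara/classifier_for_academic_texts",
)

# OCR-derived sentences: a page header and a main-text sentence
sentences = [
    "184 SOME RESIDUAL PROBLEMS",
    "In the case of (ioii) and (1 lii), the passive transformation will "
    "apply to the embedded sentence, and in all four cases other "
    "operations will give the final surface forms of (8) and (g).",
]

# Keep only sentences whose best label is MAIN TEXT
main_text = [s for s in sentences if classifier(s)[0]["label"] == "MAIN TEXT"]
print(main_text)
```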
## How to run
```python
from transformers import pipeline

# Default behavior: return only the best label, e.g.
# [{'label': 'EXAMPLE', 'score': 0.9601941108703613}]
classifier = pipeline(
    "text-classification",
    model="howanching-clara/classifier_for_academic_texts",
    tokenizer="howanching-clara/classifier_for_academic_texts",
)

# Pass return_all_scores=True to the pipeline call above to get scores for all labels, e.g.
# [[{'label': 'OUT OF SCOPE', 'score': 0.007808608002960682}, {'label': 'MAIN TEXT', 'score': 0.028077520430088043},
#   {'label': 'EXAMPLE', 'score': 0.9601941108703613}, {'label': 'REFERENCE', 'score': 0.003919811453670263}]]

# Perform inference on your input text
your_text = "your text here."
result = classifier(your_text)
print(result)
```
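Note: in recent Transformers releases, `return_all_scores=True` is deprecated in favor of `top_k=None`, which likewise returns scores for all labels.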
## Try it yourself with the following examples (not in training/evaluation data)
Excerpts from Chomsky, N. (2014). Aspects of the Theory of Syntax (No. 11). MIT Press, retrieved from https://apps.dtic.mil/sti/pdfs/AD0616323.pdf. The excerpts are reproduced verbatim from the OCR output, including OCR errors, since this is the kind of input the model is meant to handle.
- In the case of (ioii) and (1 lii), the passive transformation will
apply to the embedded sentence, and in all four cases other
operations will give the final surface forms of (8) and (g).
- (10) (i) Noun Phrase — Verb — Noun Phrase — Sentence
(/ — persuaded — a specialist — a specialist will examine
John)
(ii) Noun Phrase — Verb — Noun Phrase — Sentence
(/ — persuaded — John — a specialist will examine John)
- (13) S
Det
Predicate-Phrase
[+Definite] nom VP
their
F1...Fm Det N
destroy [+Definite] G, ... G,
the property
- 184 SOME RESIDUAL PROBLEMS
- Peshkovskii, A. M. (1956). Russkii Sintaksis v Nauchnom Osveshchenii.
Moscow.
- S -» NP^Aux^VP
- (sincerity, [+N, —Count, +Abstract])
(boy, [+N, —Count, +Common, +Animate, +Human])
(may, [+M])
## Problematic cases
Definitions or findings written in point form are challenging for the model. For example:
- (2) (i) the string (1) is a Sentence (S); frighten the boy is a Verb
Phrase (VP) consisting of the Verb (V) frighten and the
Noun Phrase (NP) the boy; sincerity is also an NP; the
NP the boy consists of the Determiner (Det) the, followed
by a Noun (N); the NP sincerity consists of just an N;
the is, furthermore, an Article (Art); may is a Verbal
Auxiliary (Aux) and, furthermore, a Modal (M).
- (v) specification of a function m such that m(i) is an integer
associated with the grammar G4 as its value (with, let us
say, lower value indicated by higher number)
## Training and evaluation data
See the Model description section above: sentence-level segments from OCR-processed Linguistics publications in English, manually annotated with the four classes.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
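For reference, the hyperparameters above would map onto `TrainingArguments` roughly as follows. This is a sketch only: `output_dir` is taken from the model-index name, and `evaluation_strategy` is an assumption based on the per-epoch results table below; the Adam betas and epsilon listed above match the optimizer defaults.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="new_classifier_model",  # assumption: from the model-index name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    evaluation_strategy="epoch",  # assumption: matches the per-epoch table below
)
```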
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5772 | 1.0 | 762 | 0.3256 | 0.9062 |
| 0.2692 | 2.0 | 1524 | 0.3038 | 0.9163 |
| 0.217 | 3.0 | 2286 | 0.3109 | 0.9180 |
| 0.1773 | 4.0 | 3048 | 0.3160 | 0.9209 |
| 0.1619 | 5.0 | 3810 | 0.3440 | 0.9206 |
| 0.1329 | 6.0 | 4572 | 0.3675 | 0.9160 |
| 0.1165 | 7.0 | 5334 | 0.3770 | 0.9209 |
| 0.0943 | 8.0 | 6096 | 0.4012 | 0.9203 |
| 0.085 | 9.0 | 6858 | 0.4166 | 0.9196 |
| 0.0811 | 10.0 | 7620 | 0.4181 | 0.9193 |
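The accuracy column is consistent with the standard `evaluate`-based metric function used with `Trainer`-generated cards; a minimal sketch, assuming argmax decoding over the logits:
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # Pick the highest-scoring class per example, then compare to gold labels
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```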
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cpu
- Datasets 2.14.7
- Tokenizers 0.14.1