import torch
import torch.nn.functional as F
from torch import nn
from transformers import BertModel, PretrainedConfig, PreTrainedModel
from transformers.modeling_outputs import SequenceClassifierOutput

class BertLSTMConfig(PretrainedConfig):
    model_type = "bertLSTMForSequenceClassification"

    def __init__(self,
                 num_classes=2,
                 embed_dim=768,
                 num_layers=12,
                 hidden_dim_lstm=256,  # hidden size of the LSTM head
                 dropout_rate=0.1,
                 **kwargs):
        super().__init__(**kwargs)
        self.num_classes = num_classes
        self.embed_dim = embed_dim
        self.num_layers = num_layers
        self.hidden_dim_lstm = hidden_dim_lstm
        self.dropout_rate = dropout_rate
        self.id2label = {
            0: "fake",
            1: "true",
        }
        self.label2id = {
            "fake": 0,
            "true": 1,
        }
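
# Config round-trip sketch (hypothetical local path, shown as a comment so it
# does not run on import): the custom fields (hidden_dim_lstm, dropout_rate,
# ...) serialize through the standard PretrainedConfig API.
#   cfg = BertLSTMConfig(hidden_dim_lstm=512)
#   cfg.save_pretrained("./bert_lstm_config")   # writes config.json
#   cfg = BertLSTMConfig.from_pretrained("./bert_lstm_config")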


class BertLSTMForSequenceClassification(PreTrainedModel):
    config_class = BertLSTMConfig

    def __init__(self, config):
        super().__init__(config)
        self.num_classes = config.num_classes
        self.embed_dim = config.embed_dim
        self.num_layers = config.num_layers
        self.hidden_dim_lstm = config.hidden_dim_lstm
        self.dropout = nn.Dropout(config.dropout_rate)
        # Encoder weights are fetched from the hub at construction time; a
        # checkpoint loaded via from_pretrained() overwrites them afterwards.
        self.bert = BertModel.from_pretrained('bert-base-uncased')
        # The LSTM depth is fixed at 3 and is independent of config.num_layers,
        # which describes the BERT encoder.
        self.lstm = nn.LSTM(self.embed_dim, self.hidden_dim_lstm, batch_first=True, num_layers=3)
        self.fc = nn.Linear(self.hidden_dim_lstm, self.num_classes)

    def forward(self, input_ids, attention_mask=None, token_type_ids=None, labels=None):
        bert_output = self.bert(input_ids=input_ids, attention_mask=attention_mask, token_type_ids=token_type_ids)
        # Use the pooled [CLS] representation and feed it to the LSTM as a
        # length-1 sequence: (batch, embed_dim) -> (batch, 1, embed_dim).
        pooled_output = bert_output.pooler_output
        out, _ = self.lstm(pooled_output.unsqueeze(1))
        out = self.dropout(out[:, -1, :])  # last (and only) timestep
        logits = self.fc(out)
        loss = None
        if labels is not None:
            loss = F.cross_entropy(logits, labels)
        return SequenceClassifierOutput(
            loss=loss,
            logits=logits,
            hidden_states=bert_output.hidden_states,
            attentions=bert_output.attentions,
        )
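

if __name__ == "__main__":
    # Minimal smoke-test sketch (not part of the model definition): build a
    # model from the default config and run one forward pass. The tokenizer
    # name is an assumption that matches the encoder hard-coded above.
    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertLSTMForSequenceClassification(BertLSTMConfig())
    model.eval()

    batch = tokenizer(["an example headline to classify"], return_tensors="pt")
    with torch.no_grad():
        output = model(**batch, labels=torch.tensor([1]))
    print("logits:", output.logits, "loss:", output.loss)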