How to predict with a Hugging Face BERT transformer for NER

data-mining machine-learning tensorflow transformers named-entity-recognition huggingface
2022-02-22 03:06:33

I am trying to make predictions on a test dataset that has no labels, for an NER problem.

Here is some context. I am doing named entity recognition with TensorFlow and Keras, using Hugging Face Transformers.

I have two datasets: a training dataset and a test dataset. The training set has labels; the test set does not. Below you can see what a tokenized sentence looks like, what its labels look like, and what it looks like after encoding.

['The', 'pope', "isn't", 'really', 'making', 'much', 'of', 'an', 'effort', '.', 'He', "'s", 'wearing', 'the', 'same', 'clothes', 'as', 'yesterday', '.']
['O', 'B-person', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O']
[101, 1109, 17460, 2762, 112, 189, 1541, 1543, 1277, 1104, 1126, 3098, 119, 1124, 112, 188, 3351, 1103, 1269, 3459, 1112, 8128, 119, 102, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
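The third line is the padded token-ID sequence the model actually consumes: 101 and 102 are the [CLS] and [SEP] special tokens, 0 is padding, and contractions like "isn't" are split into several subword pieces, which is why there are more IDs than words. Mapping a few IDs back through the tokenizer (defined in the next code block) makes this visible:

# inspect a prefix of the encoded IDs as subword tokens; the exact pieces
# depend on the tokenizer's vocabulary, so treat the output as illustrative
print(tokenizer.convert_ids_to_tokens([101, 1109, 17460, 2762, 112, 189, 1541]))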

Here is the code for how I tokenize my text and encode my labels:

from transformers import DistilBertTokenizerFast

tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-cased')
train_encodings = tokenizer(train_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True)
val_encodings = tokenizer(val_texts, is_split_into_words=True, return_offsets_mapping=True, padding=True, truncation=True)

import numpy as np

def encode_tags(tags, encodings):
    labels = [[tag2id[tag] for tag in doc] for doc in tags]
    encoded_labels = []
    for doc_labels, doc_offset in zip(labels, encodings.offset_mapping):
        # initialize every position to -100, the conventional "ignore" index
        doc_enc_labels = np.ones(len(doc_offset), dtype=int) * -100
        arr_offset = np.array(doc_offset)

        # a real label goes only on the first subword of each word: its
        # offset starts at character 0 and has a nonzero end
        doc_enc_labels[(arr_offset[:, 0] == 0) & (arr_offset[:, 1] != 0)] = doc_labels
        encoded_labels.append(doc_enc_labels.tolist())

    return encoded_labels

train_labels = encode_tags(train_tags, train_encodings)
val_labels = encode_tags(val_tags, val_encodings)
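
The trainer further down consumes train_dataset and val_dataset, which are not shown in the snippets above. A minimal sketch of how they can be built from the encodings and labels, following the usual tf.data pattern (the offset_mapping column is dropped first, since the model does not accept it as an input):

import tensorflow as tf

# the model's forward pass doesn't accept offset_mapping, so remove it
train_encodings.pop("offset_mapping")
val_encodings.pop("offset_mapping")

# pair each encoded input dict with its aligned label sequence
train_dataset = tf.data.Dataset.from_tensor_slices((dict(train_encodings), train_labels))
val_dataset = tf.data.Dataset.from_tensor_slices((dict(val_encodings), val_labels))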

I have gotten my model to train and work, and I get pretty good numbers during validation. Here is how that is done:

from transformers import TFDistilBertForTokenClassification, TFTrainer, TFTrainingArguments

training_args = TFTrainingArguments(
    output_dir='./results',
    num_train_epochs=5,              # total number of training epochs
    per_device_train_batch_size=16,  # batch size per device during training
    per_device_eval_batch_size=16,   # batch size for evaluation
    warmup_steps=500,                # number of warmup steps for learning rate scheduler
    weight_decay=0.01,               # strength of weight decay
    evaluation_strategy="epoch",
    learning_rate=2e-5
)

with training_args.strategy.scope():
    model = TFDistilBertForTokenClassification.from_pretrained('distilbert-base-cased', num_labels=len(unique_tags))

trainer = TFTrainer(
    model=model,                         # the instantiated 🤗 Transformers model to be trained
    args=training_args,                  # training arguments, defined above
    train_dataset=train_dataset,         # training dataset
    eval_dataset=val_dataset,            # evaluation dataset
    compute_metrics=compute_metrics
)

trainer.train()

trainer.evaluate()

My main problem is that I do not know how to predict. I am not familiar with this library, and the documentation has not been much help.

I can obviously use trainer.predict(*param*), but I do not know what to actually pass in as param.

On the other hand, when I call model.predict(param), where param is the encoded sentence example shown above, I get this result:

TFTokenClassifierOutput(loss=None, logits=array([[[-0.3232851 ,  0.12578554, -0.47193137, ...,  0.16509804,
          0.19799986, -0.3560003 ]],

       [[-1.8808482 , -1.07631   , -0.49765658, ..., -0.7443374 ,
         -1.2379731 , -0.5022731 ]],

       [[-1.4291595 , -1.8587289 , -1.5842767 , ..., -1.1863587 ,
         -0.21151644, -0.52205306]],

       ...,

       [[-1.6405941 , -1.2474233 , -1.0701559 , ..., -1.1816512 ,
          0.323739  , -0.45317683]],

       [[-1.6405947 , -1.247423  , -1.0701554 , ..., -1.1816509 ,
          0.3237388 , -0.45317668]],

       [[-1.6405947 , -1.247423  , -1.0701554 , ..., -1.1816509 ,
          0.3237388 , -0.45317668]]], dtype=float32), hidden_states=None, attentions=None)

I do not know how I am supposed to take that result and decode it back into labels. What should I do with the logits array? How am I supposed to predict from this?

1 Answer

Once training is done, use the trained model instance in an NER pipeline, with the same tokenizer as before:

from transformers import pipeline
nlp = pipeline('ner', model=model, tokenizer=tokenizer)
s = "I am Joe and live in London"
print(nlp(s))
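
If you would rather decode the raw logits yourself (the array in your question holds one row of scores per token and one column per label), take the argmax over the last axis and map the predicted IDs back to tag strings, skipping special and padding tokens. A minimal sketch; decode_predictions is an illustrative helper, and id2tag is assumed to be the inverse of the tag2id mapping used during encoding:

import numpy as np

# illustrative helper, assuming id2tag = {v: k for k, v in tag2id.items()}
def decode_predictions(logits, input_ids, tokenizer, id2tag):
    pred_ids = np.argmax(logits, axis=-1)             # best label id per token
    tokens = tokenizer.convert_ids_to_tokens(input_ids)
    results = []
    for token, pred in zip(tokens, pred_ids):
        # drop [CLS], [SEP] and [PAD]; subword pieces can afterwards be
        # merged back onto their first piece if word-level tags are needed
        if token in tokenizer.all_special_tokens:
            continue
        results.append((token, id2tag[int(pred)]))
    return results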

Don't forget to save the model with save_pretrained after training.
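For example (the directory name is just illustrative):

# save the fine-tuned weights and the tokenizer side by side, so both can
# be reloaded later with from_pretrained or passed to pipeline() by path
model.save_pretrained('./ner_model')
tokenizer.save_pretrained('./ner_model')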