Extracting information using NLP and storing it in a CSV file

data-mining text-mining data-science-model feature-extraction nlp
2022-02-27 05:26:48

I have a text file that stores pickup, drop and time. The SMS text is a dummy file for training a cab-service model. The text is in this format:

Please book a cab from airport to hauz khaas at 3 PM
airport to hauz khaas at 6 PM
Kindly book a cab for me at 1 PM from hauz khaas to dwarka sector 23
airport to hauz khaas at 1 AM
I want to go to dwarka sector 21 from airport leaving at 10 PM
airport to dwarka sector 21 at 12 PM
Please book a cab for dwarka sector 23 from hauz khaas at 12 PM
Please book a cab from dwarka sector 23 to dwarka sector 21 at 4 PM 

The problem is that I need to create 3 columns in a CSV file - destination, pickup and time. I have used almost every technique, but none of them extracts the text accurately. I tried chinking, POS tagging and regular expressions; I also tried LatentDirichletAllocation to create features, but I need some help understanding what is missing. Here is the code I used:

import nltk

# txtData is assumed to hold the raw SMS text read from the file
returnme = list()

def process_content():
    try:
        returnme1 = list()
        for i in txtData.splitlines()[0:4]:
            list1 = set()
            words = nltk.ngrams(i.split(), 2)
            for j in words:
                pos = nltk.pos_tag(j)
                grm = r"""origin: {(<NN><TO>)|(<NN><VBG>)|(<VB><NN><TO>)}
                          time: {(<CD><NN>)|(<CD><NNS>)}
                          dest: {(<VB><NN><CD>)|(<VB><NN>)}
                          All: {(<IN><NN>)|<CD>|<NN>|<TO><NN>|<NN><NN><CD>}"""
                chunkword = nltk.RegexpParser(grm)
                chuncked = chunkword.parse(pos)
                subtreelst = set()
                for subtree in chuncked.subtrees():
                    if subtree.label() == 'origin':
                        subtreelst.add('origin: ' + subtree.leaves()[0][0])
                    if subtree.label() == 'time':
                        subtreelst.add('time: ' + subtree.leaves()[0][0])
                    if subtree.label() == 'dest':
                        subtreelst.add('dest: ' + subtree.leaves()[0][0])
                    if subtree.label() == 'All':
                        subtreelst.add('All: ' + subtree.leaves()[0][0])
                list1.update(subtreelst)
            returnme.append(list1)
        returnme1.append(returnme)
        return returnme1
    except Exception as e:
        print(str(e))


mylst = list()
mylst.append(process_content())
mylst

This gives the following output:

[[[{'All: 3',
    'All: book',
    'All: cab',
    'All: from',
    'All: hauz',
    'All: khaas',
    'origin: airport',
    'time: 3'},
   {'All: 6', 'All: hauz', 'All: khaas', 'origin: airport', 'time: 6'},
   {'All: 1',
    'All: 23',
    'All: PM',
    'All: book',
    'All: cab',
    'All: dwarka',
    'All: from',
    'All: hauz',
    'All: khaas',
    'All: sector',
    'origin: khaas',
    'time: 1'},
   {'All: 1', 'All: hauz', 'All: khaas', 'origin: airport'}]]]
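For comparison, since every sample message uses only a handful of templates ("from X to Y", "for Y from X", or a bare leading pickup), a plain regular-expression pass over a small gazetteer of known places already yields the three columns. A sketch (the sample lines and place list are copied from the messages above; in the original code `txtData` is read from the file):

```python
import csv
import io
import re

# Sample messages copied from the post.
txtData = """Please book a cab from airport to hauz khaas at 3 PM
airport to hauz khaas at 6 PM
Kindly book a cab for me at 1 PM from hauz khaas to dwarka sector 23
I want to go to dwarka sector 21 from airport leaving at 10 PM
Please book a cab for dwarka sector 23 from hauz khaas at 12 PM"""

# Small gazetteer of known places, longest first so multi-word names win.
PLACES = sorted(["airport", "hauz khaas", "dwarka sector 21", "dwarka sector 23"],
                key=len, reverse=True)
PLACE_RE = re.compile(r"\b(" + "|".join(PLACES) + r")\b", re.IGNORECASE)
TIME_RE = re.compile(r"\b(\d{1,2}\s*[AP]M)\b", re.IGNORECASE)

def parse_line(line):
    time = TIME_RE.search(line)
    pickup = dest = None
    for m in PLACE_RE.finditer(line):
        # The word immediately before the place name signals its role.
        before = line[:m.start()].rstrip().rsplit(" ", 1)[-1].lower()
        if before == "from":
            pickup = m.group(1)
        elif before in ("to", "for"):
            dest = m.group(1)
        elif pickup is None and dest is None:
            # A bare leading place ("airport to hauz khaas ...") is the pickup.
            pickup = m.group(1)
    return pickup, dest, time.group(1) if time else None

rows = [parse_line(line) for line in txtData.splitlines()]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["pickup", "destination", "time"])
writer.writerows(rows)
```

This only works because the vocabulary of places is closed; for open-ended text a learned model (as the answers below suggest) is needed.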

The Latent Dirichlet Allocation part:

import pandas as pd
import numpy as np
import nltk
from nltk.tokenize import word_tokenize
from nltk.stem import PorterStemmer
from nltk.corpus import stopwords
from nltk.probability import FreqDist
from sklearn.model_selection import train_test_split
import re

All_Reviews = pd.DataFrame(txtData.splitlines())

def remove_non_alphabets(text):
    non_valid_word = re.compile(r'[-.?!,:;()"--``\[\]\|]')
    token = word_tokenize(text)
    return_me = list()
    for w in token:
        word = non_valid_word.sub("", w)
        word = re.sub(r'^https?:\/\/.*[\r\n]*', '', word, flags=re.MULTILINE)  # remove URLs
        word = re.sub(" \d+", " ", word)  # remove digits
        return_me.append(word)
    return return_me

def dostopwords(text):
    return " ".join([c for c in text if c not in stopwords.words('english')])

def counter(text):
    fdist = FreqDist()
    for f in text:
        fdist[f.lower()] += 1
    return fdist

All_Reviews[0] = All_Reviews[0].apply(lambda lb: remove_non_alphabets(lb))
All_Reviews[0] = All_Reviews[0].apply(lambda lb: dostopwords(lb))

from sklearn.feature_extraction.text import CountVectorizer
CV = CountVectorizer(max_df=0.95, min_df=2, max_features=1000, ngram_range=(1, 3), stop_words='english')
vect = CV.fit_transform(All_Reviews[0])
header = CV.get_feature_names()

from sklearn.decomposition import LatentDirichletAllocation
lda = LatentDirichletAllocation(n_components=5)
lda_output = lda.fit_transform(vect)
sorting = np.argsort(lda.components_)[:, ::-1]
features = np.array(CV.get_feature_names())
features

The output is:

array(['10', '10 airport', '10 dwarka', '10 dwarka sector', '10 pm',
       '10 pm dwarka', '10 pm hauz', '11', '11 dwarka',
       '11 dwarka sector', '11 pm', '11 pm airport', '11 pm hauz', '12',
       '12 dwarka', '12 dwarka sector', '12 hauz', '12 hauz khaas',
       '12 pm', '12 pm airport', '12 pm dwarka', '12 pm hauz', '21',
       '21 10', '21 10 pm', '21 11', '21 11 pm', '21 12', '21 12 pm',
       '21 airport', '21 airport 10', '21 airport 11', '21 airport 12',
       '21 airport leaving', '21 airport pm', '21 dwarka',
       '21 dwarka sector', '21 hauz', '21 hauz khaas', '21 leaving',
       '21 leaving 10', '21 leaving 11', '21 leaving 12', '21 leaving pm',
       '21 pm', '23', '23 10', '23 10 pm', '23 11', '23 11 pm', '23 12',
       '23 12 pm', '23 airport', '23 airport 10', '23 airport 11',
       '23 airport 12', '23 airport leaving', '23 airport pm',
       '23 dwarka', '23 dwarka sector', '23 hauz', '23 hauz khaas',
       '23 leaving', '23 leaving 10', '23 leaving 11', '23 leaving pm',
       '23 pm', 'airport', 'airport 10', 'airport 10 pm', 'airport 11',
       'airport 11 pm', 'airport 12', 'airport 12 pm', 'airport dwarka',
       'airport dwarka sector', 'airport hauz', 'airport hauz khaas',
       'airport leaving', 'airport leaving 10', 'airport leaving 12',
       'airport leaving pm', 'airport pm', 'book', 'book cab',
       'book cab 10', 'book cab 11', 'book cab 12', 'book cab airport',
       'book cab dwarka', 'book cab hauz', 'book cab pm', 'cab', 'cab 10',
       'cab 10 airport', 'cab 10 dwarka', 'cab 10 pm', 'cab 11',
       'cab 11 dwarka', 'cab 11 pm', 'cab 12', 'cab 12 dwarka',
       'cab 12 hauz', 'cab 12 pm', 'cab airport', 'cab airport dwarka',
       'cab airport hauz', 'cab dwarka', 'cab dwarka sector', 'cab hauz',
       'cab hauz khaas', 'cab pm', 'cab pm airport', 'cab pm dwarka',
       'cab pm hauz', 'dwarka', 'dwarka sector', 'dwarka sector 21',
       'dwarka sector 23', 'hauz', 'hauz khaas', 'hauz khaas 10',
       'hauz khaas 11', 'hauz khaas 12', 'hauz khaas airport',
       'hauz khaas dwarka', 'hauz khaas leaving', 'hauz khaas pm',
       'khaas', 'khaas 10', 'khaas 10 pm', 'khaas 11', 'khaas 11 pm',
       'khaas 12', 'khaas 12 pm', 'khaas airport', 'khaas airport 10',
       'khaas airport 11', 'khaas airport 12', 'khaas airport leaving',
       'khaas airport pm', 'khaas dwarka', 'khaas dwarka sector',
       'khaas leaving', 'khaas leaving 10', 'khaas leaving 11',
       'khaas leaving 12', 'khaas leaving pm', 'khaas pm', 'kindly',
       'kindly book', 'kindly book cab', 'leaving', 'leaving 10',
       'leaving 10 pm', 'leaving 11', 'leaving 11 pm', 'leaving 12',
       'leaving 12 pm', 'leaving pm', 'pm', 'pm airport',
       'pm airport dwarka', 'pm airport hauz', 'pm dwarka',
       'pm dwarka sector', 'pm hauz', 'pm hauz khaas', 'sector',
       'sector 21', 'sector 21 10', 'sector 21 11', 'sector 21 12',
       'sector 21 airport', 'sector 21 dwarka', 'sector 21 hauz',
       'sector 21 leaving', 'sector 21 pm', 'sector 23', 'sector 23 10',
       'sector 23 11', 'sector 23 12', 'sector 23 airport',
       'sector 23 dwarka', 'sector 23 hauz', 'sector 23 leaving',
       'sector 23 pm', 'want', 'want airport', 'want airport dwarka',
       'want airport hauz', 'want book', 'want book cab', 'want dwarka',
       'want dwarka sector', 'want hauz', 'want hauz khaas'], dtype='<U21')
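As written, the `sorting` and `features` arrays are computed but never combined, so the topics themselves are never inspected. The usual next step is to index `features` with the first few columns of `sorting`; a sketch with a small hypothetical weight matrix standing in for the fitted `lda.components_`:

```python
import numpy as np

# Hypothetical 2-topic x 6-term weight matrix standing in for lda.components_;
# the real matrix comes from the fitted LatentDirichletAllocation above.
components = np.array([
    [0.1, 2.0, 0.3, 0.2, 1.5, 0.1],   # topic 0 term weights
    [1.8, 0.1, 0.9, 0.2, 0.1, 1.2],   # topic 1 term weights
])
features = np.array(["airport", "hauz khaas", "cab", "book", "dwarka", "pm"])

sorting = np.argsort(components)[:, ::-1]          # term indices, heaviest first
top_terms = [features[sorting[t, :2]].tolist()     # top 2 terms per topic
             for t in range(components.shape[0])]
```

Even with this inspection, LDA only groups co-occurring n-grams into topics; it has no notion of which place is the pickup and which is the destination, which is why it cannot produce the three columns directly.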
2 Answers

It looks like you tried every method without designing the system so that it does what you need it to do. I don't see any reason to use something like LDA for this task. In my opinion, this is a typical case for training a custom NE (named entity) system that specifically extracts the targets you want. The first step is to annotate a subset of the data, for example:

Please   _
book     _
a        _
cab      _
from     _
airport  FROM_B
to       _
hauz     TO_B
khaas    TO_I
at       _
3        TIME_B
PM       TIME_I

The NE model is trained from this annotated data. Here I propose one labelling option, with a category plus B for Begin and I for Inside, but many variants are possible.

Once the model is trained, applying it to any unlabeled text should give you the target information directly.
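To produce training data in that format programmatically, one option is a small helper that matches known entity token spans against each sentence and emits the per-token tags. A sketch (the `bio_tag` helper and its label names are illustrative, not from any library):

```python
def bio_tag(tokens, spans):
    """spans maps a label to its entity tokens, e.g. {"FROM": ["airport"]}.
    Returns one tag per token; '_' marks tokens outside any entity."""
    tags = ["_"] * len(tokens)
    for label, entity in spans.items():
        n = len(entity)
        for i in range(len(tokens) - n + 1):
            if [t.lower() for t in tokens[i:i + n]] == [e.lower() for e in entity]:
                tags[i] = label + "_B"                 # first token of the span
                for j in range(i + 1, i + n):
                    tags[j] = label + "_I"             # continuation tokens
    return tags

tokens = "Please book a cab from airport to hauz khaas at 3 PM".split()
tags = bio_tag(tokens, {"FROM": ["airport"],
                        "TO": ["hauz", "khaas"],
                        "TIME": ["3", "PM"]})
for tok, tag in zip(tokens, tags):
    print(f"{tok:<8} {tag}")
```

Running this reproduces the two-column annotation shown above; the annotated subset then becomes the training file for the sequence tagger.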

As @Erwan mentioned, you have to build a named entity recognition model, which will accomplish your task easily. To understand the implementation of a NER task, you can refer to my notebook on Kaggle, which is based on a similar dataset of flights rather than cabs. It will help you build your own dataset and, to a certain extent, use my model's predictions.

Kaggle notebook link

['BOS', 'Please', 'book', 'a', 'flight', 'from', 'dwarka', 'sector', '23', 'from', 'hauz', 'khaas', 'at', '12', 'PM', 'EOS']
['O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'O', 'B-depart_time.time', 'I-depart_time.time', 'O']
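Once a tagger emits B-/I- labels like these, collapsing them into the three CSV columns is a simple grouping pass. A sketch, using hypothetical `fromloc`/`toloc`/`time` labels rather than the exact ATIS-style label set above:

```python
import csv
import io

def collect_entities(tokens, tags):
    """Group consecutive B-/I- tagged tokens into a label -> text mapping."""
    entities, label, span = {}, None, []
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if label:
                entities[label] = " ".join(span)
            label, span = tag[2:], [tok]               # start a new entity
        elif tag.startswith("I-") and label == tag[2:]:
            span.append(tok)                           # continue current entity
        else:
            if label:
                entities[label] = " ".join(span)       # close current entity
            label, span = None, []
    if label:
        entities[label] = " ".join(span)
    return entities

tokens = ["book", "a", "cab", "from", "hauz", "khaas", "to", "airport", "at", "12", "PM"]
tags = ["O", "O", "O", "O", "B-fromloc", "I-fromloc", "O", "B-toloc", "O", "B-time", "I-time"]
ents = collect_entities(tokens, tags)

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["destination", "pickup", "time"])
writer.writerow([ents.get("toloc", ""), ents.get("fromloc", ""), ents.get("time", "")])
```

Each predicted sentence becomes one CSV row, which is exactly the destination/pickup/time table the question asks for.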