Struggling to integrate sklearn and pandas in a simple Kaggle task

data-mining python pandas scikit-learn
2021-09-28 13:34:47

I am trying to use the sklearn_pandas module to extend the work I do in pandas and dip a toe into machine learning, but I'm struggling with an error I don't really understand how to fix.

I was working through the following dataset on Kaggle.

It's essentially an unheadered table (1000 rows, 40 float-valued features).

import pandas as pd
from sklearn import neighbors
from sklearn_pandas import DataFrameMapper, cross_val_score
path_train ="../kaggle/scikitlearn/train.csv"
path_labels ="../kaggle/scikitlearn/trainLabels.csv"
path_test = "../kaggle/scikitlearn/test.csv"

train = pd.read_csv(path_train, header=None)
labels = pd.read_csv(path_labels, header=None)
test = pd.read_csv(path_test, header=None)
mapper_train = DataFrameMapper([(list(train.columns),neighbors.KNeighborsClassifier(n_neighbors=3))])
mapper_train

Output:

DataFrameMapper(features=[([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39], KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
       n_neighbors=3, p=2, weights='uniform'))])

So far so good. But then I try the fit:

mapper_train.fit_transform(train, labels)

Output:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-e3897d6db1b5> in <module>()
----> 1 mapper_train.fit_transform(train, labels)

//anaconda/lib/python2.7/site-packages/sklearn/base.pyc in fit_transform(self, X, y,     **fit_params)
    409         else:
    410             # fit method of arity 2 (supervised transformation)
--> 411             return self.fit(X, y, **fit_params).transform(X)
    412 
    413 

//anaconda/lib/python2.7/site-packages/sklearn_pandas/__init__.pyc in fit(self, X, y)
    116         for columns, transformer in self.features:
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self
    120 

TypeError: fit() takes exactly 3 arguments (2 given)

What am I doing wrong? Although the data in this case is all of the same type, I plan to work out a workflow for a mix of categorical, nominal and float features, and sklearn_pandas seemed the logical fit.
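For reference, here is a rough sketch of the kind of mixed-type mapper I eventually have in mind; the column names are invented and StandardScaler / LabelBinarizer are just placeholder transformers, not part of my actual code:

from sklearn.preprocessing import StandardScaler, LabelBinarizer
from sklearn_pandas import DataFrameMapper

# Hypothetical frame: 'age' is a float column, 'colour' is categorical.
mixed_mapper = DataFrameMapper([
    (['age'], StandardScaler()),    # list selector -> 2-D input for the scaler
    ('colour', LabelBinarizer()),   # string selector -> 1-D input for the binarizer
])
# mixed_mapper.fit_transform(df) would then yield a plain numpy array
# ready for a downstream estimator.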

2 Answers

I have never used sklearn_pandas, but from reading their source code this looks like a bug on their side. If you look for the function that throws the exception, you'll notice that they are discarding the y argument (it doesn't even survive as far as the docstring), while the inner fit function expects one more argument, which is presumably y:

def fit(self, X, y=None):
    '''
    Fit a transformation from the pipeline

    X       the data to fit
    '''
    for columns, transformer in self.features:
        if transformer is not None:
            transformer.fit(self._get_col_subset(X, columns))
    return self

I suggest you open an issue in their bug tracker.
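Until that is fixed, note that for an all-numeric frame like this one you don't actually need the mapper just to fit the classifier. A minimal sketch of the bypass, assuming the single column of trainLabels.csv is the target (it is labels[0] because header=None gives integer column names):

from sklearn import neighbors

clf = neighbors.KNeighborsClassifier(n_neighbors=3)
# sklearn estimators accept DataFrames directly; labels[0] is the one
# column of the trainLabels.csv frame read with header=None.
clf.fit(train, labels[0])
predictions = clf.predict(test)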

Update

You can test this for yourself if you run your code from IPython. In summary: if you use the %pdb on magic before running the offending call, the exception is caught by the Python debugger, so you can have a poke around and see that calling the fit function with the label values y[0] does indeed work; see the last line at the ipdb> prompt (the CSV files are the ones downloaded from Kaggle, except the biggest one, which is just a piece of the real file).

In [1]: import pandas as pd

In [2]: from sklearn import neighbors

In [3]: from sklearn_pandas import DataFrameMapper, cross_val_score

In [4]: path_train ="train.csv"

In [5]: path_labels ="trainLabels.csv"

In [6]: path_test = "test.csv"

In [7]: train = pd.read_csv(path_train, header=None)

In [8]: labels = pd.read_csv(path_labels, header=None)

In [9]: test = pd.read_csv(path_test, header=None)

In [10]: mapper_train = DataFrameMapper([(list(train.columns),neighbors.KNeighborsClassifier(n_neighbors=3))])

In [13]: %pdb on

In [14]: mapper_train.fit_transform(train, labels)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-14-e3897d6db1b5> in <module>()
----> 1 mapper_train.fit_transform(train, labels)

/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn/base.pyc in fit_transform(self, X, y, **fit_params)
    409         else:
    410             # fit method of arity 2 (supervised transformation)
--> 411             return self.fit(X, y, **fit_params).transform(X)
    412
    413

/opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn_pandas/__init__.pyc in fit(self, X, y)
    116         for columns, transformer in self.features:
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self
    120

TypeError: fit() takes exactly 3 arguments (2 given)
> /opt/local/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/sklearn_pandas/__init__.py(118)fit()
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self

ipdb> l
    113
    114         X       the data to fit
    115         '''
    116         for columns, transformer in self.features:
    117             if transformer is not None:
--> 118                 transformer.fit(self._get_col_subset(X, columns))
    119         return self
    120
    121
    122     def transform(self, X):
    123         '''
ipdb> transformer.fit(self._get_col_subset(X, columns), y[0])
KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
           n_neighbors=3, p=2, weights='uniform')

Here is an example of how to get pandas and sklearn to play nicely together.

Say you have 2 columns that are both strings and you wish to vectorize them, but you have no idea which vectorization parameters will lead to the best downstream performance.

Create the vectorizer:

from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
to_vect = Pipeline([('vect', CountVectorizer(min_df=1, max_df=.9, ngram_range=(1, 2), max_features=1000)),
                    ('tfidf', TfidfTransformer())])

Create the DataFrameMapper object:

full_mapper = DataFrameMapper([
        ('col_name1', to_vect),
        ('col_name2',to_vect)
    ])

Here is the full pipeline:

from sklearn.linear_model import SGDClassifier
full_pipeline = Pipeline([('mapper', full_mapper), ('clf', SGDClassifier(n_iter=15, warm_start=True))])

Define the parameters you want the sweep to consider:

from copy import deepcopy

full_params = {'clf__alpha': [1e-2, 1e-3, 1e-4],
               'clf__loss': ['modified_huber', 'hinge'],
               'clf__penalty': ['l2', 'l1'],
               'mapper__features': [[('col_name1', deepcopy(to_vect)),
                                     ('col_name2', deepcopy(to_vect))],
                                    [('col_name1', deepcopy(to_vect).set_params(vect__analyzer='char_wb')),
                                     ('col_name2', deepcopy(to_vect))]]}

That's it! But note that mapper__features is a single item in this dictionary, so use a for loop or itertools.product to generate a FLAT list of all the to_vect options you wish to consider. That is really a separate task outside the scope of the question, but a sketch is given below.
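For instance, a rough sketch of generating that flat list with itertools.product, where the analyzer and ngram_range values are invented purely for illustration:

from copy import deepcopy
from itertools import product

analyzers = ['word', 'char_wb']
ngram_ranges = [(1, 1), (1, 2)]

feature_options = []
for analyzer, ngrams in product(analyzers, ngram_ranges):
    # One candidate mapper configuration per combination of settings.
    option = [('col_name1', deepcopy(to_vect).set_params(vect__analyzer=analyzer,
                                                         vect__ngram_range=ngrams)),
              ('col_name2', deepcopy(to_vect))]
    feature_options.append(option)

full_params['mapper__features'] = feature_options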

Go ahead and create the best classifier, or whatever else you want the pipeline to end with:

from sklearn.grid_search import GridSearchCV  # sklearn.model_selection in newer versions
gs_clf = GridSearchCV(full_pipeline, full_params, n_jobs=-1)
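And a hedged usage sketch to finish; df and y are placeholders for your own frame of string columns and the matching labels, not anything from the original post:

# df is assumed to hold the two string columns, y the matching labels.
gs_clf.fit(df, y)
print(gs_clf.best_score_)
print(gs_clf.best_params_)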