ValueError in decision tree classifier after one-hot encoding

data-mining decision-tree data-science-model
2022-03-11 00:36:21

I am trying to build a decision tree model. After one-hot encoding, something still seems to be wrong with the data. When I run the following code, I get this error:

ValueError: Number of labels=172 does not match number of samples=540


#Code:  
import pandas as pd
import numpy as np

df = pd.read_csv("https://library.startlearninglabs.uw.edu/DATASCI420/Datasets/Bank%20Data.csv", sep=",")
df.columns=['age', 'sex', 'region', 'income', 'married', 'children', 'car',
       'save_act', 'current_act', 'mortgage', 'pep']
df.info()


(nrows, ncols) = df.shape
colnames = list(df.columns.values)
string_encoding = {}
df_encoded = df.copy()
for i in range(ncols):
    levels = list(set(df.iloc[:, i]))
    num_levels = len(levels)
    string_encoding_i = dict(zip(levels, range(num_levels)))
    string_encoding[colnames[i]] = string_encoding_i
    for j in range(nrows):
        df_encoded.iloc[j, i] = string_encoding_i[df.iloc[j, i]]

print(string_encoding)
print(df_encoded.head())

# One-hot encode categorical variables
from sklearn import preprocessing
enc = preprocessing.OneHotEncoder()

non_categorial_features = ['age',
                          'income',
                          'children',
                          'pep']

for categorical_feature in list(df.columns):
    if categorical_feature not in non_categorial_features:
        df[categorical_feature] = df[categorical_feature].astype('category')

df_with_dummies = pd.get_dummies(df, sparse=True)

df = pd.concat([df, df_with_dummies], axis=1)

df.head(5)

df = df.drop(['sex', 'region', 'married', 'car',
       'save_act', 'current_act', 'mortgage', 'pep_NO', 'pep_YES'], axis=1)

df.head()
df.info()

from sklearn import tree
import numpy as np
from sklearn.model_selection import train_test_split 

# prepare for decision tree
np.random.seed(101)

title_names =['age', 'income', 'children', 'counts', 'age', 'income',
       'children', 'sex_FEMALE', 'sex_MALE', 'region_INNER_CITY',
       'region_RURAL', 'region_SUBURBAN', 'region_TOWN', 'married_NO',
       'married_YES', 'car_NO', 'car_YES', 'save_act_NO', 'save_act_YES',
       'current_act_NO', 'current_act_YES', 'mortgage_NO', 'mortgage_YES', 'pep']

df = df[title_names]


X = df.iloc[:,0:23]
Y = df.iloc[:, 23]
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size = 0.1, random_state = 99)

# decision tree
from sklearn.tree import DecisionTreeClassifier 

# Use entropy; no limit on samples for a split
model_ent = DecisionTreeClassifier(criterion='entropy').fit(X_train, y_train) 
y_ent_pred = model_ent.predict(X_test)

# Use Gini impurity (the default criterion); limit min_samples_leaf to 5
model_gini = DecisionTreeClassifier(min_samples_leaf=5).fit(X_train, y_train)
y_gini_pred = model_gini.predict(X_test)
1 Answer

It looks like `Y` is a `SparseSeries`, as are `y_train` and `y_test`, so when it is passed to the decision tree's `fit` method, only the entries labelled `1` are interpreted as present. From the pandas documentation:

We have implemented "sparse" versions of Series and DataFrame. These objects are not sparse in the typical "mostly 0" sense. Rather, you can treat these objects as being "compressed", where any data matching a specific value (NaN / missing value, though any value can be chosen) is omitted.
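The "compression" the docs describe can be seen directly; a minimal sketch (the `Sparse[int]` dtype spelling and the `.sparse` accessor assume a reasonably recent pandas):

```python
import pandas as pd

# A sparse Series only physically stores the entries that differ
# from the fill value (0 here); the rest are implied.
s = pd.Series([0, 0, 1, 0, 1], dtype="Sparse[int]")
print(len(s))            # 5 logical entries
print(s.sparse.npoints)  # 2 physically stored values
```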

I am not sure why it ends up as a sparse data structure, but you can densify it with the `to_dense` method:

Y = df.iloc[:, 23].to_dense()
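(A note beyond the original answer: `Series.to_dense` was removed in pandas 1.0, so on current pandas the equivalent densification goes through the `.sparse` accessor instead.)

```python
import pandas as pd

s = pd.Series([0, 1, 0], dtype="Sparse[int]")

# Modern replacement for the removed Series.to_dense()
dense = s.sparse.to_dense()
print(dense.dtype)  # a plain (non-sparse) integer dtype
```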

Edit: Danny mentioned below that you can simply remove `sparse=True` from the `get_dummies` call.
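A quick sketch of that alternative fix: with `sparse=True` omitted (the default is `sparse=False`), `get_dummies` returns ordinary dense columns, so no densification step is needed before calling `fit`:

```python
import pandas as pd

df = pd.DataFrame({"pep": ["YES", "NO", "YES"]})

# sparse=True backs the dummy columns with Sparse[...] dtypes;
# the default keeps them as plain dense columns.
sparse_dummies = pd.get_dummies(df, sparse=True)
dense_dummies = pd.get_dummies(df)

print(sparse_dummies.dtypes.tolist())
print(dense_dummies.dtypes.tolist())
```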