I'm trying to predict "Sales" from this dataset:
https://www.kaggle.com/c/rossmann-store-sales
It has over 1,000,000 rows, and I'm using 10 features from the dataset to predict Sales.
I merged the two datasets into one beforehand, then wrote code in Keras to predict "Sales". First I created some new variables and dropped data I didn't need. Then I applied one-hot encoding to the categorical variables, split the dataset into training and test parts, and scaled the features of X_train and X_test with StandardScaler. After that I built a Keras model as follows:
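For context, the preprocessing described above can be sketched roughly like this. This is a minimal illustration only: the tiny DataFrame and the column names (`Sales`, `StateHoliday`, `CompetitionDistance`) are stand-ins, not the exact pipeline used.

```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Tiny illustrative frame standing in for the merged Rossmann data.
df = pd.DataFrame({
    'Sales':               [1495., 970., 660., 695., 802., 437.],
    'StateHoliday':        ['0', 'a', '0', 'b', '0', 'c'],
    'CompetitionDistance': [500., 1200., 300., 800., 950., 400.],
})

# One-hot encode the categorical columns; numeric ones pass through.
X = pd.get_dummies(df.drop(columns='Sales'), columns=['StateHoliday'])
y = df['Sales']

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit the scaler on the training split only, then apply it to both.
scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)
```

Fitting the scaler on the training split only (and merely transforming the test split) avoids leaking test-set statistics into training.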
model = Sequential()
model.add(Dense(units = 64, kernel_initializer = 'uniform', activation = 'relu', input_dim = 31))
model.add(Dropout(rate = 0.1))
model.add(Dense(units = 64, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dropout(rate = 0.1))
model.add(Dense(units = 64, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dropout(rate = 0.1))
model.add(Dense(units = 64, kernel_initializer = 'uniform', activation = 'relu'))
model.add(Dropout(rate = 0.1))
model.add(Dense(units = 1, kernel_initializer = 'uniform', activation='linear'))
model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', 'mape'])
history = model.fit(X_train, y_train, batch_size = 10000, epochs = 15)
It's a fairly basic model: 4 hidden layers of 64 neurons each, a small dropout to prevent overfitting, ReLU as the activation, mean squared error as the loss function, Adam as the optimizer, and 15 epochs.
The model's results:
- R-squared: 0.86
- MSE: 20841
- MAE: 103
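For reference, these three numbers can be reproduced from the held-out predictions with a few lines of NumPy. This is a generic sketch; the `y_true`/`y_pred` arguments stand in for the real `y_test` and `final_preds` arrays.

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """Return (r2, mse, mae) computed the standard way."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    mse = np.mean((y_true - y_pred) ** 2)
    mae = np.mean(np.abs(y_true - y_pred))
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)    # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    return r2, mse, mae

# Illustrative call with a few values from the comparison table:
r2, mse, mae = regression_metrics([1495.0, 970.0, 660.0],
                                  [1737.393188, 763.265747, 696.281006])
```

`sklearn.metrics.r2_score`, `mean_squared_error`, and `mean_absolute_error` compute the same quantities.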
I think it's doing a decent job. Here is a comparison of the true and predicted values:
y_test final_preds
0 1495.0 1737.393188
1 970.0 763.265747
2 660.0 696.281006
3 695.0 884.019226
4 802.0 620.464294
5 437.0 413.590912
6 599.0 564.844177
7 426.0 507.872650
8 1163.0 934.790405
9 563.0 591.833313
10 798.0 729.736572
11 507.0 422.795746
12 447.0 546.338440
13 437.0 437.536194
14 599.0 643.752441
15 607.0 667.271423
16 836.0 793.968872
17 568.0 599.968262
18 522.0 508.874084
19 350.0 395.198883
20 1160.0 1277.464111
I then tried to "mimic" the same network structure, with the same configuration, in TensorFlow using DNNRegressor. The results aren't even close to what Keras achieved. My TF code is:
Creating the feature columns:
DayOfWeek_vocab = [4, 3, 1, 5, 6, 2, 7]
DayOfWeek_column = tf.feature_column.categorical_column_with_vocabulary_list(
    key="DayOfWeek", vocabulary_list=DayOfWeek_vocab)
Open_vocab = [1]
Open_column = tf.feature_column.categorical_column_with_vocabulary_list(
    key="Open", vocabulary_list=Open_vocab)
Promo_vocab = [1, 0]
Promo_column = tf.feature_column.categorical_column_with_vocabulary_list(
    key="Promo", vocabulary_list=Promo_vocab)
StateHoliday_vocab = ['0', 'b', 'a', 'c']
StateHoliday_column = tf.feature_column.categorical_column_with_vocabulary_list(
    key="StateHoliday", vocabulary_list=StateHoliday_vocab)
SchoolHoliday_vocab = [1, 0]
SchoolHoliday_column = tf.feature_column.categorical_column_with_vocabulary_list(
    key="SchoolHoliday", vocabulary_list=SchoolHoliday_vocab)
StoreType_vocab = ['a', 'd', 'c', 'b']
StoreType_column = tf.feature_column.categorical_column_with_vocabulary_list(
    key="StoreType", vocabulary_list=StoreType_vocab)
Assortment_vocab = ['a', 'c', 'b']
Assortment_column = tf.feature_column.categorical_column_with_vocabulary_list(
    key="Assortment", vocabulary_list=Assortment_vocab)
month_vocab = [10, 3, 4, 2, 9, 6, 5, 7, 1, 8, 12, 11]
month_column = tf.feature_column.categorical_column_with_vocabulary_list(
    key="month", vocabulary_list=month_vocab)
Season_vocab = ['Autumn', 'Spring', 'Winter', 'Summer']
Season_column = tf.feature_column.categorical_column_with_vocabulary_list(
    key="Season", vocabulary_list=Season_vocab)

feature_columns = [
    tf.feature_column.indicator_column(DayOfWeek_column),
    tf.feature_column.indicator_column(Open_column),
    tf.feature_column.indicator_column(Promo_column),
    tf.feature_column.indicator_column(StateHoliday_column),
    tf.feature_column.indicator_column(SchoolHoliday_column),
    tf.feature_column.indicator_column(StoreType_column),
    tf.feature_column.indicator_column(Assortment_column),
    tf.feature_column.numeric_column('CompetitionDistance'),
    tf.feature_column.indicator_column(month_column),
    tf.feature_column.indicator_column(Season_column),
]
The model itself:
input_func = tf.estimator.inputs.pandas_input_fn(x=X_train, y=y_train,
                                                 batch_size=10000,
                                                 num_epochs=15, shuffle=True)
model = tf.estimator.DNNRegressor(hidden_units=[64, 64, 64, 64],
                                  feature_columns=feature_columns,
                                  optimizer=tf.train.AdamOptimizer(learning_rate=0.0001),
                                  activation_fn=tf.nn.relu)
model.train(input_fn=input_func, steps=1000000)
The structure is the same as in Keras: 4 layers of 64 neurons, with ReLU, Adam, and MSE as the cost (the DNNRegressor default), but TF performs nowhere near as well as Keras.
The results are a mess: the MSE is 44303762026251.3, the MAE is 3809120.3086946052, and the R-squared is even negative: -4598900.028032559.
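One thing worth double-checking in the estimator setup is how `steps`, `batch_size`, and `num_epochs` interact: `pandas_input_fn` raises end-of-input after `num_epochs` passes over the data, so training stops well before `steps=1000000` is reached. A quick back-of-the-envelope check, using the ~673,764 training rows visible in the Keras fit log (treat the exact row count as an assumption):

```python
import math

n_rows = 673_764      # training-set size (taken from the Keras fit log)
batch_size = 10_000
num_epochs = 15

steps_per_epoch = math.ceil(n_rows / batch_size)
total_steps = steps_per_epoch * num_epochs
print(total_steps)    # 1020 -- far fewer than the steps=1000000 requested
```

So the effective number of optimizer steps is capped by the input function, not by the `steps` argument.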
What am I doing wrong here? Am I forgetting something in TensorFlow? Keras runs on top of TF, so I assumed that if the models were tuned the same way, the results should be similar.
I've tried random combinations of layers, neurons, learning rates, and epochs, but nothing worked well. Thanks in advance!
EDIT 1
Thank you for your comments! I tried to apply what you recommended. I dropped DNNRegressor entirely and built everything "manually" with tf.layers.dense. I replicated the Keras structure again (and switched Keras to glorot initialization as well). This is what it looks like now:
import tensorflow as tf
import numpy as np
import uuid
x = tf.placeholder(shape=[None, 30], dtype=tf.float32)
y = tf.placeholder(shape=[None, 1], dtype=tf.float32)
dense = tf.layers.dense(x, 30, activation=tf.nn.relu,
                        bias_initializer=tf.zeros_initializer(),
                        kernel_initializer=tf.glorot_uniform_initializer())
dropout = tf.layers.dropout(inputs=dense, rate=0.1)
dense = tf.layers.dense(dropout, 64, activation=tf.nn.relu,
                        bias_initializer=tf.zeros_initializer(),
                        kernel_initializer=tf.glorot_uniform_initializer())
dropout = tf.layers.dropout(inputs=dense, rate=0.1)
dense = tf.layers.dense(dropout, 64, activation=tf.nn.relu,
                        bias_initializer=tf.zeros_initializer(),
                        kernel_initializer=tf.glorot_uniform_initializer())
dropout = tf.layers.dropout(inputs=dense, rate=0.1)
dense = tf.layers.dense(dropout, 64, activation=tf.nn.relu,
                        bias_initializer=tf.zeros_initializer(),
                        kernel_initializer=tf.glorot_uniform_initializer())
dropout = tf.layers.dropout(inputs=dense, rate=0.1)
dense = tf.layers.dense(dropout, 64, activation=tf.nn.relu,
                        bias_initializer=tf.zeros_initializer(),
                        kernel_initializer=tf.glorot_uniform_initializer())
dropout = tf.layers.dropout(inputs=dense, rate=0.1)
output = tf.layers.dense(dropout, 1, activation=tf.nn.sigmoid)
cost = tf.losses.absolute_difference(y, output)  # MAE
optimizer = tf.train.AdamOptimizer(learning_rate=0.0001).minimize(cost)
init = tf.global_variables_initializer()
tf.summary.scalar("cost", cost)
merged_summary_op = tf.summary.merge_all()

with tf.Session() as sess:
    sess.run(init)
    uniq_id = "/tmp/tensorboard-layers-api/" + uuid.uuid1().__str__()[:6]
    summary_writer = tf.summary.FileWriter(uniq_id, graph=tf.get_default_graph())
    x_vals = X_train
    y_vals = y_train
    for step in range(673764):
        _, val, summary = sess.run([optimizer, cost, merged_summary_op],
                                   feed_dict={x: x_vals, y: y_vals})
        if step % 20 == 0:
            print("step: {}, value: {}".format(step, val))
            summary_writer.add_summary(summary, step)
The TF model is slower, so I couldn't check the output precisely, but the first steps of TF are close to the results of Keras's first epoch:
Epoch 1/15
673764/673764 [==============================] - 13s 19us/step - loss: 57019592.1866 - mean_squared_error: 57019592.1866 - mean_absolute_error: 6883.4074 - mean_absolute_percentage_error: 2668499.3291
TF:
step: 0, value: 6957.24365234375
step: 20, value: 6957.2373046875
step: 40, value: 6957.23583984375
step: 60, value: 6957.22998046875
So the MAE of both models is close, around 6,900. I guess the problem is solved now.
I have just one question left: how do I apply batches in this style of TensorFlow? This is my first time building TF like this, and I haven't found an obvious solution online. Thanks!
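Regarding that last question: with a hand-built graph and `feed_dict`, batching means slicing the NumPy arrays yourself and calling `sess.run` once per slice instead of once on the whole dataset. A minimal, framework-agnostic sketch of the batching logic (the session call is shown only as a comment, since it depends on the graph above):

```python
import numpy as np

def minibatches(X, y, batch_size, shuffle=True, seed=0):
    """Yield (X_batch, y_batch) pairs that together cover the data once."""
    n = len(X)
    idx = np.arange(n)
    if shuffle:
        np.random.default_rng(seed).shuffle(idx)
    for start in range(0, n, batch_size):
        sel = idx[start:start + batch_size]
        yield X[sel], y[sel]

# Inside the session, one epoch would then look roughly like:
# for xb, yb in minibatches(x_vals, y_vals, batch_size=10000):
#     _, val = sess.run([optimizer, cost], feed_dict={x: xb, y: yb})
```

Wrapping that in an outer `for epoch in range(15):` loop (reshuffling each epoch) would approximate Keras's `batch_size=10000, epochs=15` behaviour.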