I have trained a deep learning model for regression, but its accuracy is poor. I am new to deep learning. How can I improve it? The target variable Y is obtained by multiplying feature X1 by feature X2.
Dataset (5,800 rows)
X1       | X2        | Y
1.000000 | 70.000000 | 70.000000
0.714286 | 29.045455 | 20.746753
0.000000 | 35.000000 | 0.000000
0.538462 | 22.071429 | 11.884615
0.000000 | 54.000000 | 0.000000
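For what it's worth, the stated relationship Y = X1 * X2 holds on the sample rows up to rounding; a quick check (array names are mine, not from the question):

```python
import numpy as np

# Sample rows copied from the dataset above (variable names are hypothetical)
X1 = np.array([1.000000, 0.714286, 0.000000, 0.538462, 0.000000])
X2 = np.array([70.000000, 29.045455, 35.000000, 22.071429, 54.000000])
Y = np.array([70.000000, 20.746753, 0.000000, 11.884615, 0.000000])

# Largest residual between X1 * X2 and Y -- should be rounding noise only
print(np.max(np.abs(X1 * X2 - Y)))
```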
Model
from keras.models import Sequential
from keras.layers import Dense
from keras.wrappers.scikit_learn import KerasRegressor
from sklearn.model_selection import KFold, cross_val_score

# Define a larger model
def larger_model():
    # Create model
    model = Sequential()
    model.add(Dense(2, input_dim=2, kernel_initializer='normal', activation='relu'))
    model.add(Dense(6, kernel_initializer='normal', activation='relu'))
    model.add(Dense(1, kernel_initializer='normal'))
    # Compile model
    model.compile(loss='mean_squared_error', optimizer='adam')
    return model

# Evaluate model with 10-fold cross-validation
estimator = KerasRegressor(build_fn=larger_model, epochs=10, batch_size=5)
kfold = KFold(n_splits=10)
results = cross_val_score(estimator, X, y, cv=kfold)
print("Results: %.5f (%.5f) MSE" % (results.mean(), results.std()))
Output
Results: -83.81452 (170.38108) MSE
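One way to read this number: scikit-learn's `cross_val_score` reports scores where higher is better, so the loss comes back negated. Dropping the sign and taking the square root gives an error on Y's own scale (a rough sanity check, assuming the reported value is a plain MSE):

```python
import math

mse = 83.81452        # reported cross-validation score with the sign dropped
rmse = math.sqrt(mse)
print(round(rmse, 3))  # roughly 9.155 -- large relative to Y values in the 0-70 range
```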