When I train my model, the loss increases every epoch. I have a feeling there is a simple fix and I'm missing something obvious, but I can't figure out what it is. Any help would be appreciated.
The neural network:
def neural_network(data):
    hidden_L1 = {'weights': tf.Variable(tf.random_normal([784, neurons_L1])),
                 'biases': tf.Variable(tf.random_normal([neurons_L1]))}
    hidden_L2 = {'weights': tf.Variable(tf.random_normal([neurons_L1, neurons_L2])),
                 'biases': tf.Variable(tf.random_normal([neurons_L2]))}
    output_L = {'weights': tf.Variable(tf.random_normal([neurons_L2, num_of_classes])),
                'biases': tf.Variable(tf.random_normal([num_of_classes]))}

    L1 = tf.add(tf.matmul(data, hidden_L1['weights']), hidden_L1['biases'])  # matrix multiplication
    L1 = tf.nn.relu(L1)
    L2 = tf.add(tf.matmul(L1, hidden_L2['weights']), hidden_L2['biases'])  # matrix multiplication
    L2 = tf.nn.relu(L2)
    output = tf.add(tf.matmul(L2, output_L['weights']), output_L['biases'])  # matrix multiplication
    output = tf.nn.softmax(output)
    return output
My loss, optimiser, and per-epoch loop:
output = neural_network(x)
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=output, labels=y))
optimiser = tf.train.AdamOptimizer().minimize(loss)
init = tf.global_variables_initializer()

epochs = 5
total_batch_count = 60000 // batch_size

with tf.Session() as sess:
    sess.run(init)
    for epoch in range(epochs):
        avg_loss = 0
        for i in range(total_batch_count):
            batch_x, batch_y = next_batch(batch_size, x_train, y_train)
            _, c = sess.run([optimiser, loss], feed_dict={x: batch_x, y: batch_y})
            avg_loss += c / total_batch_count
        print("epoch =", epoch + 1, "loss =", avg_loss)
sess.close()
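The `next_batch` helper is not defined in the snippet above; a minimal sketch of what such a helper might look like, assuming `x_train` and `y_train` are NumPy arrays, is:

```python
import numpy as np

def next_batch(batch_size, x_data, y_data):
    # Hypothetical helper (not the asker's actual code): draw a random
    # mini-batch of `batch_size` rows from the training arrays.
    idx = np.random.choice(len(x_data), batch_size, replace=False)
    return x_data[idx], y_data[idx]
```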
I have a feeling my problem lies in the loss function or the per-epoch loop I wrote, but I'm new to TensorFlow and can't figure it out.