TensorFlow does not learn when input = output (or maybe I'm missing something)

data-mining python neural-network deep-learning tensorflow
2022-03-03 03:54:15

I have previously trained models with TensorFlow to recognize images (similar to this example: https://github.com/tensorflow/models/tree/master/inception). I tried to do something similar from scratch with inception-v4 (using the model from https://github.com/tensorflow/models), but it does not work: the model does not learn. So I decided to strip down my code and simplify everything. The model is a single hidden layer with the same number of neurons as the input, and all it has to learn is input = output. It looks simple and easy, but the results of my code are far from good.

I have tried different things: sigmoid cross entropy instead of softmax cross entropy, other optimizers, different learning rates, bigger models with more neurons and more layers, reduce_sum instead of reduce_mean for the loss... The only times I get a "good" result (>90% precision) it seems to be because the model got stuck in a local minimum, and I cannot reproduce those runs. Usually I see about 70% precision and a high loss (>= 0.7).
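
For reference, the alternative losses I tried were essentially drop-in replacements for the loss line in the full code below (a rough sketch; the exact runs varied a bit):

# sigmoid cross entropy instead of softmax cross entropy
loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=outputs, logits=logits))
# reduce_sum instead of reduce_mean
loss = tf.reduce_sum(
    tf.nn.softmax_cross_entropy_with_logits(labels=outputs, logits=logits))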

My conclusion is that there is some error in my code that I cannot spot, and that is why learning is slow and gets stuck at a high loss. Otherwise, the image code with inception-v4 would learn at least something instead of leaving the loss stuck at the same value the whole time.

import os
import random
import shutil

import numpy as np
import tensorflow as tf
import tensorflow.contrib.layers as layers

# parameters
max_steps = 10000
batch_size = 64
output_size = 10
summary_path = './summary'

with tf.Graph().as_default() as graph:
    shape = [batch_size, output_size]
    inputs = tf.placeholder(dtype=tf.float32, shape=shape, name='inputs')
    outputs = tf.placeholder(dtype=tf.float32, shape=shape, name='outputs')
    global_step = tf.Variable(0, name='global_step', trainable=False)
    logits = layers.fully_connected(inputs, output_size)
    learning_rate = tf.train.exponential_decay(0.1, global_step, max_steps / 3, 0.1,
                                               name='learning_rate')
    # softmax cross entropy against the one-hot targets
    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(labels=outputs, logits=logits, name='loss'))
    optimizer = tf.train.GradientDescentOptimizer(learning_rate)
    train_op = optimizer.minimize(loss, global_step=global_step)
    init_op = tf.global_variables_initializer()
    input_label = tf.argmax(inputs, axis=1) # vector with the input labels
    output_label = tf.argmax(logits, axis=1) # vector with the predicted labels
    tf.summary.scalar('loss', loss)
    tf.summary.scalar('learning_rate', learning_rate)
    summary_op = tf.summary.merge_all()

# prepare summary
if os.path.exists(summary_path):
    shutil.rmtree(summary_path)
os.makedirs(summary_path)
summary_writer = tf.summary.FileWriter(summary_path, graph=graph)

with tf.Session(graph=graph).as_default() as session:
    session.run(init_op)
    to_run = {'step': global_step, 'loss': loss, 'logits': logits, 'train_op': train_op,
              'learning_rate': learning_rate, 'summary': summary_op,
              'in_label': input_label, 'out_label': output_label}
    step = 0
    while step < max_steps:
        # fake input
        data = np.zeros(shape, dtype=float)
        for i in range(batch_size):
            data[i, random.randint(0, output_size - 1)] = 1.0
        # run
        results = session.run(to_run, feed_dict={inputs: data, outputs: data})
        # print data
        step, in_label, out_label = results['step'],  results['in_label'], results['out_label']
        precision = (in_label == out_label).sum() * 100.0 / batch_size
        print('step: {0}/{1}  learning rate: {2:.4f}  loss: {3:.4f}  precision: {4:.1f}%'
              .format(step, max_steps, results['learning_rate'], results['loss'], precision))
        summary_writer.add_summary(results['summary'], step)
    # print last batch
    print(data)
    print(results['logits'])
    print(in_label)
    print(out_label)
    print('precision {}%'.format(precision))

summary_writer.flush()
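
The learning-rate and loss curves below come from the summaries written to ./summary; they can be viewed by pointing TensorBoard at that directory:

tensorboard --logdir ./summary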

[Plots of the learning rate and the loss over training steps]

0 Answers