Training data size for a CNN that determines the winner of a tic-tac-toe game

data-mining  neural-networks  deep-learning  tensorflow
2022-02-19 09:03:53

I'm trying to learn machine learning with tensorflow and have written a program that uses a CNN to determine the result of a given tic-tac-toe game. Its input and output are:

Input - a 9-element array representing the board (0 = empty, 1 = 'X', 2 = 'O')

Output - a 3-element array ([1,0,0] = draw, [0,1,0] = player 1 won, [0,0,1] = player 2 won).
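
To make the encoding concrete, here is one hypothetical training pair under this scheme (the board layout and the numpy usage are my own illustration, not part of the original post):

import numpy as np

# Board:           Encoding (row-major):
#   X | O | X
#   O | X | .      0 = empty, 1 = 'X', 2 = 'O'
#   . | . | X
board = np.array([1, 2, 1,
                  2, 1, 0,
                  0, 0, 1], dtype=np.float32)

# 'X' (player 1) holds the main diagonal, so the label is "player 1 won".
label = np.array([0, 1, 0], dtype=np.float32)  # [draw, player1, player2]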

Below is the tensorflow graph portion of the program (it is a modified version of the tensorflow tutorial at https://www.tensorflow.org/get_started/mnist/pros).

import tensorflow as tf

baseFeatureSize = 64

# Input: a batch of 9-element boards, reshaped into 3x3 single-channel "images"
x = tf.placeholder(tf.float32, shape=[None, 9])
x_image = tf.reshape(x, [-1, 3, 3, 1])

# First 2x2 convolution
W_conv1 = nn_utils.weight_variable([2, 2, 1, baseFeatureSize])
b_conv1 = nn_utils.bias_variable([baseFeatureSize])
h_conv1 = tf.nn.relu(nn_utils.conv2d(x_image, W_conv1) + b_conv1)

# Second 2x2 convolution
W_conv2 = nn_utils.weight_variable([2, 2, baseFeatureSize, baseFeatureSize * 2])
b_conv2 = nn_utils.bias_variable([baseFeatureSize * 2])
h_conv2 = tf.nn.relu(nn_utils.conv2d(h_conv1, W_conv2) + b_conv2)

# Fully connected layer with dropout
W_fc1 = nn_utils.weight_variable([3 * 3 * baseFeatureSize * 2, baseFeatureSize * 4])
b_fc1 = nn_utils.bias_variable([baseFeatureSize * 4])
h_pool2_flat = tf.reshape(h_conv2, [-1, 3 * 3 * baseFeatureSize * 2])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)
keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)

# Output layer: 3 logits (draw / player 1 won / player 2 won)
W_fc2 = nn_utils.weight_variable([baseFeatureSize * 4, 3])
b_fc2 = nn_utils.bias_variable([3])
y_ = tf.placeholder(tf.float32, shape=[None, 3])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y_, logits=y_conv))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# nn_utils helpers used above:
def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)
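
The post does not include the training loop; below is a minimal sketch of how a graph like this is typically driven in TF1 (the next_batch helper and the batch/step counts are assumptions of mine, not part of the original code):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(20000):
        # next_batch is a hypothetical helper returning (boards, labels) arrays
        batch_x, batch_y = next_batch(50)
        sess.run(train_step, feed_dict={x: batch_x, y_: batch_y, keep_prob: 0.5})
        if step % 1000 == 0:
            acc = sess.run(accuracy, feed_dict={x: batch_x, y_: batch_y, keep_prob: 1.0})
            print('step %d, training accuracy %g' % (step, acc))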

Problem - The model above works, but it needs more than 700k training inputs to reach roughly 80% prediction accuracy, even though the total number of possible board permutations is under 300k. If I feed it only the unique permutations, the accuracy gets worse.
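
(For context, the raw input space is small enough to enumerate directly. The sketch below is my own illustration of generating all 3^9 = 19,683 raw 9-cell encodings; the ~300k figure in the post presumably counts move sequences rather than distinct board states.)

import itertools
import numpy as np

# All 3^9 = 19,683 raw encodings (0 = empty, 1 = 'X', 2 = 'O').
# Note that not all of these are reachable in a legal game.
all_boards = np.array(list(itertools.product([0, 1, 2], repeat=9)),
                      dtype=np.float32)
print(all_boards.shape)  # (19683, 9)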

  1. Is it normal for a CNN to require training data far larger than all possible permutations of the input? I assume not, since otherwise networks could never learn to play chess, Go, and so on, so what am I doing wrong?

  2. In your experience, what training sample size usually makes sense for this kind of setup?

1 Answer

I also tried a few other approaches and found that an MLP produces the same or better results for this problem. Thanks to commenters Neil Slater and Student T for pointing me in the right direction.

For anyone interested, here is the model that ultimately worked for me:

import tensorflow as tf

boardSize = 3 * 3
featureSize = 1000  # I experimented with values from 500-1500 with varying results here
output = 3
x = tf.placeholder(tf.float32, [None, boardSize])
y_ = tf.placeholder(tf.float32, [None, output])
learning_rate = 0.01
seed = 128
weights = {
    'hidden': tf.Variable(tf.random_normal([boardSize, featureSize], seed=seed)),
    'hidden2': tf.Variable(tf.random_normal([featureSize, featureSize], seed=seed)),
    'output': tf.Variable(tf.random_normal([featureSize, output], seed=seed))
}
biases = {
    'hidden': tf.Variable(tf.random_normal([featureSize], seed=seed)),
    'hidden2': tf.Variable(tf.random_normal([featureSize], seed=seed)),
    'output': tf.Variable(tf.random_normal([output], seed=seed))
}
keep_prob = tf.placeholder(tf.float32)  # declared for experimentation; dropout is not applied below

# Two fully connected hidden layers with ReLU activations
hidden_layer = tf.add(tf.matmul(x, weights['hidden']), biases['hidden'])
hidden_layer = tf.nn.relu(hidden_layer)
hidden_layer2 = tf.add(tf.matmul(hidden_layer, weights['hidden2']), biases['hidden2'])
hidden_layer2 = tf.nn.relu(hidden_layer2)

y_conv = tf.matmul(hidden_layer2, weights['output']) + biases['output']
cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_))
train_step = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cross_entropy)
correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
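
A minimal sketch of training and evaluating this graph, assuming train_x/train_y and test_x/test_y numpy arrays prepared as described above (the array names and epoch count are my own placeholders, not from the original answer):

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(100):
        # train_x: [N, 9] board encodings, train_y: [N, 3] one-hot outcomes
        _, loss = sess.run([train_step, cross_entropy],
                           feed_dict={x: train_x, y_: train_y})
        if epoch % 10 == 0:
            print('epoch %d, loss %g' % (epoch, loss))
    # Evaluate on a held-out set
    print('test accuracy:', sess.run(accuracy, feed_dict={x: test_x, y_: test_y}))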