What do the `loss` and `accuracy` values mean?

data-mining machine-learning deep-learning tensorflow
2022-02-14 14:57:48

I am using this:

Python version: 3.7.7 (default, May  6 2020, 11:45:54) [MSC v.1916 64 bit (AMD64)]
TensorFlow version: 2.1.0
Eager execution: True

with this U-Net model:

from tensorflow.keras.layers import (Input, Conv2D, MaxPooling2D, UpSampling2D,
                                     Cropping2D, ZeroPadding2D, concatenate)
from tensorflow.keras.models import Model

# img_shape and get_crop_shape are defined elsewhere in my code
inputs = Input(shape=img_shape)

conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', data_format="channels_last", name='conv1_1')(inputs)
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', data_format="channels_last", name='conv1_2')(conv1)
pool1 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool1')(conv1)
conv2 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv2_1')(pool1)
conv2 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv2_2')(conv2)
pool2 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool2')(conv2)

conv3 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv3_1')(pool2)
conv3 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv3_2')(conv3)
pool3 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool3')(conv3)

conv4 = Conv2D(256, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv4_1')(pool3)
conv4 = Conv2D(256, (4, 4), activation='relu', padding='same', data_format="channels_last", name='conv4_2')(conv4)
pool4 = MaxPooling2D(pool_size=(2, 2), data_format="channels_last", name='pool4')(conv4)

conv5 = Conv2D(512, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv5_1')(pool4)
conv5 = Conv2D(512, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv5_2')(conv5)

up_conv5 = UpSampling2D(size=(2, 2), data_format="channels_last", name='up_conv5')(conv5)
ch, cw = get_crop_shape(conv4, up_conv5)
crop_conv4 = Cropping2D(cropping=(ch, cw), data_format="channels_last", name='crop_conv4')(conv4)
up6 = concatenate([up_conv5, crop_conv4])
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv6_1')(up6)
conv6 = Conv2D(256, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv6_2')(conv6)

up_conv6 = UpSampling2D(size=(2, 2), data_format="channels_last", name='up_conv6')(conv6)
ch, cw = get_crop_shape(conv3, up_conv6)
crop_conv3 = Cropping2D(cropping=(ch, cw), data_format="channels_last", name='crop_conv3')(conv3)
up7 = concatenate([up_conv6, crop_conv3])
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv7_1')(up7)
conv7 = Conv2D(128, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv7_2')(conv7)

up_conv7 = UpSampling2D(size=(2, 2), data_format="channels_last", name='up_conv7')(conv7)
ch, cw = get_crop_shape(conv2, up_conv7)
crop_conv2 = Cropping2D(cropping=(ch, cw), data_format="channels_last", name='crop_conv2')(conv2)
up8 = concatenate([up_conv7, crop_conv2])
conv8 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv8_1')(up8)
conv8 = Conv2D(96, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv8_2')(conv8)

up_conv8 = UpSampling2D(size=(2, 2), data_format="channels_last", name='up_conv8')(conv8)
ch, cw = get_crop_shape(conv1, up_conv8)
crop_conv1 = Cropping2D(cropping=(ch, cw), data_format="channels_last", name='crop_conv1')(conv1)
up9 = concatenate([up_conv8, crop_conv1])
conv9 = Conv2D(64, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv9_1')(up9)
conv9 = Conv2D(64, (3, 3), activation='relu', padding='same', data_format="channels_last", name='conv9_2')(conv9)

ch, cw = get_crop_shape(inputs, conv9)
conv9 = ZeroPadding2D(padding=(ch, cw), data_format="channels_last", name='conv9_3')(conv9)
conv10 = Conv2D(1, (1, 1), activation='sigmoid', data_format="channels_last", name='conv10_1')(conv9)
model = Model(inputs=inputs, outputs=conv10)

To compile the model I do:

model.compile(tf.keras.optimizers.Adam(lr=(1e-4) * 2), loss=dice_coef_loss, metrics=['accuracy'])
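The definition of dice_coef_loss is not shown in the question. A common definition (an assumption here, not necessarily the exact one used) is one minus the Dice coefficient, which measures the overlap between the predicted and target masks. A minimal NumPy sketch:

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    # Dice coefficient: 2*|A ∩ B| / (|A| + |B|), smoothed to avoid division
    # by zero; it ranges from 0 (no overlap) to 1 (perfect overlap).
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def dice_coef_loss(y_true, y_pred):
    # Something to minimize: 0 for perfect overlap, approaching 1 for none.
    return 1.0 - dice_coef(y_true, y_pred)

y_true = np.array([1.0, 1.0, 0.0, 0.0])
perfect = dice_coef_loss(y_true, y_true)        # 0.0 (perfect overlap)
worst = dice_coef_loss(y_true, 1.0 - y_true)    # 0.8 (no overlap, smoothed)
```

Under this definition, a loss hovering near 1 (as in the training log below) means the predicted mask barely overlaps the target.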

While training I got this output:

Train on 5 samples, validate on 5 samples
Epoch 1/2
5/5 [==============================] - 8s 2s/sample - loss: 0.9830 - accuracy: 0.1033 - val_loss: 1.0000 - val_accuracy: 0.9469
Epoch 2/2
5/5 [==============================] - 5s 1s/sample - loss: 1.0000 - accuracy: 0.9442 - val_loss: 0.9999 - val_accuracy: 0.9972

I do not understand what the `loss` and `accuracy` values mean (whether they are percentages, etc.).

For example, do these values:

loss: 0.9830 - accuracy: 0.1033

mean 98.3% error and 10.33% success?

And do these values:

loss: 1.0000 - accuracy: 0.9442

mean 100% error and 94.42% success?

I think the `loss` and `accuracy` values may lie in the range [0.0, 1.0] (if I remember correctly).

What do the `loss` and `accuracy` values mean? In other words, I don't know how to tell when my model is getting better.

1 Answer

"I do not understand what the loss and accuracy values mean (whether they are percentages, etc.)."

Loss measures the difference between the network's feed-forward output and the actual target. It is computed with the function you passed as `loss` in the compile method; in your case that is loss=dice_coef_loss, and it is computed after every batch.


Metrics are computed from whatever you pass to the `metrics` parameter of the compile method. In your case that is "accuracy", which means that after every batch the model computes the accuracy (correct predictions / total predictions) and reports it in the printed output.
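With a sigmoid output and binary targets, Keras typically resolves "accuracy" to binary accuracy: predictions are thresholded at 0.5 and compared element-wise with the targets. A minimal sketch of that computation, with made-up numbers:

```python
import numpy as np

# Five binary targets and the model's sigmoid outputs for them.
y_true = np.array([1, 1, 0, 0, 0])
y_prob = np.array([0.9, 0.4, 0.2, 0.6, 0.1])

# Threshold at 0.5, then count the fraction of matching positions.
y_pred = (y_prob > 0.5).astype(int)   # [1, 0, 0, 1, 0]
accuracy = np.mean(y_pred == y_true)  # 3 correct out of 5 -> 0.6
```

Note that in segmentation, where a mask is mostly background, a model that predicts all background already scores high per-pixel accuracy. That is likely why your log can show val_accuracy of 0.9469 while the Dice loss is still at 1.0: accuracy and an overlap-based loss measure very different things.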

An improving metric means the model is learning from batch to batch. If the model is learning the target, the loss will decrease.


That said, learning the training data too well creates another problem: overfitting.

loss: 0.9830 - accuracy: 0.1033

Here the loss value is 0.98 and the accuracy is 10.33% (very poor).
Loss is not a percentage; it is the raw output of the loss function evaluated on y_true and y_pred.
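Dice loss happens to fall in [0, 1], so it can look like a percentage, but loss functions in general are not bounded that way. Binary cross-entropy, for example, grows without limit as predictions become confidently wrong; a quick NumPy illustration:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Clip to avoid log(0), then average the per-element cross-entropy.
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1.0, 0.0])
# Both predictions are confidently wrong, so the loss is about 3.0 --
# well above 1, which a "percentage" could never be.
confidently_wrong = binary_crossentropy(y_true, np.array([0.05, 0.95]))
```

So the only universally meaningful signal is the direction of the loss: the model is improving when the (validation) loss decreases, not when the loss crosses some percentage-like threshold.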