I am debugging the results of a UNET architecture that I use to identify corneal reflections in eye images. Although I get over 99% training accuracy and very high (over 99%) validation accuracy, when I run the validation images through the model myself, all I get from the predictions are blank images. When I train the same architecture with exactly the same parameters, I again get high accuracy numbers, but in that case running the validation set through prediction does produce good results. Here is example output from the dataset where I have the problem, i.e. where the validation metrics and the prediction results do not match:
Train on 326 samples, validate on 140 samples
Epoch 1/1
277s - loss: 0.1961 - dice_coef: 0.0012 - acc: 0.9834 - val_loss: 0.0338 - val_dice_coef: 4.8418e-11 - val_acc: 0.9979
326/326 [==============================] - 79s
Here is my code:
#-------------------------------------------------------
# Define UNET model
#-------------------------------------------------------
import numpy as np
from sklearn.model_selection import train_test_split
from keras import backend as K
from keras.models import Model
from keras.layers import Input, Conv2D, MaxPool2D, UpSampling2D, concatenate, Dropout
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, LearningRateScheduler

print("Compiling UNET Model.....")

# Soft Dice coefficient, used as a metric (the loss is binary cross-entropy).
def dice_coef(y_true, y_pred):
    y_true_f = K.flatten(y_true)
    y_pred_f = K.flatten(y_pred)
    intersection = K.sum(y_true_f * y_pred_f)
    coef = (2. * intersection + K.epsilon()) / (K.sum(y_true_f) + K.sum(y_pred_f) + K.epsilon())
    return coef

# Add a channel dimension (single-channel images) and split off 30% for validation.
x_data = x_data[:, :, :, np.newaxis]
y_data = y_data[:, :, :, np.newaxis]
x_train, x_val, y_train, y_val = train_test_split(x_data, y_data, test_size=0.3)

# Encoder: three conv + max-pool stages.
input_layer = Input(shape=x_train.shape[1:])
c1 = Conv2D(filters=8, kernel_size=(3, 3), activation='relu', padding='same')(input_layer)
l = MaxPool2D(strides=(2, 2))(c1)
c2 = Conv2D(filters=16, kernel_size=(3, 3), activation='relu', padding='same')(l)
l = MaxPool2D(strides=(2, 2))(c2)
c3 = Conv2D(filters=32, kernel_size=(3, 3), activation='relu', padding='same')(l)
l = MaxPool2D(strides=(2, 2))(c3)
c4 = Conv2D(filters=32, kernel_size=(1, 1), activation='relu', padding='same')(l)

# Decoder: upsample and concatenate with the matching encoder features.
l = concatenate([UpSampling2D(size=(2, 2))(c4), c3], axis=-1)
l = Conv2D(filters=32, kernel_size=(2, 2), activation='relu', padding='same')(l)
l = concatenate([UpSampling2D(size=(2, 2))(l), c2], axis=-1)
l = Conv2D(filters=24, kernel_size=(2, 2), activation='relu', padding='same')(l)
l = concatenate([UpSampling2D(size=(2, 2))(l), c1], axis=-1)
l = Conv2D(filters=16, kernel_size=(2, 2), activation='relu', padding='same')(l)
l = Conv2D(filters=64, kernel_size=(1, 1), activation='relu')(l)
l = Dropout(0.5)(l)

# Single-channel sigmoid output: per-pixel probability of "reflection".
output_layer = Conv2D(filters=1, kernel_size=(1, 1), activation='sigmoid')(l)

model = Model(input_layer, output_layer)
model.compile(optimizer=Adam(1e-4), loss='binary_crossentropy', metrics=[dice_coef, 'acc'])

#-------------------------------------------------------
# Train UNET model
#-------------------------------------------------------
if train_opt:
    print("Training UNET Model.....")
    # Save only the weights that give the best validation dice coefficient.
    weight_saver = ModelCheckpoint(weights_file, monitor='val_dice_coef',
                                   save_best_only=True, save_weights_only=True)
    # Decay the learning rate by a factor of 0.8 each epoch.
    annealer = LearningRateScheduler(lambda x: 1e-3 * 0.8 ** x)
    hist = model.fit(x_train, y_train, batch_size=8,
                     validation_data=(x_val, y_val),
                     epochs=1, verbose=2,
                     callbacks=[weight_saver, annealer])
    model.evaluate(x_train, y_train)
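
To show what I mean by "running the validation images myself", this is roughly the prediction step I use after training. It is only a sketch: the 0.5 threshold and the matplotlib display are just how I inspect the output, not part of the training script above.

# Sketch of the prediction/inspection step (illustrative, not exact code).
import matplotlib.pyplot as plt

model.load_weights(weights_file)             # best checkpoint saved during training
preds = model.predict(x_val, batch_size=8)   # shape (num_val_images, H, W, 1)

print(preds.min(), preds.max())              # probabilities stay close to zero
binary = (preds[0, :, :, 0] > 0.5).astype(np.uint8)
print(binary.sum())                          # 0 -> the thresholded mask is completely blank

plt.imshow(preds[0, :, :, 0], cmap='gray')   # shows up as an all-black image
plt.show()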
Any help is greatly appreciated!