Val Accuracy does not increase even though training loss decreases

data-mining python keras tensorflow cnn
2022-02-23 12:36:02

I am training an image classification model. My training accuracy is increasing and my training loss is decreasing, but the validation accuracy stays constant.

Here is my code:

from keras.applications.vgg19 import VGG19
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, BatchNormalization, Dropout
from keras.models import Model
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint, ReduceLROnPlateau, EarlyStopping

model = VGG19(include_top=False, weights='imagenet', input_tensor=None,
              input_shape=(224, 224, 3), pooling=None, classes=1000)
x = model.output

x = Conv2D(filters=1024, kernel_size=2)(x)
x = MaxPooling2D()(x)
x = Flatten()(x)
x = Dense(1024, activation='relu')(x)
x = BatchNormalization(axis=1)(x)
x = Dropout(0.8)(x)
x = Dense(64, activation='relu')(x)
x = Dense(4, activation='softmax')(x)

model = Model(inputs=model.input, outputs=x)

# Freeze the first 12 layers; leave the rest trainable.
for layer in model.layers[:12]:
    layer.trainable = False

for layer in model.layers[12:]:
    layer.trainable = True


opt = Adam(lr=0.0001, decay=1e-6)
model.compile(loss='sparse_categorical_crossentropy',
              optimizer=opt, metrics=['accuracy'])

checkpoint_path="/content/drive/My Drive/Model/model_vgg19_6.h5"

# Note: with metrics=['accuracy'] the logged key is 'val_accuracy', not 'val_acc'.
checkpoint = ModelCheckpoint(checkpoint_path, monitor='val_accuracy', mode='max',
                             save_best_only=True, verbose=1)

reduce_lr = ReduceLROnPlateau(monitor='val_accuracy', mode='max', factor=0.7,
                              patience=5, verbose=1, min_delta=0.00001)

earlystop = EarlyStopping(monitor='val_accuracy', mode='max', min_delta=0,
                          patience=30, verbose=1, restore_best_weights=True)

callbacks = [reduce_lr, checkpoint]

model.fit_generator(aug_train, steps_per_epoch=int((len(data_x) / 128) + 1),
                    validation_data=(val_x, val_y),
                    validation_steps=int((len(val_x) / 128) + 1),
                    workers=-1, use_multiprocessing=True, shuffle=True,
                    epochs=300, callbacks=callbacks)

I get a constant val_acc of 0.24541.

1 Answer

You are starting from a VGG network pretrained on ImageNet, which likely means the weights will not change much (without further modification or a drastically higher learning rate, for example).
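One quick sanity check before fine-tuning (a minimal sketch; `model` here stands for your combined model) is to print each layer's `trainable` flag so you can see what will actually be updated:

# Print index, name, and trainable flag for every layer,
# to verify which part of the network is frozen.
for i, layer in enumerate(model.layers):
    print(i, layer.name, layer.trainable)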

If you expect to improve performance on top of a pretrained network, what you are doing is fine-tuning. The Keras documentation has a section on fine-tuning a Keras implementation of an InceptionV3 network, but the principle is the same: you should freeze some of the early feature-extraction layers and leave only some of the final layers marked as trainable. In the example given in the documentation, they add new layers on top of the base model, train only those for a while, and then additionally unfreeze part of the base model.

# create the base pre-trained model
base_model = InceptionV3(weights='imagenet', include_top=False)   # <--- leave out final layers!
# first: train only the top layers (which were randomly initialized)
# i.e. freeze all convolutional InceptionV3 layers
for layer in base_model.layers:
    layer.trainable = False

-> run training for a few epochs
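A minimal sketch of that first phase, assuming a fresh classification head on top of the frozen base. The layer sizes, optimizer, and epoch count are illustrative choices, not from the documentation verbatim, and `aug_train`, `val_x`, `val_y` are reused from the question:

from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

# New randomly initialized head; only these layers are trainable now,
# so a standard optimizer and learning rate are fine for this phase.
x = GlobalAveragePooling2D()(base_model.output)
x = Dense(256, activation='relu')(x)
predictions = Dense(4, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)

model.compile(optimizer='rmsprop', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(aug_train, validation_data=(val_x, val_y), epochs=5)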

After a while, unfreeze part of the base model:

# we chose to train the top 2 inception blocks, i.e. we will freeze
# the first 249 layers and unfreeze the rest:
for layer in model.layers[:249]:
    layer.trainable = False
for layer in model.layers[249:]:
    layer.trainable = True
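After changing the trainable flags, the model must be compiled again for them to take effect, and the documentation recommends a low learning rate at this stage so the pretrained weights are only nudged. A sketch of that second phase (the SGD settings follow the docs' example; the loss and data are carried over from the question as assumptions):

from keras.optimizers import SGD

# Recompile so the new trainable flags take effect; the low learning
# rate keeps fine-tuning from destroying the pretrained features.
model.compile(optimizer=SGD(lr=0.0001, momentum=0.9),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Continue training the partially unfrozen model.
model.fit(aug_train, validation_data=(val_x, val_y), epochs=20)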

See the full documentation for more details.