Improving the results of a CNN

data-mining keras cnn image-classification confusion-matrix
2022-02-20 10:43:13

Edit 2: I solved my problem. It was caused by the validation_generator: I had used flow_from_directory with shuffle=True. Changing the value to False and calling validation_generator.reset() before model.predict_generator() when computing the confusion matrix solved my problem. The reset() method seems to be very important.
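For reference, a minimal sketch of that fix (the directory and generator settings are taken from the answer's code below; sklearn's confusion_matrix is one way to build the matrix):

import numpy as np
from keras.preprocessing.image import ImageDataGenerator
from sklearn.metrics import confusion_matrix

datagen = ImageDataGenerator(rescale=1./255)
validation_generator = datagen.flow_from_directory(
    'C:/kerasimages/val',        # validation directory used in the answer below
    target_size=(150, 150),
    batch_size=20,
    class_mode='categorical',
    shuffle=False)               # crucial: keep the sample order fixed

validation_generator.reset()     # start again from the first batch
probs = model.predict_generator(validation_generator,
                                steps=len(validation_generator))
y_pred = np.argmax(probs, axis=1)
y_true = validation_generator.classes   # true class indices, in generator order
print(confusion_matrix(y_true, y_pred))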

Edit: I was able to isolate the problem somewhat. I noticed that the evaluate_generator method returns the correct values from training, e.g. [0.068286080902908, 0.9853515625]. However, the predict_generator() method behaves strangely. The result looks like this:

[[8.8930092e-06 5.8127771e-04 3.8436747e-06 7.7528159e-07 9.9940526e-01]
 [1.4138629e-03 9.9854565e-01 5.4473304e-07 3.9719587e-05 1.8904993e-07]
 [9.0803866e-07 2.7020766e-05 7.9189061e-07 4.9350000e-09 9.9997127e-01]
 ...
 [... 4.7919415e-09 1.0000000e+00 1.4161311e-12]
 [2.1354626e-06 9.6519925e-06 1.9460406e-07 4.6475903e-09 9.9998796e-01]]
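Each row above is a softmax distribution over the five classes, so the class prediction is simply the row's argmax. A quick check (preds is a placeholder name for the array above):

import numpy as np
y_pred = np.argmax(preds, axis=1)   # the first three rows give classes [4, 1, 4]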

####

I did some image classification with a CNN. The accuracy on both the training and the validation set is high, and the loss of both is low. However, my confusion matrix does not show the typical diagonal from top left to bottom right. If I understand the confusion matrix correctly, that means I have a lot of misclassifications. So how can I improve my model to get better results?

The number of samples per class is:

Early: 800, Healthy: 749, Late: 764, Leaf Mold: 761, Yellow: 708

Model structure:

from keras import models, layers, optimizers, regularizers

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Dropout(0.15))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Dropout(0.2))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))

model.add(layers.Conv2D(256, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.BatchNormalization())

model.add(layers.Flatten())
model.add(layers.Dropout(0.6))
model.add(layers.Dense(150, activation='relu',
                       kernel_regularizer=regularizers.l2(0.002)))
model.add(layers.Dense(5, activation='softmax'))

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.Adam(lr=1e-3),
              metrics=['acc'])

This is the accuracy and loss during training:

Epoch 00067: val_loss did not improve from 0.08283
Epoch 68/200
230/230 [==============================] - 56s 243ms/step - loss: 0.0893 - acc: 0.9793 - val_loss: 0.0876 - val_acc: 0.9784

Epoch 00068: val_loss did not improve from 0.08283
Epoch 69/200
230/230 [==============================] - 58s 250ms/step - loss: 0.0874 - acc: 0.9774 - val_loss: 0.1209 - val_acc: 0.9684

Epoch 00069: val_loss did not improve from 0.08283
Epoch 70/200
230/230 [==============================] - 57s 246ms/step - loss: 0.0879 - acc: 0.9803 - val_loss: 0.1384 - val_acc: 0.9706

Epoch 00070: val_loss did not improve from 0.08283
Epoch 71/200
230/230 [==============================] - 59s 257ms/step - loss: 0.0903 - acc: 0.9783 - val_loss: 0.1352 - val_acc: 0.9728

Epoch 00071: val_loss did not improve from 0.08283
Epoch 72/200
230/230 [==============================] - 58s 250ms/step - loss: 0.0852 - acc: 0.9798 - val_loss: 0.1324 - val_acc: 0.9621

Epoch 00072: val_loss did not improve from 0.08283
Epoch 73/200
230/230 [==============================] - 58s 250ms/step - loss: 0.0831 - acc: 0.9815 - val_loss: 0.1634 - val_acc: 0.9574

Epoch 00073: val_loss did not improve from 0.08283
Epoch 74/200
230/230 [==============================] - 57s 246ms/step - loss: 0.0824 - acc: 0.9816 - val_loss: 0.1280 - val_acc: 0.9640

Epoch 00074: val_loss did not improve from 0.08283
Epoch 75/200
230/230 [==============================] - 57s 247ms/step - loss: 0.0869 - acc: 0.9774 - val_loss: 0.0777 - val_acc: 0.9882

Epoch 00075: val_loss improved from 0.08283 to 0.07765, saving model to C:/Users/xxx/Desktop/best_model_7.h5
Epoch 76/200
230/230 [==============================] - 56s 243ms/step - loss: 0.0739 - acc: 0.9851 - val_loss: 0.0683 - val_acc: 0.9851

Epoch 00076: val_loss improved from 0.07765 to 0.06826, saving model to C:/Users/xxx/Desktop/best_model_7.h5
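The "saving model to ... best_model_7.h5" lines indicate a ModelCheckpoint callback. A training call matching this log might look like the sketch below (the generator names are assumptions, not code from the question):

from keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint('C:/Users/xxx/Desktop/best_model_7.h5',
                             monitor='val_loss',
                             save_best_only=True,
                             verbose=1)
history = model.fit_generator(train_generator,        # assumed generator names
                              steps_per_epoch=230,    # matches the 230 steps per epoch in the log
                              epochs=200,
                              validation_data=validation_generator,
                              callbacks=[checkpoint])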

[Plots: accuracy and loss; confusion matrix]

1 Answer

This is a bit strange... One problem could be that you don't have very many training samples. Are you using a pretrained model? If not, a pretrained network will probably improve classification accuracy (especially with limited training samples): https://keras.io/applications/

-Edit- Here is a good code example: https://github.com/fchollet/deep-learning-with-python-notebooks/blob/master/5.3-using-a-pretrained-convnet.ipynb

Adjusted for multi-class classification:

import keras

from keras.applications import VGG16

conv_base = VGG16(weights='imagenet',
                  include_top=False,
                  input_shape=(150, 150, 3))

import os
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

base_dir = 'C:/kerasimages'

train_dir = os.path.join(base_dir, 'train')
validation_dir = os.path.join(base_dir, 'val')
test_dir = os.path.join(base_dir, 'test')

datagen = ImageDataGenerator(rescale=1./255)
batch_size = 20

def extract_features(directory, sample_count):
    features = np.zeros(shape=(sample_count, 4, 4, 512))
    labels = np.zeros(shape=(sample_count))
    generator = datagen.flow_from_directory(
        directory,
        target_size=(150, 150),
        batch_size=batch_size,
        # 'sparse' yields integer class indices; to_categorical() below
        # turns them into one-hot vectors for the multi-class case
        class_mode='sparse')
    i = 0
    for inputs_batch, labels_batch in generator:
        features_batch = conv_base.predict(inputs_batch)
        features[i * batch_size : (i + 1) * batch_size] = features_batch
        labels[i * batch_size : (i + 1) * batch_size] = labels_batch
        i += 1
        if i * batch_size >= sample_count:
            # Note that since generators yield data indefinitely in a loop,
            # we must `break` after every image has been seen once.
            break
    return features, labels

train_features, train_labels = extract_features(train_dir, 2000)
validation_features, validation_labels = extract_features(validation_dir, 1000)
test_features, test_labels = extract_features(test_dir, 1000)

from keras.utils import to_categorical
print(train_labels)
print(train_labels.shape)
train_labels = to_categorical(train_labels)
print(train_labels)
print(train_labels.shape)
validation_labels = to_categorical(validation_labels)
test_labels = to_categorical(test_labels)

train_features = np.reshape(train_features, (2000, 4 * 4 * 512))
validation_features = np.reshape(validation_features, (1000, 4 * 4 * 512))
test_features = np.reshape(test_features, (1000, 4 * 4 * 512))
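In the linked notebook, these extracted features are then fed to a small densely connected classifier; that step is missing above. A multi-class version might look like this sketch (the 256 units and 0.5 dropout follow the notebook; the 5 output classes match the question's dataset):

from keras import models, layers, optimizers

clf = models.Sequential()
clf.add(layers.Dense(256, activation='relu', input_dim=4 * 4 * 512))
clf.add(layers.Dropout(0.5))
clf.add(layers.Dense(5, activation='softmax'))   # 5 classes, as in the question

clf.compile(optimizer=optimizers.RMSprop(lr=2e-5),
            loss='categorical_crossentropy',
            metrics=['acc'])

history = clf.fit(train_features, train_labels,
                  epochs=30,
                  batch_size=20,
                  validation_data=(validation_features, validation_labels))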

from keras import models
from keras import layers
from keras import optimizers

model = models.Sequential()
model.add(conv_base)
model.add(layers.Flatten())
model.add(layers.Dense(256, activation='relu'))
# NUMBER OF CLASSES (5 for the question's dataset)
model.add(layers.Dense(5, activation='softmax'))

model.summary()

conv_base.trainable = False
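A quick sanity check (the linked notebook prints this as well): after freezing the base, only the weights of the two Dense layers should remain trainable:

print('trainable weight tensors:', len(model.trainable_weights))
# expect 4: one kernel and one bias tensor per Dense layer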

from keras.preprocessing.image import ImageDataGenerator

train_datagen = ImageDataGenerator(
      rescale=1./255,
      rotation_range=40,
      width_shift_range=0.2,
      height_shift_range=0.2,
      shear_range=0.2,
      zoom_range=0.2,
      horizontal_flip=True,
      fill_mode='nearest')

# Note that the validation data should not be augmented!
test_datagen = ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory(
        # This is the target directory
        train_dir,
        # All images will be resized to 150x150
        target_size=(150, 150),
        batch_size=20,
        # Since we use categorical_crossentropy loss, we need categorical labels
        class_mode='categorical')

validation_generator = test_datagen.flow_from_directory(
        validation_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='categorical')

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=2e-5),
              metrics=['acc'])

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=30,
      validation_data=validation_generator,
      validation_steps=50,
      verbose=2)


#######################################
# Fine tuning

#conv_base.summary()
conv_base.trainable = True

set_trainable = False
for layer in conv_base.layers:
    if layer.name == 'block5_conv1':
        set_trainable = True
    if set_trainable:
        layer.trainable = True
    else:
        layer.trainable = False

model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.RMSprop(lr=1e-5),
              metrics=['acc'])

history = model.fit_generator(
      train_generator,
      steps_per_epoch=100,
      epochs=100,
      validation_data=validation_generator,
      validation_steps=50)

model.save('my_model_multiclass.hdf5')
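Finally, to see whether the confusion matrix actually improves, the fine-tuned model can be evaluated on the held-out test set, using the same shuffle=False pattern from Edit 2 above (a sketch; test_dir and test_datagen are defined earlier in this answer):

test_generator = test_datagen.flow_from_directory(
        test_dir,
        target_size=(150, 150),
        batch_size=20,
        class_mode='categorical',
        shuffle=False)   # keep order fixed so predictions align with labels

test_loss, test_acc = model.evaluate_generator(test_generator,
                                               steps=len(test_generator))
print('test acc:', test_acc)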