Keras exception: ValueError: Error when checking input: expected conv2d_1_input to have shape (150, 150, 3) but got array with shape (256, 256, 3)

data-mining keras image-classification cnn
2021-09-21 03:47:55

I am working on multi-class classification of images. For this I have created a CNN model in Keras, and I have already preprocessed all of the images to size (150, 150, 3). Here is the model summary -

Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 146, 146, 32)      2432      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 71, 71, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 33, 33, 64)        36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 14, 14, 128)       73856     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 128)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 6272)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 300)               1881900   
_________________________________________________________________
dense_2 (Dense)              (None, 10)                3010      
=================================================================
Total params: 2,016,622
Trainable params: 2,016,622
Non-trainable params: 0
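The shapes in the summary can be reproduced by hand, which confirms the model really does expect a 150x150 input. A quick sketch using the standard arithmetic for 'valid' convolutions (output shrinks by kernel size minus 1) and 2x2 max pooling (size halves, rounding down); the helper names are mine, not Keras APIs:

```python
def conv(n, k):
    """Spatial output size of a 'valid' Conv2D with kernel size k."""
    return n - k + 1

def pool(n):
    """Spatial output size of 2x2 MaxPooling2D with stride 2."""
    return n // 2

n = 150
n = pool(conv(n, 5))   # conv2d_1 -> 146, max_pooling2d_1 -> 73
n = pool(conv(n, 3))   # conv2d_2 -> 71,  max_pooling2d_2 -> 35
n = pool(conv(n, 3))   # conv2d_3 -> 33,  max_pooling2d_3 -> 16
n = pool(conv(n, 3))   # conv2d_4 -> 14,  max_pooling2d_4 -> 7
flat = n * n * 128     # flatten_1 -> 6272
print(n, flat)         # -> 7 6272, matching the summary above
```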

I am also using data augmentation and the flow_from_directory method -

train_datagen = image.ImageDataGenerator(
    rescale=1./255,
    rotation_range=40,
    width_shift_range=0.2,
    height_shift_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True,
    fill_mode='nearest')
test_datagen = image.ImageDataGenerator(rescale=1./255)

train_generator = train_datagen.flow_from_directory( new_train, batch_size=20)
validation_generator = test_datagen.flow_from_directory( new_valid, batch_size=20)

Then I compile the model and run fit_generator -

model.compile(loss='categorical_crossentropy', optimizer=sgd, metrics = ['acc',metrics.categorical_accuracy])
history = model.fit_generator( train_generator, steps_per_epoch=100, epochs=10, validation_data=validation_generator, validation_steps=50)
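As a side note on the numbers in that call, steps_per_epoch and validation_steps multiplied by the generator batch size give the number of images consumed per epoch (this is my own arithmetic, not output from the code):

```python
# Values taken from the flow_from_directory and fit_generator calls above.
batch_size = 20
steps_per_epoch = 100
validation_steps = 50

print(steps_per_epoch * batch_size)   # -> 2000 training images drawn per epoch
print(validation_steps * batch_size)  # -> 1000 validation images drawn per epoch
```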

At this point I get the error -

ValueError: Error when checking input: expected conv2d_1_input to have shape (150, 150, 3) but got array with shape (256, 256, 3)

I don't understand how it can be getting (256, 256, 3) when all of the input images are of size (150, 150, 3). Please tell me where I am going wrong.

EDIT

The code I used to create the model is -

model = models.Sequential()
model.add(layers.Conv2D(32, (5, 5), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(300, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

For image preprocessing I used the following -

for image_name in os.listdir(train_dir):
    im = cv2.resize(cv2.imread(os.path.join(train_dir,image_name)), (150, 150)).astype(np.float32)
    if image_name in validation_img:
        cv2.imwrite(os.path.join(new_valid,image_name), im)
    else:
        cv2.imwrite(os.path.join(new_train,image_name), im)
1 Answer

[EDIT:] Your problem is definitely in the generators: you never set the target size, and it defaults to (256, 256) - as shown in the flow_from_directory documentation:

flow_from_directory(directory, target_size=(256, 256), color_mode='rgb', ...)

target_size: tuple of integers (height, width), default: (256, 256). The dimensions to which all images found will be resized.

Try setting the target_size argument to (150, 150) and I think it will work. That default appears to be overriding your preprocessing: even though the files on disk are already 150x150, the generator resizes everything it loads to (256, 256).
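A minimal sketch of the corrected calls, reusing your existing train_datagen, test_datagen, and directory variables (so this fragment is not self-contained on its own):

```python
train_generator = train_datagen.flow_from_directory(
    new_train,
    target_size=(150, 150),  # match the model's input_shape
    batch_size=20)
validation_generator = test_datagen.flow_from_directory(
    new_valid,
    target_size=(150, 150),  # without this, images are resized to (256, 256)
    batch_size=20)
```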


It has to be in your generators - I ran the following code and the model trained as expected:

from keras import models, layers, metrics
import numpy as np

model = models.Sequential()
model.add(layers.Conv2D(32, (5, 5), activation='relu', input_shape=(150, 150, 3)))
model.add(layers.MaxPooling2D(pool_size=(2, 2), strides=(2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(128, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(300, activation='relu'))
model.add(layers.Dense(10, activation='softmax'))

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
conv2d_1 (Conv2D)            (None, 146, 146, 32)      2432      
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 73, 73, 32)        0         
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 71, 71, 64)        18496     
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 35, 35, 64)        0         
_________________________________________________________________
conv2d_3 (Conv2D)            (None, 33, 33, 64)        36928     
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 16, 16, 64)        0         
_________________________________________________________________
conv2d_4 (Conv2D)            (None, 14, 14, 128)       73856     
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 7, 7, 128)         0         
_________________________________________________________________
flatten_1 (Flatten)          (None, 6272)              0         
_________________________________________________________________
dense_1 (Dense)              (None, 300)               1881900   
_________________________________________________________________
dense_2 (Dense)              (None, 10)                3010      
=================================================================
Total params: 2,016,622
Trainable params: 2,016,622
Non-trainable params: 0
_________________________________________________________________
model.compile(loss='categorical_crossentropy', optimizer='sgd', metrics=['acc', metrics.categorical_accuracy])

# Create some fake data to match your inputs. Each label seems to be 10 points: (1, 10)
fakes = np.random.randint(0, 255, (100, 150, 150, 3))
labels = np.random.randint(0, 2, (100, 10))

model.fit(fakes, labels, validation_split=0.2)

Epoch 1/10
80/80 [==============================] - 2s - loss: 62.8913 - acc: 0.1125 - categorical_accuracy: 0.1125 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
Epoch 2/10
80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
Epoch 3/10
80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
Epoch 4/10
80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
Epoch 5/10
80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
Epoch 6/10
80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
Epoch 7/10
80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
Epoch 8/10
80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
Epoch 9/10
80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00
Epoch 10/10
80/80 [==============================] - 0s - loss: 67.0916 - acc: 0.0000e+00 - categorical_accuracy: 0.0000e+00 - val_loss: 71.7255 - val_acc: 0.0000e+00 - val_categorical_accuracy: 0.0000e+00