I'm currently trying to build a CNN for roughly 100,000 images spread across 42 classes. I'm using the default batch size of 32. Here is what my model looks like:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dropout, Flatten, Dense

model = Sequential()

# Feature extractor: three conv / max-pool / dropout blocks
model.add(Conv2D(filters = 32, kernel_size = (3, 3), activation = 'relu', input_shape = training_data.image_shape))
model.add(MaxPool2D(pool_size = (2, 2)))
model.add(Dropout(rate = 0.3))

model.add(Conv2D(filters = 64, kernel_size = (3, 3), activation = 'relu'))
model.add(MaxPool2D(pool_size = (2, 2)))
model.add(Dropout(rate = 0.2))

model.add(Conv2D(filters = 126, kernel_size = (3, 3), activation = 'relu'))
model.add(MaxPool2D(pool_size = (2, 2)))
model.add(Dropout(rate = 0.15))

# Classifier head: two dense layers, then a 42-way softmax (one unit per class)
model.add(Flatten())
model.add(Dense(units = 32, activation = 'relu'))
model.add(Dropout(rate = 0.15))
model.add(Dense(units = 64, activation = 'relu'))
model.add(Dropout(rate = 0.1))
model.add(Dense(units = 42, activation = 'softmax'))

model.compile(optimizer = 'adam', loss = 'categorical_crossentropy', metrics = ['accuracy'])
However, training takes extremely long, roughly 35 minutes per epoch, and accuracy is very low and improves only slowly.
My JupyterLab session also stops occasionally and I have to restart everything. So is there a way to train on smaller mini-batches, or some other way to speed up training? Any help is appreciated; this is a very large dataset.
Epoch 1/15
2307/2307 [==============================] - 3999s 2s/step - loss: 3.5377 - accuracy: 0.0687 - val_loss: 3.3247 - val_accuracy: 0.1223
Epoch 2/15
2307/2307 [==============================] - 3764s 2s/step - loss: 3.2884 - accuracy: 0.1239 - val_loss: 3.1065 - val_accuracy: 0.1739
Epoch 3/15
2307/2307 [==============================] - 2204s 955ms/step - loss: 3.1435 - accuracy: 0.1562 - val_loss: 2.9825 - val_accuracy: 0.2069
Epoch 4/15
2307/2307 [==============================] - 2193s 951ms/step - loss: 3.0526 - accuracy: 0.1778 - val_loss: 2.9059 - val_accuracy: 0.2171
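Since the data-loading code isn't shown above, here is a minimal sketch of how a mini-batch size is typically specified when images are streamed from disk with Keras's ImageDataGenerator / flow_from_directory. The directory path, image size, and batch size of 64 are placeholder assumptions for illustration, not the actual setup.

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values and hold out 20% of the images for validation
datagen = ImageDataGenerator(rescale = 1.0 / 255, validation_split = 0.2)

train_gen = datagen.flow_from_directory(
    'data/train',              # hypothetical directory with one sub-folder per class
    target_size = (128, 128),  # hypothetical image size
    batch_size = 64,           # mini-batch size; 32 is the Keras default
    class_mode = 'categorical',
    subset = 'training')

val_gen = datagen.flow_from_directory(
    'data/train',
    target_size = (128, 128),
    batch_size = 64,
    class_mode = 'categorical',
    subset = 'validation')

# model.fit consumes one mini-batch at a time from the generators,
# so the full dataset never has to sit in memory at once
model.fit(train_gen, epochs = 15, validation_data = val_gen)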