I am a bit confused. I have read that batch normalization leads to faster convergence and higher accuracy, but in my case the opposite happened: with batch normalization, my accuracy actually decreased. Am I missing something?
Here is the code I am using:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Bidirectional, LSTM, Conv1D, BatchNormalization,
                                     Activation, MaxPooling1D, Dropout, Flatten, Dense)

model = Sequential()
# Bidirectional LSTM front end over the input sequences
model.add(Bidirectional(LSTM(64, return_sequences=True),
                        input_shape=(X_train.shape[1], X_train.shape[2])))
# Three Conv1D blocks, each ordered Conv -> BatchNorm -> ReLU -> MaxPool
model.add(Conv1D(filters=16, kernel_size=3, padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(filters=32, kernel_size=3, padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Conv1D(filters=64, kernel_size=3, padding='same'))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(MaxPooling1D(pool_size=2))
model.add(Dropout(0.3))
model.add(Flatten())
# Two dense blocks, each ordered Dense -> BatchNorm -> ReLU -> Dropout
model.add(Dense(150))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.4))
model.add(Dense(10))
model.add(BatchNormalization())
model.add(Activation('relu'))
model.add(Dropout(0.4))
# Softmax output over the one-hot encoded labels
model.add(Dense(dummy_y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['categorical_accuracy'])
model.summary()
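
For context, the with/without comparison I describe above boils down to something like the sketch below. build_model, the use_batch_norm flag, and the fit hyperparameters are illustrative only (not my exact training script); X_train and dummy_y are the same arrays as above, and the imports are the same as in the code above.

def build_model(use_batch_norm):
    # Same architecture as above; BatchNormalization layers are inserted
    # only when use_batch_norm is True.
    model = Sequential()
    model.add(Bidirectional(LSTM(64, return_sequences=True),
                            input_shape=(X_train.shape[1], X_train.shape[2])))
    for filters in (16, 32, 64):
        model.add(Conv1D(filters=filters, kernel_size=3, padding='same'))
        if use_batch_norm:
            model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(MaxPooling1D(pool_size=2))
    model.add(Dropout(0.3))
    model.add(Flatten())
    for units in (150, 10):
        model.add(Dense(units))
        if use_batch_norm:
            model.add(BatchNormalization())
        model.add(Activation('relu'))
        model.add(Dropout(0.4))
    model.add(Dense(dummy_y.shape[1], activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam',
                  metrics=['categorical_accuracy'])
    return model

# Train both variants under identical settings and compare the best
# validation accuracy each one reaches (illustrative hyperparameters).
for use_batch_norm in (False, True):
    model = build_model(use_batch_norm)
    history = model.fit(X_train, dummy_y, validation_split=0.2,
                        epochs=30, batch_size=64, verbose=0)
    print('BN' if use_batch_norm else 'no BN',
          max(history.history['val_categorical_accuracy']))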
Is this the correct placement for batch normalization, or am I doing something wrong?