The Keras model looks like this:
from keras.layers import (Input, Embedding, Bidirectional, LSTM,
                          GlobalMaxPooling1D, Dense, concatenate)

features_input = Input(shape=(features.shape[1],))
inp = Input(shape=(maxlen,))
x = Embedding(max_features, embed_size, weights=[embedding_matrix], trainable=False)(inp)
x = Bidirectional(LSTM(num_filters, return_sequences=True))(x)
max_pool = GlobalMaxPooling1D()(x)
# x_h is produced earlier in the model (definition not shown here)
x = concatenate([x_h, max_pool, features_input])
outp = Dense(6, activation="sigmoid")(x)
What does `GlobalMaxPooling1D()(x)` actually do to the LSTM output? I know that with `return_sequences=True` the LSTM layer outputs a tensor of shape `(batch_size, steps, features)`.
Does `GlobalMaxPooling1D` take the maximum over the timesteps for each of the `num_filters` hidden units?
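For reference, `GlobalMaxPooling1D` reduces over the steps axis (axis 1), keeping one value per feature channel. A minimal numpy sketch of the same computation, with made-up shapes:

```python
import numpy as np

# An LSTM with return_sequences=True outputs (batch_size, steps, features).
# GlobalMaxPooling1D takes the max over the steps axis (axis=1), producing
# (batch_size, features): for each feature channel, the largest activation
# that channel reached across all timesteps.
batch_size, steps, features = 2, 4, 3
x = np.arange(batch_size * steps * features, dtype="float32")
x = x.reshape(batch_size, steps, features)

pooled = x.max(axis=1)  # equivalent to GlobalMaxPooling1D()(x)
print(pooled.shape)     # (2, 3)
```

Note that with `Bidirectional(LSTM(num_filters, ...))` the feature dimension is `2 * num_filters`, so `max_pool` has shape `(batch_size, 2 * num_filters)`.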