I'm trying to train a model to distinguish between two kinds of time signals: ones containing RTS noise and ones containing only white noise.
I have a simple 1D CNN that works well on one training set (92% accuracy) but turns into a complete coin flip on the other. To the eye, the two sets look very similar. One was created from real signals, the other from simulated signals. The only real difference I can see between them is the average amplitude. Is there a reason the model fails so reliably on the second set? Do I need to normalize the data somehow?
from keras.models import Sequential
from keras.layers import Dense, Dropout,Activation
from keras.layers import Embedding
from keras.layers import Conv1D, GlobalAveragePooling1D, MaxPooling1D, Flatten, LSTM
import numpy as np
x_test = np.load('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/x_test.npy')
x_train = np.load('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/x_train.npy')
y_test = np.load('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/y_test.npy')
y_train = np.load('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/y_train.npy')
X_train = np.expand_dims(x_train, axis=2)
X_test = np.expand_dims(x_test, axis=2)
model = Sequential()
model.add(Conv1D(32, 12, activation='relu', input_shape=(1500, 1)))
model.add(MaxPooling1D(3))
model.add(Conv1D(64, 12, activation='relu'))
model.add(MaxPooling1D(3))
model.add(Conv1D(128, 12, activation='relu'))
model.add(GlobalAveragePooling1D())
model.add(Dropout(0.5))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy',
optimizer='rmsprop',
metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=16, epochs=5)
score = model.evaluate(X_test, y_test, batch_size=16)
#model.save('C:/Users/Ben WORK ONLY/Desktop/GH repos/RTS ML detect beta/CNNlin_model.h5')
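Since the only visible difference between the two sets is average amplitude, one thing worth trying is standardizing each trace before training. This is a sketch of what I mean, not part of my current pipeline; the function name and the epsilon are my own:

```python
import numpy as np

def standardize_traces(x):
    """Zero-mean, unit-variance normalization applied per trace.

    x: array of shape (n_traces, n_samples). Removing each trace's own
    mean and scale keeps the RTS/white-noise structure but discards the
    absolute amplitude, which is what differs between the two data sets.
    """
    mean = x.mean(axis=1, keepdims=True)
    std = x.std(axis=1, keepdims=True)
    return (x - mean) / (std + 1e-8)  # epsilon guards against flat traces

# Two traces with very different amplitudes end up on the same scale
x = np.vstack([np.random.randn(1500), 50.0 * np.random.randn(1500) + 10.0])
x_norm = standardize_traces(x)
print(x_norm.mean(axis=1))  # both ~0
print(x_norm.std(axis=1))   # both ~1
```

Applied to `x_train` and `x_test` before the `np.expand_dims` calls, this would make the model see both sets on the same amplitude scale.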