The relationship between training accuracy and validation accuracy

artificial-intelligence neural-network machine-learning convolutional-neural-network training cross-validation
2021-10-21 01:00:16

During model training I have noticed various behaviors of the training and validation accuracy. I understand that 'the training set is used to fit the model, while the validation set is only used to evaluate its performance...', but I would like to know whether there is any relationship between training and validation accuracy, and if so:

1) what exactly is happening when the training and validation accuracy change during training;

2) what the different behaviors mean.

For example, some people say that if training accuracy > validation accuracy, there is an overfitting problem. What does it mean when each is alternately greater than the other, as shown below?

Here is the code:

from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.layers import Conv1D, Dense
from sklearn.model_selection import train_test_split

# Convolutional branch over the 10081-point signal
inputs_1 = keras.Input(shape=(10081, 1))
layer1 = Conv1D(64, 14)(inputs_1)
layer2 = layers.MaxPool1D(5)(layer1)
layer3 = Conv1D(64, 14)(layer2)
layer4 = layers.GlobalMaxPooling1D()(layer3)

# Second input: 104 extra features, concatenated with the conv features
inputs_2 = keras.Input(shape=(104,))
layer5 = layers.concatenate([layer4, inputs_2])
layer6 = Dense(128, activation='relu')(layer5)
layer7 = Dense(2, activation='softmax')(layer6)

# Note: the keyword is `outputs`, not `output`; the original `output=` is what
# triggers the Keras 2 API warning shown in the model summary below.
model_2 = keras.models.Model(inputs=[inputs_1, inputs_2], outputs=[layer7])
model_2.summary()

# 80/20 train/test split; columns 0:10081 feed the conv branch,
# columns 10081:10185 hold the 104 extra features
X_train, X_test, y_train, y_test = train_test_split(
    df.iloc[:, 0:10185], df[['Result_cat', 'Result_cat1']].values, test_size=0.2)
X_train = X_train.to_numpy()
X_train = X_train.reshape([X_train.shape[0], X_train.shape[1], 1])
X_train_1 = X_train[:, 0:10081, :]
X_train_2 = X_train[:, 10081:10185, :].reshape(736, 104)   # 736 training samples

X_test = X_test.to_numpy()
X_test = X_test.reshape([X_test.shape[0], X_test.shape[1], 1])
X_test_1 = X_test[:, 0:10081, :]
X_test_2 = X_test[:, 10081:10185, :].reshape(185, 104)     # 185 test samples

adam = keras.optimizers.Adam(lr=0.0005)
model_2.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['acc'])

history = model_2.fit([X_train_1, X_train_2], y_train,
                      epochs=120, batch_size=256, validation_split=0.2,
                      callbacks=[keras.callbacks.EarlyStopping(monitor='val_loss', patience=20)])

Model summary:

/usr/local/lib/python3.6/dist-packages/ipykernel_launcher.py:15: UserWarning: Update your `Model` call to the Keras 2 API: `Model(inputs=[<tf.Tenso..., outputs=[<tf.Tenso...)`
  from ipykernel import kernelapp as app
Model: "model_3"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
input_5 (InputLayer)            (None, 10081, 1)     0                                            
__________________________________________________________________________________________________
conv1d_5 (Conv1D)               (None, 10068, 64)    960         input_5[0][0]                    
__________________________________________________________________________________________________
max_pooling1d_3 (MaxPooling1D)  (None, 2013, 64)     0           conv1d_5[0][0]                   
__________________________________________________________________________________________________
conv1d_6 (Conv1D)               (None, 2000, 64)     57408       max_pooling1d_3[0][0]            
__________________________________________________________________________________________________
global_max_pooling1d_3 (GlobalM (None, 64)           0           conv1d_6[0][0]                   
__________________________________________________________________________________________________
input_6 (InputLayer)            (None, 104)          0                                            
__________________________________________________________________________________________________
concatenate_3 (Concatenate)     (None, 168)          0           global_max_pooling1d_3[0][0]     
                                                                 input_6[0][0]                    
__________________________________________________________________________________________________
dense_5 (Dense)                 (None, 128)          21632       concatenate_3[0][0]              
__________________________________________________________________________________________________
dense_6 (Dense)                 (None, 2)            258         dense_5[0][0]                    
==================================================================================================
Total params: 80,258
Trainable params: 80,258
Non-trainable params: 0

And the training process:

__________________________________________________________________________________________________
Train on 588 samples, validate on 148 samples
Epoch 1/120
588/588 [==============================] - 16s 26ms/step - loss: 5.6355 - acc: 0.4932 - val_loss: 4.1086 - val_acc: 0.6216
Epoch 2/120
588/588 [==============================] - 15s 25ms/step - loss: 4.5977 - acc: 0.5748 - val_loss: 3.8252 - val_acc: 0.4459
Epoch 3/120
588/588 [==============================] - 15s 25ms/step - loss: 4.3815 - acc: 0.4575 - val_loss: 2.4087 - val_acc: 0.6622
Epoch 4/120
588/588 [==============================] - 15s 25ms/step - loss: 3.7480 - acc: 0.6003 - val_loss: 2.0060 - val_acc: 0.6892
Epoch 5/120
588/588 [==============================] - 15s 25ms/step - loss: 3.3019 - acc: 0.5408 - val_loss: 2.3176 - val_acc: 0.5676
Epoch 6/120
588/588 [==============================] - 15s 25ms/step - loss: 3.1739 - acc: 0.5663 - val_loss: 2.2607 - val_acc: 0.6892
Epoch 7/120
588/588 [==============================] - 15s 25ms/step - loss: 3.2322 - acc: 0.6207 - val_loss: 1.8898 - val_acc: 0.7230
Epoch 8/120
588/588 [==============================] - 15s 25ms/step - loss: 2.9777 - acc: 0.6020 - val_loss: 1.8401 - val_acc: 0.7500
Epoch 9/120
588/588 [==============================] - 15s 25ms/step - loss: 2.8982 - acc: 0.6429 - val_loss: 1.8517 - val_acc: 0.7365
Epoch 10/120
588/588 [==============================] - 15s 25ms/step - loss: 2.8342 - acc: 0.6344 - val_loss: 1.7941 - val_acc: 0.7095
Epoch 11/120
588/588 [==============================] - 15s 25ms/step - loss: 2.7426 - acc: 0.6327 - val_loss: 1.8495 - val_acc: 0.7162
Epoch 12/120
588/588 [==============================] - 15s 25ms/step - loss: 2.7340 - acc: 0.6531 - val_loss: 1.7652 - val_acc: 0.7162
Epoch 13/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6680 - acc: 0.6616 - val_loss: 1.8097 - val_acc: 0.7365
Epoch 14/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6922 - acc: 0.6786 - val_loss: 1.7143 - val_acc: 0.7500
Epoch 15/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6161 - acc: 0.6786 - val_loss: 1.6960 - val_acc: 0.7568
Epoch 16/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6054 - acc: 0.6905 - val_loss: 1.6779 - val_acc: 0.7297
Epoch 17/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6072 - acc: 0.6684 - val_loss: 1.6750 - val_acc: 0.7703
Epoch 18/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5907 - acc: 0.6871 - val_loss: 1.6774 - val_acc: 0.7432
Epoch 19/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5854 - acc: 0.6718 - val_loss: 1.6609 - val_acc: 0.7770
Epoch 20/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5621 - acc: 0.6905 - val_loss: 1.6709 - val_acc: 0.7365
Epoch 21/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5515 - acc: 0.6854 - val_loss: 1.6904 - val_acc: 0.7703
Epoch 22/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5749 - acc: 0.6837 - val_loss: 1.6862 - val_acc: 0.7297
Epoch 23/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6212 - acc: 0.6514 - val_loss: 1.7215 - val_acc: 0.7568
Epoch 24/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6532 - acc: 0.6633 - val_loss: 1.7105 - val_acc: 0.7230
Epoch 25/120
588/588 [==============================] - 15s 25ms/step - loss: 2.7300 - acc: 0.6344 - val_loss: 1.6870 - val_acc: 0.7432
Epoch 26/120
588/588 [==============================] - 15s 25ms/step - loss: 2.7355 - acc: 0.6650 - val_loss: 1.6733 - val_acc: 0.7703
Epoch 27/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6336 - acc: 0.6650 - val_loss: 1.6572 - val_acc: 0.7297
Epoch 28/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6018 - acc: 0.6803 - val_loss: 1.7292 - val_acc: 0.7635
Epoch 29/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5448 - acc: 0.7143 - val_loss: 1.8065 - val_acc: 0.7095
Epoch 30/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5724 - acc: 0.6820 - val_loss: 1.8029 - val_acc: 0.7297
Epoch 31/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6622 - acc: 0.6650 - val_loss: 1.6594 - val_acc: 0.7568
Epoch 32/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6211 - acc: 0.6582 - val_loss: 1.6375 - val_acc: 0.7770
Epoch 33/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5911 - acc: 0.6854 - val_loss: 1.6964 - val_acc: 0.7500
Epoch 34/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5050 - acc: 0.7262 - val_loss: 1.8496 - val_acc: 0.6892
Epoch 35/120
588/588 [==============================] - 15s 25ms/step - loss: 2.6012 - acc: 0.6752 - val_loss: 1.7443 - val_acc: 0.7432
Epoch 36/120
588/588 [==============================] - 15s 25ms/step - loss: 2.5688 - acc: 0.6871 - val_loss: 1.6220 - val_acc: 0.7568
Epoch 37/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4843 - acc: 0.7279 - val_loss: 1.6166 - val_acc: 0.7905
Epoch 38/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4707 - acc: 0.7449 - val_loss: 1.6496 - val_acc: 0.7905
Epoch 39/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4683 - acc: 0.7109 - val_loss: 1.6641 - val_acc: 0.7432
Epoch 40/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4671 - acc: 0.7279 - val_loss: 1.6553 - val_acc: 0.7703
Epoch 41/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4479 - acc: 0.7347 - val_loss: 1.6302 - val_acc: 0.7973
Epoch 42/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4355 - acc: 0.7551 - val_loss: 1.6241 - val_acc: 0.7973
Epoch 43/120
588/588 [==============================] - 14s 25ms/step - loss: 2.4286 - acc: 0.7568 - val_loss: 1.6249 - val_acc: 0.7973
Epoch 44/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4250 - acc: 0.7585 - val_loss: 1.6248 - val_acc: 0.7770
Epoch 45/120
588/588 [==============================] - 14s 25ms/step - loss: 2.4198 - acc: 0.7517 - val_loss: 1.6212 - val_acc: 0.7703
Epoch 46/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4246 - acc: 0.7568 - val_loss: 1.6129 - val_acc: 0.7838
Epoch 47/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4237 - acc: 0.7517 - val_loss: 1.6166 - val_acc: 0.7973
Epoch 48/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4287 - acc: 0.7432 - val_loss: 1.6309 - val_acc: 0.8041
Epoch 49/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4179 - acc: 0.7381 - val_loss: 1.6271 - val_acc: 0.7838
Epoch 50/120
588/588 [==============================] - 15s 25ms/step - loss: 2.4164 - acc: 0.7381 - val_loss: 1.6258 - val_acc: 0.7973
Epoch 51/120
588/588 [==============================] - 14s 24ms/step - loss: 2.1996 - acc: 0.7398 - val_loss: 1.3612 - val_acc: 0.7973
Epoch 52/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1387 - acc: 0.8265 - val_loss: 1.4811 - val_acc: 0.7973
Epoch 53/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1607 - acc: 0.8078 - val_loss: 1.5060 - val_acc: 0.7838
Epoch 54/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1783 - acc: 0.8129 - val_loss: 1.4878 - val_acc: 0.8176
Epoch 55/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1745 - acc: 0.8197 - val_loss: 1.4762 - val_acc: 0.8108
Epoch 56/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1764 - acc: 0.8129 - val_loss: 1.4631 - val_acc: 0.7905
Epoch 57/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1637 - acc: 0.8078 - val_loss: 1.4615 - val_acc: 0.7770
Epoch 58/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1563 - acc: 0.8112 - val_loss: 1.4487 - val_acc: 0.7703
Epoch 59/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1396 - acc: 0.8146 - val_loss: 1.4362 - val_acc: 0.7905
Epoch 60/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1240 - acc: 0.8316 - val_loss: 1.4333 - val_acc: 0.8041
Epoch 61/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1173 - acc: 0.8333 - val_loss: 1.4369 - val_acc: 0.8041
Epoch 62/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1228 - acc: 0.8384 - val_loss: 1.4393 - val_acc: 0.8041
Epoch 63/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1113 - acc: 0.8316 - val_loss: 1.4380 - val_acc: 0.8041
Epoch 64/120
588/588 [==============================] - 15s 25ms/step - loss: 1.1102 - acc: 0.8452 - val_loss: 1.4217 - val_acc: 0.8041
Epoch 65/120
588/588 [==============================] - 15s 25ms/step - loss: 1.0961 - acc: 0.8469 - val_loss: 1.4129 - val_acc: 0.7973
Epoch 66/120
588/588 [==============================] - 15s 25ms/step - loss: 1.0903 - acc: 0.8537 - val_loss: 1.4019 - val_acc: 0.8041
Epoch 67/120
588/588 [==============================] - 15s 25ms/step - loss: 1.0890 - acc: 0.8503 - val_loss: 1.3850 - val_acc: 0.8176
Epoch 68/120
588/588 [==============================] - 15s 25ms/step - loss: 1.0878 - acc: 0.8520 - val_loss: 1.4035 - val_acc: 0.7635
Epoch 69/120
588/588 [==============================] - 15s 25ms/step - loss: 1.0984 - acc: 0.8469 - val_loss: 1.4060 - val_acc: 0.8041
Epoch 70/120
588/588 [==============================] - 15s 25ms/step - loss: 1.0893 - acc: 0.8418 - val_loss: 1.3981 - val_acc: 0.7973
Epoch 71/120
588/588 [==============================] - 15s 25ms/step - loss: 1.0876 - acc: 0.8605 - val_loss: 1.3951 - val_acc: 0.8041

Notice how acc is at first lower than val_acc and later greater than val_acc. Can someone shed some light on what might be going on here? Thanks!

1 Answer

Very interesting questions:

1) What exactly is happening when the training and validation accuracy change during training?

  • acc changes after every batch computation. You have 588 training samples, so with batch_size = 256 the loss and accuracy are computed after each of the three batches per epoch. However, the accuracy you see in the progress bar (keras.utils.generic_utils.Progbar) is the running average of the current batch's accuracy and that of all the batches seen so far in the epoch.
  • val_acc is computed only at the end of an epoch, and over the whole validation set at once (think of it as a single batch: if you have 100 validation images, the accuracy is computed over those 100 images as one batch). A short sketch after this list makes the difference concrete.
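
To make this concrete, here is a minimal sketch with made-up per-batch accuracies (the batch numbers are hypothetical; the validation count of 148 matches the log above, where 92/148 correct gives the epoch-1 val_acc of 0.6216):

import numpy as np

batch_acc = [0.45, 0.55, 0.70]              # hypothetical accuracy of each training batch in one epoch
for i in range(1, len(batch_acc) + 1):
    running = np.mean(batch_acc[:i])        # what the progress bar displays after batch i
    print(f'after batch {i}: acc shown = {running:.4f}')

correct = np.array([1] * 92 + [0] * 56)     # 148 validation samples, evaluated once at epoch end
print(f'val_acc = {correct.mean():.4f}')    # 0.6216, as in epoch 1 of the log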

2) What do the different behaviors mean?

  • acc and val_acc usually differ from each other because of the split sizes.

    • Try the same experiment with validation_split=0.01 and with validation_split=0.4 and you will see how both accuracy and val_acc change (a sketch of this experiment follows the list).
    • In general, the larger the validation split, the more similar the two metrics will be, because the validation set is then large enough to be representative (say, it contains both cats and dogs rather than only cats), bearing in mind that you also need enough data left over to train properly. This explains why in some cases val_acc is higher than accuracy, and vice versa.
  • Overfitting only happens when the shape or trend of the curves changes: val_acc starts to drop while accuracy keeps increasing. That means your model can no longer do better on the validation set (images it has never seen before).
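
Here is a hedged sketch of that experiment, reusing the model and arrays from the question; keras.models.clone_model gives a copy of the architecture with freshly initialized weights, so each run starts from scratch:

histories = {}
for split in (0.01, 0.4):
    m = keras.models.clone_model(model_2)   # same architecture, fresh weights for a fair comparison
    m.compile(loss='categorical_crossentropy',
              optimizer=keras.optimizers.Adam(lr=0.0005),
              metrics=['acc'])
    histories[split] = m.fit([X_train_1, X_train_2], y_train,
                             epochs=30, batch_size=256,
                             validation_split=split, verbose=0)

for split, h in histories.items():
    print(f"validation_split={split}: "
          f"final acc={h.history['acc'][-1]:.4f}, final val_acc={h.history['val_acc'][-1]:.4f}")

With 736 training rows, a 0.01 split leaves only about 7 validation samples, so val_acc will jump around in coarse steps; the 0.4 split should track acc much more closely.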

Note that in the plot below I use loss and val_loss rather than accuracy and val_acc. Loss usually behaves the other way around (lower is better), so interpret the comments above in the opposite sense (sorry for the confusion, but I took this example from my current experiments). I hope it helps:

[Image: loss and val_loss curves for two experiments, orange and gray]

There are two experiments, orange and gray.

  • In both experiments val_loss is always slightly higher than loss (because my current validation split happens to be 0.2 as well; usually it is 0.01, and then val_loss is even higher).

  • In both experiments the loss trend decreases roughly linearly, because gradient descent is doing its job and the loss function is well defined and converging.

  • The orange experiment starts overfitting from around epoch 20, because val_loss stops decreasing and instead starts to increase.

  • The gray experiment is just right: both loss and val_loss are decreasing. Even though val_loss may be greater than loss, it is not overfitting, because it is still going down. That is why it is still training :) (the sketch below shows how to produce this kind of plot from your own history object).
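
To inspect your own run the same way, here is a small sketch (assuming the history object returned by fit in the question) that plots loss against val_loss and marks the epoch where val_loss bottoms out, a common first sign that overfitting is beginning:

import numpy as np
import matplotlib.pyplot as plt

train_loss = history.history['loss']
val_loss = history.history['val_loss']

plt.plot(train_loss, label='loss')
plt.plot(val_loss, label='val_loss')
best = int(np.argmin(val_loss))             # epoch index with the lowest val_loss
plt.axvline(best, linestyle='--', label=f'min val_loss (epoch {best + 1})')
plt.xlabel('epoch')
plt.ylabel('categorical cross-entropy')
plt.legend()
plt.show()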

The concepts here are somewhat involved; I hope I have explained myself clearly! Cheers