Why does a neural network give different results on every run?

data-mining python neural-network scikit-learn rmse
2022-02-17 23:20:06

I wrote a simple neural network (an MLPRegressor) to fit a single DataFrame column. To find the best architecture, I also wrapped it in a function so I could check whether it converges to a pattern. But every time I run the model it gives me results different from the previous attempt, and I don't know why. Since it is quite difficult to make the problem reproducible, I can't post the data, but here is the network architecture:

```
# Imports needed by the snippets below (not shown in the original post)
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

def MLP():  # After 50 runs
    nn = 30                    # neurons per layer
    nl = 25                    # number of layers
    a = 2                      # activation index
    s = 0                      # solver index
    learn = 2                  # learning-rate schedule index
    learn_in = 4.22220046e-05  # initial learning rate
    max_i = 1000               # max iterations
    return nn, nl, a, s, learn, learn_in, max_i



def process(df):
    y = df.iloc[:,-1]
    X = df.drop(columns=['col3'])
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=27)
    return X_train, X_test, y_train, y_test

def MLPreg(x_train, y_train):
    nn, nl, a, s, learn, learn_in, max_i = MLP()
    act = ['identity', 'logistic', 'relu', 'tanh']  # 'identity' = linear
    activ = act[a]
    sol = ['lbfgs', 'sgd', 'adam']
    solv = sol[s]
    l_r = ['constant', 'invscaling', 'adaptive']
    lr = l_r[learn]
    model = MLPRegressor(hidden_layer_sizes=(nl, nn), activation=activ, solver=solv,
                         alpha=0.00001, batch_size='auto', learning_rate=lr,
                         learning_rate_init=learn_in, power_t=0.5, max_iter=max_i,
                         shuffle=True, random_state=None, tol=0.0001, verbose=False,
                         warm_start=False, momentum=0.9, nesterovs_momentum=True,
                         early_stopping=False, validation_fraction=0.1, beta_1=0.9,
                         beta_2=0.999, epsilon=1e-08, n_iter_no_change=10, max_fun=15000)
    # model = MLPRegressor(max_iter=7000)
    # param_list = {"hidden_layer_sizes": [(10,), (50,)],
    #               "activation": ["identity", "tanh", "relu"],
    #               "solver": ["lbfgs", "sgd", "adam"], "alpha": [0.00005, 0.0005]}
    # gridsearchcv = GridSearchCV(estimator=model, param_grid=param_list)

    model.fit(x_train, y_train)
    return model
```
1 Answer

Some variation is expected when you set random_state=None and shuffle=True in your model. This causes the weights to be initialized randomly and the training data to be shuffled in a different order on each run. For reproducible results, you should set random_state to an integer. See the Scikit-learn documentation on the random_state variable.
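For example, fixing the seed makes two consecutive fits identical. The sketch below uses synthetic data and placeholder hyperparameters, not the asker's actual DataFrame or architecture:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic regression data standing in for the original DataFrame.
rng = np.random.RandomState(0)
X = rng.rand(200, 2)
y = 3.0 * X[:, 0] + X[:, 1] + rng.normal(scale=0.05, size=200)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=27)

# With random_state fixed, weight initialization and data shuffling
# are identical on every run, so both models train to the same weights.
model_a = MLPRegressor(hidden_layer_sizes=(25, 30), random_state=42,
                       max_iter=1000).fit(X_train, y_train)
model_b = MLPRegressor(hidden_layer_sizes=(25, 30), random_state=42,
                       max_iter=1000).fit(X_train, y_train)

print(np.allclose(model_a.coefs_[0], model_b.coefs_[0]))  # True
```

With random_state=None instead, each fit would draw fresh initial weights and a fresh shuffle order, which is exactly the run-to-run variation described in the question.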