TensorFlow, Optimizer.apply_gradients: 'NoneType' object has no attribute 'merge_call'

data-mining neural-network deep-learning keras tensorflow
2021-10-06 21:23:52

My program produces the following error message:

 AttributeError                            Traceback (most recent call last)
<ipython-input-56-8f384e36cbe9> in <module>
     49         print(loss.numpy())
     50         grads = tape.gradient(loss,g.trainable_variables)
---> 51         optimizer.apply_gradients(zip(grads,g.trainable_variables))
     52         values.append(value.numpy())
     53     print('... Value =',values[-1])

~/.local/lib/python3.6/site-packages/tensorflow/python/keras/optimizer_v2/optimizer_v2.py in apply_gradients(self, grads_and_vars, name)
    437       self._prepare(var_list)
    438 
--> 439       return distribute_ctx.get_replica_context().merge_call(
    440           self._distributed_apply,
    441           args=(grads_and_vars,),

AttributeError: 'NoneType' object has no attribute 'merge_call'

The relevant lines are:

epochs = 20
batch_size = 100
learning_rate = 0.01
g = G()
optimizer = Adam(learning_rate)
values = []
matrix = tf.constant(np.tri(NT+1),dtype=tf.float32)
for epoch in range(epochs):
    print(' Epoch',epoch+1)
    batches = dataset.shuffle(sample_size).batch(batch_size,drop_remainder=True)
    for batch in batches:
        with tf.GradientTape() as tape:
            g_values = tf.reduce_sum(g(batch),2)
            g_integrals = tf.tensordot(g_values,matrix,[[1],[1]]) * T/(NT+1)
            f_values = f(batch)
            # print(f_values.numpy())
            integrand = f_values * tf.exp(-g_values)
            value = tf.reduce_mean(integrand) * T + tf.log(2.)/alpha
            loss = -value
        print(loss.numpy())
        grads = tape.gradient(loss,g.trainable_variables)
        optimizer.apply_gradients(zip(grads,g.trainable_variables))
        values.append(value.numpy())
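
For reference, this is the kind of sanity check I ran on the gradients right before apply_gradients (just a debugging sketch, not part of the model; it assumes eager tensors so that .numpy() is available):

# Debug-only check: every gradient exists and contains only finite values.
for grad, var in zip(grads, g.trainable_variables):
    assert grad is not None, 'no gradient for ' + var.name
    assert np.all(np.isfinite(grad.numpy())), 'non-finite gradient for ' + var.name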

I don't understand this. f_values, integrand, loss and grads contain no inf or nan, just ordinary tf.float32 numbers, and the same holds for g.trainable_variables. On another PC the identical code runs fine. I use TensorFlow as follows:

import numpy as np
import tensorflow as tf
tf.enable_eager_execution()
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Dense,BatchNormalization
from tensorflow.keras.optimizers import Adam
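
Since the traceback points into optimizer_v2, I also checked which file the imported Adam class resolves to on this machine (a small sanity check; inspect is only used for debugging):

import inspect
from tensorflow.keras.optimizers import Adam

# Print the source file of the Adam class to see whether it is the
# optimizer_v2 implementation named in the traceback.
print(inspect.getfile(Adam))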

(The tf.enable_eager_execution() call is there because, for reasons I don't understand, "pip install tensorflow==2.0.0-beta1" only installed tensorflow 1.4.0, according to

print(tf.__version__)

.) The network architecture looks like this:

class G(Model):
    def __init__(self,depth=2,hidden_units=32,dim=dim):
        Model.__init__(self=self)
        
        self.depth = depth
        
        self.mylayers = []
        for l in range(depth-1):
            self.mylayers.append(Dense(hidden_units,activation='tanh', \
                name='Layer{}'.format(l+1)))
        self.mylayers.append(Dense(1,activation='sigmoid',name='Layer{}'.format(depth)))
        self.BN = []
        for l in range(depth):
            self.BN.append(BatchNormalization())

    def call(self,inputs):
        data = inputs
        for l in range(self.depth):
            data = self.BN[l](data)
            data = self.mylayers[l](data)
        return data
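
In case the environment matters, this is how I check the version and eager status (a small sketch; I assume tf.executing_eagerly() is available in this build):

import tensorflow as tf

# Version as reported on this machine, and whether eager mode is actually on
# after the tf.enable_eager_execution() call above.
print(tf.__version__)
print(tf.executing_eagerly())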

Any help would be greatly appreciated.
