TensorFlow - how do I set the gradient of an external process (py_func)?

data-mining python tensorflow
2021-09-26 20:26:37

I am computing and optimizing some variables that are used by an external process, but I get a "No gradients" error.

A highly simplified (and untested) version of the code, but you get the idea:

import json
import subprocess

import numpy as np
import tensorflow as tf

def external_process(myvar):
    # Run the external process with the current value of the variable.
    subprocess.call(["process.sh", str(myvar)])
    # Read back the result it wrote to disk.
    with open('result.json', 'r') as f:
        result = json.load(f)
    return np.array(result["result"], dtype=np.float32)

myvar = tf.Variable(1.0, dtype=tf.float32, trainable=True)

loss = tf.reduce_sum(tf.py_func(external_process, [myvar], [tf.float32])[0])

optimizer = tf.train.AdamOptimizer(0.05)
train_step = optimizer.minimize(loss)  # this is where "No gradients" is raised

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_step)

I saw this discussion, but I don't fully understand it: https://github.com/tensorflow/tensorflow/issues/1095

Thanks!

2 Answers

Here https://www.tensorflow.org/versions/r0.9/api_docs/python/framework.html (search for gradient_override_map) is an example of gradient_override_map:

@tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
  # ...

with tf.Graph().as_default() as g:
  c = tf.constant(5.0)
  s_1 = tf.square(c)  # Uses the default gradient for tf.square.
  with g.gradient_override_map({"Square": "CustomSquare"}):
    s_2 = tf.square(s_2)  # Uses _custom_square_grad to compute the
                          # gradient of s_2.
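For completeness, a runnable version of that example might look like the following; the constant-gradient body is an arbitrary placeholder of mine, not from the docs:

import tensorflow as tf

@tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
    # Arbitrary placeholder: pretend d(x^2)/dx is always 1.0.
    return grad * tf.ones_like(op.inputs[0])

with tf.Graph().as_default() as g:
    c = tf.constant(5.0)
    s_1 = tf.square(c)      # default gradient: 2 * c = 10.0
    with g.gradient_override_map({"Square": "CustomSquare"}):
        s_2 = tf.square(c)  # custom gradient: 1.0
    g_1 = tf.gradients(s_1, c)[0]
    g_2 = tf.gradients(s_2, c)[0]
    with tf.Session() as sess:
        print(sess.run([g_1, g_2]))  # [10.0, 1.0]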

So a possible solution might be:

@tf.RegisterGradient("ExternalGradient")
def _custom_external_grad(unused_op, grad):
    # I don't know yet how to compute a gradient
    # From Tensorflow documentation:
    return grad, tf.neg(grad)

def external_process (myvar):
    subprocess.call("process.sh", myvar)
    with open('result.json', 'r') as f:
        result = json.load(data, f)
    return np.array(result["result"])

myvar = tf.Variable(1.0, dtype=tf.float32, trainable=True)

g = tf.get_default_graph()
with g.gradient_override_map({"PyFunc": "ExternalGradient"}):
    external_data = tf.py_func(external_process, [myvar], [tf.float32])[0]

loss = tf.reduce_sum(external_data)

optimizer = tf.train.AdamOptimizer(0.05)
train_step = optimizer.minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_step)
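If you really wanted this to train, the registered gradient would have to estimate the derivative numerically, for example with a central finite difference computed through a second py_func. A sketch replacing the placeholder above (the _finite_diff helper and the eps value are my own illustration; collapsing grad with reduce_sum is only exact because the loss here is a plain sum of the outputs):

def _finite_diff(x, eps=np.float32(1e-2)):
    # Central difference of sum(external_process(x)) around x.
    # Costs two extra runs of the external process per step.
    f_plus = external_process(x + eps).sum()
    f_minus = external_process(x - eps).sum()
    return np.float32((f_plus - f_minus) / (2 * eps))

@tf.RegisterGradient("ExternalGradient")
def _custom_external_grad(op, grad):
    x = op.inputs[0]
    dfdx = tf.py_func(_finite_diff, [x], tf.float32)
    # Chain rule: d(loss)/d(myvar) = sum_i d(loss)/dy_i * dy_i/dx.
    return tf.reduce_sum(grad) * dfdx

Note that each training step then runs the external process three times (once forward, twice for the gradient), which only pays off if the process is reasonably fast and not too noisy.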

The correct answer is: "An external process is not differentiable (unless you know every detail of it, which is impossible in this case), so this problem should be approached as a reinforcement learning problem."
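To make that framing concrete, here is a minimal gradient-free sketch in the spirit of that answer (all names and hyperparameters are my own illustration, not from the original post): treat myvar as the mean of a Gaussian search distribution, score sampled candidates with the external process, and nudge the mean toward lower-cost samples, a basic score-function / evolution-strategies update:

import numpy as np

def optimize_external(external_process, mu=1.0, sigma=0.1,
                      lr=0.05, n_samples=8, n_iters=100):
    """Gradient-free optimization of a scalar black-box parameter."""
    for _ in range(n_iters):
        noise = np.random.randn(n_samples)
        candidates = mu + sigma * noise
        # Cost of each candidate, as reported by the external process.
        costs = np.array([external_process(c).sum() for c in candidates])
        # Normalize costs so the update is scale-invariant.
        advantages = (costs - costs.mean()) / (costs.std() + 1e-8)
        # Score-function estimate of d(expected cost)/d(mu).
        grad_estimate = np.dot(advantages, noise) / (n_samples * sigma)
        mu -= lr * grad_estimate  # descend: lower cost is better
    return mu

No backpropagation is involved, so it does not matter that the external process is a black box; the trade-off is many more calls to it.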