What is wrong with this reinforcement learning environment?

data-mining machine-learning deep-learning reinforcement-learning dqn openai-gym
2022-01-28 22:52:37

I am working on the following reinforcement-learning problem: I have a bottle of fixed capacity (say 5 litres). At the bottom of the bottle there is a tap through which water is removed. The amount removed does not follow a fixed distribution; any quantity of water can be drawn from the bottle, i.e. any continuous value in [0, 5].

At the top of the bottle there is another tap used to pour water into it. The RL agent can fill the bottle with [0, 1, 2, 3, 4] litres. The initial water level of the bottle is any value in [0, 5].

I want to train an agent in this environment to find the optimal sequence of actions, so that the bottle neither runs empty nor overflows, i.e. a continuous supply of water is maintained.

Action space = [0, 1, 2, 3, 4], a discrete space

Observation space = [0, bottle capacity], i.e. [0, 5], a continuous space

Reward logic = if the bottle becomes empty as a result of the action, give a negative reward; if the bottle overflows as a result of the action, give a negative reward

I decided to use Python to create the environment.

from gym import spaces
import numpy as np

class WaterEnv():
    def __init__(self, BottleCapacity = 5):
        ## CONSTANTS
        self.MinLevel = 0 # minimum water level
        self.BottleCapacity = BottleCapacity # bottle capacity
        # action space
        self.action_space = spaces.Discrete(self.BottleCapacity)
        # observation space
        self.observation_space = spaces.Box(low=self.MinLevel, high=self.BottleCapacity,
                                            shape=(1,))
        # initial bottle level
        self.initBlevel = self.observation_space.sample()

    def step(self, action):
        # water qty to remove
        WaterRemoveQty = np.random.uniform(self.MinLevel, self.BottleCapacity, 1)

        # updated water level after removal of water
        UpdatedWaterLevel = (self.initBlevel - WaterRemoveQty)
        # add water - action taken
        UpdatedWaterLevel_ = UpdatedWaterLevel +  action

        if UpdatedWaterLevel_ <= self.MinLevel:
            reward = -1
            done = True
        elif UpdatedWaterLevel_ > self.BottleCapacity:
            reward = -1
            done = True
        else:
            reward = 0.5
            done = False

        return UpdatedWaterLevel_, reward, done

    def reset(self):
        """
        Reset the initial bottle value
        """
        self.initBlevel = self.observation_space.sample()

        return self.initBlevel

import random
from collections import deque
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import sgd

class DQNAgent:
    def __init__(self, state_size, action_size):
        self.state_size = state_size
        self.action_size = action_size
        self.memory = deque(maxlen=2000) # memory size
        self.gamma = 0.99    # discount rate
        self.epsilon = 1.0  # exploration rate
        self.epsilon_min = 0.01 # minimum exploration rate
        self.epsilon_decay = 0.99 # exploration decay
        self.learning_rate = 0.001 # learning rate
        self.model = self._build_model()

    def _build_model(self):
        # Neural Net for Deep-Q learning Model
        model = Sequential()
        model.add(Dense(256, input_dim=self.state_size, activation='relu'))
        model.add(Dense(256, activation='relu'))
        model.add(Dense(self.action_size, activation='linear'))
        model.compile(loss='mse',
                      optimizer=sgd(lr=self.learning_rate))
        return model

    def remember(self, state, action, reward, next_state, done):
        self.memory.append((state, action, reward, next_state, done))

    def act(self, state):
        if np.random.rand() <= self.epsilon:
            return random.randrange(self.action_size)
        act_values = self.model.predict(state)
        return np.argmax(act_values[0])  # returns action

    def replay(self, batch_size):
        minibatch = random.sample(self.memory, batch_size)
        for state, action, reward, next_state, done in minibatch:
            target = reward
            if not done:
                target = (reward + self.gamma *
                          np.amax(self.model.predict(next_state)[0]))
            target_f = self.model.predict(state)
            target_f[0][action] = target
            self.model.fit(state, target_f, epochs=1, verbose=0)
        if self.epsilon > self.epsilon_min:
            self.epsilon *= self.epsilon_decay

# create the WaterEnv environment object
env = WaterEnv()

state_size = env.observation_space.shape[0]
action_size = env.action_space.n

minibatch = 32

# Initialize agent
agent = DQNAgent(state_size, action_size)

done = False
lReward = []  # carries the rewards up to the end of the simulation
rewardAll = 0
XArray = [] # carries the actions up to the end of the simulation

EPOCHS = 1000

for e in range(EPOCHS):
        #state = np.reshape(state, [1, 1])
        # reset state in the beginning of each epoch
        state = env.reset()
        time_t = 0
        rewardAll = 0

        while True:
            # Decide action
            #state = np.reshape(state, [1, 1])
            action = agent.act(state)

            next_state, reward, done = env.step(action)
            #reward = reward if not done else -10

            # Remember the previous state, action, reward, and done
            #next_state = np.reshape(next_state, [1, state_size])
            agent.remember(state, action, reward, next_state, done)
            # remember the action for the performance check
            XArray.append(action)
            # Assign next_state the new current state for the next frame.
            state = next_state

            if done:
                print("  episode: {}/{}, score: {}, e: {:.2}"
                      .format(e, EPOCHS, time_t, agent.epsilon))
                break
            rewardAll += reward

            # experience replay
            if len(agent.memory) > minibatch:
                agent.replay(minibatch)

        lReward.append(rewardAll) # append the rewards

After running 1000 epochs, I observe that the agent has not learned anything. I am unable to figure out what is going wrong.

1 Answer

I can see two problems:

  1. Your environment does not track changes to the state, so each step is just a random success/failure: self.initBlevel is never modified to reflect the changes. Although you compute and return the new state (as the variable UpdatedWaterLevel_), it is never fed back into the system. You store it as the next "state" in the DQN replay memory, but you never actually store it as the current state of the environment. You should do so; without it, the replay memory gets filled with incorrect values. There should be a variable, owned by the environment, that represents the current state (see the sketch after this list).

  2. You are running the system as an episodic problem, but you do not reset the environment's state at the start of each new episode. Because of the issue above this is a "hidden" bug, but it will bite you as soon as you allow the state to go outside the bounds of the problem you defined.
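A corrected version of the environment could look roughly like the sketch below. It addresses both points by having the environment own its current state and re-drawing it in reset(); the attribute name current_level is my own choice, not from your code, and everything else is kept as you wrote it.

from gym import spaces
import numpy as np

class WaterEnv():
    def __init__(self, BottleCapacity=5):
        self.MinLevel = 0                     # minimum water level
        self.BottleCapacity = BottleCapacity  # bottle capacity
        # action space: add 0..4 litres
        self.action_space = spaces.Discrete(self.BottleCapacity)
        # observation space: current water level
        self.observation_space = spaces.Box(low=self.MinLevel,
                                            high=self.BottleCapacity,
                                            shape=(1,))
        # current water level, owned by the environment (assumed name)
        self.current_level = self.observation_space.sample()

    def step(self, action):
        # random amount of water drawn through the bottom tap
        WaterRemoveQty = np.random.uniform(self.MinLevel, self.BottleCapacity, 1)
        # new level = current level - removed water + water added by the action
        new_level = self.current_level - WaterRemoveQty + action
        if new_level <= self.MinLevel or new_level > self.BottleCapacity:
            reward, done = -1, True
        else:
            reward, done = 0.5, False
        # feed the new state back into the environment (point 1)
        self.current_level = new_level
        return new_level, reward, done

    def reset(self):
        # draw a fresh starting level at the beginning of every episode (point 2)
        self.current_level = self.observation_space.sample()
        return self.current_level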

Given the problem setup, I would expect the agent to learn to always fill the container up to the maximum possible capacity (it will then be drained by a randomly requested amount). That would lead to infinitely long episodes, so you still need discounting.
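As a sanity check against that expectation, you could compare the trained agent with a simple hand-written policy that always tops the bottle up as far as the discrete actions allow. heuristic_action below is a hypothetical helper of my own, not part of your code:

import numpy as np

def heuristic_action(current_level, capacity=5, max_action=4):
    # Baseline policy: add as much water as fits without overflowing,
    # capped at the largest discrete action (4 litres). This is roughly
    # the behaviour the agent is expected to converge to.
    level = float(np.asarray(current_level).reshape(-1)[0])
    free_space = capacity - level
    return int(min(max_action, max(0.0, np.floor(free_space))))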

Your neural network is probably over-complicated for this simple task, which may slow learning down. But that is hard to say: the relationship between the expected future discounted reward and the current state and action is non-trivial, so you may still need a moderately sized network to capture it.
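For comparison, a much smaller network might be worth trying first, e.g. a single hidden layer. This is only a sketch using the same old-style Keras API as in your question; the layer size of 24 is an arbitrary assumption, and swapping SGD for Adam is a separate choice, not something your code does:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

def build_small_model(state_size, action_size, learning_rate=0.001):
    # one small hidden layer is often enough for a 1-dimensional state
    model = Sequential()
    model.add(Dense(24, input_dim=state_size, activation='relu'))
    model.add(Dense(action_size, activation='linear'))
    model.compile(loss='mse', optimizer=Adam(lr=learning_rate))
    return model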