Reinforcement Learning on a Quantum Circuit

Artificial Intelligence  Neural Networks  Reinforcement Learning  Python  Policy Gradient
2021-10-19 00:56:54

I am trying to teach an agent to take any random 1-qubit state to the uniform superposition. Essentially the whole circuit is State -> measurement -> new_state (|0> if 0, |1> if 1) -> Hadamard gate, so the agent only needs to perform 2 actions. In that sense this is more of an RL problem than a QC one.
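
For context, this is the two-step behaviour I want the agent to discover, sketched directly with numpy state vectors (the gate and measurement here are just for illustration and are not part of my agent code):

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate

def measure(state):
    # Projective measurement in the computational basis
    p0 = abs(state[0])**2
    return np.array([1.0, 0.0]) if np.random.random() < p0 else np.array([0.0, 1.0])

psi = np.random.randn(2) + 1j*np.random.randn(2)   # random 1-qubit state
psi /= np.linalg.norm(psi)

collapsed = measure(psi)       # action 1: measure -> |0> or |1>
final = H @ collapsed          # action 2: Hadamard -> equal-magnitude superposition
print(final)                   # (|0> + |1>)/sqrt(2) or (|0> - |1>)/sqrt(2)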

I am training the model with reinforcement learning, but it does not seem to learn anything. The reward keeps decreasing, and even after 3 million episodes the agent does not appear to converge. This is how I train it:

def get_exploration_rate(self, time_step):
        # Exponential epsilon schedule between epsilon_min and epsilon
        return self.epsilon_min + (self.epsilon - self.epsilon_min)*\
               math.exp(1.*time_step*self.epsilon_decay)

def act(self, data,t): #state
        rate = self.get_exploration_rate(t)
        if random.random() < rate:
            # Explore: pick a random action (the model is still run so its outputs can be returned)
            options = self.model.predict(data) #state
            options = np.squeeze(options)
            action =  random.randrange(self.action_size)
        else:
            # Exploit: pick the action with the highest predicted probability
            options = self.model.predict(data) #state
            options = np.squeeze(options)
            action = options.argmax()
        return action, options, rate

def train(self):

        batch_size = 200
        t = 0                   #increment
        states, prob_actions, dlogps, drs, proj_data, reward_data =[], [], [], [], [], []
        tr_x, tr_y = [],[]
        avg_reward = []
        reward_sum = 0
        ep_number = 0
        prev_state = None
        first_step = True
        new_state = self.value
        data_inp = self.data

        while ep_number<3000000:
            prev_data = data_inp
            prev_state = new_state
            states.append(new_state)
            action, probs, rate = self.act(data_inp,t)
            prob_actions.append(probs)
            y = np.zeros([self.action_size])
            y[action] = 1
            new_state = eval(command[action])   # apply the chosen command (gate / measurement) to the state
            proj = projection(new_state, self.final_state)
            data_inp = [proj,action]            # next input: overlap with the target + last action
            data_inp = np.reshape(data_inp,(1,1,len(data_inp)))
            tr_x.append(data_inp)
            if(t==0):
                # First step: reward from the initial projection value
                rw = reward(proj,0)
                drs.append(rw)
                reward_sum+=rw

            elif(t<4):
                # Intermediate steps: reward from closeness to the target state
                rw = reward(new_state, self.final_state)
                drs.append(rw)
                print("present reward: ", rw)
                reward_sum+=rw
            elif(t==4):
                # Last allowed step: +1 if the target was reached, -1 otherwise
                if not np.allclose(new_state, self.final_state):
                    rw = -1
                    drs.append(rw)
                    reward_sum+=rw
                else:
                    rw = 1
                    drs.append(rw)
                    reward_sum+=rw

            print("reward till now: ",reward_sum)
            dlogps.append(np.array(y).astype('float32') * probs)
            print("dlogps before time step: ", len(dlogps))
            print("time step: ",t)
            del(probs, action)
            t+=1
            if(t==5 or np.allclose(new_state,self.final_state)):                         #### Done State
                ep_number+=1
                ep_x = np.vstack(tr_x) #states
                ep_dlogp = np.vstack(dlogps)
                ep_reward = np.vstack(drs)
                # Discount, then normalise the episode rewards (advantage estimate)
                disc_rw = discounted_reward(ep_reward,self.gamma)
                disc_rw = disc_rw.astype('float32')
                disc_rw -= np.mean(disc_rw)
                disc_rw /= np.std(disc_rw)

                tr_y_len = len(ep_dlogp)
                ep_dlogp*=disc_rw       # weight the policy-gradient terms by the advantage
                if ep_number % batch_size == 0:
                    # Nudge the predicted probabilities towards the advantage-weighted targets
                    input_tr_y = prob_actions - self.learning_rate * ep_dlogp
                    input_tr_y = np.reshape(input_tr_y, (tr_y_len,1,6))

                    self.model.train_on_batch(ep_x, input_tr_y)
                    tr_x, dlogps, drs, states, prob_actions, reward_data = [],[],[],[],[],[]
                # Reset the environment for the next episode
                env = Environment()
                new_state = env.reset()
                proj = projection(new_state, self.final_state)
                data_inp = [proj,5]     # action slot set to 5 right after a reset
                data_inp = np.reshape(data_inp,(1,1,len(data_inp)))
                print("State after resetting: ", new_state)
                t=0
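
The helpers used above (projection, reward, discounted_reward, Environment) are not shown; they behave roughly like the following minimal sketches (my actual implementations differ in detail):

import numpy as np

def projection(state, target):
    # Overlap (fidelity) of the current state with the target state
    return float(np.abs(np.vdot(target, state))**2)

def reward(state, target):
    # Shaped reward: larger when the state is closer to the target
    if np.isscalar(target):                  # called as reward(proj, 0) on the first step
        return state                         # the projection value itself
    return projection(state, target)

def discounted_reward(rewards, gamma):
    # Standard discounted return, computed backwards over the episode
    rewards = np.asarray(rewards, dtype='float32')
    discounted = np.zeros_like(rewards)
    running = 0.0
    for i in reversed(range(len(rewards))):
        running = rewards[i] + gamma * running
        discounted[i] = running
    return discounted

class Environment:
    def reset(self):
        # New random normalised 1-qubit state
        psi = np.random.randn(2) + 1j*np.random.randn(2)
        return psi / np.linalg.norm(psi)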

I have tried various things, such as changing the inputs and the reward function, and even increasing the exploration rate. I have allowed a maximum of 5 time steps per episode, even though the task should be solvable in 2.

What exactly am I doing wrong? Any suggestions?

0 Answers