Keras深度強(qiáng)化學(xué)習(xí)-- Policy Network與DQN實(shí)現(xiàn)

I have recently been looking into deep reinforcement learning (DRL); this post reproduces and explains two demos from that learning process. For the underlying theory, I recommend Hung-yi Lee's lectures on Q-Learning and deep reinforcement learning.

強(qiáng)化學(xué)習(xí)中有兩種重要的方法:Policy Gradients和Q-learning沥曹。其中Policy Gradients方法直接預(yù)測在某個(gè)環(huán)境下應(yīng)該采取的Action髓帽,而Q-learning方法預(yù)測某個(gè)環(huán)境下所有Action的期望值(即Q值)。一般來說驳阎,Q-learning方法只適合有少量離散取值的Action環(huán)境抗愁,而Policy Gradients方法適合有連續(xù)取值的Action環(huán)境。在與深度學(xué)習(xí)方法結(jié)合后搞隐,這兩種算法就變成了Policy Network和DQN(Deep Q-learning Network)驹愚。

Papers
Policy Gradient: Policy Gradient Methods for Reinforcement Learning with Function Approximation
DQN: Playing Atari with Deep Reinforcement Learning
Nature DQN: Human-level Control through Deep Reinforcement Learning

GitHub: https://github.com/xiaochus/Deep-Reinforcement-Learning-Practice

環(huán)境

  • Python 3.6
  • Tensorflow-gpu 1.8.0
  • Keras 2.2.2
  • Gym 0.10.8

Gym

Gym is a toolkit released by OpenAI for developing and comparing reinforcement learning algorithms. With it we can get an AI agent to do many things, such as walking, running, and playing a variety of games. In this demo we use the Cart-Pole game.

游戲規(guī)則很簡單绷柒,游戲里面有一個(gè)小車志于,上有豎著一根桿子。小車需要左右移動(dòng)來保持桿子豎直废睦。如果桿子傾斜的角度大于15°伺绽,那么游戲結(jié)束。小車也不能移動(dòng)出一個(gè)范圍(中間到兩邊各2.4個(gè)單位長度)嗜湃。

Cart-Pole:

[Figure: the Cart-Pole environment (car.png)]

The Cart-Pole world consists of a cart moving along a horizontal axis with a pole attached to it. At every time step you can observe the cart position (x), the cart velocity (x_dot), the pole angle (theta) and the pole angular velocity (theta_dot); these four values form the observable state of this world. In any state the cart has only two possible actions: move left or move right. In other words, Cart-Pole has a state space of four continuous dimensions and an action space of one dimension with two discrete values.
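
These dimensions can be checked directly from the environment's spaces. A minimal sketch (the exact printed bounds depend on your installed Gym version):

import gym

env = gym.make('CartPole-v0')
# Box(4,): [x, x_dot, theta, theta_dot]
print(env.observation_space, env.observation_space.low, env.observation_space.high)
# Discrete(2): 0 = push left, 1 = push right
print(env.action_space, env.action_space.n)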

First install gym:

pip install gym

A first try with gym:

# -*- coding: utf-8 -*-

import gym
import numpy as np


def try_gym():
    # 使用gym創(chuàng)建一個(gè)CartPole環(huán)境
    # 這個(gè)環(huán)境可以接收一個(gè)action,返回執(zhí)行action后的觀測值氓皱,獎(jiǎng)勵(lì)與游戲是否結(jié)束
    env = gym.make('CartPole-v0')
    # Reset the game environment.
    env.reset()

    # Number of episodes played.
    random_episodes = 0
    # Total reward of the current episode.
    reward_sum = 0
    count = 0
    while random_episodes < 10:
        # Render the game window.
        env.render()
        # 隨機(jī)生成一個(gè)action褒翰,即向左移動(dòng)或者向右移動(dòng)。
        # 然后接收執(zhí)行action之后的反饋值
        observation, reward, done, _ = env.step(np.random.randint(0, 2))
        reward_sum += reward
        count += 1
        # 如果游戲結(jié)束匀泊,打印Reward總和,重置游戲
        if done:
            random_episodes += 1
            print("Reward for this episode was: {}, turns was: {}".format(reward_sum, count))
            reward_sum = 0
            count = 0
            env.reset()


if __name__ == '__main__':
    try_gym()

For each episode we print the total reward from start to finish together with the number of steps taken; the output looks like this:

Reward for this episode was: 20.0, turns was: 20
Reward for this episode was: 26.0, turns was: 26
Reward for this episode was: 18.0, turns was: 18
Reward for this episode was: 25.0, turns was: 25
Reward for this episode was: 25.0, turns was: 25
Reward for this episode was: 23.0, turns was: 23
Reward for this episode was: 29.0, turns was: 29
Reward for this episode was: 17.0, turns was: 17
Reward for this episode was: 13.0, turns was: 13
Reward for this episode was: 27.0, turns was: 27

如果使用的環(huán)境是Anoconda 3,可能會出現(xiàn)下列錯(cuò)誤:

    raise NotImplementedError('abstract')

NotImplementedError: abstract

這是由于pyglet引起的各聘,需要替換成1.2.4版本:

pip uninstall pyglet
pip install pyglet==1.2.4

Policy Network

The Policy Gradient method proposed by R. Sutton in 2000 is the classic RL approach to learning continuous control policies. It represents the policy at each step by a probability distribution π_θ(a_t | s_t), and at each step the current action is obtained by sampling from that distribution, i.e. a_t ~ π_θ(· | s_t). Generating actions is therefore inherently a stochastic process, and the learned policy is a stochastic policy.
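
The connection to the training loss used later is the standard REINFORCE gradient (not spelled out in the original post; added here for reference):

∇_θ J(θ) ≈ E[ ∇_θ log π_θ(a_t | s_t) · R_t ]

where R_t is the (normalized) discounted return from step t. For a two-action Bernoulli policy, minimizing the binary cross-entropy of the taken action weighted by R_t, as done in the loss function below, is exactly a gradient step on this objective.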

The Policy Network is a typical Monte Carlo method: it learns from the discounted rewards only after an episode has ended. The implementation proceeds as follows:

(1)首先構(gòu)建神經(jīng)網(wǎng)絡(luò)镰矿,網(wǎng)絡(luò)的輸入為obervation,網(wǎng)絡(luò)的輸出為action=1的概率俘种。
(2)在一個(gè)episode結(jié)束時(shí)(游戲勝利或死亡)秤标,將env重置,即observation恢復(fù)到了初始狀態(tài)宙刘。下一次循環(huán)時(shí)苍姜,輸入observation,輸出一個(gè)概率值p0悬包。根據(jù)概率p0選取一個(gè)action輸入到環(huán)境中衙猪,獲取到新的observation和reward。記錄[observation, action, reward]作為后續(xù)訓(xùn)練的數(shù)據(jù)布近。
(3)reward為大于0的數(shù)垫释,根據(jù)上面的action得到reward,將整個(gè)episode的reward放到一個(gè)序列里吊输,然后計(jì)算discount_reward饶号。
(4)攢夠個(gè)batch的episode,進(jìn)行梯度下降更新季蚂。損失函數(shù)分為兩部分茫船,首先使用binary_crossentropy計(jì)算action的交叉熵?fù)p失琅束,然后與discount_reward相乘得到最終損失。
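
To make step (3) concrete, here is a minimal standalone sketch of the discount-and-normalize computation (the three-step episode is made up; it mirrors the discount_reward method below):

import numpy as np

gamma = 0.95
rewards = [1.0, 1.0, 1.0]  # hypothetical 3-step episode

# Discounted return, accumulated backwards through the episode.
discounted = np.zeros_like(rewards, dtype=np.float32)
cumulative = 0.0
for i in reversed(range(len(rewards))):
    cumulative = cumulative * gamma + rewards[i]
    discounted[i] = cumulative
# discounted is now [2.8525, 1.95, 1.0]

# Normalize to zero mean / unit std to keep the gradient variance under control.
discounted = (discounted - discounted.mean()) / discounted.std()
print(discounted)  # approximately [ 1.21  0.02 -1.24]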

使用keras實(shí)現(xiàn)的Policy Network如下所示:

# -*- coding: utf-8 -*-
import os
import gym
import numpy as np

from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam
import keras.backend as K


class PG:
    def __init__(self):
        self.model = self.build_model()
        if os.path.exists('pg.h5'):
            self.model.load_weights('pg.h5')

        self.env = gym.make('CartPole-v0')
        self.gamma = 0.95

    def build_model(self):
        """基本網(wǎng)絡(luò)結(jié)構(gòu).
        """
        inputs = Input(shape=(4,), name='ob_input')
        x = Dense(16, activation='relu')(inputs)
        x = Dense(16, activation='relu')(x)
        x = Dense(1, activation='sigmoid')(x)

        model = Model(inputs=inputs, outputs=x)

        return model

    def loss(self, y_true, y_pred):
        """損失函數(shù).
        Arguments:
            y_true: (action, reward)
            y_pred: action_prob

        Returns:
            loss: reward loss
        """
        action_pred = y_pred
        action_true, discount_episode_reward = y_true[:, 0], y_true[:, 1]
        # Binary cross-entropy loss of the taken action.
        action_true = K.reshape(action_true, (-1, 1))
        loss = K.binary_crossentropy(action_true, action_pred)
        # Weight the loss by the discounted reward.
        loss = loss * K.flatten(discount_episode_reward)

        return loss

    def discount_reward(self, rewards):
        """Discount reward
        Arguments:
            rewards: rewards collected during one episode
        """
        # 以時(shí)序順序計(jì)算一次episode中的discount reward
        discount_rewards = np.zeros_like(rewards, dtype=np.float32)
        cumulative = 0.
        for i in reversed(range(len(rewards))):
            cumulative = cumulative * self.gamma + rewards[i]
            discount_rewards[i] = cumulative

        # Normalization helps control the variance of the gradient.
        discount_rewards -= np.mean(discount_rewards)
        discount_rewards /= np.std(discount_rewards)

        return list(discount_rewards)

    def train(self, episode, batch):
        """訓(xùn)練
        Arguments:
            episode: 游戲次數(shù)
            batch: 一個(gè)batch包含幾次episode算谈,每個(gè)batch更新一次梯度

        Returns:
            history: 訓(xùn)練記錄
        """
        self.model.compile(loss=self.loss, optimizer=Adam(lr=0.01))

        history = {'episode': [], 'Batch_reward': [], 'Episode_reward': [], 'Loss': []}

        episode_reward = 0
        states = []
        actions = []
        rewards = []
        discount_rewards = []

        for i in range(episode):
            observation = self.env.reset()
            erewards = []

            while True:
                x = observation.reshape(-1, 4)
                prob = self.model.predict(x)[0][0]
                # 根據(jù)隨機(jī)概率選擇action
                action = np.random.choice(np.array(range(2)), size=1, p=[1 - prob, prob])[0]
                observation, reward, done, _ = self.env.step(action)
                # 記錄一個(gè)episode中產(chǎn)生的數(shù)據(jù)
                states.append(x[0])
                actions.append(action)
                erewards.append(reward)
                rewards.append(reward)

                if done:
                     # 一次episode結(jié)束后計(jì)算discount rewards
                    discount_rewards.extend(self.discount_reward(erewards))
                    break
            # 保存batch個(gè)episode的數(shù)據(jù)涩禀,用這些數(shù)據(jù)更新模型
            if i != 0 and i % batch == 0: 
                batch_reward = sum(rewards)
                episode_reward = batch_reward / batch
                # X holds the states; y packs each action with its discounted reward, which the loss combines with the predicted probability.
                X = np.array(states)
                y = np.array(list(zip(actions, discount_rewards)))

                loss = self.model.train_on_batch(X, y)
    
                history['episode'].append(i)
                history['Batch_reward'].append(batch_reward)
                history['Episode_reward'].append(episode_reward)
                history['Loss'].append(loss)

                print('Episode: {} | Batch reward: {} | Episode reward: {} | loss: {:.3f}'.format(i, batch_reward, episode_reward, loss))

                episode_reward = 0
                states = []
                actions = []
                rewards = []
                discount_rewards = []

        self.model.save_weights('pg.h5')

        return history

    def play(self):
        """使用訓(xùn)練好的模型測試游戲.
        """
        observation = self.env.reset()

        count = 0
        reward_sum = 0
        random_episodes = 0

        while random_episodes < 10:
            self.env.render()

            x = observation.reshape(-1, 4)
            prob = self.model.predict(x)[0][0]
            action = 1 if prob > 0.5 else 0
            observation, reward, done, _ = self.env.step(action)

            count += 1
            reward_sum += reward

            if done:
                print("Reward for this episode was: {}, turns was: {}".format(reward_sum, count))
                random_episodes += 1
                reward_sum = 0
                count = 0
                observation = self.env.reset()


if __name__ == '__main__':
    model = PG()
    history = model.train(5000, 5)
    model.play()

訓(xùn)練結(jié)果與測試結(jié)果如下所示艾船,可以看出隨著訓(xùn)練次數(shù)的增加,Policy Network模型在游戲中獲得Reward不斷的增加高每,并且Loss不斷降低屿岂。在完成5000次Episode的訓(xùn)練后進(jìn)行模型測試, 相比隨機(jī)操作來說Policy Network模型能達(dá)到200 reward鲸匿,由于到達(dá)200個(gè)reward之后游戲也會結(jié)束爷怀,因此Policy Network可以說是解決了這個(gè)問題。
但是根據(jù)我的實(shí)驗(yàn)带欢,Policy Network訓(xùn)練起來并不穩(wěn)定运授,模型參數(shù)初始化對訓(xùn)練效果也有著較大的影響,需要多次嘗試乔煞。有時(shí)reward收斂一段時(shí)間后又會快速下降吁朦,出現(xiàn)周期性的變化,從圖中也可以看出訓(xùn)練過程的不穩(wěn)定渡贾。

Episode: 5 | Batch reward: 120.0 | Episode reward: 24.0 | loss: -0.325
Episode: 10 | Batch reward: 67.0 | Episode reward: 13.4 | loss: -0.300
Episode: 15 | Batch reward: 128.0 | Episode reward: 25.6 | loss: -0.326
Episode: 20 | Batch reward: 117.0 | Episode reward: 23.4 | loss: -0.332
Episode: 25 | Batch reward: 122.0 | Episode reward: 24.4 | loss: -0.330
Episode: 30 | Batch reward: 97.0 | Episode reward: 19.4 | loss: -0.339
Episode: 35 | Batch reward: 120.0 | Episode reward: 24.0 | loss: -0.331
......

Episode: 4960 | Batch reward: 973.0 | Episode reward: 194.6 | loss: -0.228
Episode: 4965 | Batch reward: 1000.0 | Episode reward: 200.0 | loss: -0.224
Episode: 4970 | Batch reward: 881.0 | Episode reward: 176.2 | loss: -0.238
Episode: 4975 | Batch reward: 1000.0 | Episode reward: 200.0 | loss: -0.213
Episode: 4980 | Batch reward: 974.0 | Episode reward: 194.8 | loss: -0.229
Episode: 4985 | Batch reward: 862.0 | Episode reward: 172.4 | loss: -0.235
Episode: 4990 | Batch reward: 914.0 | Episode reward: 182.8 | loss: -0.233
Episode: 4995 | Batch reward: 737.0 | Episode reward: 147.4 | loss: -0.254

Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 200.0, turns was: 200
[Figure: Policy Network training curve]

DQN

DQN is a typical temporal-difference method. Unlike the Policy Network, DQN learns from the transition between step n and step n+1, so its variance is lower than that of Monte Carlo methods. The commonly used variant is Nature DQN, proposed in 2015, and that is the version used here.

DQN使用單個(gè)網(wǎng)絡(luò)來進(jìn)行選擇動(dòng)作和計(jì)算目標(biāo)Q值刻诊;Nature DQN使用了兩個(gè)網(wǎng)絡(luò),一個(gè)當(dāng)前主網(wǎng)絡(luò)用來選擇動(dòng)作牺丙,更新模型參數(shù)则涯,另一個(gè)目標(biāo)網(wǎng)絡(luò)用于計(jì)算目標(biāo)Q值,兩個(gè)網(wǎng)絡(luò)的結(jié)構(gòu)是一模一樣的冲簿。目標(biāo)網(wǎng)絡(luò)的網(wǎng)絡(luò)參數(shù)不需要迭代更新粟判,而是每隔一段時(shí)間從當(dāng)前主網(wǎng)絡(luò)復(fù)制過來,即延時(shí)更新峦剔,這樣可以減少目標(biāo)Q值和當(dāng)前的Q值相關(guān)性档礁。Nature DQN和DQN相比,除了用一個(gè)新的相同結(jié)構(gòu)的目標(biāo)網(wǎng)絡(luò)來計(jì)算目標(biāo)Q值以外吝沫,其余部分基本是完全相同的呻澜。

Nature DQN的實(shí)現(xiàn)流程如下:
(1)首先構(gòu)建神經(jīng)網(wǎng)絡(luò)递礼,一個(gè)主網(wǎng)絡(luò),一個(gè)目標(biāo)網(wǎng)絡(luò)羹幸,他們的輸入都為obervation脊髓,輸出為不同action對應(yīng)的Q值。
(2)在一個(gè)episode結(jié)束時(shí)(游戲勝利或死亡)栅受,將env重置将硝,即observation恢復(fù)到了初始狀態(tài)observation,通過貪婪選擇法ε-greedy選擇action屏镊。根據(jù)選擇的action依疼,獲取到新的next_observation、reward和游戲狀態(tài)而芥。將[observation, action, reward, next_observation, done]放入到經(jīng)驗(yàn)池中涛贯。經(jīng)驗(yàn)池有一定的容量,會將舊的數(shù)據(jù)刪除蔚出。
(3)從經(jīng)驗(yàn)池中隨機(jī)選取batch個(gè)大小的數(shù)據(jù),計(jì)算出observation的Q值作為Q_target虫腋。對于done為False的數(shù)據(jù)骄酗,使用reward和next_observation計(jì)算discount_reward。然后將discount_reward更新到Q_traget中悦冀。
(4)每一個(gè)action進(jìn)行一次梯度下降更新趋翻,使用MSE作為損失函數(shù)。注意與DPG不同盒蟆,參數(shù)更新不是發(fā)生在每次游戲結(jié)束踏烙,而是發(fā)生在游戲進(jìn)行中的每一步。
(5)每個(gè)batch我們更新參數(shù)epsilon历等,egreedy的epsilon是不斷變小的讨惩,也就是隨機(jī)性不斷變小。
(6)每隔固定的步數(shù)寒屯,從主網(wǎng)絡(luò)中復(fù)制參數(shù)到目標(biāo)網(wǎng)絡(luò)荐捻。
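
As a minimal sketch of the target computation in step (3) (it mirrors the process_batch method below; all numbers are made up for illustration):

import numpy as np

gamma = 0.95

# One sampled transition; reward, done and action come from the replay buffer.
reward, done, action = 1.0, False, 1

# Q values from the main network for the current state
# and from the target network for the next state.
q_current = np.array([0.5, 0.8])   # Q(s, .)   from the main network
q_next = np.array([0.6, 0.9])      # Q'(s', .) from the target network

target = reward
if not done:
    # Bellman target: r + gamma * max_a' Q'(s', a')
    target += gamma * np.max(q_next)

y = q_current.copy()
y[action] = target                 # only the Q value of the taken action is changed
print(y)                           # [0.5, 1.855]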

使用keras實(shí)現(xiàn)的Nature DQN如下所示:

# -*- coding: utf-8 -*-
import os
import gym
import random
import numpy as np

from collections import deque

from keras.layers import Input, Dense
from keras.models import Model
from keras.optimizers import Adam
import keras.backend as K


class DQN:
    def __init__(self):
        self.model = self.build_model()
        self.target_model = self.build_model()
        self.update_target_model()

        if os.path.exists('dqn.h5'):
            self.model.load_weights('dqn.h5')

        # 經(jīng)驗(yàn)池
        self.memory_buffer = deque(maxlen=2000)
        # Q_value的discount rate,以便計(jì)算未來reward的折扣回報(bào)
        self.gamma = 0.95
        # 貪婪選擇法的隨機(jī)選擇行為的程度
        self.epsilon = 1.0
        # 上述參數(shù)的衰減率
        self.epsilon_decay = 0.995
        # 最小隨機(jī)探索的概率
        self.epsilon_min = 0.01

        self.env = gym.make('CartPole-v0')

    def build_model(self):
        """基本網(wǎng)絡(luò)結(jié)構(gòu).
        """
        inputs = Input(shape=(4,))
        x = Dense(16, activation='relu')(inputs)
        x = Dense(16, activation='relu')(x)
        x = Dense(2, activation='linear')(x)

        model = Model(inputs=inputs, outputs=x)

        return model

    def update_target_model(self):
        """更新target_model
        """
        self.target_model.set_weights(self.model.get_weights())

    def egreedy_action(self, state):
        """ε-greedy選擇action

        Arguments:
            state: 狀態(tài)

        Returns:
            action: 動(dòng)作
        """
        if np.random.rand() <= self.epsilon:
            return random.randint(0, 1)
        else:
            q_values = self.model.predict(state)[0]
            return np.argmax(q_values)

    def remember(self, state, action, reward, next_state, done):
        """向經(jīng)驗(yàn)池添加數(shù)據(jù)

        Arguments:
            state: 狀態(tài)
            action: 動(dòng)作
            reward: 回報(bào)
            next_state: 下一個(gè)狀態(tài)
            done: 游戲結(jié)束標(biāo)志
        """
        item = (state, action, reward, next_state, done)
        self.memory_buffer.append(item)

    def update_epsilon(self):
        """更新epsilon
        """
        if self.epsilon >= self.epsilon_min:
            self.epsilon *= self.epsilon_decay

    def process_batch(self, batch):
        """batch數(shù)據(jù)處理

        Arguments:
            batch: batch size

        Returns:
            X: states
            y: [Q_value1, Q_value2]
        """
         # 從經(jīng)驗(yàn)池中隨機(jī)采樣一個(gè)batch
        data = random.sample(self.memory_buffer, batch)
        # 生成Q_target寡夹。
        states = np.array([d[0] for d in data])
        next_states = np.array([d[3] for d in data])

        y = self.model.predict(states)
        q = self.target_model.predict(next_states)

        for i, (_, action, reward, _, done) in enumerate(data):
            target = reward
            if not done:
                target += self.gamma * np.amax(q[i])
            y[i][action] = target

        return states, y


    def train(self, episode, batch):
        """訓(xùn)練
        Arguments:
            episode: 游戲次數(shù)
            batch: batch size

        Returns:
            history: 訓(xùn)練記錄
        """
        self.model.compile(loss='mse', optimizer=Adam(1e-3))

        history = {'episode': [], 'Episode_reward': [], 'Loss': []}

        count = 0
        for i in range(episode):
            observation = self.env.reset()
            reward_sum = 0
            loss = np.infty
            done = False

            while not done:
                # Select an action with the ε-greedy strategy.
                x = observation.reshape(-1, 4)
                action = self.egreedy_action(x)
                observation, reward, done, _ = self.env.step(action)
                # 將數(shù)據(jù)加入到經(jīng)驗(yàn)池。
                reward_sum += reward
                self.remember(x[0], action, reward, observation, done)

                if len(self.memory_buffer) > batch:
                    # 訓(xùn)練
                    X, y = self.process_batch(batch)
                    loss = self.model.train_on_batch(X, y)

                    count += 1
                    # Decay the ε of the ε-greedy strategy.
                    self.update_epsilon()

                    # 固定次數(shù)更新target_model
                    if count != 0 and count % 20 == 0:
                        self.update_target_model()

            if i % 5 == 0:
                history['episode'].append(i)
                history['Episode_reward'].append(reward_sum)
                history['Loss'].append(loss)
    
                print('Episode: {} | Episode reward: {} | loss: {:.3f} | e:{:.2f}'.format(i, reward_sum, loss, self.epsilon))

        self.model.save_weights('dqn.h5')

        return history

    def play(self):
        """使用訓(xùn)練好的模型測試游戲.
        """
        observation = self.env.reset()

        count = 0
        reward_sum = 0
        random_episodes = 0

        while random_episodes < 10:
            self.env.render()

            x = observation.reshape(-1, 4)
            q_values = self.model.predict(x)[0]
            action = np.argmax(q_values)
            observation, reward, done, _ = self.env.step(action)

            count += 1
            reward_sum += reward

            if done:
                print("Reward for this episode was: {}, turns was: {}".format(reward_sum, count))
                random_episodes += 1
                reward_sum = 0
                count = 0
                observation = self.env.reset()

        self.env.close()


if __name__ == '__main__':
    model = DQN()
    history = model.train(600, 32)
    model.play()

訓(xùn)練結(jié)果與測試結(jié)果如下所示魂角,可以看出隨著訓(xùn)練次數(shù)的增加,DQN模型在游戲中獲得Reward不斷的增加智绸,并且Loss不斷降低野揪。在batch=32的條件下500次Episode的訓(xùn)練后進(jìn)行模型測試访忿, DQN也有不錯(cuò)的表現(xiàn),如果進(jìn)一步訓(xùn)練應(yīng)該能達(dá)到和Policy Network同樣的效果囱挑。
相比Policy Network醉顽,DQN的訓(xùn)練過程更穩(wěn)定一些,但是DQN有個(gè)問題平挑,就是它并不一定能保證Q網(wǎng)絡(luò)的收斂游添。也就是說,我們不一定可以得到收斂后的Q網(wǎng)絡(luò)參數(shù)通熄,這會導(dǎo)致我們訓(xùn)練出的模型效果很差唆涝,因此也需要反復(fù)嘗試選取最好的模型。

Episode: 0 | Episode reward: 11.0 | loss: inf | e:1.00
Episode: 5 | Episode reward: 23.0 | loss: 0.816 | e:0.67
Episode: 10 | Episode reward: 18.0 | loss: 2.684 | e:0.46
Episode: 15 | Episode reward: 11.0 | loss: 3.662 | e:0.34
Episode: 20 | Episode reward: 16.0 | loss: 2.702 | e:0.23
Episode: 25 | Episode reward: 10.0 | loss: 4.092 | e:0.18
Episode: 30 | Episode reward: 12.0 | loss: 3.734 | e:0.13
...
Episode: 460 | Episode reward: 111.0 | loss: 6.325 | e:0.01
Episode: 465 | Episode reward: 180.0 | loss: 0.046 | e:0.01
Episode: 470 | Episode reward: 141.0 | loss: 0.136 | e:0.01
Episode: 475 | Episode reward: 169.0 | loss: 0.110 | e:0.01
Episode: 480 | Episode reward: 200.0 | loss: 0.095 | e:0.01
Episode: 485 | Episode reward: 200.0 | loss: 0.024 | e:0.01
Episode: 490 | Episode reward: 200.0 | loss: 0.066 | e:0.01
Episode: 495 | Episode reward: 146.0 | loss: 0.022 | e:0.01

Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 196.0, turns was: 196
Reward for this episode was: 198.0, turns was: 198
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 199.0, turns was: 199
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 193.0, turns was: 193
Reward for this episode was: 200.0, turns was: 200
Reward for this episode was: 189.0, turns was: 189
Reward for this episode was: 200.0, turns was: 200
[Figure: DQN training curve]

Comparison

(1) A Policy Network can handle continuous actions, whereas DQN can only deal with discrete problems by enumerating the actions; continuous actions must be discretized before DQN can handle them.

(2) A Policy Network chooses its action at random according to the output action probability, whereas DQN chooses actions with the ε-greedy strategy.

(3) DQN updates on individual rewards, i.e. the current reward is only tied to its immediate neighbour; a Policy Network instead stores all the rewards of an episode, corrects them with discounting, normalizes them and then performs the update.

最后編輯于
?著作權(quán)歸作者所有,轉(zhuǎn)載或內(nèi)容合作請聯(lián)系作者
  • 序言:七十年代末料身,一起剝皮案震驚了整個(gè)濱河市,隨后出現(xiàn)的幾起案子,更是在濱河造成了極大的恐慌翻翩,老刑警劉巖颤枪,帶你破解...
    沈念sama閱讀 217,542評論 6 504
  • 序言:濱河連續(xù)發(fā)生了三起死亡事件慢宗,死亡現(xiàn)場離奇詭異愁铺,居然都是意外死亡,警方通過查閱死者的電腦和手機(jī)蒙兰,發(fā)現(xiàn)死者居然都...
    沈念sama閱讀 92,822評論 3 394
  • 文/潘曉璐 我一進(jìn)店門客情,熙熙樓的掌柜王于貴愁眉苦臉地迎上來,“玉大人癞己,你說我怎么就攤上這事膀斋。” “怎么了痹雅?”我有些...
    開封第一講書人閱讀 163,912評論 0 354
  • 文/不壞的土叔 我叫張陵仰担,是天一觀的道長。 經(jīng)常有香客問我,道長摔蓝,這世上最難降的妖魔是什么赂苗? 我笑而不...
    開封第一講書人閱讀 58,449評論 1 293
  • 正文 為了忘掉前任,我火速辦了婚禮贮尉,結(jié)果婚禮上拌滋,老公的妹妹穿的比我還像新娘。我一直安慰自己猜谚,他們只是感情好败砂,可當(dāng)我...
    茶點(diǎn)故事閱讀 67,500評論 6 392
  • 文/花漫 我一把揭開白布。 她就那樣靜靜地躺著魏铅,像睡著了一般昌犹。 火紅的嫁衣襯著肌膚如雪。 梳的紋絲不亂的頭發(fā)上览芳,一...
    開封第一講書人閱讀 51,370評論 1 302
  • 那天斜姥,我揣著相機(jī)與錄音,去河邊找鬼沧竟。 笑死铸敏,一個(gè)胖子當(dāng)著我的面吹牛,可吹牛的內(nèi)容都是我干的悟泵。 我是一名探鬼主播杈笔,決...
    沈念sama閱讀 40,193評論 3 418
  • 文/蒼蘭香墨 我猛地睜開眼,長吁一口氣:“原來是場噩夢啊……” “哼魁袜!你這毒婦竟也來了?” 一聲冷哼從身側(cè)響起敦第,我...
    開封第一講書人閱讀 39,074評論 0 276
  • 序言:老撾萬榮一對情侶失蹤峰弹,失蹤者是張志新(化名)和其女友劉穎,沒想到半個(gè)月后芜果,有當(dāng)?shù)厝嗽跇淞掷锇l(fā)現(xiàn)了一具尸體鞠呈,經(jīng)...
    沈念sama閱讀 45,505評論 1 314
  • 正文 獨(dú)居荒郊野嶺守林人離奇死亡,尸身上長有42處帶血的膿包…… 初始之章·張勛 以下內(nèi)容為張勛視角 年9月15日...
    茶點(diǎn)故事閱讀 37,722評論 3 335
  • 正文 我和宋清朗相戀三年右钾,在試婚紗的時(shí)候發(fā)現(xiàn)自己被綠了蚁吝。 大學(xué)時(shí)的朋友給我發(fā)了我未婚夫和他白月光在一起吃飯的照片。...
    茶點(diǎn)故事閱讀 39,841評論 1 348
  • 序言:一個(gè)原本活蹦亂跳的男人離奇死亡舀射,死狀恐怖窘茁,靈堂內(nèi)的尸體忽然破棺而出,到底是詐尸還是另有隱情脆烟,我是刑警寧澤山林,帶...
    沈念sama閱讀 35,569評論 5 345
  • 正文 年R本政府宣布,位于F島的核電站邢羔,受9級特大地震影響驼抹,放射性物質(zhì)發(fā)生泄漏桑孩。R本人自食惡果不足惜,卻給世界環(huán)境...
    茶點(diǎn)故事閱讀 41,168評論 3 328
  • 文/蒙蒙 一框冀、第九天 我趴在偏房一處隱蔽的房頂上張望流椒。 院中可真熱鬧,春花似錦明也、人聲如沸宣虾。這莊子的主人今日做“春日...
    開封第一講書人閱讀 31,783評論 0 22
  • 文/蒼蘭香墨 我抬頭看了看天上的太陽安岂。三九已至,卻和暖如春帆吻,著一層夾襖步出監(jiān)牢的瞬間域那,已是汗流浹背。 一陣腳步聲響...
    開封第一講書人閱讀 32,918評論 1 269
  • 我被黑心中介騙來泰國打工猜煮, 沒想到剛下飛機(jī)就差點(diǎn)兒被人妖公主榨干…… 1. 我叫王不留次员,地道東北人。 一個(gè)月前我還...
    沈念sama閱讀 47,962評論 2 370
  • 正文 我出身青樓王带,卻偏偏與公主長得像淑蔚,于是被迫代替她去往敵國和親。 傳聞我的和親對象是個(gè)殘疾皇子愕撰,可洞房花燭夜當(dāng)晚...
    茶點(diǎn)故事閱讀 44,781評論 2 354

推薦閱讀更多精彩內(nèi)容