
Chapter 6: Temporal-Difference Learning

Temporal-difference (TD) learning is a combination of DP ideas and MC ideas. Like MC, it learns from sample experience without a model of the environment's dynamics. Like DP, it performs value estimation by bootstrapping, i.e., updating estimates based on other learned estimates.
DP, MC, and TD all follow the GPI framework and share the same greedy policy improvement strategy; their main differences lie in how they estimate the value function of the policy.

TD Prediction

Both TD and constant-\alpha MC follow the same update rule with different targets. For the MC update, the target is G_t, and the update rule is V(S_t) \leftarrow V(S_t) + \alpha [G_t - V(S_t)]. For the TD(0) update (TD(0) is a special case of TD(\lambda), which will be covered in later chapters), the target is R_{t+1} + \gamma V(S_{t+1}), and the update rule is V(S_t) \leftarrow V(S_t) + \alpha [R_{t+1} + \gamma V(S_{t+1}) - V(S_t)]. Both are sample updates because they are based on a single sampled successor, rather than on the complete distribution of all possible successors as in the expected updates of DP.
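The TD(0) update above can be sketched in a few lines of tabular code. This is a minimal illustration, not the book's code: the environment interface `env_step(state, rng) -> (reward, next_state, done)` and the 5-state random walk below are my own assumptions for the sake of a runnable example.

```python
import random

def td0_prediction(env_step, start_state, n_states,
                   episodes=500, alpha=0.05, gamma=1.0, seed=1):
    """Tabular TD(0) prediction for a fixed policy.

    env_step(state, rng) -> (reward, next_state, done) samples one
    transition under the policy being evaluated (hypothetical interface).
    """
    rng = random.Random(seed)
    V = [0.0] * n_states
    for _ in range(episodes):
        s, done = start_state, False
        while not done:
            r, s2, done = env_step(s, rng)
            target = r if done else r + gamma * V[s2]
            V[s] += alpha * (target - V[s])  # TD(0) update
            s = s2
    return V

def random_walk(s, rng):
    """5-state random walk (states 0..4): step left/right uniformly;
    falling off the right end yields reward 1, the left end reward 0."""
    s2 = s + (1 if rng.random() < 0.5 else -1)
    if s2 < 0:
        return 0.0, 0, True
    if s2 > 4:
        return 1.0, 0, True
    return 0.0, s2, False
```

On this walk the true values increase from left to right (1/6 up to 5/6), and the learned V should reflect that ordering after a few hundred episodes.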
The advantages of TD over MC are mainly twofold. First, TD is fully incremental: it updates at every step, while MC can only update after a complete episode. TD is therefore usually more efficient and also works on continuing tasks. Second, TD exploits the structure of the MDP. In the batch learning setting, MC finds the estimates that minimize mean-squared error on the training set, while batch TD finds the estimates that would be exactly correct for the maximum-likelihood model of the MDP.
The main issue with TD is that it bootstraps, which introduces bias: the targets depend on the current, possibly inaccurate, value estimates.
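The batch MC vs. batch TD distinction can be checked concretely on the classic two-state example from Sutton and Barto (eight episodes over states A and B, \gamma = 1): batch MC averages observed returns, while batch TD converges to the values of the maximum-likelihood model of the transitions. A small sketch:

```python
# Episodes as lists of (state, reward) steps; gamma = 1.
# One episode A -> B ending with reward 0; six episodes from B with
# reward 1; one episode from B with reward 0.
episodes = [[("A", 0), ("B", 0)]] + [[("B", 1)]] * 6 + [[("B", 0)]]

# Batch MC: average the returns observed from each state.
returns = {"A": [], "B": []}
for ep in episodes:
    rewards = [r for _, r in ep]
    for i, (s, _) in enumerate(ep):
        returns[s].append(sum(rewards[i:]))
V_mc = {s: sum(g) / len(g) for s, g in returns.items()}

# Batch TD converges to the maximum-likelihood model's values:
# A always transitions to B with reward 0, so V(A) = V(B);
# from B, 6 of the 8 observed transitions terminate with reward 1.
V_td = {"B": 6 / 8}
V_td["A"] = 0 + V_td["B"]
```

Here batch MC gives V(A) = 0 (the only episode through A returned 0), whereas batch TD gives V(A) = 0.75, which is the better estimate under the Markov assumption.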

Sarsa: On-policy TD Control

Sarsa is an on-policy TD method which learns from the quintuple (S_t, A_t, R_{t+1}, S_{t+1}, A_{t+1}), and the update rule is Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha [R_{t+1} + \gamma Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t)]. Sarsa is on-policy because, in expectation, the update rule learns the value of a target policy identical to the behavior policy.
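A single Sarsa update, together with the \epsilon-greedy behavior policy it is usually paired with, can be sketched as follows; the dictionary-based Q table and function names are my own choices for illustration.

```python
import random
from collections import defaultdict

def epsilon_greedy(Q, state, n_actions, eps, rng):
    """Behavior policy: random action w.p. eps, otherwise greedy in Q."""
    if rng.random() < eps:
        return rng.randrange(n_actions)
    return max(range(n_actions), key=lambda a: Q[(state, a)])

def sarsa_update(Q, s, a, r, s2, a2, done, alpha=0.5, gamma=1.0):
    """One Sarsa update from the quintuple (S_t, A_t, R_{t+1}, S_{t+1}, A_{t+1})."""
    target = r if done else r + gamma * Q[(s2, a2)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```

Note that the bootstrap term uses Q(S_{t+1}, A_{t+1}) for the action actually taken next, which is what makes the method on-policy.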

Q-learning: Off-policy TD Control

Q-learning learns a deterministic greedy target policy while sampling episodes with an \epsilon-greedy behavior policy based on the learned Q values. The update rule is Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha [R_{t+1} + \gamma \max_a Q(S_{t+1}, a) - Q(S_t, A_t)]. An interesting point is that although Q-learning is off-policy, it requires no importance sampling as introduced in MC. The reason is that the update starts from a given pair (S_t, A_t), and the only sampled quantities in the target, R_{t+1} and S_{t+1}, come from the environment's dynamics, which are the same under both policies; no next action is sampled from the target policy, so the importance sampling ratio is 1.
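The only change relative to Sarsa is the bootstrap term: the target maximizes over next actions instead of using the action the behavior policy actually took. A minimal sketch, again with an assumed dictionary-based Q table:

```python
from collections import defaultdict

def q_learning_update(Q, s, a, r, s2, n_actions, done, alpha=0.5, gamma=1.0):
    """One Q-learning update: bootstrap from max_a Q(S_{t+1}, a),
    regardless of which action the behavior policy takes next."""
    if done:
        target = r
    else:
        target = r + gamma * max(Q[(s2, b)] for b in range(n_actions))
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```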

Expected Sarsa

I think Expected Sarsa can be seen as a generalization of Q-learning, where the learning target is the expected value over the next state's actions under the target policy \pi (which equals V_\pi(S_{t+1}) when the action values are accurate), i.e., Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha [R_{t+1} + \gamma \sum_a \pi(a|S_{t+1}) Q(S_{t+1}, a) - Q(S_t, A_t)]. It is equivalent to Q-learning when the target policy \pi is the greedy (deterministic) policy with respect to Q.
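The reduction to Q-learning is easy to verify in code. Below is a sketch of the Expected Sarsa target under an assumed \epsilon-greedy target policy: with \epsilon = 0 the policy becomes greedy and the expectation collapses to \max_a Q(S_{t+1}, a).

```python
from collections import defaultdict

def expected_sarsa_target(Q, s2, n_actions, eps, gamma, r, done):
    """Expected Sarsa target under an eps-greedy target policy pi.
    eps = 0 recovers the Q-learning target."""
    if done:
        return r
    qs = [Q[(s2, a)] for a in range(n_actions)]
    best = max(range(n_actions), key=lambda a: qs[a])
    # pi(a|s2) puts eps/n_actions on every action plus 1-eps on the greedy one
    expected = sum((eps / n_actions) * q for q in qs) + (1 - eps) * qs[best]
    return r + gamma * expected
```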

Maximization Bias and Double Learning

In both on-policy and off-policy learning, we use Q(\arg\max_a Q(a)) to estimate the value of the optimal action q(A^*), where A^* = \arg\max_a Q(a). This introduces maximization bias, and double learning is a good way to avoid it. Specifically, we learn two value functions Q_1 and Q_2. We select the optimal action with Q_1, i.e., A^* = \arg\max_a Q_1(a), and estimate its value with Q_2. In this way, we have \mathbb{E}[Q_2(A^*)] = q(A^*), because the action selection is independent of the value estimate from Q_2.
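The scheme above applied to Q-learning gives double Q-learning: on each step a coin flip decides which table selects the argmax action and which table evaluates it. A minimal sketch, with the same assumed dictionary-based Q tables as before:

```python
import random
from collections import defaultdict

def double_q_update(Q1, Q2, s, a, r, s2, n_actions, done,
                    alpha=0.5, gamma=1.0, rng=random):
    """One double Q-learning update: with prob. 1/2 swap the roles of
    the two tables, then select the action with one table and evaluate
    it with the other; only the selecting table is updated."""
    if rng.random() < 0.5:
        Q1, Q2 = Q2, Q1  # swap roles for this step
    if done:
        target = r
    else:
        a_star = max(range(n_actions), key=lambda b: Q1[(s2, b)])  # select with Q1
        target = r + gamma * Q2[(s2, a_star)]                      # evaluate with Q2
    Q1[(s, a)] += alpha * (target - Q1[(s, a)])
```

Acting (e.g. \epsilon-greedily) is typically done with respect to Q_1 + Q_2, so both tables contribute to behavior while each update stays unbiased.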

Approximations in TD

Most approximations in RL come from how different methods approximate the true learning target \mathbb{E}[G_t]. TD methods involve the following intrinsic approximations.

  1. Sample update instead of expected update (even expected Sarsa has to sample S_{t+1}).
  2. Bootstrap.