Chapter 9: On-policy Prediction with Approximation

Beginning with this chapter, we move from tabular methods to approximate methods to tackle the curse of dimensionality in the state space. Instead of storing a lookup table of state values as in tabular methods, approximate methods learn state values with function approximation, i.e., \hat{v}(s, w) \approx v_\pi(s).
However, approximate methods are not a simple combination of RL and supervised learning. Compared to tabular RL methods, approximate methods introduce the challenge of generalization: updating w based on one state also changes the values of all other states, whereas in the tabular case the values of different states are decoupled. In other words, with function approximation we lose the policy improvement theorem that holds in the tabular case. Compared to standard supervised learning on a static distribution, function approximation in RL raises new issues such as nonstationarity (the training samples are collected online from a time-varying policy), bootstrapping (the learning target itself depends on the parameters), and delayed targets.
This chapter starts with the simplest case: on-policy prediction (value estimation) with approximation, given a fixed policy.

The Prediction Objective

The prediction problem can be viewed as a supervised learning problem, where the data distribution is the on-policy distribution \mu(s) generated by the policy \pi. The on-policy distribution is the normalized fraction of time spent in each state s.
Under the on-policy distribution, the learning objective is the mean squared value error, \overline{VE}(w) = \sum_{s \in \mathcal{S}} \mu(s) \big[ v_\pi(s) - \hat{v}(s, w) \big]^2. However, we need to note that

Remember that our ultimate purpose--the reason we are learning a value function--is to find a better policy. The best value function for this purpose is not necessarily the best for minimizing \overline{VE}. Nevertheless, it is not yet clear what a more useful alternative goal for value prediction might be.
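As a concrete illustration, the sketch below computes \overline{VE} for a tiny made-up problem with a linear approximator \hat{v}(s, w) = x(s)^T w; the distribution \mu, true values, features, and weights are all invented for this example.

```python
import numpy as np

# Hypothetical setup: 3 states with on-policy distribution mu,
# assumed-known true values v_pi, and linear features x (one row per state).
mu = np.array([0.5, 0.3, 0.2])     # normalized fraction of time in each state
v_pi = np.array([1.0, 0.0, -1.0])  # true state values (known only for illustration)
x = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
w = np.array([0.8, -0.1])          # current weight vector

v_hat = x @ w                          # approximate values for all states
ve = np.sum(mu * (v_pi - v_hat) ** 2)  # mu-weighted mean squared value error
```

Note how states with larger \mu(s) contribute more to the objective, so the approximation is pushed to be accurate where the policy actually spends its time.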

Stochastic-gradient and Semi-gradient Methods

If we knew the true state values, we could learn w with standard SGD as follows: w_{t+1} = w_t - \frac{1}{2} \alpha \nabla \big[ v_\pi(S_t) - \hat{v}(S_t, w_t) \big]^2 = w_t + \alpha \big[ v_\pi(S_t) - \hat{v}(S_t, w_t) \big] \nabla \hat{v}(S_t, w_t). However, the challenge in RL is that we do not have the ground-truth v_\pi(S_t) as in supervised learning. Instead, we substitute an estimated target U_t.
If U_t is an unbiased estimate, as in Monte Carlo (U_t = G_t), then w_t is guaranteed to converge to a local optimum under the usual stochastic approximation conditions on the decreasing step size \alpha.
However, for TD, the target R_{t+1} + \gamma \hat{v}(S_{t+1}, w_t) is not independent of w_t. Consequently, we cannot apply standard SGD; instead we use semi-gradient methods, which take into account only the gradient of the current estimate \hat{v}(S_t, w_t) with respect to w, while ignoring the dependence of the target on w. Although semi-gradient methods converge less robustly, they do converge reliably in the linear case, and more importantly, they typically enable significantly faster, fully continual, and online learning.
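A minimal sketch of gradient Monte Carlo prediction with a linear approximator, under assumed conventions (an episode is a list of (S_t, R_{t+1}) pairs, and x maps a state index to its feature vector); the function name and interface are illustrative, not from the text.

```python
import numpy as np

def gradient_mc_update(w, episode, x, alpha=0.01, gamma=1.0):
    """One gradient-MC sweep over a finished episode.

    episode: list of (state, reward) pairs in time order, i.e. (S_t, R_{t+1}).
    x: array mapping a state index to its feature vector; v_hat(s, w) = x[s] @ w.
    """
    # Compute returns G_t backward through the episode.
    G = 0.0
    targets = []
    for s, r in reversed(episode):
        G = r + gamma * G
        targets.append((s, G))
    # Apply the SGD update with unbiased target U_t = G_t, in time order.
    for s, G in reversed(targets):
        v_hat = x[s] @ w                    # current estimate
        w = w + alpha * (G - v_hat) * x[s]  # gradient of v_hat w.r.t. w is x[s]
    return w
```

Because G_t does not depend on w, this is a true SGD update on the squared error, which is what gives the convergence guarantee above.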
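The semi-gradient TD(0) step can be sketched as follows for the linear case, where treating the target as a constant means the update direction is simply the TD error times the current feature vector (function name and argument conventions are my own):

```python
import numpy as np

def semi_gradient_td0_step(w, x_t, r, x_next, alpha=0.1, gamma=0.9, terminal=False):
    """One semi-gradient TD(0) update for linear v_hat(s, w) = x(s) @ w.

    The target r + gamma * v_hat(S') is treated as a constant: we differentiate
    only the current estimate x_t @ w, so the 'gradient' used is just x_t.
    """
    v_next = 0.0 if terminal else x_next @ w
    delta = r + gamma * v_next - x_t @ w  # TD error
    return w + alpha * delta * x_t
```

Ignoring the target's dependence on w is exactly what makes this a semi-gradient rather than a true gradient method.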

Linear Methods and Least-Squares TD

When the approximation function is linear, we can write the semi-gradient update explicitly as:
\begin{aligned} w_{t+1} &= w_t + \alpha (R_{t+1} + \gamma w_t^T x_{t+1} - w_t^T x_t) x_t \\ &= w_t + \alpha \big[ R_{t+1} x_t - x_t(x_t - \gamma x_{t+1})^T w_t \big] \end{aligned} In expectation, we have \mathbb{E}[w_{t+1} \mid w_t] = w_t + \alpha (b - A w_t), where b = \mathbb{E}[R_{t+1} x_t] and A = \mathbb{E}[x_t (x_t - \gamma x_{t+1})^T]. Thus the converged solution, the TD fixed point, satisfies w_{\text{TD}} = A^{-1} b. Consequently, instead of iterating with SGD, we can accumulate sample estimates \hat{A} and \hat{b} and directly compute the closed-form solution. This is known as the least-squares TD (LSTD) algorithm, and its per-step complexity is O(d^2), where d is the dimension of the feature vector. A matrix inverse generally costs O(d^3); however, \hat{A} is a sum of vector outer products, so its inverse can be maintained incrementally in O(d^2) using the Sherman-Morrison formula.
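A sketch of LSTD maintaining the inverse of \hat{A} directly via Sherman-Morrison, so each update and weight read-out is O(d^2). The class name and the \epsilon I initialization (a standard trick to make the initial matrix invertible) are my own choices, not from the text.

```python
import numpy as np

class LSTD:
    """Least-squares TD with an incrementally maintained inverse.

    Keeps inv_A ~= (eps*I + sum_t x_t (x_t - gamma*x_{t+1})^T)^{-1} and
    b = sum_t R_{t+1} x_t, so weights() returns inv_A @ b in O(d^2).
    """
    def __init__(self, d, eps=1e-3):
        self.inv_A = np.eye(d) / eps  # inverse of the initial matrix eps*I
        self.b = np.zeros(d)

    def update(self, x, r, x_next, gamma=0.9):
        v = x - gamma * x_next  # second factor of the rank-1 outer product
        # Sherman-Morrison: (A + x v^T)^{-1}
        #   = A^{-1} - A^{-1} x v^T A^{-1} / (1 + v^T A^{-1} x),  in O(d^2).
        Ainv_x = self.inv_A @ x
        vT_Ainv = v @ self.inv_A
        self.inv_A -= np.outer(Ainv_x, vT_Ainv) / (1.0 + vT_Ainv @ x)
        self.b += r * x

    def weights(self):
        return self.inv_A @ self.b
```

Since the update adds exactly one outer product x_t (x_t - \gamma x_{t+1})^T to \hat{A}, the rank-1 inverse update applies directly, avoiding the O(d^3) inversion at every step.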
