[Chapter 2] Value Iteration and Policy Iteration

We now know that the key to computing an optimal policy is computing the value function. But how? (Everything that follows assumes an infinite-horizon problem.) The approaches to this problem can be roughly divided into two categories: value iteration and policy iteration.

Value Iteration

Based on the formula for the expected utility with a discount factor:

U^{\pi}(s)=E[\sum_{t=0}^{\infty}{\gamma^t R(s_t)}]

and the transition function P(s'|s,a) defined in the MDP model, there is an equation that the value function intuitively satisfies:

V(s)=R(s)+\gamma \max_{a \in A(s)} \sum_{s'}{P(s'|s,a)V(s')}

which is called the Bellman equation.

Unfortunately, the Bellman equation is difficult to solve directly: the max operator makes the system of equations nonlinear. This is why we need the value iteration method.

Based on the Bellman equation, we can derive the Bellman update:

U_{t+1}(s) \leftarrow R(s)+\gamma \max_{a \in A(s)} \sum_{s'}{P(s'|s,a)U_t(s')}

where t is the iteration index.
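As a toy illustration with made-up numbers (not from the text): suppose R(s)=-0.04, \gamma=0.9, and state s has two actions, where a_1 reaches s_1 with probability 1, a_2 reaches s_1 with probability 0.2 and s_2 with probability 0.8, and the current estimates are U_t(s_1)=0.5 and U_t(s_2)=0.8. Then

U_{t+1}(s) = -0.04 + 0.9 \max(1.0 \times 0.5,\ 0.2 \times 0.5 + 0.8 \times 0.8) = -0.04 + 0.9 \times 0.74 = 0.626

so the backed-up value of s reflects the look-ahead of the better action a_2.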

The value iteration algorithm can be described as following:

[Figure: pseudocode of the value iteration algorithm]

We can initialize the utilities of all states to 0 and apply the Bellman update repeatedly until the values converge (all utilities reach their unique fixed points). This is much cheaper than solving the Bellman equations directly.
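The following is a minimal Python sketch of this procedure; the MDP representation (dictionaries P and R, the callable actions, and the tolerance theta) is an illustrative assumption rather than something fixed by the text.

```python
def value_iteration(states, actions, P, R, gamma=0.9, theta=1e-6):
    """Hypothetical MDP layout: P[s][a] is a list of (prob, next_state) pairs,
    R[s] is the reward of state s, actions(s) returns the actions available in s."""
    U = {s: 0.0 for s in states}              # initialize all utilities to 0
    while True:
        delta = 0.0
        U_new = {}
        for s in states:
            # Bellman update: best one-step look-ahead over all actions
            best = max(sum(p * U[s2] for p, s2 in P[s][a]) for a in actions(s))
            U_new[s] = R[s] + gamma * best
            delta = max(delta, abs(U_new[s] - U[s]))
        U = U_new
        if delta < theta:                     # stop once the values have (numerically) converged
            return U
```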

Policy Iteration

In the first method we update the value/utility of each state; in policy iteration, we instead initialize and update a policy. The observation behind this is that finding the optimal policy sometimes does not require a highly accurate value function, for example when one action is clearly better than the others. So instead of computing values with the Bellman update, we initialize a policy {\pi}_0 and then improve it by alternating the following two main steps:

  • Policy evaluation: given the current policy {\pi}_t, calculate the next-step utilities U_{t+1}:

U_{t+1}(s) \leftarrow R(s)+\gamma \sum_{s'}{P(s'|s,{\pi}_t(s))U_t(s')}

This update is similar to, but simpler than, the Bellman update: there is no need to maximize over all possible actions, since we use the action prescribed by the policy at iteration t.

  • Policy improvement: use the calculated one-step look-ahead utilities U_{t+1}(s) to compute a new policy {\pi}_{t+1}.

To improve the policy, we need to find a better policy to replace the current one. To do so, we introduce the action-value function, or Q-function, for a policy {\pi}:

Q^{\pi}(s,a)=R(s)+\gamma \sum_{s'}{P(s'|s,a)U^{\pi}(s')}

The main difference between the Q-function and the value function is that the Q-function gives the expected utility after committing to a specific action a. If the state space and action space have sizes |S| and |A| respectively, then for each policy {\pi} there are |S| values, one per state, and |S| \times |A| Q-values, one per state-action pair.
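A one-line Python sketch of this Q-function, under the same hypothetical MDP representation as in the value iteration sketch (P[s][a] as (probability, next_state) pairs, reward R[s], discount gamma):

```python
def q_value(s, a, U, P, R, gamma=0.9):
    # Expected utility of taking action a in state s and then
    # following the policy whose utilities are U.
    return R[s] + gamma * sum(p * U[s2] for p, s2 in P[s][a])
```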

The policy improvement theorem can then be stated simply: suppose a new policy {\pi}' satisfies, for all s \in S:

Q^{\pi}(s,{\pi}'(s)) \geq U^{\pi}(s)

Then for all s \in S:

U^{\pi'}(s) \geq U^{\pi}(s)

In this case {\pi}' is at least as good as {\pi}, so we can improve the policy by replacing {\pi} with {\pi}'.

Alternating these two steps until the policy no longer changes yields the optimal policy. This is the policy iteration algorithm, described as follows:

[Figure: pseudocode of the policy iteration algorithm]
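A minimal Python sketch of this loop, reusing the hypothetical MDP representation from the value iteration sketch; the sweeps parameter (how many evaluation backups per iteration) is an added assumption that will be useful below.

```python
def policy_iteration(states, actions, P, R, gamma=0.9, sweeps=1):
    pi = {s: actions(s)[0] for s in states}   # arbitrary initial policy
    U = {s: 0.0 for s in states}
    while True:
        # Policy evaluation: back up utilities under the fixed policy pi
        for _ in range(sweeps):
            U = {s: R[s] + gamma * sum(p * U[s2] for p, s2 in P[s][pi[s]])
                 for s in states}
        # Policy improvement: act greedily with respect to the evaluated utilities
        stable = True
        for s in states:
            best_a = max(actions(s),
                         key=lambda a: sum(p * U[s2] for p, s2 in P[s][a]))
            if best_a != pi[s]:
                pi[s] = best_a
                stable = False
        if stable:                            # no change in the policy: stop
            return pi, U
```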

More Accurate Policy Iteration Methods

The policy iteration method above, in which policy evaluation and policy improvement each take a single step, is called generalized policy iteration, a very general and common scheme. However, there are other variants based on more accurate policy evaluation.

  • For each policy evaluation step, instead of updating the utilities for only one backup step, solve for the exact utilities using the following set of equations (a sketch follows this list):

U_t(s)=R(s)+\gamma \sum_{s'}{P(s'|s,{\pi}_t(s))U_t(s')}

For N states there are N linear equations, which can be solved in O(N^3) time with basic linear algebra. This is more expensive but exact; for most problems, however, it is overkill, since we usually do not need the most accurate utilities.

  • Another method, called modified policy iteration, runs k update steps in the policy evaluation phase instead of iterating to convergence or performing only a single step; k can be chosen according to the environment and the problem (see the note after the sketch below).
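For the exact-evaluation variant, here is a sketch that uses numpy to solve the N linear equations directly; the function name, the indexing scheme, and the MDP layout are the same illustrative assumptions as before.

```python
import numpy as np

def evaluate_policy_exact(states, P, R, pi, gamma=0.9):
    n = len(states)
    idx = {s: i for i, s in enumerate(states)}
    P_pi = np.zeros((n, n))                   # transition matrix under policy pi
    for s in states:
        for p, s2 in P[s][pi[s]]:
            P_pi[idx[s], idx[s2]] += p
    r = np.array([R[s] for s in states], dtype=float)
    # U = r + gamma * P_pi @ U  =>  (I - gamma * P_pi) U = r, an O(n^3) solve
    U = np.linalg.solve(np.eye(n) - gamma * P_pi, r)
    return {s: U[idx[s]] for s in states}
```

For modified policy iteration, the hypothetical sweeps parameter in the earlier policy_iteration sketch plays the role of k: setting sweeps=k performs k evaluation backups per iteration instead of a single sweep or a full solve.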