Chapter 5: Monte Carlo Methods

Monte Carlo (MC) methods estimate value functions and discover optimal policies by averaging complete returns from sampled experience. Some high-level characteristics include:

  1. Unlike DP, MC does not require the model dynamics. Instead, it learns from (simulated) experience. This is valuable because in many cases it is easy to generate sample trajectories but infeasible to obtain the model dynamics in explicit form.
  2. MC needs complete returns, and is therefore only suitable for episodic tasks. It can be incremental in an episode-by-episode sense, but not in a step-by-step (online) sense.
  3. MC does not bootstrap, so its value estimates are unbiased. Another advantage is that we can estimate the values of only the states we are interested in, regardless of the size of the state space.
Following the general framework of GPI, MC also consists of a value estimation (prediction) phase and a policy improvement (control) phase.

Monte Carlo Prediction

This phase learns the value functions for a given policy.
To estimate the state value v_\pi(s), it simply averages the returns observed after visits to s across sample episodes. It can be further categorized into first-visit MC and every-visit MC, both of which are guaranteed to converge as the number of visits to s goes to infinity.
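As a concrete illustration, below is a minimal sketch of first-visit MC prediction. The helper generate_episode() is an assumption for this sketch: it follows the policy \pi being evaluated and returns a list of (state, reward) pairs, where each reward is the one received after leaving that state.

```python
from collections import defaultdict

def first_visit_mc_prediction(generate_episode, gamma=1.0, num_episodes=10_000):
    # Estimate v_pi(s) by averaging the returns that follow the first visit
    # to s in each sampled episode (first-visit MC).
    returns_sum = defaultdict(float)
    returns_cnt = defaultdict(int)
    V = defaultdict(float)

    for _ in range(num_episodes):
        episode = generate_episode()             # assumed: [(S_0, R_1), (S_1, R_2), ...]
        first_visit = {}
        for t, (s, _) in enumerate(episode):
            first_visit.setdefault(s, t)         # index of the first visit to s
        G = 0.0
        for t in reversed(range(len(episode))):  # accumulate the return backwards
            s, r = episode[t]
            G = gamma * G + r
            if first_visit[s] == t:              # count each state once per episode
                returns_sum[s] += G
                returns_cnt[s] += 1
                V[s] = returns_sum[s] / returns_cnt[s]
    return V
```

Every-visit MC differs only in dropping the first-visit check, so every occurrence of s in an episode contributes a return to the average.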
Since we cannot derive the optimal policy from state values alone without knowing the model dynamics, it is more important in MC to estimate state-action values q_\pi(s, a). A key problem here is how to maintain exploration, so that every state-action pair is visited often enough for its value to be estimated and the optimal policy to be found. This will be discussed in the next part.
The backup diagram of MC is a single line of the sampled trajectory.

Monte Carlo Control

MC control is guaranteed to converge to the optimal policy under the following two assumptions:

  1. Each state-action pair has a non-zero probability of being visited;
  2. The policy evaluation can be done with an infinite number of episodes.

However, these two assumptions are usually unrealistic in real-world problems. One way to relax the second assumption is to approximate policy evaluation with a finite number of episodes, at the cost of higher variance.
Theoretically, a solution to the first assumption is exploring starts, which ensures that every state-action pair has a non-zero probability of being selected at the start of an episode. However, this is infeasible in most cases, as we usually cannot start trajectories from arbitrary state-action pairs. More practical solutions to this problem include on-policy methods and off-policy methods.
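For reference, here is a minimal sketch of MC control with exploring starts. The helper run_episode_from(s0, a0, policy) is an assumption: it forces the episode to begin with the pair (s0, a0), follows policy afterwards, and returns a list of (state, action, reward) triples.

```python
import random
from collections import defaultdict

def mc_es_control(run_episode_from, states, actions, gamma=1.0, num_episodes=50_000):
    # Monte Carlo control with exploring starts, using first-visit updates.
    Q = defaultdict(float)
    counts = defaultdict(int)
    policy = {s: random.choice(actions) for s in states}

    for _ in range(num_episodes):
        # Exploring start: every (s, a) pair has a non-zero start probability.
        s0, a0 = random.choice(states), random.choice(actions)
        episode = run_episode_from(s0, a0, policy)   # assumed: [(S_t, A_t, R_{t+1}), ...]

        first_visit = {}
        for t, (s, a, _) in enumerate(episode):
            first_visit.setdefault((s, a), t)
        G = 0.0
        for t in reversed(range(len(episode))):
            s, a, r = episode[t]
            G = gamma * G + r
            if first_visit[(s, a)] == t:             # first visit of (s, a) in this episode
                counts[(s, a)] += 1
                Q[(s, a)] += (G - Q[(s, a)]) / counts[(s, a)]      # incremental mean
                policy[s] = max(actions, key=lambda b: Q[(s, b)])  # greedy improvement
    return policy, Q
```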

On-policy Control

On-policy methods directly improve the policy that is used to generate the data. They ensure exploration by learning an \epsilon-greedy policy instead of a deterministic one. The cost is that we can only obtain the optimal \epsilon-soft policy instead of the truly optimal policy.
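As a small illustration, a hypothetical helper that samples from the \epsilon-greedy policy derived from the current action-value estimates might look like this:

```python
import random

def epsilon_greedy_action(Q, state, actions, epsilon=0.1):
    # With probability epsilon pick a uniformly random action (exploration);
    # otherwise pick the greedy action under the current estimates Q,
    # where Q is assumed to map (state, action) pairs to estimated values.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q.get((state, a), 0.0))
```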

Off-policy Control

Off-policy methods learn an optimal target policy \pi that is different from the behavior policy b used to generate the sample trajectories. We can use a soft behavior policy to ensure exploration while still learning a deterministic target policy.
The key challenge is that we cannot directly average the returns generated by the behavior policy, because they reflect the value function of the behavior policy rather than that of the target policy, i.e., \mathbb{E}[G_t | S_t=s] = v_b(s) \neq v_\pi(s).
To solve this problem, we introduce importance sampling, which reweights the samples from the behavior policy to align them with the sample distribution under the target policy. The importance sampling ratio is defined as \rho_{t:T-1} = \frac{\prod_{k=t}^{T-1} \pi(A_k|S_k)p(S_{k+1}|S_k, A_k)}{\prod_{k=t}^{T-1} b(A_k|S_k)p(S_{k+1}|S_k, A_k)} = \prod_{k=t}^{T-1} \frac{\pi(A_k|S_k)}{b(A_k|S_k)}, which gives \mathbb{E}[\rho_{t:T-1} G_t | S_t = s] = v_\pi(s).
There are two approaches to estimating v_\pi(s). The first is ordinary importance sampling, V(s) = \frac{\sum_{k=1}^{n-1} W_k G_k}{n-1}; the second is weighted importance sampling, V(s) = \frac{\sum_{k=1}^{n-1} W_k G_k}{\sum_{k=1}^{n-1} W_k}, where W_k denotes the importance sampling ratio of the k-th return for simplicity. Ordinary importance sampling is unbiased but can have very large or even infinite variance. Weighted importance sampling has bounded variance, and its bias converges to zero as the number of samples increases, so it is strongly preferred in practice. Moreover, the weighted estimator can be computed incrementally as V_{n+1} = V_n + \frac{W_n}{C_n} \big[ G_n - V_n \big], where C_{n+1} = C_n + W_{n+1} and C_0 = 0.
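The incremental update above is easy to translate into code. Below is a minimal sketch for a single state; the class name and interface are assumptions made for illustration.

```python
class WeightedISValueEstimate:
    # Incremental weighted importance sampling for one state:
    # V_{n+1} = V_n + (W_n / C_n) * (G_n - V_n),  C_{n+1} = C_n + W_{n+1},  C_0 = 0.
    def __init__(self):
        self.V = 0.0
        self.C = 0.0   # cumulative sum of importance-sampling ratios

    def update(self, G, W):
        # G: return observed under the behavior policy; W: importance-sampling ratio.
        if W == 0.0:
            return self.V          # a zero ratio leaves the weighted average unchanged
        self.C += W
        self.V += (W / self.C) * (G - self.V)
        return self.V
```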
The off-policy MC control algorithm is shown in the figure below. A problem is highlighted in its last two lines: when the target policy is deterministic, the importance sampling ratio becomes zero as soon as the behavior policy takes an action that is not greedy under the current target policy. When nongreedy actions are common, the method learns only from the tails of episodes, which greatly slows down learning.

[Figure: Off-policy MC control.]
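The loop below sketches that algorithm. The helper generate_behavior_episode() is an assumption: it follows a soft behavior policy b and returns (state, action, reward, b_prob) tuples with b_prob = b(A_t | S_t). The early break is exactly the issue described above.

```python
from collections import defaultdict

def off_policy_mc_control(generate_behavior_episode, actions, gamma=1.0, num_episodes=100_000):
    Q = defaultdict(float)
    C = defaultdict(float)      # cumulative weights per (state, action)
    target = {}                 # deterministic greedy target policy

    for _ in range(num_episodes):
        episode = generate_behavior_episode()   # assumed: [(S_t, A_t, R_{t+1}, b(A_t|S_t)), ...]
        G, W = 0.0, 1.0
        for s, a, r, b_prob in reversed(episode):
            G = gamma * G + r
            C[(s, a)] += W
            Q[(s, a)] += (W / C[(s, a)]) * (G - Q[(s, a)])      # weighted IS update
            target[s] = max(actions, key=lambda x: Q[(s, x)])   # greedy improvement
            if a != target[s]:
                # pi(a|s) = 0 for the deterministic target policy, so the ratio is
                # zero for all earlier steps: learning uses only episode tails.
                break
            W *= 1.0 / b_prob                                   # pi(a|s) = 1 for the greedy action
    return target, Q
```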

Approximations in MC

  1. Truncated policy evaluation with only a finite number of sample episodes.
  2. Using returns from trajectories sampled earlier under different (older) policies for policy evaluation.

Under these approximations, is it still wise to fully trust the estimated values to determine a greedy policy improvement?
