[Chapter 5] Reinforcement Learning (3) Function Approximation and Going Deep

Function Approximation

While we are learning Q-functions, how do we represent or record the Q-values? For a discrete and finite state space and action space, we can use a big table of size |S| \times |A| to store the Q-values of all (s, a) pairs. However, when the state space or action space is huge, or, as is usually the case, continuous and infinite, a tabular method no longer works.

We need function approximation to represent the utility and Q-functions with a set of parameters {\theta} to be learnt. Taking the grid environment as our example again, we can represent a state by its coordinates (x, y); then one simple function approximation looks like this:

\hat{U}_{\theta} (x,y)={\theta}_0+{\theta}_1 x+{\theta}_2 y

Of course, you can design more complex functions when you have a much larger state space.
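As a minimal sketch (assuming NumPy and the 2-coordinate grid state above), the linear approximation is just a dot product between a feature vector and the parameters:

    import numpy as np

    def features(x, y):
        """Feature vector for a grid state (x, y): a bias term plus the two coordinates."""
        return np.array([1.0, x, y])

    def utility_hat(theta, x, y):
        """Linear utility approximation: U_hat_theta(x, y) = theta_0 + theta_1*x + theta_2*y."""
        return features(x, y) @ theta

    theta = np.zeros(3)              # the three parameters to be learnt
    print(utility_hat(theta, 2, 3))  # 0.0 before any learning

Richer feature vectors (or a neural network, as discussed later) slot into the same place without changing the rest of the learning loop.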

In this case, our reinforcement learning agent instead learns the parameters {\theta} that approximate the evaluation functions (\hat{U}_{\theta} or \hat{Q}_{\theta}).

For Monte Carlo learning, we can collect a set of training samples (trials) with inputs and labels, and the task becomes a supervised learning problem. With squared error and a linear function, we get a standard linear regression problem.
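As a rough sketch of that regression view (the trial data below is made up for illustration), we fit {\theta} by ordinary least squares against the Monte Carlo returns:

    import numpy as np

    # Hypothetical Monte Carlo data: each visited state (x, y) paired with
    # the observed return G from that state onward.
    states  = np.array([[1, 1], [2, 3], [4, 2], [3, 3]])   # assumed example states
    returns = np.array([0.5, 0.7, 0.2, 0.9])               # assumed example returns

    # Design matrix with a bias column: each row is the feature vector [1, x, y].
    X = np.hstack([np.ones((len(states), 1)), states])

    # Ordinary least squares: minimize the squared error between
    # U_hat_theta(x, y) and the Monte Carlo returns.
    theta, *_ = np.linalg.lstsq(X, returns, rcond=None)
    print(theta)   # [theta_0, theta_1, theta_2]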

For Temporal Difference learning, the agent adjusts the parameters to reduce the temporal difference (the TD error), updating them by gradient descent (a small code sketch follows the two update rules below):

  • For SARSA (on-policy method):

{\theta}_i \leftarrow {\theta}_i + {\alpha}\left(R(s) + {\gamma}\hat{Q}_{\theta}(s', a') - \hat{Q}_{\theta}(s, a)\right) \frac{\partial \hat{Q}_{\theta}(s, a)}{\partial {\theta}_i}

  • For Q-learning (off-policy method):

{\theta}_i \leftarrow {\theta}_i + {\alpha}\left(R(s) + {\gamma}\max_{a'} \hat{Q}_{\theta}(s', a') - \hat{Q}_{\theta}(s, a)\right) \frac{\partial \hat{Q}_{\theta}(s, a)}{\partial {\theta}_i}
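Here is a minimal sketch of both updates with a linear approximator \hat{Q}_{\theta}(s, a) = {\theta}^\top \phi(s, a); the feature map phi and the transition data are assumptions for illustration, not part of any particular environment. Note that for a linear approximator the gradient \partial \hat{Q}_{\theta}(s, a) / \partial {\theta}_i is simply the feature \phi_i(s, a).

    import numpy as np

    def q_hat(theta, phi_sa):
        """Linear action-value approximation: Q_hat_theta(s, a) = theta . phi(s, a)."""
        return theta @ phi_sa

    def sarsa_update(theta, phi_sa, r, phi_next_sa, alpha, gamma):
        """On-policy TD update: bootstrap on the action a' actually taken in s'."""
        td_error = r + gamma * q_hat(theta, phi_next_sa) - q_hat(theta, phi_sa)
        return theta + alpha * td_error * phi_sa   # gradient of a linear Q is phi(s, a)

    def q_learning_update(theta, phi_sa, r, phi_next_all, alpha, gamma):
        """Off-policy TD update: bootstrap on max over a' of Q_hat(s', a')."""
        best_next = max(q_hat(theta, phi) for phi in phi_next_all)
        td_error = r + gamma * best_next - q_hat(theta, phi_sa)
        return theta + alpha * td_error * phi_sa

The only difference between the two functions is the bootstrap target: SARSA uses the next action the policy actually chose, Q-learning uses the greedy maximum.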

Going Deep

One of the greatest advancements in reinforcement learning is combining it with deep learning. As stated above, in most cases we cannot use a tabular method to represent the evaluation functions; we need approximation! I know what you want to say: a deep network is a good function approximator. The network takes the state as input and outputs the Q-values or utilities, that's it! Using deep networks in RL is called deep reinforcement learning (DRL).

Why do we need deep networks?

  • Firstly, for environments with a nearly infinite state space, a deep network can hold a large set of parameters {\theta} to be learnt and can map a large set of states to their expected Q-values.
  • Secondly, some environments have complex observations that are hard to handle without deep networks. For example, if the observation is an RGB image, we can use convolutional neural network (CNN) layers at the front to read it; if the observation is a piece of audio, we can use recurrent neural network (RNN) layers at the front.
  • Finally, with today's advanced hardware and software, designing and training a deep neural network has become much easier.

One of the DRL algorithms is the Deep Q-Network (DQN); its pseudocode is shown below, but we will not go into the details:

[Figure: pseudocode of the DQN algorithm]
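For reference, a minimal sketch of the core DQN update (experience replay plus a target network) might look like the following. It assumes PyTorch, stores transitions as tensors, and uses hypothetical names (QNetwork, dqn_step); it is only an illustration of the idea, not the exact pseudocode from the figure.

    import random
    from collections import deque

    import torch
    import torch.nn as nn

    class QNetwork(nn.Module):
        """A small fully connected network mapping a state vector to one Q-value per action."""
        def __init__(self, state_dim, n_actions):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(state_dim, 64), nn.ReLU(),
                nn.Linear(64, n_actions),
            )

        def forward(self, s):
            return self.net(s)

    def dqn_step(policy_net, target_net, optimizer, replay, batch_size=32, gamma=0.99):
        """One gradient step on a minibatch sampled from the replay buffer.

        Each transition is a tuple (s, a, r, s_next, done) of tensors,
        with the action a stored as a long tensor and done as 0.0 or 1.0.
        """
        if len(replay) < batch_size:
            return
        batch = random.sample(list(replay), batch_size)
        s, a, r, s_next, done = map(torch.stack, zip(*batch))

        q_sa = policy_net(s).gather(1, a.unsqueeze(1)).squeeze(1)      # Q(s, a)
        with torch.no_grad():                                          # target network is frozen
            target = r + gamma * target_net(s_next).max(dim=1).values * (1 - done)

        loss = nn.functional.mse_loss(q_sa, target)                    # squared TD error
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Typical setup: the target network is a periodically refreshed copy of the policy network.
    # policy_net = QNetwork(state_dim=4, n_actions=2)
    # target_net = QNetwork(state_dim=4, n_actions=2)
    # target_net.load_state_dict(policy_net.state_dict())
    # optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
    # replay = deque(maxlen=10_000)

The replay buffer breaks the correlation between consecutive samples, and the target network keeps the bootstrap target stable between updates; both are the ingredients that make the Q-learning update of the previous section work with a deep approximator.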