A repost of an answer on the difference between KL divergence and cross entropy.

KL divergence is a natural way to measure the difference between two probability distributions. The entropy H(p) of a distribution p gives the minimum possible number of bits per message that would be needed (on average) to losslessly encode events drawn from p. Achieving this bound would require using an optimal code designed for p, which assigns shorter code words to higher probability events. D_{KL}(p \parallel q) can be interpreted as the expected number of extra bits per message needed to encode events drawn from true distribution p, if using an optimal code for distribution q rather than p. It has some nice properties for comparing distributions. For example, if p and q are equal, then the KL divergence is 0.
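For discrete distributions p and q over the same set of events x, these quantities can be written out explicitly (using base-2 logarithms so that the units are bits):

H(p) = -\sum_x p(x) \log_2 p(x)

D_{KL}(p \parallel q) = \sum_x p(x) \log_2 \frac{p(x)}{q(x)}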

The cross entropy H(p,q) can be interpreted as the number of bits per message needed (on average) to encode events drawn from true distribution p, if using an optimal code for distribution q. Note the difference: D_{KL}(p \parallel q) measures the average number of extra bits per message, whereas H(p,q) measures the average number of total bits per message. It's true that, for fixed p, H(p,q) will grow as q becomes increasingly different from p. But, if p isn't held fixed, it's hard to interpret H(p,q) as an absolute measure of the difference, because it grows with the entropy of p.
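In the same notation, the cross entropy is:

H(p,q) = -\sum_x p(x) \log_2 q(x)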

KL divergence and cross entropy are related as:

D_{KL}(p \parallel q) = H(p,q) - H(p)

We can see from this expression that, when p and q are equal, the cross entropy is not zero; rather, it's equal to the entropy of p.
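As a quick numeric check of this relation, here is a minimal sketch using NumPy; the two distributions are made up purely for illustration:

import numpy as np

p = np.array([0.5, 0.25, 0.25])   # "true" distribution (made up for illustration)
q = np.array([0.4, 0.4, 0.2])     # model distribution (made up)

entropy_p = -np.sum(p * np.log2(p))    # H(p)
cross_ent = -np.sum(p * np.log2(q))    # H(p, q)
kl_pq = np.sum(p * np.log2(p / q))     # D_KL(p || q)

# The relation above: D_KL(p || q) = H(p, q) - H(p)
assert np.isclose(kl_pq, cross_ent - entropy_p)
# And when q equals p, the KL divergence is 0 while the cross entropy equals H(p)
assert np.isclose(np.sum(p * np.log2(p / p)), 0.0)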

Cross entropy commonly shows up in loss functions in machine learning. In many of these situations, p is treated as the 'true' distribution, and q as the model that we're trying to optimize. For example, in classification problems, the commonly used cross entropy loss (aka log loss) measures the cross entropy between the empirical distribution of the labels (given the inputs) and the distribution predicted by the classifier. The empirical distribution for each data point simply assigns probability 1 to the class of that data point, and 0 to all other classes. Side note: The cross entropy in this case turns out to be proportional to the negative log likelihood, so minimizing it is equivalent to maximizing the likelihood.
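As a small illustration of that side note (a sketch only, not any particular library's loss implementation; the predicted probabilities and labels below are made up):

import numpy as np

# Predicted class probabilities for 3 data points over 4 classes (made-up values)
q_pred = np.array([[0.7, 0.1, 0.1, 0.1],
                   [0.2, 0.5, 0.2, 0.1],
                   [0.1, 0.1, 0.1, 0.7]])
labels = np.array([0, 1, 3])   # true class of each data point

# Empirical distribution p: one-hot rows, probability 1 on the true class
p_true = np.eye(4)[labels]

# Cross entropy H(p, q) per data point; since p is one-hot, only the
# log-probability of the true class survives the sum ...
cross_entropy = -np.sum(p_true * np.log(q_pred), axis=1)

# ... which is exactly the negative log likelihood of the labels
nll = -np.log(q_pred[np.arange(len(labels)), labels])

assert np.allclose(cross_entropy, nll)
print(cross_entropy.mean())    # the usual "log loss"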

Note that p (the empirical distribution in this example) is fixed. So, it would be equivalent to say that we're minimizing the KL divergence between the empirical distribution and the predicted distribution. As we can see in the expression above, the two are related by the additive term H(p) (the entropy of the empirical distribution). Because p is fixed, H(p) doesn't change with the parameters of the model, and can be disregarded in the loss function. We might still want to talk about the KL divergence for theoretical/philosophical reasons but, in this case, they're equivalent from the perspective of solving the optimization problem. This may not be true for other uses of cross entropy and KL divergence, where p might vary.
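To make the equivalence concrete, here is a tiny made-up check: with p held fixed, cross entropy and KL divergence rank candidate models identically, so they select the same minimizer:

import numpy as np

p = np.array([0.5, 0.3, 0.2])   # fixed "true" distribution (made up)

def cross_entropy(p, q): return -np.sum(p * np.log(q))
def kl(p, q):            return  np.sum(p * np.log(p / q))

# A few candidate model distributions (made up)
candidates = [np.array([0.6, 0.2, 0.2]),
              np.array([0.5, 0.3, 0.2]),
              np.array([0.3, 0.4, 0.3])]

# Because p is fixed, the two objectives differ only by the constant H(p),
# so they are minimized by the same candidate.
best_by_ce = min(candidates, key=lambda q: cross_entropy(p, q))
best_by_kl = min(candidates, key=lambda q: kl(p, q))
assert np.allclose(best_by_ce, best_by_kl)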

t-SNE fits a distribution p in the input space. Each data point is mapped into the embedding space, where a corresponding distribution q is fit. The algorithm attempts to adjust the embedding to minimize D_{KL}(p \parallel q). As above, p is held fixed. So, from the perspective of the optimization problem, minimizing the KL divergence and minimizing the cross entropy are equivalent. Indeed, van der Maaten and Hinton (2008) say in section 2: "A natural measure of the faithfulness with which q_{j|i} models p_{j|i} is the Kullback-Leibler divergence (which is in this case equal to the cross-entropy up to an additive constant)."
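As a usage note, here is a minimal sketch assuming scikit-learn's TSNE and random made-up data; the fitted object reports the final KL divergence, which, by the argument above, is the quantity being optimized:

import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))    # made-up high-dimensional data

# t-SNE minimizes D_KL(p || q) between the input-space and embedding-space
# distributions; since p is fixed, this is the same optimization problem as
# minimizing the cross entropy H(p, q).
tsne = TSNE(n_components=2, perplexity=30, random_state=0)
embedding = tsne.fit_transform(X)

print(embedding.shape)        # (200, 2)
print(tsne.kl_divergence_)    # final value of the objective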

van der Maaten, L., & Hinton, G. (2008). Visualizing data using t-SNE. Journal of Machine Learning Research, 9, 2579–2605.

The original answer is on StackExchange, question #265966.
