<p><span style="font-size:16px">This article introduces GraphCL (NIPS 2020), a graph pre-training model based on randomly perturbed subgraphs and mutual information maximization, covering the model's core ideas and overall approach. For the full slide deck, follow the WeChat account 【AI機器學(xué)習(xí)與知識圖譜】 and reply with the keyword: <strong>GraphCL</strong></span></p><p>
</p><p><span style="font-size:18px"><strong>I. Background</strong></span></p><p><span style="font-size:16px"><b>Why pre-train graphs:</b> Graph neural networks (GNNs) have proven to be powerful tools for modeling graph-structured data. However, training a GNN model usually requires a large amount of task-specific labeled data, which is often expensive to obtain. Pre-training a self-supervised GNN on unlabeled graph data is an effective way to reduce the labeling effort; the pre-trained model can then be applied to downstream tasks where only a small amount of labeled graph data is available.</span></p><p>
</p><p><span style="font-size:16px"><b>Large-scale graph pre-training:</b> Large-scale knowledge-graph pre-training schemes generally follow the same recipe: first, sample subgraphs and train the model on them; second, adopt a self-supervised objective, masking nodes or edges in the graph during training; third, use negative sampling when computing the loss, since on a large graph the loss cannot be computed over all negative examples.</span></p><p><b>
</b></p><p><span style="font-size:16px"><b>Contrastive vs. generative learning:</b> see the previous post for a detailed explanation.</span></p><p>
</p><p><span style="font-size:18px"><strong>II. The GraphCL Model</strong></span></p><p><span style="font-size:16px"><span>
</span></span></p><p><span style="font-size:16px">GraphCL is a self-supervised graph pre-training model based on contrastive learning. For each node, GraphCL draws two randomly perturbed L-hop subgraphs of that node and performs self-supervised learning by maximizing the similarity between the two subgraphs. Three questions are addressed below.</span></p><p><b>
</b></p><p><span style="font-size:16px"><b>Question 1: A Stochastic Perturbation.</b> How are two L-hop subgraphs obtained for one node? Starting from the node's complete L-hop subgraph, the paper randomly drops each edge with probability p to generate two different subgraph structures.</span></p><p><b>
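</b></p><p><span style="font-size:16px">As a minimal sketch of this edge-dropping perturbation (the function names are mine, not the paper's), each edge is kept or dropped independently with probability p:</span></p>

```python
import random

def drop_edges(edges, p, seed=None):
    """Return a perturbed copy of an edge list in which each edge is
    independently dropped with probability p (illustrative helper)."""
    rng = random.Random(seed)
    return [e for e in edges if rng.random() >= p]

def two_views(edges, p, seed=None):
    """Generate the two randomly perturbed views of one L-hop subgraph."""
    rng = random.Random(seed)
    return drop_edges(edges, p, rng.random()), drop_edges(edges, p, rng.random())
```

<p><span style="font-size:16px">Calling this on the same L-hop subgraph yields the two correlated views that the contrastive objective later compares.</span></p><p><b>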
</b></p><p><span style="font-size:16px"><b>Question 2: A GNN-based Encoder.</b> Which graph neural network is used to encode the two L-hop subgraphs? A simple GCN model (Hamilton et al., 2017) whose aggregation uses the mean-pooling propagation rule, although the rule differs between transductive and inductive learning. For transductive learning the aggregation is:</span></p><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-acbaaea73bb0ddf0.jpeg" img-data="{"format":"jpeg","size":2925,"height":40,"width":277}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><p><span style="font-size:16px">For inductive learning the aggregation is:</span>
</p><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-cabab5c0cdb6f667.jpeg" img-data="{"format":"jpeg","size":8926,"height":140,"width":327}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><p>
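</p><p><span style="font-size:16px">A stripped-down sketch of one mean-pooling propagation step (my own illustrative code; the learned weight matrix and nonlinearity of the full GCN layer are omitted), in which each node averages its own and its neighbours' features:</span></p>

```python
def mean_pool_step(feats, adj):
    """One mean-pooling propagation step: the new feature of node v is the
    mean of v's feature and its neighbours' features. feats maps node -> list
    of floats; adj maps node -> list of neighbour ids."""
    out = {}
    for v, f in feats.items():
        neigh = [feats[u] for u in adj.get(v, [])] + [f]
        out[v] = [sum(col) / len(neigh) for col in zip(*neigh)]
    return out
```

<p>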
</p><p><span style="font-size:16px"><b>Question 3: A Contrastive Loss Function.</b> How is the loss defined? The similarity between the two L-hop subgraphs is measured with cosine similarity, and the loss is based on a normalized temperature-scaled cross entropy, as shown in the formula below, where the indicator 1_([u≠v]) equals 1 when u ≠ v and 0 otherwise, and τ is a temperature parameter.</span></p><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-ac6a40a28411a3c2.jpeg" img-data="{"format":"jpeg","size":2507,"height":49,"width":233}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><p/><p>
</p><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-8236db804cb03615.jpeg" img-data="{"format":"jpeg","size":9017,"height":70,"width":706}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><p>
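</p><p><span style="font-size:16px">The loss above can be sketched in plain Python (a simplified NT-Xent, not the authors' code; here the denominator runs over the positive pair plus all cross-view negatives in the batch):</span></p>

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return sum(x * y for x, y in zip(a, b)) / norm

def nt_xent(z1, z2, tau=0.5):
    """Normalized temperature-scaled cross entropy over a batch of paired
    embeddings: z1[i] and z2[i] are the two views of node i. Each positive
    pair is contrasted against the other samples' second views."""
    n = len(z1)
    loss = 0.0
    for i in range(n):
        pos = math.exp(cosine(z1[i], z2[i]) / tau)
        denom = sum(math.exp(cosine(z1[i], z2[j]) / tau) for j in range(n))
        loss += -math.log(pos / denom)
    return loss / n
```

<p><span style="font-size:16px">A lower value of τ sharpens the softmax, penalizing hard negatives more strongly.</span></p><p>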
</p><p><span style="font-size:16px"><strong>GraphCL training procedure</strong></span>
</p><p>
</p><p><span style="font-size:16px">For a sampled mini-batch B, GraphCL executes the following steps:
1. For each node u in B, define (X_u, A_u) as the L-hop subgraph of u, containing all nodes and edges within L hops of u together with their features;</span></p><p><span style="font-size:16px">2. Apply the perturbation strategy described above to obtain two perturbed L-hop subgraphs t_1, t_2 of node u, as shown below:<span>
</span></span></p><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-7774c5dbe0747bd9.jpeg" img-data="{"format":"jpeg","size":4735,"height":71,"width":245}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><p><span style="font-size:16px">3. Apply the graph encoder f to t_1 and t_2 to obtain their representations, as shown below:</span></p><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-8b275480c0d27203.jpeg" img-data="{"format":"jpeg","size":3970,"height":70,"width":207}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><p><span style="font-size:16px">4. Use the following loss function to train and update the parameters of the graph encoder f:
</span></p><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-70a2d0a32cdf0222.jpeg" img-data="{"format":"jpeg","size":2279,"height":65,"width":158}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><p><span style="font-size:16px">5. The overall GraphCL architecture is shown below:
</span></p><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-5e5596c5bf6b5cc9.jpeg" img-data="{"format":"jpeg","size":14436,"height":225,"width":595}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><p>
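</p><p><span style="font-size:16px">Under stated assumptions (all names are mine; a toy graph perturbed in place of per-node L-hop extraction, edge dropping as the perturbation, and a single weight-free mean-pooling step standing in for the encoder f), steps 1 to 4 for one mini-batch can be sketched end to end:</span></p>

```python
import math
import random

def perturb(adj, p, rng):
    """Step 2: independently drop each (directed) edge with probability p."""
    return {v: [u for u in nbrs if rng.random() >= p] for v, nbrs in adj.items()}

def encode(adj, feats):
    """Step 3: a weight-free stand-in for the encoder f, doing one
    mean-pooling propagation step over node features."""
    z = {}
    for v in adj:
        neigh = [feats[u] for u in adj[v]] + [feats[v]]
        z[v] = [sum(c) / len(neigh) for c in zip(*neigh)]
    return z

def contrastive_step(adj, feats, batch, p=0.3, tau=0.5, seed=0):
    """Steps 1-4 for one mini-batch: build two perturbed views of the graph,
    encode them, and return the average contrastive loss over the batch."""
    rng = random.Random(seed)
    z1 = encode(perturb(adj, p, rng), feats)
    z2 = encode(perturb(adj, p, rng), feats)

    def cos(a, b):
        n = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
        return sum(x * y for x, y in zip(a, b)) / n if n else 0.0

    loss = 0.0
    for u in batch:
        pos = math.exp(cos(z1[u], z2[u]) / tau)
        denom = sum(math.exp(cos(z1[u], z2[v]) / tau) for v in batch)
        loss += -math.log(pos / denom)
    return loss / len(batch)
```

<p><span style="font-size:16px">In the real model, f has trainable parameters and the gradient of this loss updates them; the toy version above only evaluates the objective.</span></p><p>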
</p><p><span style="font-size:18px"><strong>III. Conclusion</strong></span></p><p>
</p><p><span style="font-size:16px">Conclusion: in both transductive and inductive learning settings, GraphCL is shown to significantly outperform state-of-the-art unsupervised methods on many node-classification benchmarks.</span></p><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-0f7b984ea33ced92.jpeg" img-data="{"format":"jpeg","size":34521,"height":195,"width":740}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-90921a9bd38c228d.jpeg" img-data="{"format":"jpeg","size":53311,"height":269,"width":823}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><div class="image-package"><img src="https://upload-images.jianshu.io/upload_images/26011021-8796bd43f3fed294.jpeg" img-data="{"format":"jpeg","size":44397,"height":272,"width":754}" class="uploaded-img" style="min-height:200px;min-width:200px;" width="auto" height="auto"/>
</div><p>
</p><p>
</p><p><span style="font-size:18px"><strong>Previous highlights</strong></span></p><p><span>[Knowledge Graph Series] A 2020 survey of over-smoothing</span></p><p><span style="font-size:14px">[Knowledge Graph Series] Knowledge graph embedding with 2D convolutions</span></p><p><span style="font-size:14px">[Knowledge Graph Series] Graph neural networks with adaptive depth and breadth</span></p><p><span style="font-size:14px">[Knowledge Graph Series] Neuro-symbolic logical reasoning over knowledge graphs</span></p><p/><p><span style="font-size:14px">[Knowledge Graph Series] The multi-relational graph network CompGCN</span></p><p/><p>[Knowledge Graph Series] A survey of knowledge graph representation learning | nearly 30 top papers reviewed</p><p/><p>[Interview Series] Eight MSc/PhD veterans' journeys to ByteDance</p>