Data Augmentation in NLP

In machine learning there is, in my view, one overarching premise: data is never enough. Despite all the hype around big data, annotated data is especially scarce in natural language processing, and annotation quality is hard to control. Under these circumstances, data augmentation becomes essential, and it matters a great deal for a model's robustness and generalization.

Different NLP subfields also have their own task-specific data augmentation methods.

Task-independent data augmentation for NLP

Data augmentation aims to create additional training data by producing variations of existing training examples through transformations, which can mirror those encountered in the real world. In Computer Vision (CV), common augmentation techniques are mirroring, random cropping, shearing, etc. Data augmentation is super useful in CV. For instance, it has been used to great effect in AlexNet (Krizhevsky et al., 2012) [1] to combat overfitting and in most state-of-the-art models since. In addition, data augmentation makes intuitive sense as it makes the training data more diverse and should thus increase a model’s generalization ability.
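For illustration, here is a minimal sketch of these CV-style transformations, assuming PyTorch's torchvision is available; the crop size, shear angle, and probability below are arbitrary choices, not values taken from any of the cited papers.

```python
from torchvision import transforms

# Mirroring, random cropping, and shearing, as mentioned above.
cv_augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),        # mirroring
    transforms.RandomCrop(224, padding=4),         # random cropping
    transforms.RandomAffine(degrees=0, shear=10),  # shearing
    transforms.ToTensor(),
])

# augmented = cv_augment(pil_image)  # applied to each PIL training image on the fly
```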

However, in NLP, data augmentation is not widely used. In my mind, this is for two reasons:

Data in NLP is discrete. This prevents us from applying simple transformations directly to the input data. Most recently proposed augmentation methods in CV focus on such transformations, e.g. domain randomization (Tobin et al., 2017) [2].

Small perturbations may change the meaning. Deleting a negation may change a sentence’s sentiment, while modifying a word in a paragraph might inadvertently change the answer to a question about that paragraph. This is not the case in CV where perturbing individual pixels does not change whether an image is a cat or dog and even stark changes such as interpolation of different images can be useful (Zhang et al., 2017) [3].

Existing approaches that I am aware of are either rule-based (Li et al., 2017) [5] or task-specific, e.g. for parsing (Wang and Eisner, 2016) [6] or zero-pronoun resolution (Liu et al., 2017) [7]. Xie et al. (2017) [39] replace words with samples from different distributions for language modelling and Machine Translation. Recent work focuses on creating adversarial examples either by replacing words or characters (Samanta and Mehta, 2017; Ebrahimi et al., 2017) [8,9], concatenation (Jia and Liang, 2017) [11], or adding adversarial perturbations (Yasunaga et al., 2017) [10]. An adversarial setup is also used by Li et al. (2017) [16] who train a system to produce sequences that are indistinguishable from human-generated dialogue utterances.
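As a concrete, if simplistic, example of the word-replacement idea, here is a sketch in the spirit of Xie et al. (2017): each token is replaced with some probability by a word drawn from the corpus unigram distribution. The function name and the replacement probability are illustrative, not taken from the paper.

```python
import random
from collections import Counter

def unigram_noising(tokens, corpus_tokens, gamma=0.1, rng=random):
    """Replace each token with probability gamma by a word sampled from the
    unigram distribution of the corpus (in the spirit of Xie et al., 2017)."""
    counts = Counter(corpus_tokens)
    vocab, weights = zip(*counts.items())
    return [rng.choices(vocab, weights=weights, k=1)[0]
            if rng.random() < gamma else tok
            for tok in tokens]

corpus = "the cat sat on the mat while the dog slept on the rug".split()
print(unigram_noising("the cat sat on the mat".split(), corpus, gamma=0.2))
```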

Back-translation (Sennrich et al., 2015; Sennrich et al., 2016) [12,13] is a common data augmentation method in Machine Translation (MT) that allows us to incorporate monolingual training data. For instance, when training an EN→FR system, monolingual French text is translated to English using an FR→EN system; the synthetic parallel data can then be used for training. Back-translation can also be used for paraphrasing (Mallinson et al., 2017) [14]. Paraphrasing has been used for data augmentation for QA (Dong et al., 2017) [15], but I am not aware of its use for other tasks.

Back-translation: translate target-side sentences back into the source language and use the resulting synthetic sentence pairs as additional training data (see the sketch after this list). Paper: Improving Neural Machine Translation Models with Monolingual Data.

Joint learning. Paper: Joint Training for Neural Machine Translation Models with Monolingual Data.

Dual learning. Paper: Dual Learning for Machine Translation.
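Below is a rough sketch of the back-translation / round-trip paraphrasing idea, assuming the Hugging Face transformers library and the public Helsinki-NLP Marian models; the EN↔FR direction and the model names are just illustrative choices.

```python
from transformers import MarianMTModel, MarianTokenizer

def load_mt(name):
    return MarianTokenizer.from_pretrained(name), MarianMTModel.from_pretrained(name)

def translate(sentences, tokenizer, model):
    batch = tokenizer(sentences, return_tensors="pt", padding=True)
    return tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True)

en_fr = load_mt("Helsinki-NLP/opus-mt-en-fr")
fr_en = load_mt("Helsinki-NLP/opus-mt-fr-en")

monolingual_en = ["Data augmentation is rarely used in NLP."]
french = translate(monolingual_en, *en_fr)   # EN -> FR
round_trip = translate(french, *fr_en)       # FR -> EN: a synthetic paraphrase
print(round_trip)
```

In the MT setting proper, the FR→EN step would instead be applied to monolingual French text, and the output paired with the original French sentences as synthetic parallel data, as described above.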



Another method that is close to paraphrasing is generating sentences from a continuous space using a variational autoencoder (Bowman et al., 2016; Guu et al., 2017) [17,19]. If the representations are disentangled as in (Hu et al., 2017) [18], then we are also not too far from style transfer (Shen et al., 2017) [20].

There are a few research directions that would be interesting to pursue:

Evaluation study: Evaluate a range of existing data augmentation methods, as well as techniques that have not been widely used for augmentation such as paraphrasing and style transfer, on a diverse range of tasks including text classification and sequence labelling. Identify which types of data augmentation are robust across tasks and which are task-specific. This could be packaged as a software library to make future benchmarking easier (think CleverHans for NLP).

Data augmentation with style transfer: Investigate whether style transfer can be used to modify various attributes of training examples for more robust learning.

Learn the augmentation: Similar to Dong et al. (2017), we could learn either to paraphrase or to generate transformations for a particular task.

Learn a word embedding space for data augmentation: A typical word embedding space clusters synonyms and antonyms together; using nearest neighbours in this space for replacement is thus infeasible. Inspired by recent work (Mrk?i? et al., 2017) [21], we could specialize the word embedding space to make it more suitable for data augmentation.
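The synonym/antonym problem is easy to verify with a quick check, assuming gensim and its downloadable GloVe vectors; the probe word is arbitrary.

```python
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pre-trained GloVe embeddings
for word, score in vectors.most_similar("good", topn=5):
    print(f"{word}\t{score:.3f}")
# The neighbours typically include antonyms such as "bad", which is why naive
# nearest-neighbour replacement can silently flip a sentence's meaning.
```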

Adversarial data augmentation: Related to recent work in interpretability (Ribeiro et al., 2016) [22], we could change the most salient words in an example, i.e. those that a model depends on for a prediction. This still requires a semantics-preserving replacement method, however.
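A crude way to find such salient words without gradients is leave-one-out occlusion: drop each token in turn and see how much the prediction moves. The sketch below assumes a hypothetical predict_proba(text) function returning, say, a positive-class probability from any trained classifier.

```python
def most_salient_token(tokens, predict_proba):
    """Return the index of the token whose removal changes the prediction most."""
    base = predict_proba(" ".join(tokens))
    deltas = [abs(base - predict_proba(" ".join(tokens[:i] + tokens[i + 1:])))
              for i in range(len(tokens))]
    return max(range(len(tokens)), key=deltas.__getitem__)

# An augmentation scheme would then replace the token at the returned index with
# a meaning-preserving alternative, which is exactly the hard part noted above.
```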


Tutorial

Robust, Unbiased Natural Language Processing

(To be continued...)
