Stanford CS231n Assignment #1 (b): Implementing the SVM

The last post finished a kNN classifier; this one moves on to the familiar yet never-quite-understood SVM... I have used SVMs before but never really got to the bottom of them, so this is a good course to consolidate with!

1. Preprocessing

Unlike the previous post, we first visualize the mean image:

[Figure: the mean image of the training set]

Then subtract this mean image from every image. This preprocessing step puts the data on a common scale. For images the pixel values are always in 0-255, so simply subtracting the mean is enough; for other kinds of data you would usually standardize or normalize instead. The preprocessing code is below:

# second: subtract the mean image from train and test data
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
X_dev -= mean_image
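
For reference, mean_image is computed earlier in the notebook (before the subtraction above) as the per-pixel average over the training set, roughly as below; the commented line is a sketch of the standardization one would typically use for non-image features instead:

import numpy as np

# computed before the "subtract the mean image" step above
mean_image = np.mean(X_train, axis=0)        # per-pixel mean, shape (D,)

# For non-image features, standardize instead (sketch):
# X_train = (X_train - X_train.mean(axis=0)) / (X_train.std(axis=0) + 1e-8)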

Append the bias dimension:

# third: append the bias dimension of ones (i.e. bias trick) so that our SVM
# only has to worry about optimizing a single weight matrix W.
X_train = np.hstack([X_train, np.ones((X_train.shape[0], 1))])
X_val = np.hstack([X_val, np.ones((X_val.shape[0], 1))])
X_test = np.hstack([X_test, np.ones((X_test.shape[0], 1))])
X_dev = np.hstack([X_dev, np.ones((X_dev.shape[0], 1))])

print X_train.shape, X_val.shape, X_test.shape, X_dev.shape
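
A quick self-contained check (my own toy example) of why this bias trick is equivalent to keeping a separate bias vector b:

import numpy as np

D, C = 4, 3
x = np.random.randn(D)
W = np.random.randn(D, C)
b = np.random.randn(C)

x_ext = np.hstack([x, 1.0])          # append a constant 1 feature
W_ext = np.vstack([W, b])            # the bias becomes the last row of W
print(np.allclose(x_ext.dot(W_ext), x.dot(W) + b))   # True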

2. Implement a fully-vectorized loss function for the SVM

Recall the multiclass SVM loss; here the margin Δ (the separation the SVM demands) is set to 1:

$$L_i = \sum_{j \neq y_i} \max\big(0,\; s_j - s_{y_i} + \Delta\big), \qquad \Delta = 1$$
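
A tiny made-up example to sanity-check the formula (three classes, correct class 0):

import numpy as np

scores = np.array([3.2, 5.1, -1.7])   # scores s for one example
y_i = 0                               # correct class
delta = 1.0

margins = np.maximum(0, scores - scores[y_i] + delta)
margins[y_i] = 0                      # the correct class contributes nothing
print(margins.sum())                  # max(0, 5.1-3.2+1) + max(0, -1.7-3.2+1) = 2.9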

Naive implementation

First, the naive for-loop solution: just compute dL/dW directly, nothing tricky. One thing worth spelling out is that the dW in the code really means dL/dW.
For every class j with a positive margin on example i, the derivative gives the updates:
dW[:,j] += X[i].transpose()
dW[:,y[i]] -= X[i]
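
Written out as formulas (with $s_j = w_j^\top x_i$ and $\mathbb{1}[\cdot]$ the indicator function), the per-example gradients are

$$\nabla_{w_j} L_i = \mathbb{1}\big[s_j - s_{y_i} + \Delta > 0\big]\, x_i \quad (j \neq y_i), \qquad \nabla_{w_{y_i}} L_i = -\Big(\sum_{j \neq y_i} \mathbb{1}\big[s_j - s_{y_i} + \Delta > 0\big]\Big)\, x_i$$

which is exactly what the two += / -= lines above implement inside the positive-margin branch.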

def svm_loss_naive(W, X, y, reg):
  """
  Structured SVM loss function, naive implementation (with loops).

  Inputs have dimension D, there are C classes, and we operate on minibatches
  of N examples.

  Inputs:
  - W: A numpy array of shape (D, C) containing weights.
  - X: A numpy array of shape (N, D) containing a minibatch of data.
  - y: A numpy array of shape (N,) containing training labels; y[i] = c means
    that X[i] has label c, where 0 <= c < C.
  - reg: (float) regularization strength

  Returns a tuple of:
  - loss as single float
  - gradient with respect to weights W; an array of same shape as W
  """
  dW = np.zeros(W.shape) # initialize the gradient as zero

  # compute the loss and the gradient
  num_classes = W.shape[1]
  num_train = X.shape[0]
  loss = 0.0
  for i in xrange(num_train):
    scores = X[i].dot(W)
    correct_class_score = scores[y[i]]
    for j in xrange(num_classes):
      if j == y[i]:
        continue
      margin = scores[j] - correct_class_score + 1 # note delta = 1
      # dW[:,j] += X[i].transpose()
      if margin > 0:
        loss += margin
        dW[:,j] += X[i].transpose()
        dW[:,y[i]] -= X[i]

  # Right now the loss is a sum over all training examples, but we want it
  # to be an average instead so we divide by num_train.
  loss /= num_train
  dW /= num_train

  # Add regularization to the loss.
  loss += 0.5 * reg * np.sum(W * W)
  dW += reg * W   # gradient of the 0.5 * reg * np.sum(W * W) regularization term

  return loss, dW
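
Before moving on it is worth checking the analytic gradient numerically; the assignment notebook uses a grad_check_sparse helper for this, and below is a minimal finite-difference sketch of the same idea. It assumes X_dev from the preprocessing step and y_dev as the corresponding labels (y_dev is not shown above):

import numpy as np

def numeric_grad_entry(f, W, i, j, h=1e-5):
    # centered finite difference of f with respect to the single entry W[i, j]
    old = W[i, j]
    W[i, j] = old + h
    fxph = f(W)
    W[i, j] = old - h
    fxmh = f(W)
    W[i, j] = old
    return (fxph - fxmh) / (2.0 * h)

num_classes = np.max(y_dev) + 1
W = np.random.randn(X_dev.shape[1], num_classes) * 0.0001
loss, dW = svm_loss_naive(W, X_dev, y_dev, 0.0)
f = lambda W_: svm_loss_naive(W_, X_dev, y_dev, 0.0)[0]
print(numeric_grad_entry(f, W, 0, 0), dW[0, 0])   # the two numbers should be nearly equal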

Vectorized implementation

Next, the vectorized version:

def svm_loss_vectorized(W, X, y, reg):
  """
  Structured SVM loss function, vectorized implementation.

  Inputs and outputs are the same as svm_loss_naive.
  """
  loss = 0.0
  dW = np.zeros(W.shape) # initialize the gradient as zero
  num_train = X.shape[0]
  num_classes = W.shape[1]

  #############################################################################
  # TODO:                                                                     #
  # Implement a vectorized version of the structured SVM loss, storing the    #
  # result in loss.                                                           #
  #############################################################################
  scores = X.dot(W)
  # true labels
  s_yi = scores[np.arange(num_train), y]
  mat = scores - np.tile(s_yi, (num_classes,1)).transpose() + 1
  loss_mat = np.maximum(np.zeros((num_train, num_classes)), mat)
  # loss_mat[loss_mat<0] = 0    # this worked out as well
  loss_mat[np.arange(num_train), y] = 0
  loss = np.sum(loss_mat)/num_train
  loss += 0.5 * reg * np.sum(W * W)

  #############################################################################
  # TODO:                                                                     #
  # Implement a vectorized version of the gradient for the structured SVM     #
  # loss, storing the result in dW.                                           #
  #                                                                           #
  # Hint: Instead of computing the gradient from scratch, it may be easier    #
  # to reuse some of the intermediate values that you used to compute the     #
  # loss.                                                                     #
  #############################################################################
 
  # I don't know what's wrong about the following commented code
  #############################################################################
  # loss_pos = np.array(np.nonzero(loss_mat))
  # print loss_pos, loss_pos.shape
  # dW[ :, y[loss_pos[0,:]] ] -= X[ loss_pos[0,:],: ].transpose()
  # dW[ :, loss_pos[1,:] ] += X[ loss_pos[0,:],: ].transpose()
  # dW /= num_train
  # dW += reg * W

  # Binarize into integers
  binary = loss_mat
  binary[loss_mat > 0] = 1

  # Perform the two operations simultaneously:
  # (1) for every (i, j) pair with a positive margin:           dW[:, j]    += X[i]
  # (2) for every example i with num_pos(i) positive margins:   dW[:, y[i]] -= num_pos(i) * X[i]
  col_sum = np.sum(binary, axis=1)          # num_pos(i): positive margins per example
  binary[range(num_train), y] = -col_sum[range(num_train)]
  dW = np.dot(X.T, binary)

  # Divide
  dW /= num_train

  # Regularize
  dW += reg*W

  return loss, dW

My own first attempt is the commented-out block above, the part where I said I don't know what's wrong... even now it still looks correct to me and I haven't spotted the error; if anyone reading this happens to see it, I would be very grateful if you pointed it out. The second approach is one I found on GitHub, and it is quite clever; dropping it in, the code produces the right result, but I still don't understand why my own version is wrong QAQ...
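
For what it's worth, the likely culprit in the commented-out version is NumPy itself: an in-place += / -= through fancy indexing is buffered, so when the same column index appears several times (which always happens for dW[:, y[...]], and also for dW[:, j] whenever a class j has positive margins for several examples), only one of the repeated updates survives instead of all of them accumulating. np.add.at is the unbuffered variant that does accumulate. A tiny self-contained demonstration (my own sketch, not assignment code):

import numpy as np

dW = np.zeros((3, 2))                 # pretend D=3, C=2
cols = np.array([0, 0, 1])            # column 0 should be updated twice
vals = np.ones((3, 3))                # pretend these are three rows of X

dW[:, cols] += vals.T                 # buffered: the repeated index only counts once
print(dW[:, 0])                       # [1. 1. 1.] -- one update was silently dropped

dW = np.zeros((3, 2))
np.add.at(dW.T, cols, vals)           # unbuffered: repeated indices accumulate
print(dW[:, 0])                       # [2. 2. 2.] -- both updates applied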

3. Stochastic Gradient Descent (SGD)

At each training iteration, randomly sample batch_size examples. That is, from the full training set M, draw a random subset N and train on it in place of M. Because each step only sees part of the data, there is a higher chance of ending up at a local optimum, but the obvious upside is that with a reasonable sample size you still get a good answer, and you get it much faster. This intuition is adapted from: http://www.cnblogs.com/gongxijun/p/5890548.html

Along the way I also noticed a quirk in the course code: the earlier pure SVM-loss/optimization part and the later SGD part expect the input data with their dimensions in opposite (transposed) order... so while doing this part I had to go back and tweak the earlier code as well... anyway...

class LinearClassifier(object):

  def __init__(self):
    self.W = None

  def train(self, X, y, learning_rate=1e-3, reg=1e-5, num_iters=100,
            batch_size=200, verbose=False):
    """
    Train this linear classifier using stochastic gradient descent.

    Inputs:
    - X: A numpy array of shape (N, D) containing training data; there are N
      training samples each of dimension D.
    - y: A numpy array of shape (N,) containing training labels; y[i] = c
      means that X[i] has label 0 <= c < C for C classes.
    - learning_rate: (float) learning rate for optimization.
    - reg: (float) regularization strength.
    - num_iters: (integer) number of steps to take when optimizing
    - batch_size: (integer) number of training examples to use at each step.
    - verbose: (boolean) If true, print progress during optimization.

    Outputs:
    A list containing the value of the loss function at each training iteration.
    """
    num_train, dim = X.shape
    num_classes = np.max(y) + 1 # assume y takes values 0...K-1 where K is number of classes
    if self.W is None:
      # lazily initialize W
      self.W = 0.001 * np.random.randn(dim, num_classes)

    # Run stochastic gradient descent to optimize W
    loss_history = []
    for it in xrange(num_iters):
      X_batch = None
      y_batch = None

      #########################################################################
      # TODO:                                                                 #
      # Sample batch_size elements from the training data and their           #
      # corresponding labels to use in this round of gradient descent.        #
      # Store the data in X_batch and their corresponding labels in           #
      # y_batch; after sampling X_batch should have shape (dim, batch_size)   #
      # and y_batch should have shape (batch_size,)                           #
      #                                                                       #
      # Hint: Use np.random.choice to generate indices. Sampling with         #
      # replacement is faster than sampling without replacement.              #
      #########################################################################
      num_random = np.random.choice(num_train, batch_size, replace=True)
      X_batch = X[num_random, :].transpose()
      # print X_batch.shape
      y_batch = y[num_random]
      #########################################################################
      #                       END OF YOUR CODE                                #
      #########################################################################

      # evaluate loss and gradient
      loss, grad = self.loss(X_batch, y_batch, reg)
      loss_history.append(loss)

      # perform parameter update
      #########################################################################
      # TODO:                                                                 #
      # Update the weights using the gradient and the learning rate.          #
      #########################################################################
      self.W += -grad * learning_rate
      #########################################################################
      #                       END OF YOUR CODE                                #
      #########################################################################

      if verbose and it % 100 == 0:
        print 'iteration %d / %d: loss %f' % (it, num_iters, loss)

    return loss_history

Note that W must be adjusted in the direction opposite to grad! Think about it: grad measures how much the objective changes when you nudge w in the positive direction. If grad is positive, the weight acts in the positive direction, and the larger it gets the larger the objective becomes; so to head toward a minimum, shouldn't the weight (the variable) be adjusted in the opposite direction? Yep~
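
In symbols, the update in the code above is plain gradient descent,

$$W \leftarrow W - \eta\, \nabla_W L$$

with learning rate $\eta$; stepping along the negative gradient is the direction in which the loss decreases fastest locally.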


4. Play with hyperparameters

As for how to pick learning_rate and the regularization strength, that is what the validation data is for: train each parameter combination on the training set, evaluate on X_val, y_val, and keep the combination with the highest accuracy (a rough sketch of this search loop appears below). I won't go into detail here, it's fairly dirty work... The final accuracy is only about 0.35-0.4, which tells you how effective a single-layer SVM classifier really is. What I do want to show is the final visualization step:

[Figure: visualization of the learned weights for each class]

And indeed... they look pretty ugly!!!
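
For completeness, the hyperparameter search mentioned above is just a small grid loop over (learning_rate, reg) pairs, trained on the training split and scored on the validation split. A rough sketch, assuming a LinearSVM subclass of the LinearClassifier above with a predict() method (both are part of the assignment scaffolding but not shown here):

import numpy as np

learning_rates = [1e-7, 5e-5]
regularization_strengths = [2.5e4, 5e4]

results = {}
best_val, best_svm = -1, None
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        svm.train(X_train, y_train, learning_rate=lr, reg=reg, num_iters=1500)
        train_acc = np.mean(svm.predict(X_train) == y_train)
        val_acc = np.mean(svm.predict(X_val) == y_val)
        results[(lr, reg)] = (train_acc, val_acc)
        if val_acc > best_val:
            best_val, best_svm = val_acc, svm
print('best validation accuracy: %f' % best_val)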

5. Hinge Loss

The hinge loss is defined as E(z) = max(0, 1-z). It is convex, but its derivative is discontinuous (at z = 1), so there are variants such as the squared hinge loss (L2-SVM), E(z) = max(0, 1-z)^2. The hinge loss is the green line in the figure below; the black line is the 0-1 loss and the red line is the log loss (the negative log-likelihood).

[Figure: hinge loss (green), 0-1 loss (black) and log loss (red)]
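
A small matplotlib sketch that reproduces the figure described above (the log loss is shown with one common scaling, log2(1 + e^{-z}), so that it passes through 1 at z = 0):

import numpy as np
import matplotlib.pyplot as plt

z = np.linspace(-2, 3, 500)
plt.plot(z, (z < 0).astype(float), 'k', label='0-1 loss')
plt.plot(z, np.maximum(0, 1 - z), 'g', label='hinge loss')
plt.plot(z, np.log2(1 + np.exp(-z)), 'r', label='log loss')
plt.xlabel('z = y * f(x)')
plt.ylabel('loss')
plt.legend()
plt.show()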

Generally speaking: the hinge loss goes with the soft-margin SVM; the log loss with logistic regression (LR); the squared loss, i.e. least squares, with linear regression; and exponential losses with boosting.
