The Image Classification Pipeline
We’ve seen that the task in Image Classification is to take an array of pixels that represents a single image and assign a label to it. Our complete pipeline can be formalized as follows:
Input: Our input consists of a set of N images, each labeled with one of K different classes. We refer to this data as the training set.
Learning: Our task is to use the training set to learn what every one of the classes looks like. We refer to this step as training a classifier, or learning a model.
Evaluation: In the end, we evaluate the quality of the classifier by asking it to predict labels for a new set of images that it has never seen before. We will then compare the true labels of these images to the ones predicted by the classifier. Intuitively, we’re hoping that a lot of the predictions match up with the true answers (which we call the ground truth).
In short:
- Input the training-set images and their labels.
- Have the machine learn a model from them, i.e. train the classifier.
- Use the classifier obtained in the second step to process the input test images and output the labels the machine predicts for them.
Data-Driven Approach
Provide the computer with many examples of each class and then develop learning algorithms that look at these examples and learn about the visual appearance of each class.
In other words, we use large amounts of data and develop learning algorithms that let the computer read that data, extract what the examples of each class have in common, and then use those shared features to classify new inputs.
Nearest Neighbor Classifier
The idea is to compute the distance between each training image and a given test image, then find the smallest of those distances. The training image that achieves it is the nearest neighbor, and the test image is assigned that neighbor's label.
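Concretely, the distance used in the code fragments below is the L1 (Manhattan) distance between two images $I_1$ and $I_2$, summed over every pixel $p$ (the standard per-pixel definition):

$$d_1(I_1, I_2) = \sum_p \left| I_1^p - I_2^p \right|$$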
k-NN classifier
Each time we classify, we find the k nearest neighbors and then combine their labels (by a vote, possibly weighted) to arrive at a more reliable prediction.
For example, with k = 5:
If the 5 nearest neighbors are [cat, cat, cat, cat, dog], the test image should be labeled cat, since the evidence points much more strongly to cat.
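A minimal sketch of this majority vote, assuming the training images have already been flattened into the rows of an N x D array Xtr with a label array ytr; the helper name knn_predict_one and the choice of L1 distance are my own, for illustration only:

```python
import numpy as np

def knn_predict_one(Xtr, ytr, x, k=5):
    # L1 distance from the single test image x to every training image
    distances = np.sum(np.abs(Xtr - x), axis=1)
    # indices of the k closest training images
    nearest = np.argsort(distances)[:k]
    # unweighted majority vote among the k neighbors' labels
    labels, counts = np.unique(ytr[nearest], return_counts=True)
    return labels[np.argmax(counts)]
```

With k = 1 this reduces to the plain nearest-neighbor rule described above.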
Idea behind the code
- Read in a 4-D array (number of training images, image height, image width, and the 3 color channels R, G, B), then reshape it down to a 2-D array (number of training images, and pixels-per-image * 3 for RGB). Xtr.shape[0] is then the total number of images.
- Initialize the prediction array with zeros. xrange() is used instead of range() because xrange is a generator and far more memory-efficient than range, which builds the whole list at once (this distinction only exists in Python 2; in Python 3, range is already lazy).
- for i in xrange(num_test): iterate over every test image.
- distances = np.sum(np.abs(self.Xtr - X[i,:]), axis=1): subtract the same test image from every training row, take absolute values, and sum along each row, giving an array of distances.
- min_index = np.argmin(distances): get the index of the smallest element in the distance array, i.e. the nearest neighbor.
- Ypred[i] = self.ytr[min_index]: assign the label at that index to the prediction for this test image.
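Putting those fragments together, here is a minimal sketch of the whole classifier. It assumes the images have already been flattened into rows as described above, the class and variable names simply follow the fragments, and it is written for Python 3, so range replaces xrange:

```python
import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        # X is N x D, one flattened training image per row; y is a length-N label vector
        self.Xtr = X
        self.ytr = y

    def predict(self, X):
        # X is M x D, one flattened test image per row
        num_test = X.shape[0]
        Ypred = np.zeros(num_test, dtype=self.ytr.dtype)
        for i in range(num_test):
            # L1 distances from the i-th test image to every training image
            distances = np.sum(np.abs(self.Xtr - X[i, :]), axis=1)
            min_index = np.argmin(distances)   # nearest training image
            Ypred[i] = self.ytr[min_index]     # predict its label
        return Ypred

# usage sketch, assuming raw CIFAR-10-style data of shape (N, 32, 32, 3):
# Xtr = Xtr_raw.reshape(Xtr_raw.shape[0], -1)   # flatten each image into one row
# Xte = Xte_raw.reshape(Xte_raw.shape[0], -1)
# nn = NearestNeighbor(); nn.train(Xtr, ytr); Ypred = nn.predict(Xte)
```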
Hyperparameters
The k-nearest neighbor classifier requires a setting for k. But what number works best? Additionally, we saw that there are many different distance functions we could have used: L1 norm, L2 norm, there are many other choices we didn’t even consider (e.g. dot products). These choices are called hyperparameters and they come up very often in the design of many Machine Learning algorithms that learn from data.
Intuitively, a hyperparameter is a parameter that is not learned from the data but chosen by us, such as the k in k-NN, the distance function, the way pixel values are compared, and so on; there are endless possible choices. I have only studied a little so far, so my understanding is shallow; I will dig deeper later.
If we use the same set of data from start to finish to tune the hyperparameters, we are very likely to overfit (a hypothesis that fits the training data better than other hypotheses, but fits data outside the training set poorly).
To get a better algorithm, it is necessary to adjust and test the hyperparameters incrementally.
Validation
Hold out a small portion of the data to check the results of training, i.e. carve a small chunk out of the training set to serve as a fake test set. The benefit is that we can evaluate choices immediately without touching the real test set, as in the sketch below.
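A minimal sketch of tuning k on such a held-out split, reusing the hypothetical knn_predict_one helper from earlier and the flattened Xtr/ytr arrays; the 1000-image validation size and the candidate k values are just examples:

```python
import numpy as np

# hold out the first 1000 training images as a validation set (example split)
Xval, yval = Xtr[:1000], ytr[:1000]
Xtrain, ytrain = Xtr[1000:], ytr[1000:]

results = {}
for k in [1, 3, 5, 10, 20]:                      # candidate hyperparameter values
    preds = np.array([knn_predict_one(Xtrain, ytrain, x, k=k) for x in Xval])
    results[k] = np.mean(preds == yval)          # validation accuracy for this k
    print('k = %d, accuracy = %.3f' % (k, results[k]))

best_k = max(results, key=results.get)           # keep the k that did best on validation
```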
Cross-Validation
This is a form of hyperparameter tuning, i.e. a method for testing and adjusting hyperparameters.
It tends to be used when data is scarce.
The data is split into N folds, and the folds then take turns serving as the validation set (see the sketch below).
It can make the estimate more reliable, but it costs a lot of time and compute.
It is not used for the final real evaluation, to avoid wasting resources, but it should be used when choosing suitable hyperparameters. (Personally I think it is well worth using, since it helps a lot when an accurate estimate matters.)
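As a sketch, 5-fold cross-validation for choosing k might look like the following, again reusing the hypothetical knn_predict_one helper and the flattened Xtr/ytr arrays; the fold count and candidate k values are just examples:

```python
import numpy as np

num_folds = 5
X_folds = np.array_split(Xtr, num_folds)         # split the data into 5 roughly equal folds
y_folds = np.array_split(ytr, num_folds)

for k in [1, 3, 5, 10, 20]:
    accuracies = []
    for fold in range(num_folds):
        # this fold is the validation set; the remaining folds form the training set
        Xval, yval = X_folds[fold], y_folds[fold]
        Xtrain = np.concatenate([X_folds[j] for j in range(num_folds) if j != fold])
        ytrain = np.concatenate([y_folds[j] for j in range(num_folds) if j != fold])
        preds = np.array([knn_predict_one(Xtrain, ytrain, x, k=k) for x in Xval])
        accuracies.append(np.mean(preds == yval))
    # the mean accuracy over all folds is the cross-validated score for this k
    print('k = %d, mean accuracy = %.3f' % (k, np.mean(accuracies)))
```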
Limitations of NNC
It works reasonably well when the data is low-dimensional, but for high-dimensional data like images it is of little use, and there are many confounding factors. Judging images by raw pixel differences is a poor fit: if we compare every pixel, two images may be assigned the same class simply because their backgrounds or overall colors are similar. Still, the underlying idea is important.