1. Categorization:
Clustering belongs to unsupervised learning:
the data carry no class labels.
2. Example:
For instance, grouping news articles by topic, or customers by purchasing behavior, when no labeled training data is available.
3. K-means algorithm:
3.1 A classic clustering algorithm, and one of the top ten classic data mining algorithms.
3.2 The algorithm takes a parameter k and partitions the n data objects supplied to it into k clusters, such that objects within the same cluster are highly similar to one another, while objects in different clusters are not.
3.3 Algorithm idea:
Pick k points in the space as initial cluster centers and assign every object to the center nearest to it. Then iteratively update the value of each cluster center until the clustering stabilizes.
3.4 Algorithm description:
(1) Choose suitable initial centers for the c clusters;
(2) In the k-th iteration, compute the distance from every sample to each of the c centers, and assign the sample to the cluster of the nearest center;
(3) Update each cluster's center, e.g., as the mean of its members;
(4) If, after the updates of steps (2) and (3), all c cluster centers remain unchanged, the iteration ends; otherwise continue iterating.
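For reference (standard, though not stated in the original notes): the quantity this procedure drives down is the within-cluster sum of squares, J = sum over clusters i of sum over points x in cluster i of ||x - c[i]||^2, where c[i] is the center of cluster i. Steps (2) and (3) each never increase J, which is why the iteration converges.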
3.5 Algorithm flow:
Input: k, data[n];
(1) Choose k initial centers, e.g., c[0]=data[0], ..., c[k-1]=data[k-1];
(2) Compare each of data[0], ..., data[n-1] with c[0], ..., c[k-1]; if it differs least from c[i], mark it with label i;
(3) For all points marked i, recompute c[i] = (sum of all data[j] marked i) / (number of points marked i);
(4) Repeat (2) and (3) until the change in every c[i] falls below a given threshold.
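Steps (2) and (3) can also be condensed into a few vectorized NumPy lines. A minimal sketch follows (the function name assign_and_update is my own, and it assumes no cluster ends up empty):

import numpy as np

def assign_and_update(data, centers):
    # Step (2): Euclidean distance from every point to every center, shape (n, k).
    dists = np.linalg.norm(data[:, None, :] - centers[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)  # index of the nearest center for each point
    # Step (3): each new center is the mean of the points assigned to it.
    newCenters = np.array([data[labels == i].mean(axis=0)
                           for i in range(len(centers))])
    return labels, newCenters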
4. Example
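A small worked trace, using the same four points as the test data in section 5, with k = 2 and initial centers c1 = (1, 1), c2 = (2, 1):
Iteration 1: (1,1) goes to c1; (2,1), (4,3), (5,4) go to c2. New centers: c1 = (1, 1), c2 = (11/3, 8/3).
Iteration 2: (1,1), (2,1) go to c1; (4,3), (5,4) go to c2. New centers: c1 = (1.5, 1), c2 = (4.5, 3.5).
Iteration 3: no assignment changes, so the algorithm stops with clusters {(1,1), (2,1)} and {(4,3), (5,4)}.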
Advantages: fast and simple.
Disadvantages: the final result depends on the choice of initial centers, so it can get stuck in a local optimum, and the value of k must be known in advance (one common mitigation is shown in the sketch below).
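In practice the initialization sensitivity is usually mitigated by running K-means several times from different random seeds and keeping the best run; scikit-learn's KMeans does exactly this through its n_init parameter together with k-means++ seeding. A minimal usage sketch, assuming scikit-learn is installed:

import numpy as np
from sklearn.cluster import KMeans

X = np.array([[1, 1], [2, 1], [4, 3], [5, 4]])
# n_init restarts with k-means++ seeding lower the risk of a bad local optimum.
km = KMeans(n_clusters=2, init="k-means++", n_init=10, random_state=0).fit(X)
print(km.labels_)           # cluster index of each point
print(km.cluster_centers_)  # final centroids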
Reference: http://croce.ggf.br/dados/K%20mean%20Clustering1.pdf
5. Code
import numpy as np

# Function: K Means
# -------------
# K-Means is an algorithm that takes in a dataset and a constant
# k and returns k centroids (which define clusters of data in the
# dataset which are similar to one another).
def kmeans(X, k, maxIt):
    numPoints, numDim = X.shape
    # Work on a copy of the data with one extra column for the cluster label.
    dataSet = np.zeros((numPoints, numDim + 1))
    dataSet[:, :-1] = X
    # Initialize centroids as k distinct random data points.
    centroids = dataSet[np.random.choice(numPoints, k, replace=False), :]
    # Assign labels 1..k to the initial centroids.
    centroids[:, -1] = np.arange(1, k + 1)
    # Initialize bookkeeping vars.
    iterations = 0
    oldCentroids = None
    # Run the main k-means loop.
    while not shouldStop(oldCentroids, centroids, iterations, maxIt):
        print("iteration:\n", iterations)
        print("dataSet:\n", dataSet)
        print("centroids:\n", centroids)
        # Save old centroids for the convergence test.
        oldCentroids = np.copy(centroids)
        iterations += 1
        # Step (2): assign each data point to its closest centroid.
        updateLabels(dataSet, centroids)
        # Step (3): recompute each centroid from its members.
        centroids = getCentroids(dataSet, k)
    return dataSet

# Function: Should Stop
# -------------
# Returns True if k-means is done: either it has run the maximum
# number of iterations, or the centroids stopped changing.
def shouldStop(oldCentroids, centroids, iterations, maxIt):
    if iterations >= maxIt:
        return True
    return np.array_equal(oldCentroids, centroids)

# Function: Update Labels
# -------------
# Update the label (last column) of each row in the dataset to the
# label of its closest centroid.
def updateLabels(dataSet, centroids):
    numPoints, numDim = dataSet.shape
    for i in range(numPoints):
        dataSet[i, -1] = getLabelFromClosestCentroid(dataSet[i, :-1], centroids)

def getLabelFromClosestCentroid(dataSetRow, centroids):
    # Scan all centroids and keep the label of the nearest one.
    label = centroids[0, -1]
    minDist = np.linalg.norm(dataSetRow - centroids[0, :-1])
    for i in range(1, centroids.shape[0]):
        dist = np.linalg.norm(dataSetRow - centroids[i, :-1])
        if dist < minDist:
            minDist = dist
            label = centroids[i, -1]
    print("minDist:", minDist)
    return label

# Function: Get Centroids
# -------------
# Returns the k updated centroids, each the mean of the points that
# carry that centroid's label. If a cluster is empty (no points have
# that centroid's label), its centroid is randomly re-initialized.
def getCentroids(dataSet, k):
    result = np.zeros((k, dataSet.shape[1]))
    for i in range(1, k + 1):
        oneCluster = dataSet[dataSet[:, -1] == i, :-1]
        if oneCluster.shape[0] == 0:
            # Empty cluster: re-initialize with a random data point.
            result[i - 1, :-1] = dataSet[np.random.randint(dataSet.shape[0]), :-1]
        else:
            result[i - 1, :-1] = np.mean(oneCluster, axis=0)
        result[i - 1, -1] = i
    return result

x1 = np.array([1, 1])
x2 = np.array([2, 1])
x3 = np.array([4, 3])
x4 = np.array([5, 4])
testX = np.vstack((x1, x2, x3, x4))

result = kmeans(testX, 2, 10)
print("final result:")
print(result)
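With the four test points above, the loop should converge in a few iterations to the two clusters {(1,1), (2,1)} and {(4,3), (5,4)} (label 1.0 vs. 2.0 in the last column of the returned dataSet), matching the worked trace in section 4.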
[Note] This article is a set of study notes from the 麥子學(xué)院 (Maizi Academy) machine learning course.