1. Sampling
When the dataset is large, it helps to pick out a small sample first to inspect the details.
import pandas as pd
from IPython.display import display

indices = [100, 200, 300]
# Build the sample DataFrame and reset the index so the samples are renumbered from 0
samples = pd.DataFrame(data.loc[indices], columns=data.keys()).reset_index(drop=True)
print("Chosen samples:")
display(samples)
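If you don't care which particular rows you get, pandas can draw the sample for you; a minimal sketch with DataFrame.sample (assuming the same data DataFrame):

# Draw 3 random rows; random_state makes the draw reproducible
samples = data.sample(n=3, random_state=0).reset_index(drop=True)
display(samples)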
2. Splitting the data
Use sklearn's train_test_split to divide the data into train and test sets. (It used to live in sklearn.cross_validation; since version 0.18 it is in sklearn.model_selection.)
from sklearn.model_selection import train_test_split

# new_data is the feature matrix defined earlier (e.g. data with the 'Milk' column dropped)
X = new_data
y = data['Milk']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
print(len(X_train), len(X_test), len(y_train), len(y_test))
Separating features & label
Sometimes the raw data doesn't say which column is the label, so you have to decide for yourself:
# Store the 'Survived' feature in a new variable and remove it from the dataset
outcomes = full_data['Survived']
data = full_data.drop('Survived', axis=1)
3. Train the model on the train set, evaluate it on the test set
Take a Decision Tree as an example.
from sklearn import tree

regressor = tree.DecisionTreeRegressor()
regressor.fit(X_train, y_train)
# For a regressor, score() returns the R^2 of the predictions on the test set
score = regressor.score(X_test, y_test)
4. Judging how strongly features are correlated
A scatter matrix plots every feature against every other, with a KDE of each feature on the diagonal:
pd.plotting.scatter_matrix(data, alpha=0.3, figsize=(14, 8), diagonal='kde');
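If you want a number to back up what the plot suggests, the pairwise correlation matrix is a quick check; a minimal sketch (assuming the same data DataFrame):

# Pearson correlation for every pair of features; values near ±1 mean strong linear dependence
print(data.corr())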
5. Scaling
When the data doesn't follow a normal distribution, it needs scaling; a common approach is to take the log.
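The log_data used below is just the element-wise natural log of the data; a minimal sketch (assuming numpy, and that all values are positive):

import numpy as np

# Element-wise natural log; valid here because all entries are positive
log_data = np.log(data)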
pd.plotting.scatter_matrix(log_data, alpha=0.3, figsize=(14, 8), diagonal='kde');
Comparison figure: the feature distributions before vs. after the log transform.
6. Outliers
One approach is Tukey's method: any value below Q1 − (1.5 × IQR) or above Q3 + (1.5 × IQR) counts as an outlier (where IQR = Q3 − Q1).
First list and sort the outliers for each feature:
for feature in log_data.keys():
    # 25th and 75th percentiles of this feature
    Q1 = np.percentile(log_data[feature], 25)
    Q3 = np.percentile(log_data[feature], 75)
    step = 1.5 * (Q3 - Q1)
    print("Outliers for feature '{}':".format(feature))
    print(Q1, Q3, step)
    # Rows falling outside [Q1 - step, Q3 + step], sorted by this feature
    display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))].sort_values(by=feature))
Then check against a boxplot to decide which outliers really need to be removed:
import matplotlib.pyplot as plt

plt.figure()
plt.boxplot([log_data.Fresh, log_data.Milk, log_data.Grocery, log_data.Frozen, log_data.Detergents_Paper, log_data.Delicassen], 0, 'gD');  # sym='gD': outliers as green diamonds
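Once you've settled on which rows are true outliers, dropping them takes one line; a minimal sketch (the index list below is hypothetical, standing in for whatever rows the boxplots point you to):

# Hypothetical row indices judged to be outliers after inspecting the plots
outliers = [65, 66, 75, 128, 154]
good_data = log_data.drop(log_data.index[outliers]).reset_index(drop=True)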
I am 不會停的蝸牛 (the never-stopping snail) Alice,
a post-85 full-time homemaker,
into artificial intelligence, and a doer.
Working on leveling up my creativity, thinking, and learning skills.
Your likes, follows, and comments are welcome!