Preface
This program is based on the benign/malignant tumor prediction experiment.
The prediction task is implemented with both a LogisticRegression model and an SGDClassifier model.
The program runs smoothly under Python 3.6; the changes needed for Python 2.x are also noted in the comments.
requirements: pandas, numpy, scikit-learn
For implementations of other classic algorithms, feel free to browse my other collections.
實(shí)驗(yàn)結(jié)果分析
LogisticRegression比起SGDClassifier在測試機(jī)上表現(xiàn)有更高的準(zhǔn)確性,這是因?yàn)镾cikit-learn中采用解析的方式精確計(jì)算LogisticRegression的參數(shù)蔚携,而使用梯度法估計(jì)SGDClassifier的參數(shù)。
相比之下克饶,前者計(jì)算時間長但是模型性能略高酝蜒;后者采用隨機(jī)梯度上升算法估計(jì)模型參數(shù),計(jì)算時間短矾湃,但是產(chǎn)出的模型性能略低亡脑。一般而言,對于訓(xùn)練數(shù)據(jù)規(guī)模在10萬量級以上的數(shù)據(jù)邀跃,考慮到時間的耗用霉咨,更適合使用隨機(jī)梯度算法對模型進(jìn)行估計(jì)。
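The trade-off can be seen with a small timing experiment. Below is a minimal sketch, not part of the program that follows: the synthetic dataset from make_classification, its size, and the fixed random_state are arbitrary choices for illustration, and the exact numbers will depend on your machine and scikit-learn version.
import time
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression, SGDClassifier
# synthetic binary classification data at roughly the 100,000-sample scale mentioned above
X, y = make_classification(n_samples=100000, n_features=20, random_state=0)
for name, model in [('LogisticRegression', LogisticRegression()),
                    ('SGDClassifier', SGDClassifier(max_iter=5, tol=None))]:
    start = time.time()
    model.fit(X, y)  # full optimization to convergence vs. a few stochastic passes
    print(name, 'training time: %.3fs' % (time.time() - start),
          'training accuracy: %.4f' % model.score(X, y))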
Program Source Code
import pandas as pd
import numpy as np
# feature column names of the Wisconsin breast cancer dataset
column_names = ['Sample code number', 'Clump Thickness', 'Uniformity of Cell Size', 'Uniformity of Cell Shape', 'Marginal Adhesion',
                'Single Epithelial Cell Size', 'Bare Nuclei', 'Bland Chromatin', 'Normal Nucleoli', 'Mitoses', 'Class']
# read the data from the csv file
data = pd.read_csv('./breast-cancer-wisconsin.data', names=column_names)
# data preprocessing
# replace every '?' with the standard missing value
data = data.replace(to_replace='?', value=np.nan)
# drop every row that has any missing feature
data = data.dropna(how='any')
# data.to_csv('./text.csv')  # save the cleaned data to a csv file
# note: with the older scikit-learn usually installed under Python 2.7, import train_test_split from sklearn.cross_validation instead of sklearn.model_selection
# from sklearn.cross_validation import train_test_split  # DeprecationWarning
from sklearn.model_selection import train_test_split  # use train_test_split from sklearn.model_selection to split the data
# hold out 25 percent of the data at random for testing and use the rest for training
X_train, X_test, y_train, y_test = train_test_split(data[column_names[1:10]], data[column_names[10]], test_size=0.25, random_state=33)
# check the number of samples and the class distribution in the training and test sets
# print(y_train.value_counts())
# print(y_test.value_counts())
# import the required classes
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.linear_model import SGDClassifier
# standardize the training and test data (the scaler is fitted on the training set only)
ss = StandardScaler()
X_train = ss.fit_transform(X_train)
X_test = ss.transform(X_test)
# initialize the LogisticRegression and SGDClassifier models
lr = LogisticRegression()
# note: under Python 2.7 (with its older scikit-learn) the defaults already behave like max_iter=5, tol=None,
# so there SGDClassifier() can be called without arguments; passing them explicitly here avoids the DeprecationWarning
# sgdc = SGDClassifier()  # DeprecationWarning
sgdc = SGDClassifier(max_iter=5, tol=None)
# call fit to train the model parameters
lr.fit(X_train, y_train)
# store the predictions on the test set
lr_y_predict = lr.predict(X_test)
sgdc.fit(X_train, y_train)
sgdc_y_predict = sgdc.predict(X_test)
# performance analysis
from sklearn.metrics import classification_report
# get the accuracy from the score function of the LR model
print('Accuracy of LR Classifier:', lr.score(X_test, y_test))
# get precision, recall and f1-score from classification_report
print(classification_report(y_test, lr_y_predict, target_names=['Benign', 'Malignant']))
# get the accuracy from the score function of the SGD classifier
print('Accuracy of SGD Classifier:', sgdc.score(X_test, y_test))
# get precision, recall and f1-score from classification_report
print(classification_report(y_test, sgdc_y_predict, target_names=['Benign', 'Malignant']))
Output on Ubuntu 16.04 with Python 3.6:
Accuracy of LR Classifier: 0.9883040935672515
             precision    recall  f1-score   support

     Benign       0.99      0.99      0.99       100
  Malignant       0.99      0.99      0.99        71

avg / total       0.99      0.99      0.99       171
Accuracy of SGD Classifier: 0.9824561403508771
             precision    recall  f1-score   support

     Benign       1.00      0.97      0.98       100
  Malignant       0.96      1.00      0.98        71

avg / total       0.98      0.98      0.98       171
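Because the comparison above rests on a single random 25% split, a cross-validated check makes the conclusion a little more robust. Below is a minimal sketch under the assumption that the data and column_names objects from the program above are still in scope; it wraps the scaler and each classifier in a pipeline so standardization is refitted inside every fold, and reports the mean 5-fold accuracy.
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression, SGDClassifier
X_all = data[column_names[1:10]]  # the nine feature columns
y_all = data[column_names[10]]    # the class label (2 = benign, 4 = malignant)
lr_pipe = make_pipeline(StandardScaler(), LogisticRegression())
sgdc_pipe = make_pipeline(StandardScaler(), SGDClassifier(max_iter=5, tol=None))
print('LR 5-fold mean accuracy:', cross_val_score(lr_pipe, X_all, y_all, cv=5).mean())
print('SGD 5-fold mean accuracy:', cross_val_score(sgdc_pipe, X_all, y_all, cv=5).mean())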
數(shù)據(jù)下載地址
歡迎指正錯誤拍屑,包括英語和程序錯誤途戒。有問題也歡迎提問,一起加油一起進(jìn)步僵驰。
本程序完全是本人逐字符輸入的勞動結(jié)果棺滞,轉(zhuǎn)載請注明出處裁蚁。