Outline
1. Algorithm Idea
2. Concept Explanations
3. Sklearn Code
Part 1. Algorithm Idea:
Purpose: compute the logistic function and classify each sample into one of two classes according to a probability threshold.
Core steps: compute the probability (P) & improve the fit by minimizing log-loss.
Implementation:
step1. log-odds = m1x1 + m2x2 + ... + b (the weighted sum of the features is itself the log-odds, i.e. log(P / (1 - P))) -->
step2. P = sigmoid(log-odds) -->
step3. compare P with the threshold to give the classification -->
step4. minimize log-loss (gradient descent) to learn the coefficients and intercept (see the sketch below)
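A minimal NumPy sketch of these steps, with made-up coefficients, features, and labels (none of the names or numbers below come from a real dataset):
import numpy as np

m = np.array([1.2, -0.5])                  # illustrative coefficients, not fitted values
b = 0.3                                    # illustrative intercept
X = np.array([[2.0, 1.0], [0.5, 3.0]])     # two samples, two features each
y = np.array([1, 0])                       # made-up true labels

log_odds = X @ m + b                       # step1: weighted sum = log-odds
P = 1 / (1 + np.exp(-log_odds))            # step2: sigmoid turns log-odds into probabilities
predictions = (P >= 0.5).astype(int)       # step3: classify against the 0.5 threshold
log_loss = -np.mean(y * np.log(P) + (1 - y) * np.log(1 - P))   # step4: the quantity gradient descent minimizes
print(P, predictions, log_loss)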
Part 2. Concept Explanations
- sigmoid function: sigmoid(z) = 1 / (1 + e^(-z)); it maps any real-valued log-odds z to a probability between 0 and 1.
- log-loss function: for a single sample, loss = -[y*log(P) + (1-y)*log(1-P)]; the model's loss is this quantity averaged over all samples.
How to understand the log-loss function: for a single sample, when y=1 the loss reduces to -log(P), which falls as the predicted probability P approaches 1; when y=0 it reduces to -log(1-P), which falls as P approaches 0.
The more confidently the model assigns a sample to its true class, the smaller that sample's loss, which is exactly what the algorithm requires. Summing the loss over all samples and running gradient descent then yields the parameters with the lowest overall loss.
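A quick numeric check of this behaviour (the probabilities below are chosen purely for illustration):
import numpy as np

P = np.array([0.1, 0.5, 0.9])   # predicted probabilities
print(-np.log(P))               # loss when y=1: about [2.30, 0.69, 0.11], smallest when P is near 1
print(-np.log(1 - P))           # loss when y=0: about [0.11, 0.69, 2.30], smallest when P is near 0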
- Threshold:
1. The threshold is usually set to 0.5.
2. Picture a histogram of the probabilities that a fitted logistic regression assigns to a large set of real patients, grouped by whether each patient actually has cancer.
3. If we lower the threshold from 0.5 to 0.4, more patients without cancer are predicted as having it (false positives rise, precision falls), while more patients with cancer are correctly flagged (fewer are predicted as cancer-free, misses fall, recall rises). This case shows that the threshold is chosen to suit the application: screening for a serious disease calls for high recall rather than high precision, as sketched below.
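A small sketch of that precision/recall trade-off, using made-up predicted probabilities and labels (nothing here comes from real patient data):
import numpy as np
from sklearn.metrics import precision_score, recall_score

probs  = np.array([0.95, 0.80, 0.62, 0.45, 0.42, 0.30, 0.20, 0.10])   # predicted P(cancer)
y_true = np.array([1,    1,    1,    1,    0,    0,    0,    0])       # actual diagnoses

for threshold in (0.5, 0.4):
    y_pred = (probs >= threshold).astype(int)
    print(threshold, precision_score(y_true, y_pred), recall_score(y_true, y_pred))
# On this toy data, lowering the threshold moves precision from 1.0 to 0.8
# while recall rises from 0.75 to 1.0.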
Part 3. Sklearn Code:
import numpy as np
from sklearn.linear_model import LogisticRegression
from exam import (hours_studied_scaled, passed_exam,
                  exam_features_scaled_train, exam_features_scaled_test,
                  passed_exam_2_train, passed_exam_2_test,
                  guessed_hours_scaled)

# Create and fit a logistic regression model on the single scaled feature
model = LogisticRegression()
model.fit(hours_studied_scaled, passed_exam)

# Save the model coefficients and intercept
calculated_coefficients = model.coef_
intercept = model.intercept_
print(calculated_coefficients)
print(intercept)

# Predict the probabilities of passing for next semester's students
# (predict_proba returns one column per class: [P(fail), P(pass)])
passed_predictions = model.predict_proba(guessed_hours_scaled)

# Create a new model on the training data with two features
model_2 = LogisticRegression()
model_2.fit(exam_features_scaled_train, passed_exam_2_train)

# Predict whether the students will pass and compare against the true labels
passed_predictions_2 = model_2.predict(exam_features_scaled_test)
print(passed_predictions_2)
print(passed_exam_2_test)
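Since the exam module above only exists in the course environment, here is a self-contained sketch of the same workflow on synthetic data; every variable name and number below is invented for illustration:
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "hours studied" feature and pass/fail labels
rng = np.random.default_rng(0)
hours = rng.uniform(0, 10, size=(100, 1))
passed = (hours[:, 0] + rng.normal(0, 1.5, size=100) > 5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(hours, passed, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)

print(model.coef_, model.intercept_)          # fitted log-odds slope and intercept
print(model.predict_proba(X_test[:5]))        # [P(fail), P(pass)] for a few students
print(model.predict(X_test[:5]), y_test[:5])  # predicted vs. actual outcomes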