Logistic Regression
Task 1: Visualizing the Data (Optional)
ex2.m already loads the data from ex2data1.txt:
data = load('ex2data1.txt');   % each row: exam 1 score, exam 2 score, label (1 = admitted)
X = data(:, [1, 2]);           % the two exam scores are the features
y = data(:, 3);                % 0/1 admission labels
All we need to do is complete the plotData() function in plotData.m:
positive = find(y==1);   % indices of admitted examples
negative = find(y==0);   % indices of not-admitted examples
plot(X(positive, 1), X(positive, 2), 'k+', 'LineWidth', 2, 'MarkerSize', 7);
plot(X(negative, 1), X(negative, 2), 'ko', 'MarkerFaceColor', 'y', 'MarkerSize', 7);
The plot() options used here are covered in part 4 of my Octave tutorial, or you can consult the relevant documentation.
Running this part of the code produces a scatter plot of the two exam scores, with the admitted examples drawn as '+' markers and the not-admitted ones as 'o' markers.
Task 2: Cost Function and Gradient Descent
ex2.m already initializes the relevant parameters and calls the function:
[m, n] = size(X);
% Add intercept term to x and X_test
X = [ones(m, 1) X];
% Initialize fitting parameters
initial_theta = zeros(n + 1, 1);
% Compute and display initial cost and gradient
[cost, grad] = costFunction(initial_theta, X, y);
We only need to complete the cost function and gradient code in costFunction.m. Before that, however, we must complete the sigmoid() function in sigmoid.m.
First, let's list the formulas we will need:
- Hypothesis $h_\theta(x)$:
  $h_\theta(x) = g(\theta^T x) = \frac{1}{1 + e^{-\theta^T x}}$
- Cost function $J(\theta)$:
  $J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + (1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right]$
  In vectorized form:
  $J(\theta) = \frac{1}{m}\left(-y^T\log(g(X\theta)) - (1-y)^T\log(1-g(X\theta))\right)$
- Gradient of the cost (the quantity gradient descent steps along):
  $\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)}$
  In vectorized form:
  $\nabla J(\theta) = \frac{1}{m}X^T\left(g(X\theta) - y\right)$
Then, in sigmoid.m, following the definition of $g(z)$ in the hypothesis, we enter:
g = 1 ./ (1 + exp(-z));   % element-wise, so z may be a scalar, vector, or matrix
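As a quick sanity check (these values follow directly from the definition of $g(z)$; they are not part of the exercise script):

sigmoid(0)          % ans = 0.5000
sigmoid(10)         % ans close to 1
sigmoid(-10)        % ans close to 0
sigmoid([-1 0 1])   % element-wise: a row vector of values in (0, 1)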
Finally, in costFunction.m, we fill in the cost function $J(\theta)$ and its gradient:
Cost function $J(\theta)$:
J = (-y'*log(sigmoid(X*theta))-(1-y)'*log(1-sigmoid(X*theta))) / m;
Gradient (note this computes the gradient only; the descent itself is left to the optimizer in Task 3):
grad = (X'*(sigmoid(X*theta)-y)) / m;
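If you want extra confidence in the analytic gradient, you can compare it against central finite differences. This is a minimal sketch, not part of the exercise files; it assumes costFunction.m and sigmoid.m are complete and on the path:

% Numerical gradient check: the two columns printed should match closely.
epsilon = 1e-4;
numgrad = zeros(size(initial_theta));
for j = 1:numel(initial_theta)
    e = zeros(size(initial_theta));
    e(j) = epsilon;
    numgrad(j) = (costFunction(initial_theta + e, X, y) - ...
                  costFunction(initial_theta - e, X, y)) / (2 * epsilon);
end
disp([grad numgrad]);   % analytic vs. numerical gradient, side by side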
Running this part of the code produces:
Cost at initial theta (zeros): 0.693147
Expected cost (approx): 0.693
Gradient at initial theta (zeros):
-0.100000
-12.009217
-11.262842
Expected gradients (approx):
-0.1000
-12.0092
-11.2628
Cost at test theta: 0.218330
Expected cost (approx): 0.218
Gradient at test theta:
0.042903
2.566234
2.646797
Expected gradients (approx):
0.043
2.566
2.647
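Note that costFunction.m only evaluates $J(\theta)$ and its gradient; it does not iterate. If you wanted to minimize the cost with plain batch gradient descent instead of an optimizer, a minimal sketch would look like the following (alpha and num_iters are illustrative values; with the unscaled exam scores convergence is very slow, which is one reason the exercise hands the problem to fminunc in the next task):

alpha = 0.001;          % learning rate (illustrative)
num_iters = 100000;     % many iterations are needed without feature scaling
theta = initial_theta;
for iter = 1:num_iters
    [J, grad] = costFunction(theta, X, y);
    theta = theta - alpha * grad;   % one batch gradient descent step
end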
Task 3: Advanced Optimization
ex2.m already contains the code that uses fminunc(); we only need to run it:
% Set options for fminunc
options = optimset('GradObj', 'on', 'MaxIter', 400);
% Run fminunc to obtain the optimal theta
% This function will return theta and the cost
[theta, cost] = ...
fminunc(@(t)(costFunction(t, X, y)), initial_theta, options);
% Print theta to screen
fprintf('Cost at theta found by fminunc: %f\n', cost);
fprintf('Expected cost (approx): 0.203\n');
fprintf('theta: \n');
fprintf(' %f \n', theta);
fprintf('Expected theta (approx):\n');
fprintf(' -25.161\n 0.206\n 0.201\n');
% Plot Boundary
plotDecisionBoundary(theta, X, y);
% Put some labels
hold on;
% Labels and Legend
xlabel('Exam 1 score')
ylabel('Exam 2 score')
% Specified in plot order
legend('Admitted', 'Not admitted')
hold off;
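For reference, plotDecisionBoundary() draws the line along which the model is exactly undecided: $h_\theta(x) = 0.5$ precisely when $\theta^T x = 0$, so with two features the boundary is the straight line

$\theta_1 + \theta_2 x_1 + \theta_3 x_2 = 0 \quad\Longrightarrow\quad x_2 = -\frac{\theta_1 + \theta_2 x_1}{\theta_3}$

(using Octave's 1-based indexing for $\theta$).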
The result of this task:
Cost at theta found by fminunc: 0.203498
Expected cost (approx): 0.203
theta:
-25.161272
0.206233
0.201470
Expected theta (approx):
-25.161
0.206
0.201
Task 4: Prediction with Logistic Regression
From the logistic function $g(z)$ we know:
- when $h_\theta(x) = g(\theta^T x) \ge 0.5$ (i.e. $\theta^T x \ge 0$), we predict $y = 1$
- when $h_\theta(x) < 0.5$ (i.e. $\theta^T x < 0$), we predict $y = 0$
Based on this, we can complete the predict() function in predict.m:
p(sigmoid(X * theta) >= 0.5) = 1;   % predicted positive class
p(sigmoid(X * theta) < 0.5) = 0;    % explicit; p is pre-initialized to zeros in the template
This can be unpacked into the following equivalent code, which may be easier to follow:
k = find(sigmoid(X * theta) >= 0.5);
p(k) = 1;
d = find(sigmoid(X * theta) < 0.5);
p(d) = 0;
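With the optimized θ from Task 3, the admission probability and training accuracy quoted below are computed in the same way; this mirrors the checks performed in ex2.m:

prob = sigmoid([1 45 85] * theta);   % leading 1 is the intercept term
p = predict(theta, X);               % predictions on the training set
fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);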
The result of this task:
For a student with scores 45 and 85, we predict an admission probability of 0.776289
Expected value: 0.775 +/- 0.002
Train Accuracy: 89.000000
Expected accuracy (approx): 89.0
Regularized Logistic Regression
Task 1: Visualizing the Data
Both ex2_reg.m and plotData.m already contain the relevant code, so we only need to run this task; it produces a scatter plot of the two classes in the second data set.
Task 2: Cost Function and Gradient Descent
The regularized cost function $J(\theta)$:
$J(\theta) = -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log h_\theta(x^{(i)}) + (1-y^{(i)})\log\left(1-h_\theta(x^{(i)})\right)\right] + \frac{\lambda}{2m}\sum_{j=1}^{n}\theta_j^2$
The regularized gradient (the intercept term $\theta_0$, theta(1) in Octave, is not regularized):
$\frac{\partial J(\theta)}{\partial \theta_0} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_0^{(i)}$
$\frac{\partial J(\theta)}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\left(h_\theta(x^{(i)}) - y^{(i)}\right)x_j^{(i)} + \frac{\lambda}{m}\theta_j \qquad (j \ge 1)$
Using these formulas, we can complete the cost function and gradient in costFunctionReg.m:
theta_s = [0; theta(2:end)];   % zero out theta(1) so the intercept is not regularized
J = (-1 * sum(y .* log(sigmoid(X*theta)) + (1 - y) .* log(1 - sigmoid(X*theta))) / m) + (lambda / (2*m)) * (theta_s' * theta_s);
grad = (X' * (sigmoid(X*theta) - y)) / m + (lambda/m) * theta_s;
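For context, this is roughly how ex2_reg.m prepares the inputs before calling our function: the provided mapFeature() expands the two original features into polynomial terms up to degree 6, including the intercept column (a sketch of the call pattern, not a verbatim excerpt):

X = mapFeature(X(:,1), X(:,2));        % 28 polynomial features, intercept included
initial_theta = zeros(size(X, 2), 1);
lambda = 1;
[cost, grad] = costFunctionReg(initial_theta, X, y, lambda);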
Running it produces:
Cost at initial theta (zeros): 0.693147
Expected cost (approx): 0.693
Gradient at initial theta (zeros) - first five values only:
0.008475
0.018788
0.000078
0.050345
0.011501
Expected gradients (approx) - first five values only:
0.0085
0.0188
0.0001
0.0503
0.0115
Program paused. Press enter to continue.
Cost at test theta (with lambda = 10): 3.164509
Expected cost (approx): 3.16
Gradient at test theta - first five values only:
0.346045
0.161352
0.194796
0.226863
0.092186
Expected gradients (approx) - first five values only:
0.3460
0.1614
0.1948
0.2269
0.0922
Task 3: Advanced Optimization
The code is already written, so we only need to run it. The result:
Train Accuracy: 83.050847
Expected accuracy (with lambda = 1): 83.1 (approx)
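The call pattern is the same as in the unregularized case, just with λ threaded through to costFunctionReg(); a sketch consistent with the earlier fminunc usage (the exact code lives in ex2_reg.m):

options = optimset('GradObj', 'on', 'MaxIter', 400);
[theta, J, exit_flag] = ...
    fminunc(@(t)(costFunctionReg(t, X, y, lambda)), initial_theta, options);
p = predict(theta, X);   % training-set predictions
fprintf('Train Accuracy: %f\n', mean(double(p == y)) * 100);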
Task 4: Choosing the Regularization Parameter λ (Optional)
Setting the regularization parameter λ to 0, 10, and 100 in turn gives the following training accuracies (a sketch for reproducing this sweep appears at the end of the section):
λ=0
Train Accuracy: 86.440678
λ=10
Train Accuracy: 74.576271
λ=100
Train Accuracy: 61.016949
For how the decision boundaries are drawn, see plotDecisionBoundary.m.
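A minimal sketch for reproducing the sweep above, assuming X has already been expanded by mapFeature() and predict.m is complete:

for lambda = [0 10 100]
    initial_theta = zeros(size(X, 2), 1);
    options = optimset('GradObj', 'on', 'MaxIter', 400);
    theta = fminunc(@(t)(costFunctionReg(t, X, y, lambda)), ...
                    initial_theta, options);
    p = predict(theta, X);
    fprintf('lambda = %g: Train Accuracy = %f\n', ...
            lambda, mean(double(p == y)) * 100);
end

As the numbers show, too large a λ underfits the training data, while λ = 0 fits it more tightly at the risk of overfitting; the decision boundary plots make this trade-off visible.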