This section starts the PyTorch implementation of DNN classification, beginning with binary classification.
The workflow is the same as before:
graph TD
A[Load data] --> B[Split data]
B[Split data] --> C[Convert to tensors]
C[Convert to tensors] --> D[Restructure data]
D[Restructure data] --> E[Define model]
E[Define model] --> F[Train model]
F[Train model] --> G[Show results]
Code
1. Load the data
We use the classic iris dataset.
import pandas as pd

data = pd.read_csv('./iris.csv')
data.columns = ["f1", "f2", "f3", "f4", "label"]
data = data.head(99)
data
Because the iris dataset has three classes, we keep only the first 99 rows so that only two classes remain.
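One caveat: BCELoss expects float targets of 0/1. If the label column in this particular iris.csv holds species names rather than numbers (an assumption about the file, not something stated above), it needs to be mapped first:

# Assumption: labels are species strings such as 'setosa' / 'versicolor';
# map the two remaining classes to 0.0 and 1.0 for binary cross-entropy.
data['label'] = (data['label'] == data['label'].unique()[1]).astype(float)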
2. Split the data
from sklearn.model_selection import train_test_split
train,test = train_test_split(data, train_size=0.7)
train_x = train[[c for c in data.columns if c != 'label']].values
test_x = test[[c for c in data.columns if c != 'label']].values
train_y = train.label.values.reshape(-1, 1)
test_y = test.label.values.reshape(-1, 1)
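The code below also refers to batch, lr, and epochs, which are not defined anywhere in this excerpt. epochs = 100 matches the training log shown later; the batch size and learning rate here are assumptions and can be tuned freely:

batch = 16      # assumed batch size (not specified in the original)
lr = 0.001      # assumed learning rate (Adam's default; not specified in the original)
epochs = 100    # matches the log below, which runs from epoch 0 through 100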
3. To Tensor
import torch

train_x = torch.from_numpy(train_x).type(torch.FloatTensor)
test_x = torch.from_numpy(test_x).type(torch.FloatTensor)
train_y = torch.from_numpy(train_y).type(torch.FloatTensor)
test_y = torch.from_numpy(test_y).type(torch.FloatTensor)
train_x.shape, train_y.shape
#(torch.Size([69, 4]), torch.Size([69, 1]))
4. Restructure the data
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader
train_ds = TensorDataset(train_x, train_y)
train_dl = DataLoader(train_ds, batch_size=batch, shuffle=True)
test_ds = TensorDataset(test_x, test_y)
test_dl = DataLoader(test_ds, batch_size=batch * 2)
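To confirm the DataLoader produces batches of the expected shape, a quick check (not part of the original) is to pull a single batch:

xb, yb = next(iter(train_dl))
xb.shape, yb.shape  # expected: torch.Size([batch, 4]) features and torch.Size([batch, 1]) labels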
5. Define the network
from torch import nn
import torch.nn.functional as F
class DNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.hidden1 = nn.Linear(4, 64)
        self.hidden2 = nn.Linear(64, 64)
        self.hidden3 = nn.Linear(64, 1)

    def forward(self, input):
        x = F.relu(self.hidden1(input))
        x = F.relu(self.hidden2(x))
        x = torch.sigmoid(self.hidden3(x))  # sigmoid squashes the output to (0, 1) for binary classification
        return x
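As a quick sanity check (added here, not in the original), a forward pass on a random batch should give one probability per sample:

DNN()(torch.rand(3, 4)).shape  # -> torch.Size([3, 1]), one value in (0, 1) per sample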
# accuracy for binary classification
def accuracy(out, yb):
    preds = (out > 0.5).type(torch.IntTensor)  # threshold the sigmoid output at 0.5
    return (preds == yb).float().mean()

def get_model():
    model = DNN()
    return model, torch.optim.Adam(model.parameters(), lr=lr)
loss_fn = nn.BCELoss()
model, opt = get_model()
model  # inspect the network structure
DNN(
(hidden1): Linear(in_features=4, out_features=64, bias=True)
(hidden2): Linear(in_features=64, out_features=64, bias=True)
(hidden3): Linear(in_features=64, out_features=1, bias=True)
)
We could also visualize the network using what was covered in the previous lesson.
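That visualization is not repeated here; as a minimal stand-in, we can at least list the layer parameters and count them:

# List each layer's parameter shapes and the total parameter count
for name, p in model.named_parameters():
    print(name, tuple(p.shape))
print('total parameters:', sum(p.numel() for p in model.parameters()))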
6. Train
import numpy as np

train_loss = []
train_acc = []
test_loss = []
test_acc = []

for epoch in range(epochs + 1):
    model.train()
    for xb, yb in train_dl:
        pred = model(xb)
        loss = loss_fn(pred, yb)
        loss.backward()
        opt.step()
        opt.zero_grad()
    if epoch % 1 == 0:  # log every epoch
        model.eval()
        with torch.no_grad():
            train_epoch_loss = sum(loss_fn(model(xb), yb) for xb, yb in train_dl)
            test_epoch_loss = sum(loss_fn(model(xb), yb) for xb, yb in test_dl)
            acc_mean_train = np.mean([accuracy(model(xb), yb) for xb, yb in train_dl])
            acc_mean_val = np.mean([accuracy(model(xb), yb) for xb, yb in test_dl])
            train_loss.append(train_epoch_loss.data.item() / len(train_dl))
            test_loss.append(test_epoch_loss.data.item() / len(test_dl))
            train_acc.append(acc_mean_train)
            test_acc.append(acc_mean_val)
            template = ("epoch:{:2d}, train loss:{:.5f}, train acc:{:.1f}, val loss:{:.5f}, val acc:{:.1f}")
            print(template.format(epoch, train_epoch_loss.data.item() / len(train_dl), acc_mean_train * 100, test_epoch_loss.data.item() / len(test_dl), acc_mean_val * 100))
print('Training complete')
epoch: 0, train loss:3.09122, train acc:57.0, val loss:0.68206, val acc:36.7
epoch: 1, train loss:2.87476, train acc:54.3, val loss:0.69797, val acc:36.7
epoch: 2, train loss:2.62978, train acc:61.0, val loss:0.59363, val acc:36.7
epoch: 3, train loss:2.30378, train acc:100.0, val loss:0.50508, val acc:100.0
epoch: 4, train loss:2.05582, train acc:100.0, val loss:0.44803, val acc:100.0
epoch: 5, train loss:1.76421, train acc:100.0, val loss:0.38924, val acc:100.0
epoch: 6, train loss:1.54745, train acc:100.0, val loss:0.32642, val acc:100.0
......
epoch:98, train loss:0.00304, train acc:100.0, val loss:0.00067, val acc:100.0
epoch:99, train loss:0.00311, train acc:100.0, val loss:0.00067, val acc:100.0
epoch:100, train loss:0.00300, train acc:100.0, val loss:0.00068, val acc:100.0
Training complete
7. View the results
import matplotlib.pyplot as plt

# loss curves
plt.plot(range(len(train_loss)), train_loss, label='train_loss')
plt.plot(range(len(test_loss)), test_loss, label='test_loss')
plt.legend()
plt.show()

# accuracy curves
plt.figure()
plt.plot(range(len(train_acc)), train_acc, label='train_acc')
plt.plot(range(len(test_acc)), test_acc, label='test_acc')
plt.legend()
plt.show()
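As a final check (not part of the original post), the trained model can also be run over the whole test split in a single pass, reusing the accuracy helper defined above:

model.eval()
with torch.no_grad():
    test_pred = model(test_x)   # predicted probabilities for the positive class
    print('test accuracy:', accuracy(test_pred, test_y).item())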