
MLP (SGD or Adam) Perceptron Neural Network with PyTorch (including data preprocessing)

We train an MLP (multi-layer perceptron) neural network so that it can predict, from the sixty sonar features, whether an object is metal or rock. Since the network consists only of simple linear (affine) layers, its fit to the data is not especially high.

This is my first blog post, so I'll use this project to practice (O(∩_∩)O).

The related files can be downloaded from GitHub. This example is written in Python (Jupyter notebook).

First, import the required packages:

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import torch

%matplotlib inline
plt.rcParams['figure.figsize'] = (4, 4)
plt.rcParams['figure.dpi'] = 150
plt.rcParams['lines.linewidth'] = 3
sns.set()
# initial setup and plotting defaults

The functionality of each package can be looked up on its official site. Next we move on to data preprocessing.

A conventional CSV file usually comes with feature headers, for example the 'tips.csv' below.

data = sns.load_dataset("tips")
data.head(5)

The result is as follows:

The data we want to train on, however, does not carry feature headers such as total_bill, tip, or sex.

So when calling read_csv we pass header=None, which tells pandas the file has no header row and makes it create default integer column labels.

origin_data = pd.read_csv('sonar.csv', header=None)
origin_data.head(5)

The dataset is now loaded; the result looks like this:

0.0200  0.0371  0.0428  0.0207  0.0954  0.0986  0.1539  0.1601  0.3109  0.2111  ...  0.0027  0.0065  0.0159  0.0072  0.0167  0.0180  0.0084  0.0090  0.0032  R
0 0.0453 0.0523 0.0843 0.0689 0.1183 0.2583 0.2156 0.3481 0.3337 0.2872 ... 0.0084 0.0089 0.0048 0.0094 0.0191 0.0140 0.0049 0.0052 0.0044 R
1 0.0262 0.0582 0.1099 0.1083 0.0974 0.2280 0.2431 0.3771 0.5598 0.6194 ... 0.0232 0.0166 0.0095 0.0180 0.0244 0.0316 0.0164 0.0095 0.0078 R
2 0.0100 0.0171 0.0623 0.0205 0.0205 0.0368 0.1098 0.1276 0.0598 0.1264 ... 0.0121 0.0036 0.0150 0.0085 0.0073 0.0050 0.0044 0.0040 0.0117 R
3 0.0762 0.0666 0.0481 0.0394 0.0590 0.0649 0.1209 0.2467 0.3564 0.4459 ... 0.0031 0.0054 0.0105 0.0110 0.0015 0.0072 0.0048 0.0107 0.0094 R
4 0.0286 0.0453 0.0277 0.0174 0.0384 0.0990 0.1201 0.1833 0.2105 0.3039 ... 0.0045 0.0014 0.0038 0.0013 0.0089 0.0057 0.0027 0.0051 0.0062 R

5 rows × 61 columns

This dataset has 61 columns, the last of which is the target we want to predict. Inspecting that last column shows character values, which cannot be fed directly into model training, so we extract the 61st column and turn the character R into 1 and M into 0 (i.e., 1 stands for R and 0 for M), meeting the requirements for training.

The code is as follows:

y_data = origin_data.iloc[:, 60]
y_data.head(5)  # separate out the column to predict and inspect it
y_data.shape

Call y_data.shape to check how many samples there are, so the loop that rewrites R and M can use the right range. This dataset has 208 samples. The code is as follows:

Y = y_data.copy()  # writing into a DataFrame slice raises a SettingWithCopyWarning, hence the copy
for i in range(208):
    if y_data[i] == 'R':
        Y[i] = 1
    else:
        Y[i] = 0
# turn R into 1 and M into 0
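
Incidentally, the same conversion can be written without the explicit loop; a one-line sketch using pandas' built-in Series.map:

Y = y_data.map({'R': 1, 'M': 0})  # vectorized equivalent of the loop above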

Next, the first sixty columns are extracted as the x data used to predict Y. After extraction, x is standardized (earlier the trained model's loss curve swung up and down precisely because the data had not been standardized). The code is as follows:

from sklearn.preprocessing import scale
x_data = origin_data.iloc[:, :-1]
x_data = scale(x_data)
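
For intuition, sklearn's scale standardizes each column to zero mean and unit variance, which is what tames the loss curve; a tiny demonstration (the demo array is made up):

import numpy as np
from sklearn.preprocessing import scale

demo = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
print(scale(demo).mean(axis=0))  # ~[0. 0.]  each column centered
print(scale(demo).std(axis=0))   # [1. 1.]   each column has unit variance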

Then x_data and y_data are split into a training set and a test set at a 4:1 ratio (test_size=0.2), and the train and test sets are wrapped into datasets. To lighten the GPU load, Mini-Batch training is used: a DataLoader automatically cuts the dataset into batches of size 10.

y_data = Y
x_data = np.array(x_data).reshape(208, 60)
y_data = np.array(y_data).reshape(208,)
y_data = y_data.tolist()  # back to lists, which are convenient to split
x_data = x_data.tolist()
# split into train and test sets
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.2)
from torch.utils.data import TensorDataset, DataLoader
train_dataset = TensorDataset(torch.Tensor(X_train),
                              torch.LongTensor(y_train))

test_dataset = TensorDataset(torch.Tensor(X_test),
                             torch.LongTensor(y_test))  # wrap into datasets
TRAIN_SIZE = np.array(X_train).shape[0]
BATCH_SIZE = 10
NUM_EPOCH = 200
iters_per_epoch = TRAIN_SIZE // BATCH_SIZE
# mini-batch iteration: batch size 10 over the 166 training samples,
# for 200 epochs, i.e. 200 * (166 // 10) = 3200 iterations in total
train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True)
# the DataLoader splits the samples into batches automatically
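
One caveat on the preprocessing order: scale above was fitted on all 208 samples before the split, so test-set statistics leak into the normalization. A stricter sketch (the variable names X_train_std and X_test_std are mine) splits first and then standardizes with sklearn's StandardScaler:

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
X_train_std = scaler.fit_transform(X_train)  # mean/std estimated from the training set only
X_test_std = scaler.transform(X_test)        # the same statistics applied to the test set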

The MLP model class is defined below. nn.Sequential is used to build the model, with three hidden layers, ReLU activation functions in between, and a final Softmax activation whose outputs lie in [0, 1]; the model is fairly simple.

from torch import nn  # for nn.Sequential
class MLP(nn.Module):

    def __init__(self, in_dim, hid_dim1, hid_dim2, hid_dim3, out_dim):
        super(MLP, self).__init__()
        self.layers = nn.Sequential(
                        nn.Linear(in_dim, hid_dim1),
                        nn.ReLU(),
                        nn.Linear(hid_dim1, hid_dim2),
                        nn.ReLU(),
                        nn.Linear(hid_dim2, hid_dim3),
                        nn.ReLU(),
                        nn.Linear(hid_dim3, out_dim),
                        nn.Softmax(dim=1))

    def forward(self, x):
        y = self.layers(x)
        return y

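One point worth flagging before training: nn.CrossEntropyLoss already applies LogSoftmax internally, so a Softmax layer inside the model is redundant and tends to flatten the gradients. A sketch of the same model emitting raw logits instead (the class name MLPLogits is mine; predictions via torch.max are unchanged, since Softmax preserves the argmax):

from torch import nn

class MLPLogits(nn.Module):
    # same architecture, but the head returns raw logits;
    # nn.CrossEntropyLoss applies log-softmax itself
    def __init__(self, in_dim, hid_dim1, hid_dim2, hid_dim3, out_dim):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hid_dim1), nn.ReLU(),
            nn.Linear(hid_dim1, hid_dim2), nn.ReLU(),
            nn.Linear(hid_dim2, hid_dim3), nn.ReLU(),
            nn.Linear(hid_dim3, out_dim))  # no Softmax here

    def forward(self, x):
        return self.layers(x)
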
Build a network that trains with SGD as the optimizer; the code is as follows:

net = MLP(in_dim=60, hid_dim1=300, hid_dim2=180, hid_dim3=60, out_dim=10)  # note: only labels 0/1 occur, so out_dim=2 would also suffice
criterion = nn.CrossEntropyLoss()  # cross-entropy loss for feedback
from torch import optim
optimizer = optim.SGD(params=net.parameters(), lr=0.1)  # SGD (stochastic gradient descent) optimizer, learning rate 0.1
optimizer.zero_grad()  # gradients must be cleared before each update; clear once here just in case

# SGD training loop
train_loss_history = []
test_acc_history = []

for epoch in range(NUM_EPOCH):

    for i, data in enumerate(train_loader):

        inputs, labels = data

        optimizer.zero_grad()
        outputs = net(inputs)

        loss = criterion(outputs, labels)
        loss.backward()

        optimizer.step()

        train_loss = loss.tolist()
        train_loss_history.append(train_loss)

        if (i+1) % iters_per_epoch == 0:
            print("[{}, {}] Loss: {}".format(epoch+1, i+1, train_loss))

    total = 0
    correct = 0
    for data in test_loader:
        inputs, labels = data
        outputs = net(inputs)
        _, preds = torch.max(outputs.data, 1)  # the class with the larger output is the prediction

        total += labels.size(0)
        correct += (preds == labels).sum().item()

    print("Accuracy: {:.2f}%".format(100.0 * correct / total))
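
A small refinement of the test pass (a sketch): gradients are not needed during evaluation, so wrapping it in torch.no_grad() saves memory and compute; net.eval()/net.train() would also matter if the model used dropout or batch norm, which this one does not.

# evaluation pass without gradient tracking
with torch.no_grad():
    total, correct = 0, 0
    for inputs, labels in test_loader:
        outputs = net(inputs)
        _, preds = torch.max(outputs, 1)
        total += labels.size(0)
        correct += (preds == labels).sum().item()
print("Accuracy: {:.2f}%".format(100.0 * correct / total))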

All loss values were recorded in the train_loss_history list; now the matplotlib.pyplot package is used to draw the loss curve.

import matplotlib.pyplot as plt
plt.plot(train_loss_history)

The output is as follows:

[<matplotlib.lines.Line2D at 0x25be01fcdf0>]

If the Adam optimizer is used instead, the code and results are as follows:

from torch import optim
net = MLP(in_dim=60, hid_dim1=540, hid_dim2=180, hid_dim3=30, out_dim=10)  # hidden layer sizes adjusted
optimizer = optim.Adam(params=net.parameters(), lr=0.001)  # switched to the Adam optimizer
criterion = nn.CrossEntropyLoss()

train_loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
test_loader = DataLoader(test_dataset, batch_size=BATCH_SIZE, shuffle=True)
train_loss_history = []
test_acc_history = []
# training loop with the Adam optimizer
for epoch in range(NUM_EPOCH):

    for i, data in enumerate(train_loader):

        inputs, labels = data

        optimizer.zero_grad()
        outputs = net(inputs)

        loss = criterion(outputs, labels)
        loss.backward()

        optimizer.step()

        train_loss = loss.tolist()
        train_loss_history.append(train_loss)

        if (i+1) % iters_per_epoch == 0:
            print("[{}, {}] Loss: {}".format(epoch+1, i+1, train_loss))

    total = 0
    correct = 0
    for data in test_loader:
        inputs, labels = data
        outputs = net(inputs)
        _, preds = torch.max(outputs.data, 1)

        total += labels.size(0)
        correct += (preds == labels).sum().item()

    print("Accuracy: {:.2f}%".format(100.0 * correct / total))

import matplotlib.pyplot as plt
plt.plot(train_loss_history)

[<matplotlib.lines.Line2D at 0x25be08b49d0>]

After the model is trained, you can feed all of the data through it to obtain a Confusion Matrix and inspect the performance metrics, then adjust the model to your own needs for better performance. Only the code for plotting the Adam model's matrix is given here; work out the intermediate step yourself by following the code above.
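
For reference, one possible version of that omitted step (a sketch only; total_down is the name the code below expects, holding the model's predicted class for each of the 208 samples):

# run every (standardized) sample through the trained net
with torch.no_grad():
    all_outputs = net(torch.Tensor(x_data))
    _, total_down = torch.max(all_outputs, 1)
total_down = total_down.tolist()
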
# plot the confusion matrix
from sklearn.metrics import confusion_matrix
cm = confusion_matrix(y_data, total_down)  # total_down: predictions for all samples
sns.heatmap(cm, annot=True, fmt="d", cmap="Blues", annot_kws={"size": 20}, cbar=False)
plt.ylabel('True')
plt.xlabel('Predicted')
sns.set(font_scale=2)

The matrix is as follows:

With some simple arithmetic we obtain the Precision, Sensitivity, Accuracy, and Specificity metrics:

TP = 77
FN = 34
FP = 45
TN = 52
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)
print("Accuracy is:{}  Precision is:{}  Sensitivity is:{}  Specificity is:{}".format(Accuracy, Precision, Sensitivity, Specificity))
# compute the evaluation metrics

The output is as follows:

Accuracy is:0.6201923076923077  Precision is:0.6311475409836066  Sensitivity is:0.6936936936936937  Specificity is:0.5360824742268041
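
Rather than typing the four counts by hand, they can also be read straight out of cm; a sketch relying on sklearn's documented layout (rows = true labels, columns = predictions, so for binary 0/1 labels ravel() yields TN, FP, FN, TP with class 1, i.e. R, as positive):

TN, FP, FN, TP = cm.ravel()  # sklearn's binary confusion-matrix layout
Accuracy = (TP + TN) / (TP + TN + FP + FN)
Precision = TP / (TP + FP)
print(Accuracy, Precision)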

This model was written in IPython (Jupyter); if you use PyCharm or similar, delete the notebook-specific lines (such as %matplotlib inline) yourself.