
Deep Learning with PyTorch Hands-On (7): The Visdom Visualization Tool

Study notes

1. Install Visdom

Install it with pip:

pip install visdom

2. Start the Visdom server

python -m visdom.server

3. Open the web UI

Open http://localhost:8097 in a browser such as Chrome.

4. Visualizing training

  • On top of the network defined earlier (see the previous section), add Visdom visualization.

  • Before the train-test loop, create two curves; during training and testing, keep appending points so the curves grow dynamically.

from visdom import Visdom

viz = Visdom()

viz.line([0.], [0.], win='train_loss', opts=dict(title='train loss'))

viz.line([[0.0, 0.0]], [0.], win='test', opts=dict(title='test loss&acc.', legend=['loss', 'acc.']))
  • Visdom(env="xxx") takes an env argument that names the environment window; nothing is passed here, so the default main environment is used.

  • The first two arguments of viz.line are the curve's Y and X coordinates, in that order (Y first, X second).

  • Calls with different win values render in different windows.

  • The second viz.line call defines two curves, test loss and test accuracy, so two initial Y values are given at X = 0.
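The argument shapes are easy to get wrong; here is a minimal sketch (using NumPy arrays in place of the lists above) of what the two calls send:

```python
import numpy as np

# single curve: one Y value and one X value per update
y_single = np.array([0.0])
x_single = np.array([0.0])
assert y_single.shape == x_single.shape == (1,)

# two curves sharing one X axis: Y has shape (num_points, num_curves)
y_multi = np.array([[0.0, 0.0]])   # one point for each of the 2 curves
x_multi = np.array([0.0])
assert y_multi.shape == (1, 2) and x_multi.shape == (1,)
```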

Start training:

  • To track how many batches have been trained, keep a global step counter:
global_step = 0
  • After each training batch, append a point to the training curve so it grows in real time:
global_step += 1

viz.line([loss.item()], [global_step], win='train_loss', update='append')
  • The win parameter selects which window (i.e. which curve) to update, and update='append' appends a new point; again the Y coordinate comes first, then the X coordinate.
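The raw per-batch loss is noisy, so it is common to smooth it before appending it to the plot. A sketch of one option, an exponential moving average (not part of the original code; the helper name is made up):

```python
def ema_update(prev, value, beta=0.9):
    """Exponential moving average; returns the value itself on the first call (prev is None)."""
    return value if prev is None else beta * prev + (1 - beta) * value

# usage sketch inside the training loop:
smooth_loss = None
for raw in [2.3, 2.2, 1.0]:          # stand-ins for successive loss.item() values
    smooth_loss = ema_update(smooth_loss, raw)
    # viz.line([smooth_loss], [global_step], win='train_loss', update='append')
```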

  • After each test pass, update the test curves, and use two more windows (again chosen via win) to display the input images (viz.images) and the predictions as text (viz.text):

viz.line([[test_loss, correct / len(test_loader.dataset)]], [global_step], 
         win='test', update='append')

viz.images(data.view(-1, 1, 28, 28), win='x')

viz.text(str(pred.detach().numpy()), win='pred', opts=dict(title='pred'))
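The accuracy term correct / len(test_loader.dataset) is just the fraction of argmax predictions that match the labels. A NumPy sketch of that computation (the logits are made-up numbers; argmax(axis=1) plays the role of PyTorch's argmax(dim=1)):

```python
import numpy as np

logits = np.array([[2.0, 0.1, 0.3],    # hypothetical network outputs for 4 samples
                   [0.2, 1.5, 0.1],
                   [0.1, 0.2, 3.0],
                   [1.0, 0.9, 0.8]])
target = np.array([0, 1, 2, 1])

pred = logits.argmax(axis=1)           # predicted class per sample
correct = (pred == target).sum()       # how many predictions match the labels
accuracy = correct / len(target)
assert accuracy == 0.75                # 3 of the 4 predictions are correct
```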

Full code:

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from torchvision import datasets, transforms
from visdom import Visdom

# Hyperparameters
batch_size = 200
learning_rate = 0.01
epochs = 10

# Training data
train_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=True, download=True,          # train=True gives the training set
                   transform=transforms.Compose([                 # transform applies preprocessing
                       transforms.ToTensor(),                     # convert to a Tensor
#                        transforms.Normalize((0.1307,), (0.3081,)) # standardize (subtract mean, divide by std); keep it commented out if you want to display the raw images
                   ])),
    batch_size=batch_size, shuffle=True)                          # batch dimension comes first; shuffle=True randomizes order

# Test data
test_loader = torch.utils.data.DataLoader(
    datasets.MNIST('../data', train=False, transform=transforms.Compose([
        transforms.ToTensor(),
#         transforms.Normalize((0.1307,), (0.3081,))
    ])),
    batch_size=batch_size, shuffle=True)


class MLP(nn.Module):

    def __init__(self):
        super(MLP, self).__init__()

        self.model = nn.Sequential(         # define each layer of the network
            nn.Linear(784, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 200),
            nn.LeakyReLU(inplace=True),
            nn.Linear(200, 10),
            nn.LeakyReLU(inplace=True),
        )

    def forward(self, x):
        x = self.model(x)
        return x


net = MLP()
# SGD optimizer: pass the parameters to optimize (net.parameters() returns [w1, b1, w2, b2, ...]) and the learning rate
optimizer = optim.SGD(net.parameters(), lr=learning_rate)
criteon = nn.CrossEntropyLoss()

# Create the two curve windows before the train-test loop;
# points are appended during training so the curves grow dynamically
viz = Visdom()
viz.line([0.], [0.], win='train_loss', opts=dict(title='train loss'))
# the test window holds two curves (loss and acc.), so two initial Y values are given at X = 0
viz.line([[0.0, 0.0]], [0.], win='test', opts=dict(title='test loss&acc.',
                                                   legend=['loss', 'acc.']))
global_step = 0


for epoch in range(epochs):

    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.view(-1, 28*28)          # flatten the images to [batch_size, 784]

        logits = net(data)                   # forward pass
        loss = criteon(logits, target)       # nn.CrossEntropyLoss() applies Softmax internally

        optimizer.zero_grad()                # clear accumulated gradients
        loss.backward()                      # backward pass: compute gradients
        optimizer.step()                     # update the parameters

        global_step += 1
        # plot the training loss
        viz.line([loss.item()], [global_step], win='train_loss', update='append')


        if batch_idx % 100 == 0:             # print progress every 100 batches
            print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
                epoch, batch_idx * len(data), len(train_loader.dataset),
                       100. * batch_idx / len(train_loader), loss.item()))


    test_loss = 0
    correct = 0                                         # counts correctly classified samples
    for data, target in test_loader:
        data = data.view(-1, 28 * 28)
        logits = net(data)
        test_loss += criteon(logits, target).item()     # .item() extracts the scalar batch-mean loss

        pred = logits.argmax(dim=1)                
        correct += pred.eq(target.data).float().sum().item()

    # update the test-set loss and accuracy curves
#     print('test accuracy: ', correct / len(test_loader.dataset))
    viz.line([[test_loss, 100. * correct / len(test_loader.dataset)]], [global_step], 
             win='test', update='append')
    viz.images(data.view(-1, 1, 28, 28), win='x', opts=dict(title='Real Image'))
    viz.text(str(pred.detach().cpu().numpy()), win='pred',
             opts=dict(title='pred'))


    test_loss /= len(test_loader.dataset)
    print('\nTest set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
        test_loss, correct, len(test_loader.dataset),
        100. * correct / len(test_loader.dataset)))
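One subtlety in the numbers below: criteon returns the mean loss over a batch, and the code sums those batch means and then divides by the dataset size, so the printed "Average loss" is roughly the per-sample loss divided by batch_size. A quick sketch of the arithmetic (with made-up loss values):

```python
batch_size = 200
num_batches = 50                      # 10000 test samples / batch_size
batch_means = [2.3] * num_batches     # pretend every batch-mean loss is 2.3

total = sum(batch_means)              # sum of batch means, as accumulated in the loop above
avg_printed = total / (batch_size * num_batches)   # i.e. divided by len(test_loader.dataset)
avg_true = total / num_batches        # actual mean per-sample loss

assert abs(avg_printed - 2.3 / batch_size) < 1e-12  # small, like the 0.01-scale values printed
assert abs(avg_true - 2.3) < 1e-12
```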
Output:
Train Epoch: 0 [0/60000 (0%)]	Loss: 2.300833
Train Epoch: 0 [20000/60000 (33%)]	Loss: 2.282752
Train Epoch: 0 [40000/60000 (67%)]	Loss: 2.260350
test accuracy:  0.4326

Test set: Average loss: 0.0111, Accuracy: 4326.0/10000 (43%)

Train Epoch: 1 [0/60000 (0%)]	Loss: 2.243046
Train Epoch: 1 [20000/60000 (33%)]	Loss: 2.179665
Train Epoch: 1 [40000/60000 (67%)]	Loss: 2.034719
test accuracy:  0.5407

Test set: Average loss: 0.0093, Accuracy: 5407.0/10000 (54%)

Train Epoch: 2 [0/60000 (0%)]	Loss: 1.852168
Train Epoch: 2 [20000/60000 (33%)]	Loss: 1.656816
Train Epoch: 2 [40000/60000 (67%)]	Loss: 1.399417
test accuracy:  0.7035

Test set: Average loss: 0.0060, Accuracy: 7035.0/10000 (70%)

Train Epoch: 3 [0/60000 (0%)]	Loss: 1.263554
Train Epoch: 3 [20000/60000 (33%)]	Loss: 1.032210
Train Epoch: 3 [40000/60000 (67%)]	Loss: 0.860783
test accuracy:  0.8003

Test set: Average loss: 0.0038, Accuracy: 8003.0/10000 (80%)

Train Epoch: 4 [0/60000 (0%)]	Loss: 0.833112
Train Epoch: 4 [20000/60000 (33%)]	Loss: 0.625544
Train Epoch: 4 [40000/60000 (67%)]	Loss: 0.610665
test accuracy:  0.8596

Test set: Average loss: 0.0027, Accuracy: 8596.0/10000 (86%)

Train Epoch: 5 [0/60000 (0%)]	Loss: 0.657896
Train Epoch: 5 [20000/60000 (33%)]	Loss: 0.530274
Train Epoch: 5 [40000/60000 (67%)]	Loss: 0.471292
test accuracy:  0.8795

Test set: Average loss: 0.0023, Accuracy: 8795.0/10000 (88%)

Train Epoch: 6 [0/60000 (0%)]	Loss: 0.418883
Train Epoch: 6 [20000/60000 (33%)]	Loss: 0.421887
Train Epoch: 6 [40000/60000 (67%)]	Loss: 0.429563
test accuracy:  0.8908

Test set: Average loss: 0.0020, Accuracy: 8908.0/10000 (89%)

Train Epoch: 7 [0/60000 (0%)]	Loss: 0.429593
Train Epoch: 7 [20000/60000 (33%)]	Loss: 0.368680
Train Epoch: 7 [40000/60000 (67%)]	Loss: 0.389523
test accuracy:  0.8963

Test set: Average loss: 0.0019, Accuracy: 8963.0/10000 (90%)

Train Epoch: 8 [0/60000 (0%)]	Loss: 0.382745
Train Epoch: 8 [20000/60000 (33%)]	Loss: 0.352603
Train Epoch: 8 [40000/60000 (67%)]	Loss: 0.347548
test accuracy:  0.8982

Test set: Average loss: 0.0018, Accuracy: 8982.0/10000 (90%)

Train Epoch: 9 [0/60000 (0%)]	Loss: 0.346902
Train Epoch: 9 [20000/60000 (33%)]	Loss: 0.367318
Train Epoch: 9 [40000/60000 (67%)]	Loss: 0.369855
test accuracy:  0.9021

Test set: Average loss: 0.0017, Accuracy: 9021.0/10000 (90%)