Neural Network Study Notes 5
C5 Error Backpropagation
Computational graphs
Build a computational graph and carry out the computation from left to right (forward propagation).
Local computation: no matter what happens globally, each node produces its next result based only on the information directly related to it.
Advantages of computational graphs: all intermediate results can be stored. That alone is not the convincing part, though; the decisive advantage is that derivatives can be computed efficiently via backpropagation.
Backpropagation on a computational graph: move in the direction opposite to the forward pass, multiplying by the local derivative at each node.
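A small worked example (illustrative, not from the notes): take z = (x + y)^2 and split it into t = x + y followed by z = t^2. Backpropagation walks the graph from z back to x, multiplying the local derivatives along the path: dz/dx = (dz/dt)(dt/dx) = 2t * 1 = 2(x + y).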
Implementing simple layers:
Implement each "layer" that makes up the neural network as a class.
For example, a Sigmoid class responsible for the sigmoid function and an Affine class responsible for the matrix product; everything is implemented in units of layers.
Accordingly, the multiplication node and the addition node are also implemented here in units of layers (see the sketch below).
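As a concrete illustration, a minimal sketch of a multiplication layer and an addition layer, following the MulLayer/AddLayer convention used in the book:

class MulLayer:
    def __init__(self):
        self.x = None
        self.y = None

    def forward(self, x, y):
        # keep both inputs: each one is the other's local derivative
        self.x = x
        self.y = y
        return x * y

    def backward(self, dout):
        dx = dout * self.y  # "flip" x and y
        dy = dout * self.x
        return dx, dy

class AddLayer:
    def forward(self, x, y):
        return x + y  # nothing needs to be stored

    def backward(self, dout):
        # addition passes the upstream derivative downstream unchanged
        return dout * 1, dout * 1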
Implementing the activation-function layers:
ReLU layer:
y = x   (x > 0)
y = 0   (x <= 0)
If the input x in the forward pass is greater than 0, backpropagation passes the upstream value downstream unchanged. If x is less than or equal to 0 in the forward pass, the signal passed downstream stops here (it becomes 0).
In neural-network implementations, forward() and backward() are generally assumed to take NumPy arrays as arguments.
The Relu class has an instance variable mask, a NumPy array of True/False values: positions where the forward-pass input x is less than or equal to 0 are stored as True, all other positions as False.
If the forward-pass input is less than or equal to 0, the backpropagated value becomes 0.
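A quick check of the mask behavior on a hypothetical input:

import numpy as np

x = np.array([[1.0, -0.5],
              [-2.0, 3.0]])
mask = (x <= 0)       # [[False,  True], [ True, False]]
out = x.copy()
out[mask] = 0         # forward pass output: [[1., 0.], [0., 3.]]
# the backward pass uses the same mask to zero the upstream gradient:
# dout[mask] = 0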
It behaves like a switch in an electric circuit: during the forward pass, the switch is set to on where current flows (x > 0) and off where it does not. When the switch is on, the current passes straight through; when it is off, no current flows.
Sigmoid layer:
The forward pass stores its output in the instance variable out, and the backward pass computes with that stored out.
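This works because the sigmoid's derivative can be written entirely in terms of its output: for y = 1/(1 + exp(-x)), dy/dx = y(1 - y). So the backward pass needs only the stored out:

dx = dout * self.out * (1.0 - self.out)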
Affine/Softmax layer implementation:
In the forward pass of a neural network, the matrix product is used to compute the weighted sum of signals.
Because the forward pass adds the bias to every example in the batch, the backward pass must sum each example's gradient into the corresponding bias element: the bias gradient uses np.sum over the elements along axis 0 (the batch axis).
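In formulas, for Y = X·W + b the backward pass is dX = dout·W^T, dW = X^T·dout, and db = the sum of dout over axis 0. A tiny numeric illustration of the bias gradient (values made up):

import numpy as np

dout = np.array([[1, 2, 3],
                 [4, 5, 6]])   # upstream gradients for a batch of two examples
db = np.sum(dout, axis=0)      # array([5, 7, 9])
# each bias element collects the gradient contributed by every example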
Softmax-with-Loss layer:
The softmax function normalizes its input values before outputting them (the outputs sum to 1).
The input image is transformed by Affine and ReLU layers, and the 10 inputs to the Softmax layer (the outputs of the last Affine layer) are normalized by it.
The Softmax layer has as many inputs as there are classes to classify.
Neural-network processing has two phases, inference and learning. Inference usually does not use a Softmax layer; the output of the last Affine layer is taken directly as the recognition result.
The network's unnormalized outputs are sometimes called scores. When inference only needs to produce a single answer, only the largest score matters, so the Softmax layer is unnecessary.
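For example, reading the answer straight off the scores (hypothetical values); since softmax is monotonic, it would not change which score is largest:

import numpy as np

scores = np.array([0.3, 2.9, 4.0])  # output of the last Affine layer
pred = np.argmax(scores)            # 2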
The goal of neural-network learning: adjust the weight parameters so that the network's output approaches the teacher labels; for this, the error between the output and the teacher labels must be propagated efficiently to the preceding layers.
During backpropagation, the propagated value is divided by the batch size, so that what is passed to the preceding layers is the error of a single example.
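Softmax combined with the cross-entropy loss makes this error remarkably clean: the gradient flowing back is simply (y - t) / batch_size. A sketch with a one-hot label (values made up):

import numpy as np

y = np.array([[0.1, 0.7, 0.2]])  # softmax output, batch of one
t = np.array([[0.0, 1.0, 0.0]])  # one-hot teacher label
batch_size = t.shape[0]
dx = (y - t) / batch_size        # [[ 0.1, -0.3,  0.2]] goes to the previous layer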
Implementation code for these and several related layers follows:
# coding: utf-8
import numpy as np
from common.functions import *
from common.util import im2col, col2im


class Relu:
    def __init__(self):
        self.mask = None

    def forward(self, x):
        self.mask = (x <= 0)
        out = x.copy()
        out[self.mask] = 0
        return out

    def backward(self, dout):
        dout[self.mask] = 0
        dx = dout
        return dx


class Sigmoid:
    def __init__(self):
        self.out = None

    def forward(self, x):
        out = sigmoid(x)  # 1/(1+np.exp(-x))
        self.out = out
        return out

    def backward(self, dout):
        dx = dout * (1.0 - self.out) * self.out
        return dx


class Affine:
    def __init__(self, W, b):
        self.W = W
        self.b = b
        self.x = None
        self.original_x_shape = None
        # derivatives of the weight and bias parameters
        self.dW = None
        self.db = None

    def forward(self, x):
        # handle tensor inputs
        self.original_x_shape = x.shape
        x = x.reshape(x.shape[0], -1)
        self.x = x
        out = np.dot(self.x, self.W) + self.b
        return out

    def backward(self, dout):
        dx = np.dot(dout, self.W.T)
        self.dW = np.dot(self.x.T, dout)
        self.db = np.sum(dout, axis=0)
        dx = dx.reshape(*self.original_x_shape)  # restore the input data's shape (for tensor inputs)
        return dx


class SoftmaxWithLoss:
    def __init__(self):
        self.loss = None  # loss
        self.y = None     # softmax output
        self.t = None     # teacher data

    def forward(self, x, t):
        self.t = t
        self.y = softmax(x)
        self.loss = cross_entropy_error(self.y, self.t)
        return self.loss

    def backward(self, dout=1):
        batch_size = self.t.shape[0]
        if self.t.size == self.y.size:  # when the teacher data is a one-hot vector
            dx = (self.y - self.t) / batch_size
        else:
            dx = self.y.copy()
            dx[np.arange(batch_size), self.t] -= 1
            dx = dx / batch_size
        return dx


class Dropout:
    """
    http://arxiv.org/abs/1207.0580
    """
    def __init__(self, dropout_ratio=0.5):
        self.dropout_ratio = dropout_ratio
        self.mask = None

    def forward(self, x, train_flg=True):
        if train_flg:
            self.mask = np.random.rand(*x.shape) > self.dropout_ratio
            return x * self.mask
        else:
            return x * (1.0 - self.dropout_ratio)

    def backward(self, dout):
        return dout * self.mask


class BatchNormalization:
    """
    http://arxiv.org/abs/1502.03167
    """
    def __init__(self, gamma, beta, momentum=0.9, running_mean=None, running_var=None):
        self.gamma = gamma
        self.beta = beta
        self.momentum = momentum
        self.input_shape = None  # 4D for Conv layers, 2D for fully connected layers

        # mean and variance used at test time
        self.running_mean = running_mean
        self.running_var = running_var

        # intermediate data used by backward
        self.batch_size = None
        self.xc = None
        self.std = None
        self.dgamma = None
        self.dbeta = None

    def forward(self, x, train_flg=True):
        self.input_shape = x.shape
        if x.ndim != 2:
            N, C, H, W = x.shape
            x = x.reshape(N, -1)

        out = self.__forward(x, train_flg)

        return out.reshape(*self.input_shape)

    def __forward(self, x, train_flg):
        if self.running_mean is None:
            N, D = x.shape
            self.running_mean = np.zeros(D)
            self.running_var = np.zeros(D)

        if train_flg:
            mu = x.mean(axis=0)
            xc = x - mu
            var = np.mean(xc**2, axis=0)
            std = np.sqrt(var + 10e-7)
            xn = xc / std

            self.batch_size = x.shape[0]
            self.xc = xc
            self.xn = xn
            self.std = std
            self.running_mean = self.momentum * self.running_mean + (1 - self.momentum) * mu
            self.running_var = self.momentum * self.running_var + (1 - self.momentum) * var
        else:
            xc = x - self.running_mean
            xn = xc / np.sqrt(self.running_var + 10e-7)

        out = self.gamma * xn + self.beta
        return out

    def backward(self, dout):
        if dout.ndim != 2:
            N, C, H, W = dout.shape
            dout = dout.reshape(N, -1)

        dx = self.__backward(dout)

        dx = dx.reshape(*self.input_shape)
        return dx

    def __backward(self, dout):
        dbeta = dout.sum(axis=0)
        dgamma = np.sum(self.xn * dout, axis=0)
        dxn = self.gamma * dout
        dxc = dxn / self.std
        dstd = -np.sum((dxn * self.xc) / (self.std * self.std), axis=0)
        dvar = 0.5 * dstd / self.std
        dxc += (2.0 / self.batch_size) * self.xc * dvar
        dmu = np.sum(dxc, axis=0)
        dx = dxc - dmu / self.batch_size

        self.dgamma = dgamma
        self.dbeta = dbeta

        return dx


class Convolution:
    def __init__(self, W, b, stride=1, pad=0):
        self.W = W
        self.b = b
        self.stride = stride
        self.pad = pad

        # intermediate data (used by backward)
        self.x = None
        self.col = None
        self.col_W = None

        # gradients of the weight and bias parameters
        self.dW = None
        self.db = None

    def forward(self, x):
        FN, C, FH, FW = self.W.shape
        N, C, H, W = x.shape
        out_h = 1 + int((H + 2 * self.pad - FH) / self.stride)
        out_w = 1 + int((W + 2 * self.pad - FW) / self.stride)

        col = im2col(x, FH, FW, self.stride, self.pad)
        col_W = self.W.reshape(FN, -1).T

        out = np.dot(col, col_W) + self.b
        out = out.reshape(N, out_h, out_w, -1).transpose(0, 3, 1, 2)

        self.x = x
        self.col = col
        self.col_W = col_W

        return out

    def backward(self, dout):
        FN, C, FH, FW = self.W.shape
        dout = dout.transpose(0, 2, 3, 1).reshape(-1, FN)

        self.db = np.sum(dout, axis=0)
        self.dW = np.dot(self.col.T, dout)
        self.dW = self.dW.transpose(1, 0).reshape(FN, C, FH, FW)

        dcol = np.dot(dout, self.col_W.T)
        dx = col2im(dcol, self.x.shape, FH, FW, self.stride, self.pad)

        return dx


class Pooling:
    def __init__(self, pool_h, pool_w, stride=1, pad=0):
        self.pool_h = pool_h
        self.pool_w = pool_w
        self.stride = stride
        self.pad = pad

        self.x = None
        self.arg_max = None

    def forward(self, x):
        N, C, H, W = x.shape
        out_h = int(1 + (H - self.pool_h) / self.stride)
        out_w = int(1 + (W - self.pool_w) / self.stride)

        col = im2col(x, self.pool_h, self.pool_w, self.stride, self.pad)
        col = col.reshape(-1, self.pool_h * self.pool_w)

        arg_max = np.argmax(col, axis=1)
        out = np.max(col, axis=1)
        out = out.reshape(N, out_h, out_w, C).transpose(0, 3, 1, 2)

        self.x = x
        self.arg_max = arg_max

        return out

    def backward(self, dout):
        dout = dout.transpose(0, 2, 3, 1)

        pool_size = self.pool_h * self.pool_w
        dmax = np.zeros((dout.size, pool_size))
        dmax[np.arange(self.arg_max.size), self.arg_max.flatten()] = dout.flatten()
        dmax = dmax.reshape(dout.shape + (pool_size,))

        dcol = dmax.reshape(dmax.shape[0] * dmax.shape[1] * dmax.shape[2], -1)
        dx = col2im(dcol, self.x.shape, self.pool_h, self.pool_w, self.stride, self.pad)

        return dx
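Finally, a minimal sketch of how these layer classes chain into a network; the two-layer structure, sizes, and the OrderedDict wiring below are assumptions in the style of the book, not part of the notes above:

from collections import OrderedDict
import numpy as np

# hypothetical network: Affine -> Relu -> Affine -> SoftmaxWithLoss
W1, b1 = 0.01 * np.random.randn(784, 50), np.zeros(50)
W2, b2 = 0.01 * np.random.randn(50, 10), np.zeros(10)

layers = OrderedDict()
layers['Affine1'] = Affine(W1, b1)
layers['Relu1'] = Relu()
layers['Affine2'] = Affine(W2, b2)
last_layer = SoftmaxWithLoss()

def gradient(x, t):
    # forward: run every layer in order, ending with the loss layer
    for layer in layers.values():
        x = layer.forward(x)
    last_layer.forward(x, t)
    # backward: run the layers in reverse; each layer stores its own dW/db
    dout = last_layer.backward(1)
    for layer in reversed(list(layers.values())):
        dout = layer.backward(dout)
    return dout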