A brief look at how the PyTorch convolution kernel size affects the number of fully-connected neurons
阿新 • Published: 2020-01-13
How a 3*3 kernel versus a 2*5 kernel determines the neuron count
# Here kernel_size = 2*5
import torch
import torch.nn as nn

class CONV_NET(torch.nn.Module):  # CONV_NET inherits from nn.Module
    def __init__(self):
        super(CONV_NET, self).__init__()  # gives CONV_NET all attributes of the parent class nn.Module
        # super() takes two arguments: the subclass name and the object self
        self.conv1 = nn.Conv2d(1, 32, (2, 5), 1, padding=0)
        # kernel_size was missing in the original text; (2, 5) is assumed here
        # so that the flattened size matches fc1's in_features of 512
        self.conv2 = nn.Conv2d(32, 128, (2, 5), 1, padding=0)
        self.fc1 = nn.Linear(512, 128)
        self.relu1 = nn.ReLU(inplace=True)
        self.drop1 = nn.Dropout(0.5)
        self.fc2 = nn.Linear(128, 32)
        self.relu2 = nn.ReLU(inplace=True)
        self.fc3 = nn.Linear(32, 3)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)  # flatten to (batch, channels * height * width)
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.drop1(x)
        x = self.fc2(x)
        x = self.relu2(x)
        x = self.fc3(x)
        x = self.softmax(x)
        return x
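The original post does not state the input size, but fc1's in_features of 512 can be checked with the standard convolution output-size formula, assuming a 4x10 input (an assumption, chosen because it is consistent with two (2, 5) convolutions at padding=0, stride=1):

```python
# Standard conv output-size formula: out = (in + 2*padding - kernel) // stride + 1
def conv2d_out(hw, kernel, padding=0, stride=1):
    h, w = hw
    kh, kw = kernel
    return ((h + 2 * padding - kh) // stride + 1,
            (w + 2 * padding - kw) // stride + 1)

# Assumed 4x10 input (not stated in the original article):
s = conv2d_out((4, 10), (2, 5))   # after conv1
s = conv2d_out(s, (2, 5))         # after conv2
flat = 128 * s[0] * s[1]          # channels * height * width
print(s, flat)
```

With that input, conv1 yields 3x6, conv2 yields 2x2, and 128 * 2 * 2 = 512, matching `nn.Linear(512, 128)`.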
The main point is how the output size is computed for a symmetric kernel versus an asymmetric one.
# Here kernel_size = 3*3
# Several lines of this block were lost in the original post; the missing
# layers are reconstructed to mirror the 2*5 version above, with both
# convolutions using kernel_size=3, padding=1, stride=1 as the comparison below describes.
class CONV_NET(torch.nn.Module):  # CONV_NET inherits from nn.Module
    def __init__(self):
        super(CONV_NET, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 3, 1, padding=1)
        self.conv2 = nn.Conv2d(32, 128, 3, 1, padding=1)
        self.fc1 = nn.Linear(3200, 128)
        self.relu1 = nn.ReLU(inplace=True)
        self.drop1 = nn.Dropout(0.5)
        self.fc2 = nn.Linear(128, 32)
        self.relu2 = nn.ReLU(inplace=True)
        self.fc3 = nn.Linear(32, 3)
        self.softmax = nn.Softmax(dim=1)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = x.view(x.size(0), -1)
        x = self.fc1(x)
        x = self.relu1(x)
        x = self.drop1(x)
        x = self.fc2(x)
        x = self.relu2(x)
        x = self.fc3(x)
        x = self.softmax(x)
        return x
The comparison between the two configurations, kernel_size=2*5 with padding=0, stride=1, and kernel_size=3*3 with padding=1, stride=1, is shown in the figure.
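The arithmetic behind the comparison can be sketched with the same output-size formula. A 3*3 kernel with padding=1 and stride=1 leaves the spatial size unchanged, while a 2*5 kernel with padding=0 shrinks height and width by different amounts per layer. The 5x5 input below is an assumption, chosen because it makes fc1's in_features of 3200 come out exactly:

```python
def out_dim(n, k, p=0, s=1):
    # out = (in + 2*padding - kernel) // stride + 1
    return (n + 2 * p - k) // s + 1

# 3*3 kernel, padding=1, stride=1: size is preserved, so an assumed
# 5x5 input stays 5x5 after both convs and flattens to 128 * 5 * 5 = 3200.
h = out_dim(out_dim(5, 3, p=1), 3, p=1)
w = out_dim(out_dim(5, 3, p=1), 3, p=1)
print(128 * h * w)

# 2*5 kernel, padding=0, stride=1: each layer trims 1 from the height
# and 4 from the width, so the two dimensions shrink at different rates.
print(out_dim(4, 2), out_dim(10, 5))
```

This is why the asymmetric kernel forces a different in_features value for the first fully-connected layer: the flattened size depends on how each spatial dimension shrinks, not just on the channel count.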
That is everything in this brief look at how the PyTorch convolution kernel size affects the number of fully-connected neurons. I hope it serves as a useful reference, and thank you for your continued support.