Simple use of Conv2d
The usage of Conv2d in the torch.nn module is similar, but not identical, to its TensorFlow counterpart.
In torch, Conv2d takes a few basic parameters, which are
in_channels depth (number of channels) of the input image
out_channels depth (number of channels) of the output feature map
kernel_size size of the convolution kernel; a single integer gives a square kernel
stride step of the convolution, default is 1
padding zero-padding added to each border; with a 3*3 kernel, padding=1 preserves the spatial size (default is 0, which shrinks it)
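To make the effect of these parameters concrete, the output spatial size follows the standard convolution formula out = floor((in + 2*padding - kernel_size) / stride) + 1. A small helper (the function name here is illustrative, not part of torch) shows the arithmetic:

```python
def conv2d_output_size(in_size, kernel_size, stride=1, padding=0):
    """Spatial output size of a 2D convolution along one dimension."""
    return (in_size + 2 * padding - kernel_size) // stride + 1

# 32*32 input, 3*3 kernel, no padding: each dimension shrinks by 2
print(conv2d_output_size(32, 3))             # 30
# padding=1 with a 3*3 kernel preserves the size
print(conv2d_output_size(32, 3, padding=1))  # 32
```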
Unlike TensorFlow, usage in PyTorch is more explicit: when constructing the layer we define the depths of the input and output and specify the size of the convolution kernel at the same time. The input is then passed to the layer defined via Conv2d, as follows:
# Define a conv1 layer with input depth 1, output depth 6, and convolution kernel size 3*3
self.conv1 = nn.Conv2d(1, 6, 3)
# Pass the original input x through it; with no padding, the 3*3 kernel
# shrinks height and width by 2, and the output has depth 6
x = self.conv1(x)
It is important to note that Conv2d expects its input dimensions in the order
(batchsize, nChannels, Height, Width)
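A quick sketch of feeding a tensor in that order (assuming torch is installed): a single 32*32 grayscale image becomes a batch of one.

```python
import torch
import torch.nn as nn

conv1 = nn.Conv2d(1, 6, 3)     # in_channels=1, out_channels=6, 3*3 kernel
x = torch.randn(1, 1, 32, 32)  # (batchsize, nChannels, Height, Width)
out = conv1(x)
print(out.shape)               # torch.Size([1, 6, 30, 30])
```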
Other simple uses
Similarly, torch.nn.functional provides many functions that pair naturally with Conv2d, such as max_pool2d, relu, and so on, and their usage is very similar, as shown below:
# Use of the relu function
x = F.relu(self.conv1(x))
# Use of the pooling function
x = F.max_pool2d(F.relu(self.conv2(x)), 2)
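Putting the pieces together, here is a minimal runnable sketch of a forward pass (assuming torch; the layer sizes are illustrative, not from the original):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(1, 6, 3)   # depth 1 -> 6, 3*3 kernel
        self.conv2 = nn.Conv2d(6, 16, 3)  # depth 6 -> 16, 3*3 kernel

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # 32 -> 30 -> 15
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # 15 -> 13 -> 6
        return x

net = Net()
out = net(torch.randn(1, 1, 32, 32))
print(out.shape)  # torch.Size([1, 16, 6, 6])
```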
That covers the details of Conv2d usage in PyTorch 1.0; I hope it serves as a useful reference for you.