torch.unsqueeze inserts a dimension of size one at a specified position. It takes two parameters: input, the input tensor, and dim, the index at which the new dimension is inserted.
Note that the valid range of dim is [-input.dim()-1, input.dim()+1), closed on the left and open on the right. When dim is negative, it is automatically converted to dim = dim + input.dim() + 1, similar to using negative indices to slice a Python list.
import torch

a = torch.randn(2, 5)
print(a)
b = a.unsqueeze(0)
print(b.size())
c = a.unsqueeze(a.dim())
print(c.size())

Output:

tensor([[-0.4734,  0.4115, -0.9415, -1.1280, -0.1065],
        [ 0.1613,  1.2594,  1.1261,  1.3881,  0.1112]])
torch.Size([1, 2, 5])
torch.Size([2, 5, 1])
The above shows the two-dimensional case:
First, a two-dimensional matrix of size [2, 5] is generated.
Then, a dimension is inserted at position 0; the shape of the new tensor b becomes [1, 2, 5], and the new 0th dimension has size 1.
Finally, a dimension is inserted at the last position, and the shape becomes [2, 5, 1].
a = torch.randn(2, 3, 2)
print("torch.unsqueeze(a, 3) size: {}".format(torch.unsqueeze(a, 3).size()))
print("torch.unsqueeze(a, 2) size: {}".format(torch.unsqueeze(a, 2).size()))
print("torch.unsqueeze(a, 1) size: {}".format(torch.unsqueeze(a, 1).size()))
print("torch.unsqueeze(a, 0) size: {}".format(torch.unsqueeze(a, 0).size()))
print("torch.unsqueeze(a, -1) size: {}".format(torch.unsqueeze(a, -1).size()))
print("torch.unsqueeze(a, -2) size: {}".format(torch.unsqueeze(a, -2).size()))
print("torch.unsqueeze(a, -3) size: {}".format(torch.unsqueeze(a, -3).size()))
print("torch.unsqueeze(a, -4) size: {}".format(torch.unsqueeze(a, -4).size()))

Output:

torch.unsqueeze(a, 3) size: torch.Size([2, 3, 2, 1])
torch.unsqueeze(a, 2) size: torch.Size([2, 3, 1, 2])
torch.unsqueeze(a, 1) size: torch.Size([2, 1, 3, 2])
torch.unsqueeze(a, 0) size: torch.Size([1, 2, 3, 2])
torch.unsqueeze(a, -1) size: torch.Size([2, 3, 2, 1])
torch.unsqueeze(a, -2) size: torch.Size([2, 3, 1, 2])
torch.unsqueeze(a, -3) size: torch.Size([2, 1, 3, 2])
torch.unsqueeze(a, -4) size: torch.Size([1, 2, 3, 2])
For 3-D data, input.dim() = 3, so the range of dim is [-4, 4).
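The range rule can be checked directly; a short sketch (the tensor values are arbitrary):

```python
import torch

a = torch.randn(2, 3, 2)  # a.dim() == 3, so valid dim is [-4, 4)

# A negative dim is converted to dim + a.dim() + 1,
# so -4 and 0 address the same insertion position.
print(torch.unsqueeze(a, -4).size())  # torch.Size([1, 2, 3, 2])
print(torch.unsqueeze(a, 0).size())   # torch.Size([1, 2, 3, 2])

# A dim outside [-4, 4) raises an IndexError.
try:
    torch.unsqueeze(a, 4)
except IndexError as err:
    print("out of range:", err)
```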
Difference between squeeze() and unsqueeze()
The first piece:
squeeze() compresses the dimensions of a tensor by removing dimensions whose size is 1. Usage: a.squeeze(N) or torch.squeeze(a, N) removes the Nth dimension of a (only when that dimension has size 1), achieving dimensional compression of a.
unsqueeze() does the opposite of squeeze(): it expands the data's dimensions. Usage: a.unsqueeze(N) or torch.unsqueeze(a, N) inserts a dimension of size 1 at position N of tensor a, so that the shape of the data fed to the model matches the shape the model expects during training.
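To make the relationship concrete, a minimal round-trip sketch (the shapes are chosen arbitrarily):

```python
import torch

a = torch.randn(3, 4)

# unsqueeze adds a size-1 dimension; squeeze removes it again.
b = a.unsqueeze(0)   # shape [1, 3, 4]
c = b.squeeze(0)     # back to shape [3, 4]
print(b.size(), c.size())

# squeeze(N) is a no-op when dimension N does not have size 1.
d = a.squeeze(0)     # a's 0th dim has size 3, so nothing changes
print(d.size())      # torch.Size([3, 4])
```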
The second piece:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")  # select the 0th CUDA device
model.to(device)
The above two lines of code are placed before reading the data.
mytensor = my_tensor.to(device)  # copy the tensor onto the GPU specified by device
This copies the tensor variables read at the beginning onto the GPU specified by device, and subsequent calculations are then performed on that GPU. These tensors are mostly the variables created when reading the data, and new variables derived from them will also be computed on the specified GPU.
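Putting the pieces together, a minimal sketch of this device-placement pattern; the nn.Linear model and tensor shapes here are illustrative, not from the original article:

```python
import torch
import torch.nn as nn

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

model = nn.Linear(5, 2).to(device)  # move the model's parameters to the device
x = torch.randn(8, 5).to(device)    # .to() returns a copy on the device

y = model(x)                        # the computation runs on `device`
print(y.device)
```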
The third piece:
Tensors and NumPy arrays are both matrices; the difference is that a Tensor can run on the GPU, while a NumPy array can only run on the CPU. (Oh my God, I only found out now!) Tensors and NumPy arrays convert to each other conveniently and their types are compatible. Printing a Tensor can directly show its data type, while printing a NumPy array does not.
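The conversion between the two can be sketched as follows; note that torch.from_numpy shares memory with the source array on the CPU, a detail beyond the text above:

```python
import numpy as np
import torch

arr = np.array([1.0, 2.0, 3.0])  # NumPy default float is float64
t = torch.from_numpy(arr)        # NumPy -> Tensor (shares memory on CPU)
back = t.numpy()                 # Tensor -> NumPy

print(t.dtype)                   # torch.float64 (type carried over)

# Because the memory is shared, changing the array changes the tensor.
arr[0] = 9.0
print(t[0].item())               # 9.0
```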
The fourth piece:
x.add(y) adds Tensors x and y and returns a new Tensor, without changing x.
x.add_(y) adds x and y in place, modifying the value of x itself (the trailing underscore marks an in-place operation).
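The difference can be verified in a few lines:

```python
import torch

x = torch.ones(2)
y = torch.ones(2)

z = x.add(y)   # out-of-place: x is unchanged
print(x)       # tensor([1., 1.])
print(z)       # tensor([2., 2.])

x.add_(y)      # in-place: x itself is modified
print(x)       # tensor([2., 2.])
```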
This concludes this article on the usage of unsqueeze in PyTorch. For more related pytorch unsqueeze content, please search my previous articles or continue browsing the related articles below. I hope everyone will support me in the future!