Autograd is the core of neural network construction in PyTorch.
In PyTorch, as the following code example shows, setting requires_grad=True on a tensor x tells PyTorch to track every operation performed on that tensor so that gradients can be computed later. When you call backward() on the output (out in the example below), PyTorch automatically computes the gradient of out with respect to x and propagates it back through the graph to x.
In the following code:
import torch

x = torch.ones(2, 2)   # a 2x2 tensor of ones (assumed initialization; the original factory call is not shown)
print(x)
x.requires_grad = True
print(x)
y = x + 2
z = y * y * 3
out = z.mean()
print(out)
out.backward()
print(x.grad)
print(y.grad)  # warns and prints None: y is a non-leaf tensor, so backward() does not store its gradient
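For reference, with x assumed to start as a tensor of ones, out = mean(3 * (x + 2)^2), so the analytic gradient is d(out)/dx_i = 6 * (x_i + 2) / 4 = 1.5 * (x_i + 2) = 4.5 for every element. A quick standalone check under that assumption:

import torch

x = torch.ones(2, 2, requires_grad=True)   # assumed initialization
out = (3 * (x + 2) ** 2).mean()
out.backward()

# analytic gradient: out = (1/4) * sum(3 * (x_i + 2)^2), so d(out)/dx_i = 1.5 * (x_i + 2) = 4.5 when x_i = 1
expected = torch.full((2, 2), 4.5)
print(torch.allclose(x.grad, expected))   # True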
y is obtained from x through a simple addition, so it is an intermediate (non-leaf) tensor in the computation graph. By default, PyTorch only populates .grad for leaf tensors such as x; the gradient flowing through y is used during backpropagation but not stored, which is why y.grad stays empty even though the operation on y was tracked.
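You can confirm this distinction by inspecting the is_leaf and requires_grad attributes, as in this minimal check (again assuming x starts as a tensor of ones):

import torch

x = torch.ones(2, 2, requires_grad=True)   # assumed initialization
y = x + 2

print(x.is_leaf)        # True: x was created directly by the user
print(y.is_leaf)        # False: y was produced by an autograd-tracked operation
print(y.requires_grad)  # True: y is tracked, but backward() does not populate its .grad by default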
If you want to read the gradient of y, setting requires_grad=True is not enough on its own; you also have to ask PyTorch to keep the intermediate gradient, for example by calling y.retain_grad() before the backward pass. Usually, however, we do not need gradients for intermediate variables: we only care about the gradient of the final output with respect to the inputs, and backpropagation computes all of those leaf gradients for us.
If you really need to find the gradient for y, you can do this:
x = torch.ones(2, 2, requires_grad=True)   # assumed initialization
y = x + 2
y.retain_grad()    # keep the gradient of the non-leaf tensor y during backward()
z = y * y * 3
out = z.mean()
out.backward()
print(x.grad)   # the gradient of out with respect to x
print(y.grad)   # now the gradient of out with respect to y is available as well
In this modified code, x is created with requires_grad=True, so PyTorch tracks all operations on it, and the extra call y.retain_grad() tells PyTorch to store y's gradient during backpropagation, so the gradient of out with respect to y can be read afterwards. But note that normally we only need the gradient of the final output (out in this example) with respect to the input variable (x in this example).
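As an aside, if you only need an intermediate gradient once, torch.autograd.grad can return it directly instead of storing it on the tensor. A minimal sketch under the same assumed initialization:

import torch

x = torch.ones(2, 2, requires_grad=True)   # assumed initialization, as above
y = x + 2
z = y * y * 3
out = z.mean()

# return the gradient of out with respect to y directly, without populating y.grad
grad_y, = torch.autograd.grad(out, y)
print(grad_y)   # 4.5 everywhere: d(out)/dy_i = 1.5 * y_i and each y_i is 3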
This is the end of this article on implementing automatic gradients with PyTorch autograd. For more related content on PyTorch autograd, please search my previous articles or continue browsing the related articles below. I hope everyone will continue to support me!