Code Explanation:
Import the Required Module
import torch
import torch.nn as nn
This imports PyTorch itself (needed below for torch.tensor) and its neural network module torch.nn, which contains layers and activation functions such as nn.ReLU.
Create a ReLU Activation Function
relu = nn.ReLU()
This creates an instance of the ReLU (Rectified Linear Unit) activation module. Because nn.ReLU is a module rather than a plain function, the same instance can be reused or placed inside an nn.Sequential model.
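For a one-off call, the functional form in torch.nn.functional behaves identically; a minimal sketch:

import torch
import torch.nn.functional as F

x = torch.tensor([-2.0, 0.0, 3.0])
print(F.relu(x))  # same result as nn.ReLU()(x): tensor([0., 0., 3.])

The module form is the usual choice inside model definitions; the functional form is handy for a quick experiment.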
Apply ReLU to a Tensor
print(relu(torch.tensor([-2.0, 0.0, 3.0])))
torch.tensor([-2.0, 0.0, 3.0]) creates a PyTorch tensor with three values: -2.0, 0.0, 3.0.
relu(...) applies the ReLU activation element-wise and returns a new tensor of the same shape.
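Because the operation is element-wise, it works on tensors of any shape, not just 1-D vectors. A small sketch using made-up 2-D values:

import torch
import torch.nn as nn

relu = nn.ReLU()
m = torch.tensor([[-1.5, 0.0, 2.0],
                  [4.0, -3.0, 0.5]])  # illustrative values
print(relu(m))
# tensor([[0.0000, 0.0000, 2.0000],
#         [4.0000, 0.0000, 0.5000]])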
Understanding ReLU:
The ReLU function is defined as:
ReLU(x) = max(0, x)
Which means:
If the input value is negative, ReLU outputs 0.
If the input value is 0 or positive, ReLU passes the value through unchanged (see the sketch after this list).
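To connect the definition to code, here is a minimal hand-written version (the helper name my_relu is made up for illustration), checked against nn.ReLU:

import torch
import torch.nn as nn

def my_relu(x):
    # max(0, x) applied element-wise: exactly the definition above
    return torch.maximum(torch.zeros_like(x), x)

x = torch.tensor([-2.0, 0.0, 3.0])
print(my_relu(x))                             # tensor([0., 0., 3.])
print(torch.equal(my_relu(x), nn.ReLU()(x)))  # True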
Expected Output:
For the given tensor [-2.0, 0.0, 3.0]:
ReLU(-2.0) = max(0, -2.0) = 0.0
ReLU(0.0) = max(0, 0.0) = 0.0
ReLU(3.0) = max(0, 3.0) = 3.0
So the printed output will be:
tensor([0., 0., 3.])
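Putting it all together, the complete snippet runs as-is:

import torch
import torch.nn as nn

relu = nn.ReLU()
print(relu(torch.tensor([-2.0, 0.0, 3.0])))  # tensor([0., 0., 3.])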