Code Explanation:
Step 1: Import PyTorch
import torch
This imports the PyTorch library, which provides tensor computations, automatic differentiation, and GPU acceleration for deep learning.
torch offers a NumPy-like tensor API, with the added ability to run on GPUs and track gradients.
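As a quick illustration of that NumPy-like API (a minimal sketch; the values here are arbitrary):

import torch

a = torch.tensor([1.0, 2.0])  # build a tensor from a Python list
b = a * 3 + 1                 # element-wise arithmetic, just like NumPy
print(b)                      # tensor([4., 7.])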
Step 2: Create a Tensor
x = torch.tensor([1.0, 2.0, 3.0])
This creates a 1D tensor with floating-point values [1.0, 2.0, 3.0].
torch.tensor([...]) is used to initialize a tensor.
The values are written as floating-point literals (1.0, 2.0, 3.0) so that PyTorch infers the default float32 dtype; with integer literals such as [1, 2, 3], torch.tensor would infer int64 instead.
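You can verify the dtype inference directly (a small sketch; the integer tensor is included only for contrast and is not part of the original code):

import torch

print(torch.tensor([1.0, 2.0, 3.0]).dtype)  # torch.float32, inferred from the float literals
print(torch.tensor([1, 2, 3]).dtype)        # torch.int64, inferred from the integer literals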
Step 3: Apply ReLU Activation
y = torch.relu(x - 2)
First, the expression x - 2 is computed:
x - 2 # Element-wise subtraction
This means:
[1.0, 2.0, 3.0] - 2
= [-1.0, 0.0, 1.0]
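The scalar 2 is broadcast across the tensor, so it is subtracted from every element. A one-line check (using the same x as above):

import torch

x = torch.tensor([1.0, 2.0, 3.0])
print(x - 2)  # tensor([-1.,  0.,  1.])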
Next, torch.relu() is applied:
torch.relu() is the Rectified Linear Unit (ReLU) activation function, which is widely used in neural networks.
It is defined as:
ReLU(x) = max(0, x)
Applying ReLU to [-1.0, 0.0, 1.0]:
ReLU(-1.0) = max(0, -1.0) = 0.0
ReLU(0.0) = max(0, 0.0) = 0.0
ReLU(1.0) = max(0, 1.0) = 1.0
Final result for y:
y = [0.0, 0.0, 1.0]
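One way to double-check this result is to implement the definition by hand; the sketch below compares torch.relu with an equivalent torch.clamp call (the clamp variant and the name z are illustrative choices, not part of the original code):

import torch

z = torch.tensor([-1.0, 0.0, 1.0])  # the intermediate values of x - 2
print(torch.relu(z))                # tensor([0., 0., 1.])
print(torch.clamp(z, min=0))        # same result: clamping at 0 computes max(0, x)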
Step 4: Print Output
print(y)
This prints the final tensor y after applying ReLU:
tensor([0., 0., 1.])
This means:
-1.0 became 0.0
0.0 remained 0.0
1.0 remained 1.0
Final Output
tensor([0., 0., 1.])
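For reference, the complete program assembled from the steps above:

import torch

x = torch.tensor([1.0, 2.0, 3.0])  # 1D float32 tensor
y = torch.relu(x - 2)              # subtract 2 element-wise, then zero out the negatives
print(y)                           # tensor([0., 0., 1.])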