PyTorch Basics
PyTorch is a flexible deep learning framework that uses dynamic computation graphs and integrates naturally with standard Python code.
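Because the graph is rebuilt on every forward pass, ordinary Python control flow (loops, conditionals) can shape the computation. A minimal sketch:
import torch

x = torch.randn(3, requires_grad=True)
h = x
# the number of iterations, and hence the graph, depends on runtime values
while h.norm() < 10:
    h = h * 2
out = h.sum()
out.backward()  # gradients flow through however many doublings actually ran
print(x.grad)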
Tensors in PyTorch
Creating tensors and using the GPU (if available)
import torch

# select the GPU when one is available, otherwise fall back to the CPU
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.tensor([[1.0, 2.0], [3.0, 4.0]], device=device)
y = torch.ones_like(x)  # tensor of ones with the same shape, dtype, and device as x
z = x + y               # elementwise addition
print(z, z.device)
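Tensors carry shape, dtype, and device metadata, and CPU tensors interoperate with NumPy. A small sketch continuing from the tensors above:
print(z.shape, z.dtype)  # torch.Size([2, 2]) torch.float32
z_cpu = z.cpu()          # no-op on CPU; copies back from the GPU otherwise
arr = z_cpu.numpy()      # NumPy array sharing memory with the CPU tensor
print(arr)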
Autograd & Gradients
PyTorch tracks operations on tensors with requires_grad=True and can automatically compute gradients via backpropagation.
w = torch.randn(3, requires_grad=True)  # leaf tensor whose gradient we want
x = torch.tensor([1.0, 2.0, 3.0])
y = (w * x).sum()  # scalar output: y = w1*x1 + w2*x2 + w3*x3
y.backward()       # compute dy/dw via backpropagation
print(w.grad)      # dy/dw = x, i.e. tensor([1., 2., 3.])
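Gradients accumulate across backward() calls, so they are typically reset between steps, and inference is often wrapped in torch.no_grad() to skip tracking entirely. A small sketch building on the tensors above:
w.grad.zero_()         # reset accumulated gradients before the next backward pass
with torch.no_grad():  # operations here are not tracked by autograd
    y_eval = (w * x).sum()
print(y_eval.requires_grad)  # False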
Defining a Neural Network
Simple feed-forward network
import torch.nn as nn
import torch.optim as optim

class Net(nn.Module):
    def __init__(self, in_features):
        super().__init__()
        self.fc1 = nn.Linear(in_features, 16)  # hidden layer with 16 units
        self.fc2 = nn.Linear(16, 1)            # single output for binary classification

    def forward(self, x):
        x = torch.relu(self.fc1(x))
        x = torch.sigmoid(self.fc2(x))  # squash output to (0, 1) to match BCELoss
        return x

# num_features is the dimensionality of your input data, e.g. X_train.shape[1]
model = Net(in_features=num_features).to(device)
optimizer = optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.BCELoss()  # binary cross-entropy on probabilities
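As a quick sanity check, you can pass a dummy batch through the model to confirm the output shape. A minimal sketch, assuming num_features is defined as above:
dummy = torch.randn(4, num_features, device=device)  # fake batch of 4 samples
print(model(dummy).shape)  # expected: torch.Size([4, 1])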
Basic Training Loop
# X_train / y_train are assumed to be NumPy arrays of features and 0/1 labels;
# converting them to tensors once, outside the loop, avoids repeated copies
X_batch = torch.tensor(X_train, dtype=torch.float32, device=device)
y_batch = torch.tensor(y_train, dtype=torch.float32, device=device).unsqueeze(1)

for epoch in range(10):
    model.train()          # enable training-mode behaviour (dropout, batch norm)
    optimizer.zero_grad()  # clear gradients accumulated from the previous step
    y_pred = model(X_batch)
    loss = criterion(y_pred, y_batch)
    loss.backward()        # backpropagate the loss
    optimizer.step()       # update the parameters
    print(f"Epoch {epoch+1}, loss = {loss.item():.4f}")
Next Steps with PyTorch
- Use torch.utils.data.Dataset and DataLoader to create efficient training pipelines (see the sketch after this list).
- Learn about common architectures (CNNs, RNNs, transformers) and how to implement them.
- Experiment with pre-trained models from torchvision.models for transfer learning.
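As a starting point for the first item, here is a minimal sketch that wraps in-memory arrays in a TensorDataset and iterates mini-batches with a DataLoader (X_train and y_train are assumed to be NumPy arrays, as above):
from torch.utils.data import TensorDataset, DataLoader

dataset = TensorDataset(
    torch.tensor(X_train, dtype=torch.float32),
    torch.tensor(y_train, dtype=torch.float32).unsqueeze(1),
)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

for X_batch, y_batch in loader:
    X_batch, y_batch = X_batch.to(device), y_batch.to(device)
    # ...run one optimisation step per mini-batch, as in the loop above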