Deep Learning Practice Exercises
Move from theory to implementation. 100+ graded exercises, coding challenges, and mini‑projects — from perceptrons to Vision Transformers.
- 45+ TensorFlow/Keras exercises
- 40+ PyTorch exercises
- 20+ exercises on GANs & Transformers
- 10 Capstone Projects
1. Neural Networks & Backpropagation
Implement building blocks from scratch and with frameworks.
Perceptron from scratch
Easy · Task: Implement a binary perceptron using only NumPy. Train it on synthetic linearly separable data (e.g., 2D points). Achieve 100% accuracy.
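A minimal NumPy sketch of one way to set this up; the data generator and seed are illustrative assumptions, not the graded solution.

```python
# A sketch of the perceptron exercise, assuming synthetic 2D data that is
# separable by the line x + y = 0 (the data generator is an assumption).
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X.sum(axis=1) > 0).astype(int)   # linearly separable labels

w, b = np.zeros(2), 0.0
for epoch in range(20):
    errors = 0
    for xi, yi in zip(X, y):
        pred = int(w @ xi + b > 0)
        update = yi - pred            # 0 if correct, +1/-1 if wrong
        w += update * xi
        b += update
        errors += abs(update)
    if errors == 0:                   # converged: 100% training accuracy
        break

print(f"epochs used: {epoch + 1}, accuracy: {np.mean((X @ w + b > 0) == y):.0%}")
```

On linearly separable data the perceptron rule is guaranteed to converge, which is why the early exit on zero errors is safe.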
MLP for XOR
Medium · Task: Build a 2-layer MLP with sigmoid/tanh activations to solve the XOR problem. Write the backpropagation step by hand (no autograd). Plot the decision boundary.
Tags: NumPy, manual gradients
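A sketch of the manual-backprop approach, assuming a 4-unit tanh hidden layer and a sigmoid output trained with binary cross-entropy (all sizes and the learning rate are assumptions):

```python
# Hand-written backprop for XOR: 2 -> 4 (tanh) -> 1 (sigmoid), no autograd.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(42)
W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros((1, 4))   # hidden layer
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros((1, 1))   # output layer
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # ---- forward pass ----
    h = np.tanh(X @ W1 + b1)            # hidden activations
    out = sigmoid(h @ W2 + b2)          # output probability
    # ---- backward pass (binary cross-entropy) ----
    d_out = out - y                     # dL/d(pre-activation) at the output
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0, keepdims=True)
    d_h = (d_out @ W2.T) * (1 - h**2)   # tanh derivative
    dW1 = X.T @ d_h
    db1 = d_h.sum(axis=0, keepdims=True)
    # ---- gradient step ----
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(np.round(out, 3))                 # should approach [0, 1, 1, 0]
```

For the decision boundary, evaluate the forward pass on a meshgrid over [0, 1]² and contour-plot the output.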
Fashion MNIST classifier
Easy · Task: Use TensorFlow/Keras to build a 3-layer dense network for Fashion-MNIST. Achieve ≥88% test accuracy. Apply dropout and batch normalization.
Tags: TensorFlow, accuracy >88%
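A possible starting point in Keras; layer widths, dropout rates, and epoch count are assumptions to tune toward the ≥88% target.

```python
# Dense network + dropout + batch norm for Fashion-MNIST (hyperparameters
# are illustrative, not a tuned reference solution).
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0     # scale pixels to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.BatchNormalization(),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=15, validation_split=0.1)
print(model.evaluate(x_test, y_test))
```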
2. Convolutional Neural Networks (CNN)
Conv2D manual forward
Hard · Task: Implement 2D convolution (without libraries) with stride and padding. Apply random filters to a sample image and visualize the feature maps.
Tags: NumPy only, input shape (1, 28, 28)
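One way to write the forward pass, single-channel for clarity; extending to the (1, 28, 28) shape from the tags just means adding a channel loop.

```python
# Stride/padding-aware 2D convolution in pure NumPy (loop-based for
# clarity, not speed; single channel, single filter).
import numpy as np

def conv2d(image, kernel, stride=1, padding=0):
    if padding:
        image = np.pad(image, padding)
    kh, kw = kernel.shape
    ih, iw = image.shape
    oh = (ih - kh) // stride + 1
    ow = (iw - kw) // stride + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i*stride:i*stride+kh, j*stride:j*stride+kw]
            out[i, j] = np.sum(patch * kernel)  # elementwise product + sum
    return out

img = np.random.default_rng(0).random((28, 28))        # stand-in image
edge = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])  # vertical-edge filter
print(conv2d(img, edge, stride=2, padding=1).shape)    # -> (14, 14)
```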
CNN for CIFAR-10
Medium · Task: Design a CNN with Conv + MaxPool + FC layers. Use PyTorch or TensorFlow to reach at least 75% accuracy on CIFAR-10. Add data augmentation.
Tags: TensorFlow, PyTorch, target ≥75%
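A plausible architecture sketch in PyTorch (depth and channel counts are assumptions); pair it with torchvision's RandomCrop and RandomHorizontalFlip transforms to cover the augmentation requirement.

```python
# Conv + MaxPool + FC stack for 32x32 CIFAR-10 inputs.
import torch.nn as nn

class CifarCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 16x16 -> 8x8
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))
```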
Implement Residual Block (ResNet)
Medium · Task: Create a residual block with a skip connection. Use it to build a small ResNet-like model for CIFAR-10 and compare it with a plain CNN.
Tags: PyTorch, Keras
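A basic residual block sketch in PyTorch, following the common Conv-BN-ReLU pattern; the 1x1 projection on the skip path is the standard trick for handling shape mismatches.

```python
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        self.shortcut = nn.Identity()
        if stride != 1 or in_ch != out_ch:   # project skip path when shapes differ
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + self.shortcut(x))  # the skip connection
```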
3. Recurrent Neural Networks & Sequence Modeling
IMDB Sentiment – LSTM
Easy · Task: Use an embedding layer + LSTM to classify movie reviews (IMDB). Achieve >85% validation accuracy.
Tags: TF/Keras, pad_sequences
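A compact Keras sketch; the vocabulary size, sequence length, and layer widths are assumed values.

```python
import tensorflow as tf

vocab, maxlen = 20000, 200
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(num_words=vocab)
# Pad/truncate reviews to a fixed length (older TF versions expose this as
# tf.keras.preprocessing.sequence.pad_sequences).
x_train = tf.keras.utils.pad_sequences(x_train, maxlen=maxlen)
x_test = tf.keras.utils.pad_sequences(x_test, maxlen=maxlen)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(vocab, 128),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # binary sentiment
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=64,
          validation_data=(x_test, y_test))
```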
Time series forecasting (LSTM)
Medium · Task: Predict a univariate time series (air passengers, stock prices) with an LSTM. Implement a sliding window and inverse scaling.
Tags: PyTorch, window=12
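The windowing and scaling plumbing is most of the work here; a NumPy sketch with a synthetic series standing in for the real data (window=12 follows the tag):

```python
import numpy as np

def make_windows(series, window=12):
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])   # past `window` steps
        y.append(series[i + window])     # next step to predict
    return np.array(X), np.array(y)

series = np.sin(np.linspace(0, 20, 144))     # stand-in for the real series
mn, mx = series.min(), series.max()
scaled = (series - mn) / (mx - mn)           # min-max scaling to [0, 1]
X, y = make_windows(scaled, window=12)
print(X.shape, y.shape)                      # (132, 12) (132,)
# Feed X[..., None] to the LSTM; after predicting, invert the scaling:
# preds_original = preds_scaled * (mx - mn) + mn
```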
4. Transformers & Attention
Scaled Dot-Product Attention
Medium · Task: Implement `scaled_dot_product_attention` from scratch. Test it with random Q, K, V matrices. Optionally, verify that masking works.
Tags: NumPy, PyTorch (compare)
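A NumPy version to test against; the mask convention here (True = attend) is an assumption, so match whatever your PyTorch comparison uses.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)   # numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V, mask=None):
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-2, -1) / np.sqrt(d_k)  # (..., seq_q, seq_k)
    if mask is not None:
        scores = np.where(mask, scores, -1e9)       # masked positions -> ~0 weight
    weights = softmax(scores, axis=-1)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(2, 5, 8)) for _ in range(3))
causal = np.tril(np.ones((5, 5), dtype=bool))       # causal (lower-triangular) mask
out, w = scaled_dot_product_attention(Q, K, V, mask=causal)
print(out.shape)                                    # (2, 5, 8)
```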
Fine-tune BERT for sentiment
Advanced · Task: Use Hugging Face Transformers to fine-tune `bert-base-uncased` on the SST-2 dataset. Achieve ≥90% accuracy.
Tags: PyTorch + Hugging Face
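A high-level sketch of the Trainer-based route; TrainingArguments fields shift between transformers versions, so treat the arguments here as indicative rather than exact.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

ds = load_dataset("glue", "sst2")
ds = ds.map(lambda b: tok(b["sentence"], truncation=True,
                          padding="max_length", max_length=128), batched=True)

args = TrainingArguments(output_dir="bert-sst2", num_train_epochs=3,
                         per_device_train_batch_size=32, learning_rate=2e-5)
trainer = Trainer(model=model, args=args,
                  train_dataset=ds["train"], eval_dataset=ds["validation"])
trainer.train()
# Reports eval_loss by default; pass a compute_metrics fn to get accuracy.
print(trainer.evaluate())
```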
5. Generative Adversarial Networks (GANs)
Simple GAN on MNIST
Medium · Task: Implement a vanilla GAN with an MLP generator and discriminator. Train on MNIST and show generated digits after 50 epochs.
Tags: TF/Keras, PyTorch
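A PyTorch sketch of the two networks and one alternating update; layer sizes and learning rates are assumptions.

```python
# Vanilla GAN skeleton: MLP generator and discriminator, one train step.
import torch
import torch.nn as nn

z_dim = 64
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(),
                  nn.Linear(256, 784), nn.Tanh())          # fake 28x28 images in [-1, 1]
D = nn.Sequential(nn.Linear(784, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                        # real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):                   # real: (batch, 784), scaled to [-1, 1]
    b = real.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)
    # --- discriminator: push real -> 1, fake -> 0 ---
    fake = G(torch.randn(b, z_dim)).detach()   # detach: no G gradients here
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # --- generator: fool D into predicting 1 ---
    fake = G(torch.randn(b, z_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

print(train_step(torch.rand(32, 784) * 2 - 1))   # smoke test with random "images"
```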
DCGAN on CelebA
Hard · Task: Build a deep convolutional GAN: strided convolutions in the discriminator, transposed convolutions in the generator. Generate 64x64 face images.
Tags: PyTorch, CelebA
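The generator is the fiddly half; a sketch of the standard transposed-convolution ladder up to 64x64 (the channel base is an assumption). The discriminator mirrors it with strided Conv2d + LeakyReLU layers.

```python
import torch.nn as nn

def dcgan_generator(z_dim=100, ch=64):
    # Input: noise of shape (batch, z_dim, 1, 1); output: (batch, 3, 64, 64).
    return nn.Sequential(
        nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),  # 1x1 -> 4x4
        nn.BatchNorm2d(ch * 8), nn.ReLU(True),
        nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False), # 4x4 -> 8x8
        nn.BatchNorm2d(ch * 4), nn.ReLU(True),
        nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False), # 8x8 -> 16x16
        nn.BatchNorm2d(ch * 2), nn.ReLU(True),
        nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),     # 16x16 -> 32x32
        nn.BatchNorm2d(ch), nn.ReLU(True),
        nn.ConvTranspose2d(ch, 3, 4, 2, 1, bias=False),          # 32x32 -> 64x64
        nn.Tanh(),                                               # RGB in [-1, 1]
    )
```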
6. Framework-specific drills
TensorFlow custom training loop
Medium · Task: Rewrite a Keras model's training loop using `tf.GradientTape`. Apply a custom loss and metric.
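A minimal custom-step sketch; the model, loss, and metric objects are stand-ins for your own.

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
optimizer = tf.keras.optimizers.Adam()
metric = tf.keras.metrics.SparseCategoricalAccuracy()

@tf.function
def train_step(x, y):
    with tf.GradientTape() as tape:
        logits = model(x, training=True)   # forward pass under the tape
        loss = loss_fn(y, logits)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    metric.update_state(y, logits)         # track accuracy manually
    return loss
```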
PyTorch nn.Module – custom layer
Easy · Task: Implement a custom linear layer with learnable parameters; register the parameters manually.
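A sketch of the idea: wrapping tensors in `nn.Parameter` is what registers them with the module.

```python
import torch
import torch.nn as nn

class MyLinear(nn.Module):
    def __init__(self, in_features, out_features):
        super().__init__()
        # nn.Parameter registers the tensors so they appear in .parameters()
        # and receive gradients (init scale is an arbitrary choice here).
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        return x @ self.weight.T + self.bias

layer = MyLinear(4, 3)
print([p.shape for p in layer.parameters()])   # [(3, 4), (3,)]
print(layer(torch.randn(2, 4)).shape)          # (2, 3)
```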
Benchmark: TF vs PyTorch speed
Hard · Task: Train an identical CNN in both frameworks; measure epoch time and GPU utilization. Write a short report.
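A small timing harness you could share between the two runs; `train_one_epoch` is a hypothetical stand-in for each framework's loop.

```python
import time

def time_epochs(train_one_epoch, n_epochs=5):
    times = []
    for _ in range(n_epochs):
        start = time.perf_counter()
        train_one_epoch()                         # your TF or PyTorch epoch
        times.append(time.perf_counter() - start)
    return sum(times) / len(times)                # mean seconds per epoch

# Usage: avg = time_epochs(my_epoch_fn). For GPU utilization, sample
# `nvidia-smi` during training (or torch.cuda.max_memory_allocated for memory).
```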
7. Capstone Practice Projects
Object Detection – YOLOv1 from scratch
Implement a simplified YOLO detection head and loss function. Train on a Pascal VOC subset.
Tags: PyTorch, ~2 weeks of effort
Chatbot with Seq2Seq + Attention
Build a neural conversation model using an LSTM encoder-decoder with additive attention.
Tags: TF
Music generation with Transformer
Use a decoder‑only Transformer (GPT‑style) to generate piano rolls (MAESTRO dataset).
How to use these exercises?
- Each exercise includes a short starter snippet (click "Starter"). We recommend forking our Colab.
- Solutions are hidden at first; attempt each exercise for at least 30 minutes before viewing them.
- Join #dl-practice in the Nikhil LearnHub community for discussions.
- New exercises are added weekly; challenge yourself with different frameworks.