Overfitting & Underfitting — 15 Interview Questions
Train vs validation curves, bias–variance intuition, generalization gap, and the usual levers: data, model size, and regularization.
1. Define overfitting. (Easy)
Answer: The model learns training noise and idiosyncrasies so training error is low but validation/test error is much worse—poor generalization.
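A minimal numeric sketch of this, assuming scikit-learn and a toy sine dataset (the sizes and the degree-9 fit are illustrative): the over-flexible model nails the training points but does much worse on held-out data.

```python
# Toy overfitting demo: a degree-9 polynomial interpolates 10 noisy points.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, size=(10, 1))
y_train = np.sin(3 * x_train).ravel() + rng.normal(scale=0.3, size=10)
x_test = rng.uniform(-1, 1, size=(200, 1))
y_test = np.sin(3 * x_test).ravel() + rng.normal(scale=0.3, size=200)

model = make_pipeline(PolynomialFeatures(degree=9), LinearRegression())
model.fit(x_train, y_train)

print("train MSE:", mean_squared_error(y_train, model.predict(x_train)))  # close to zero
print("test MSE: ", mean_squared_error(y_test, model.predict(x_test)))    # typically much larger
```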
2. Define underfitting. (Easy)
Answer: The model is too simple or insufficiently trained: both training and validation errors remain high—it misses real signal.
3. Bias vs variance (classic interview version). (Medium)
Answer: High bias: systematically wrong (underfitting). High variance: sensitive to training sample (overfitting). Ideal model balances both for lowest expected test error.
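The same intuition as a complexity sweep, sketched with scikit-learn on toy sine data (the degrees are illustrative): the lowest-degree fit shows high bias, the highest-degree fit shows high variance, and something in between generalizes best.

```python
# Complexity sweep: watch train vs validation MSE as polynomial degree grows.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x_train = rng.uniform(-1, 1, size=(20, 1))
y_train = np.sin(3 * x_train).ravel() + rng.normal(scale=0.3, size=20)
x_val = rng.uniform(-1, 1, size=(200, 1))
y_val = np.sin(3 * x_val).ravel() + rng.normal(scale=0.3, size=200)

for degree in (1, 3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression()).fit(x_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(x_train))
    val_mse = mean_squared_error(y_val, model.predict(x_val))
    print(f"degree {degree:2d}: train {train_mse:.3f}  val {val_mse:.3f}")
# Degree 1 tends to underfit (both errors high); degree 15 tends to overfit (large gap).
```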
4. What is the generalization gap? (Easy)
Answer: Difference between train performance and held-out performance. Large gap often signals overfitting; small gap with poor absolute score suggests underfitting or hard task.
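In code the gap is just a subtraction; the numbers below are hypothetical, only the interpretation matters.

```python
# Hypothetical metrics: a large train/val gap points at overfitting,
# while two similar but poor scores point at underfitting or a hard task.
train_acc, val_acc = 0.99, 0.86
print("generalization gap:", round(train_acc - val_acc, 2))
```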
5. How do learning curves diagnose overfitting? (Medium)
Answer: Plot loss vs epoch: if train loss keeps falling while validation loss plateaus or rises, the model is overfitting. If both stay high and flat, it is underfitting or needs better features/architecture.
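A minimal plotting sketch, assuming the per-epoch losses were already collected during training (the values here are made up to show the typical shapes):

```python
# Learning-curve diagnosis: train loss keeps falling, validation loss turns upward.
import matplotlib.pyplot as plt

train_losses = [1.00, 0.62, 0.41, 0.30, 0.22, 0.17, 0.13, 0.10]  # illustrative values
val_losses   = [1.05, 0.70, 0.52, 0.44, 0.41, 0.42, 0.46, 0.51]  # elbow ~ overfitting onset

plt.plot(train_losses, label="train loss")
plt.plot(val_losses, label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```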
6. List fixes for overfitting. (Easy)
Answer: More/better data, augmentation, regularization (L2, dropout), smaller model, early stopping, label noise cleanup, cross-validation for honest estimates.
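Two of these levers in a PyTorch sketch (layer sizes and hyperparameters are illustrative): dropout inside the model and L2-style regularization via the optimizer's weight_decay.

```python
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zeroes activations during training only
    nn.Linear(256, 10),
)
# weight_decay applies an L2-style penalty to the weights at each update.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
```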
7. List fixes for underfitting. (Easy)
Answer: Bigger / deeper model, train longer, lower regularization, richer features, check learning rate and optimization bugs, ensure data quality.
8. Can a neural net memorize random labels? (Medium)
Answer: Yes—large enough nets can fit random noise on training set (classic experiment). Shows capacity without generalization; motivates regularization and correct targets.
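A small sketch of that experiment in PyTorch (sizes and step count are illustrative): the labels are drawn at random, yet an oversized MLP can still push training accuracy toward 100%.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(512, 100)
y = torch.randint(0, 10, (512,))  # random labels: no real signal to learn

model = nn.Sequential(nn.Linear(100, 1024), nn.ReLU(), nn.Linear(1024, 10))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):  # full-batch training on the fixed set
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

with torch.no_grad():
    train_acc = (model(X).argmax(dim=1) == y).float().mean().item()
print("train accuracy on random labels:", train_acc)  # climbs toward 1.0; generalization is impossible
```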
9. Why use a validation set? (Easy)
Answer: Tune hyperparameters and early stop without peeking at test set. Test set should estimate final generalization once to avoid optimistic bias.
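A one-time split sketch with scikit-learn (the fractions are illustrative): tune on the validation split, touch the test split once at the end.

```python
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 20)), rng.integers(0, 2, size=1000)

# 70% train, 15% validation, 15% test
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.3, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
print(len(X_train), len(X_val), len(X_test))  # 700 150 150
```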
10. Classical U-shaped risk vs modern "double descent": worth a mention? (Hard)
Answer: Classical: bias–variance U-shape in model complexity. Some regimes show double descent where risk drops again past interpolation threshold—interview bonus topic, not required for basics.
11. k-fold cross-validation: purpose. (Medium)
Answer: Rotate train/val splits to estimate performance with less variance when data is small—better hyperparameter comparison than one random split.
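A scikit-learn sketch on toy classification data (5 folds are illustrative): the spread across folds is part of the signal when comparing hyperparameters.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(f"mean accuracy {scores.mean():.3f} +/- {scores.std():.3f} over 5 folds")
```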
12. Label noise and overfitting. (Medium)
Answer: Wrong labels are noise; the model may memorize them. Clean data, robust loss, or regularization helps; audit labels in production ML.
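One concrete mitigation, sketched in PyTorch: label smoothing softens the one-hot targets so a few wrong labels hurt less (the 0.1 value is illustrative).

```python
import torch
import torch.nn as nn

loss_fn = nn.CrossEntropyLoss(label_smoothing=0.1)  # soften one-hot targets slightly
logits = torch.randn(8, 10)                          # dummy batch of 8 examples, 10 classes
targets = torch.randint(0, 10, (8,))
print(loss_fn(logits, targets))
```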
13. More data vs smaller model for overfitting? (Medium)
Answer: Often more diverse data is the best fix if feasible. Smaller model is a lever when data is fixed—trade off capacity vs available signal.
14. Early stopping: how does it reduce overfitting? (Easy)
Answer: Stop training when validation loss worsens—prevents continued fitting of training noise. Acts as implicit regularization on training time/weight trajectory.
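A self-contained patience-based sketch in PyTorch (the noise-only toy task and the patience value are illustrative): keep the best weights, stop when validation loss stops improving.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
X_train, y_train = torch.randn(64, 10), torch.randn(64, 1)  # noise-only toy task
X_val, y_val = torch.randn(64, 10), torch.randn(64, 1)

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

best_val, best_state, patience, bad_epochs = float("inf"), None, 10, 0
for epoch in range(500):
    opt.zero_grad()
    loss_fn(model(X_train), y_train).backward()
    opt.step()

    with torch.no_grad():
        val_loss = loss_fn(model(X_val), y_val).item()

    if val_loss < best_val - 1e-4:
        best_val, bad_epochs = val_loss, 0
        best_state = {k: v.clone() for k, v in model.state_dict().items()}  # snapshot best weights
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break  # validation stopped improving: further epochs would only fit noise

model.load_state_dict(best_state)
print(f"stopped at epoch {epoch}, best validation loss {best_val:.4f}")
```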
15. One diagram you'd draw in an interview. (Easy)
Answer: Two curves vs epochs: train loss down, val loss down then up—point at the elbow as overfitting onset. Pair with train vs val accuracy if classification.
Tie symptoms to train and val numbers—interviewers want concrete diagnostics.
Quick review checklist
- Overfit vs underfit; bias vs variance; generalization gap.
- Learning curves; fixes; validation vs test discipline.
- Memorization; early stopping; k-fold role.