AlexNet MCQ
The 2012 ImageNet breakthrough: depth, ReLU, dropout, and training a large CNN on GPUs.
Topics: ImageNet (ILSVRC 2012) · ReLU (non-linearity) · Dropout (regularization) · GPUs (scale)
AlexNet in context
AlexNet (Krizhevsky et al., 2012) won the ImageNet ILSVRC 2012 competition with a large GPU-trained CNN. It popularized ReLU activations, dropout regularization, overlapping max pooling, data augmentation, and multi-GPU model parallelism for vision. Deeper stacks of conv layers followed (VGG, ResNet, …).
Why it mattered
It showed that a deep CNN, scaled up with data and compute, could decisively beat hand-crafted feature pipelines on a hard benchmark: AlexNet's top-5 error of 15.3% on ILSVRC 2012 was roughly ten points ahead of the runner-up's 26.2%.
Key ideas
Architecture
Five convolutional layers (interleaved with local response normalization and max-pooling stages) followed by three fully connected layers.
ReLU
Trains faster than saturating sigmoid/tanh activations because the gradient does not vanish for positive inputs; this helps deep nets converge.
Dropout
Randomly zeroes activations in the fully connected layers during training to reduce co-adaptation of units and curb overfitting.
Scale
Trained across two GTX 580 GPUs with the conv layers split between them, which allowed a wider network than a single GPU's memory could hold.
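The ReLU non-linearity above can be sketched in a few lines of plain Python (a minimal scalar version for illustration, not tied to any framework):

```python
def relu(x):
    # ReLU: max(0, x). Non-saturating, so the gradient is 1 for
    # positive inputs instead of shrinking toward 0 as with sigmoid/tanh.
    return max(0.0, x)

print([relu(v) for v in [-2.0, -0.5, 0.0, 1.5]])  # [0.0, 0.0, 0.0, 1.5]
```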
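Dropout's mechanics can likewise be sketched in plain Python. The `dropout` helper below is illustrative only; it uses the modern "inverted" scaling (dividing by the keep probability at train time), whereas the 2012 paper instead halved the weights at test time:

```python
import random

def dropout(activations, p=0.5, training=True, rng=random):
    # AlexNet-style dropout on an FC layer's output: each unit is zeroed
    # with probability p during training. Inverted scaling by 1/(1-p)
    # keeps the expected activation unchanged, so inference needs no rescale.
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```

At test time (`training=False`) the activations pass through untouched, which is what makes the inverted variant convenient.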
Rough data flow
227×227 input → conv/pool stages → 4096-4096-1000 FC → softmax (the paper states a 224×224 input, but 227×227 is the size that makes the layer arithmetic work out)
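The spatial sizes in this flow follow from the standard conv output formula; `conv_out` below is a helper name used here for illustration. The first-layer numbers (11×11 conv, stride 4, then overlapping 3×3 max pool with stride 2) are from the paper:

```python
def conv_out(size, kernel, stride, pad=0):
    # Spatial output size of a conv/pool layer:
    # floor((size + 2*pad - kernel) / stride) + 1
    return (size + 2 * pad - kernel) // stride + 1

after_conv1 = conv_out(227, 11, 4)          # 227x227 -> 55x55
after_pool1 = conv_out(after_conv1, 3, 2)   # overlapping pool -> 27x27
print(after_conv1, after_pool1)             # 55 27
```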