Autoencoders MCQ: 15 Questions
Time: ~25 mins | Level: Advanced

Compress to a code, then decode it back: useful for denoising, for pretraining, and as a generative building block.

Easy: 5 Q | Medium: 6 Q | Hard: 4 Q
[Diagram] Encoder: z = f(x) → Bottleneck (low dim) → Decoder: x̂ = g(z) → Loss: ||x − x̂||

What is an autoencoder?

An autoencoder maps input x to a latent code z via an encoder and reconstructs x̂ with a decoder. Training minimizes reconstruction error (often L2 or L1). A narrow bottleneck forces compression; denoising autoencoders learn robust features by reconstructing clean data from corrupted inputs.
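
A minimal sketch of this setup in PyTorch, assuming an MLP encoder/decoder; the 784-dim input, layer widths, and 32-dim bottleneck are illustrative assumptions, not prescribed here:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Minimal MLP autoencoder: x -> z (bottleneck) -> x_hat."""
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        # Encoder: compress the input to a low-dimensional code z = f(x)
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: reconstruct x_hat = g(z), matching the input shape
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),  # pixel values in [0, 1]
        )

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        return x_hat, z
```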

Information bottleneck

If reconstruction is accurate with only a few latent dimensions, the code must have captured the salient factors of variation in the data.

Key ideas

Encoder

CNN or MLP downsampling x → z.

Decoder

Upsampling or transposed conv z → x̂ matching input shape.

Reconstruction loss

Pixel-wise MSE or BCE for images.

Denoising AE

Train on corrupted input, target the clean output (see the sketch below).
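
A sketch of one denoising training step, reusing the hypothetical Autoencoder class above; the Gaussian noise level (0.3) and Adam learning rate are arbitrary assumptions:

```python
import torch
import torch.nn.functional as F

model = Autoencoder()  # hypothetical class from the sketch above
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

def denoising_step(x_clean):
    """Corrupt the input, reconstruct the CLEAN target, update weights."""
    noise = 0.3 * torch.randn_like(x_clean)       # assumed corruption level
    x_noisy = (x_clean + noise).clamp(0.0, 1.0)   # keep pixels in [0, 1]
    x_hat, _ = model(x_noisy)                     # encode/decode the noisy input
    loss = F.mse_loss(x_hat, x_clean)             # target is the clean input
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```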

Forward pass

x → encoder → z → decoder → x̂; backprop through reconstruction loss
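
The same pipeline in code, again assuming the hypothetical model above; the batch shape is a placeholder:

```python
import torch
import torch.nn.functional as F

model = Autoencoder()               # hypothetical class from the sketch above
x = torch.rand(64, 784)             # dummy batch of 784-dim inputs
x_hat, z = model(x)                 # x -> encoder -> z -> decoder -> x_hat
loss = F.mse_loss(x_hat, x)         # reconstruction loss ||x - x_hat||^2
loss.backward()                     # backprop through decoder and encoder
```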

Pro tip: VAEs add a probabilistic latent and a KL regularization term (sketch below); vanilla AEs have deterministic codes.
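
For contrast, a minimal sketch of the VAE latent step: a reparameterized sample plus the closed-form KL divergence to a standard normal prior. Here mu and logvar stand in for hypothetical encoder output heads:

```python
import torch

def vae_latent_and_kl(mu, logvar):
    """Sample z ~ N(mu, sigma^2) differentiably and compute KL(q(z|x) || N(0, I))."""
    std = torch.exp(0.5 * logvar)
    eps = torch.randn_like(std)
    z = mu + eps * std  # reparameterization trick keeps sampling differentiable
    # Closed-form KL for diagonal Gaussians, summed over latent dimensions
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1)
    return z, kl.mean()
```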