Autoencoders deep dive · 15 questions · 25 min

Autoencoders MCQ · test your unsupervised learning knowledge

From undercomplete to variational autoencoders – 15 questions covering latent space, reconstruction loss, denoising, and generative modeling.

Easy: 5 · Medium: 6 · Hard: 4
Topics: Encoder · Decoder · Latent space · Denoising

Autoencoders: learning efficient representations

Autoencoders are neural networks trained to reconstruct their input, forcing the network to learn a compressed representation (latent space). This MCQ covers undercomplete, denoising, sparse, and variational autoencoders (VAEs), as well as their applications in dimensionality reduction, anomaly detection, and generative modeling.

Why autoencoders?

They enable unsupervised learning of useful features, can denoise data, and form the basis of generative models like VAEs. The bottleneck layer forces the network to capture the most salient information.

Autoencoder glossary – key concepts

Undercomplete Autoencoder

Latent dimension < input dimension. Learns the most important features by compression.
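To make the idea concrete, here is a minimal, illustrative sketch of a linear undercomplete autoencoder in NumPy: 8-dimensional toy data is compressed through a 2-dimensional bottleneck and reconstructed by plain gradient descent. This is a simplified stand-in for a real network (no nonlinearities, no deep-learning framework), just to show the encode-compress-decode loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 samples in 8 dimensions that actually lie on a 2-D subspace.
X = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 8))

# Linear undercomplete autoencoder: 8 -> 2 -> 8 (latent dim < input dim).
W_enc = rng.normal(scale=0.1, size=(8, 2))
W_dec = rng.normal(scale=0.1, size=(2, 8))
lr = 0.01
losses = []

for _ in range(500):
    Z = X @ W_enc                       # encode into the 2-D bottleneck
    X_hat = Z @ W_dec                   # decode back to 8 dimensions
    err = X_hat - X
    losses.append(np.mean(err ** 2))    # reconstruction MSE
    # Gradient steps (constant factors folded into the learning rate)
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc
```

Because the data truly lives on a 2-D subspace, the 2-D bottleneck is enough to drive the reconstruction error down; with a smaller latent dimension than the data's intrinsic dimension, the loss would plateau instead.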

Denoising Autoencoder (DAE)

Trained to reconstruct clean input from corrupted version. Learns robust features.
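The key detail is the training pair: the network sees the corrupted input but is scored against the clean one. A small illustrative sketch (NumPy, zero-masking corruption is one common choice among several):

```python
import numpy as np

rng = np.random.default_rng(0)
x_clean = rng.uniform(size=(4, 8))

# Corrupt the input with zero-masking noise; the target stays the clean x.
mask = rng.random(x_clean.shape) > 0.3      # drop roughly 30% of entries
x_noisy = x_clean * mask

# DAE training pair: feed x_noisy to the network, compute loss against x_clean,
# e.g. loss = MSE(network(x_noisy), x_clean)
```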

Sparse Autoencoder

Adds sparsity penalty on latent activations (e.g., KL divergence). Can have larger latent dimension.
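One standard form of that penalty is the KL divergence between a target sparsity level rho and each unit's mean activation over the batch. A hedged sketch of this penalty (function name and defaults are illustrative, not from any particular library):

```python
import numpy as np

def kl_sparsity_penalty(activations, rho=0.05):
    """KL(rho || rho_hat) summed over latent units.

    rho_hat is each unit's mean activation over the batch; activations are
    assumed to lie in (0, 1), e.g. sigmoid outputs."""
    rho_hat = np.clip(activations.mean(axis=0), 1e-8, 1 - 1e-8)
    return np.sum(rho * np.log(rho / rho_hat)
                  + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
```

The penalty is zero when every unit's average activation equals rho and grows as units become more active, pushing most latent units toward being mostly "off".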

Variational Autoencoder (VAE)

Probabilistic variant: the encoder outputs the parameters of a latent distribution, and a KL divergence term regularizes the latent space toward a prior. Generative.

Latent space / Bottleneck

The central, low‑dimensional representation that the network learns. Its properties determine what the autoencoder captures.

Reconstruction loss

Measures difference between input and output. Common choices: MSE (continuous) or binary cross‑entropy.
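As a concrete sketch (NumPy, illustrative only), the two common choices look like this; BCE assumes inputs scaled to [0, 1], e.g. pixel intensities, with a sigmoid output layer:

```python
import numpy as np

def mse_loss(x, x_hat):
    # Mean squared error: suited to real-valued inputs.
    return np.mean((x - x_hat) ** 2)

def bce_loss(x, x_hat, eps=1e-7):
    # Binary cross-entropy: suited to inputs in [0, 1] with sigmoid outputs.
    x_hat = np.clip(x_hat, eps, 1 - eps)    # avoid log(0)
    return -np.mean(x * np.log(x_hat) + (1 - x) * np.log(1 - x_hat))
```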

KL divergence (VAE)

Regularizes the latent distribution to be close to a prior (usually Gaussian), enabling generation.

# VAE loss (simplified, NumPy-style)
reconstruction_loss = np.mean((x - x_hat) ** 2)   # MSE; or binary cross-entropy
kl_loss = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))
total_loss = reconstruction_loss + beta * kl_loss
Interview tip: Be ready to explain the difference between undercomplete and variational autoencoders, how denoising autoencoders work, and what the latent space represents. This MCQ covers these distinctions.

Common Autoencoder interview questions

  • What is the purpose of the bottleneck in an autoencoder?
  • How does a denoising autoencoder differ from a standard one?
  • Explain the reparameterization trick in VAEs.
  • What loss functions are typically used for autoencoders?
  • How can autoencoders be used for anomaly detection?
  • What is the role of KL divergence in a VAE?
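The reparameterization trick asked about above can be sketched in a few lines: instead of sampling z directly from N(mu, sigma²), the noise is drawn separately and combined deterministically, so gradients can flow through mu and log_var. An illustrative NumPy version (the function name and shapes are assumptions for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z ~ N(mu, exp(log_var)) as z = mu + sigma * eps.

    eps is sampled independently of the parameters, so z stays
    differentiable with respect to mu and log_var."""
    eps = rng.standard_normal(mu.shape)     # noise drawn outside the "graph"
    return mu + np.exp(0.5 * log_var) * eps

z = reparameterize(np.zeros((3, 4)), np.full((3, 4), -2.0))
```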