Generative AI · Deep Learning · 15 questions · 25 min

Generative AI MCQ · test your deep learning GenAI knowledge

From VAEs and GANs to diffusion models and LLMs – 15 questions covering the core of modern generative deep learning.

Difficulty: Easy 5 · Medium 6 · Hard 4
Tags: VAE · GAN · Diffusion · Transformer

Generative AI: creating new data with deep learning

Generative models learn the underlying distribution of training data to generate novel samples. This MCQ covers the four main families: VAEs (variational autoencoders), GANs (generative adversarial networks), diffusion models, and autoregressive transformers (GPT, LLMs). It also touches on evaluation metrics and challenges.

Why generative AI matters

From creating realistic images (Stable Diffusion) to conversational agents (ChatGPT), generative models are transforming content creation, drug discovery, and more.

GenAI glossary – key concepts

VAE (Variational Autoencoder)

Learns a latent distribution via an encoder–decoder pair; trained with a reconstruction loss plus a KL-divergence term. Generates by sampling a latent vector and decoding it.
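The sampling step relies on the reparameterization trick: instead of sampling z directly (which blocks gradients), the randomness is isolated in an auxiliary noise variable. A minimal NumPy sketch with made-up toy values for `mu` and `log_var` (in a real VAE these come from the encoder):

```python
import numpy as np

rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])      # toy encoder mean output
log_var = np.array([0.0, 0.2])  # toy encoder log-variance output

eps = rng.standard_normal(mu.shape)    # all randomness lives here
z = mu + np.exp(0.5 * log_var) * eps   # z is differentiable w.r.t. mu and log_var
```

Because `z` is a deterministic function of `mu`, `log_var`, and the external noise `eps`, backpropagation through the encoder works as usual.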

GAN (Generative Adversarial Network)

A two-player game: the generator creates fakes while the discriminator learns to tell real from fake. Prone to mode collapse.
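The adversarial objective can be sketched with binary cross-entropy on toy discriminator scores (the probabilities below are made up for illustration; a real setup would compute them from networks):

```python
import numpy as np

def bce(p, target):
    # binary cross-entropy on probabilities
    return -(target * np.log(p) + (1 - target) * np.log(1 - p)).mean()

d_real = np.array([0.9, 0.8])  # D's probability that real samples are real
d_fake = np.array([0.3, 0.1])  # D's probability that fakes are real

d_loss = bce(d_real, 1.0) + bce(d_fake, 0.0)  # discriminator: real -> 1, fake -> 0
g_loss = bce(d_fake, 1.0)                     # generator: fool D (fake -> 1)
```

Here the discriminator is currently "winning" (low `d_fake`), so the generator loss is large, which is exactly the signal that pushes the generator to improve.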

Diffusion Models

Gradually add noise, then learn to reverse the process (denoise). DDPM, Stable Diffusion. State‑of‑the‑art in image generation.
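The forward (noising) process has a convenient closed form: x_t can be sampled directly from x_0 without iterating through every step. A NumPy sketch using a DDPM-style linear beta schedule (schedule values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)     # linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)    # cumulative product of (1 - beta_t)

x0 = rng.standard_normal(8)            # a toy "clean" sample
t = 500
eps = rng.standard_normal(8)
# x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps
x_t = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1 - alpha_bar[t]) * eps
```

Training then amounts to teaching a network to predict `eps` from `x_t` and `t`; generation runs the learned denoising in reverse.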

Autoregressive Transformers

Generate tokens one by one, conditioning on previous outputs. GPT, Llama. Basis of modern LLMs.
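The token-by-token loop can be illustrated with a toy bigram "model" (a hypothetical hand-written transition table; a real transformer conditions on the full prefix, not just the last token):

```python
import numpy as np

vocab = ["<s>", "the", "cat", "sat", "."]
# hypothetical transition table: row = current token, col = next-token probability
P = np.array([
    [0.0, 1.0, 0.0, 0.0, 0.0],  # <s> -> the
    [0.0, 0.0, 1.0, 0.0, 0.0],  # the -> cat
    [0.0, 0.0, 0.0, 1.0, 0.0],  # cat -> sat
    [0.0, 0.0, 0.0, 0.0, 1.0],  # sat -> .
    [0.0, 0.0, 0.0, 0.0, 1.0],  # .   -> .
])

tokens = [0]                    # start from the <s> token
for _ in range(4):              # greedy decoding: pick the argmax each step
    tokens.append(int(P[tokens[-1]].argmax()))

text = " ".join(vocab[i] for i in tokens)  # "<s> the cat sat ."
```

Greedy argmax is the simplest decoding strategy; LLMs usually sample from the distribution instead (often with temperature or top-k/top-p filtering).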

Latent Space

Low‑dimensional representation learned by models such as VAEs. Manipulating latent vectors yields meaningful interpolations between samples.

Mode Collapse

GAN failure mode in which the generator produces limited variety, covering only a few modes of the data distribution.

LLM (Large Language Model)

Massive transformer trained on text; generates coherent language via next‑token prediction.

Stable Diffusion

Latent diffusion model with text conditioning; generates high‑quality images from prompts.

# Simplified VAE loss (negative ELBO), NumPy-style sketch
import numpy as np

reconstruction_loss = np.mean((x - x_hat) ** 2)                 # MSE reconstruction
kl_loss = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))  # KL to N(0, I)
loss = reconstruction_loss + kl_loss

Interview tip: Be ready to compare GANs and diffusion models (training stability, sample quality), explain the VAE reparameterization trick, and discuss how transformers generate text (autoregressive). This MCQ covers these core distinctions.

Common GenAI interview questions

  • What is the difference between VAE and GAN?
  • How does a diffusion model generate images?
  • Explain the reparameterization trick in VAEs.
  • What is mode collapse in GANs and how can it be mitigated?
  • Why are transformers well‑suited for language generation?
  • Describe the role of temperature in sampling from an LLM.
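The last question (temperature) has a short, concrete answer: logits are divided by the temperature before the softmax, so T < 1 sharpens the distribution and T > 1 flattens it. A small sketch with toy logits:

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    z = logits / temperature
    z = z - z.max()           # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

logits = np.array([2.0, 1.0, 0.1])
sharp = softmax_with_temperature(logits, 0.5)  # low T: near-greedy
flat = softmax_with_temperature(logits, 2.0)   # high T: more diverse sampling
```

With low temperature the top token dominates (more deterministic output); with high temperature probability mass spreads across tokens (more diverse, less predictable output).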