Generative Adversarial Networks deep dive · 15 questions · 25 min

GANs MCQ · test your generative AI knowledge

From generator/discriminator to loss functions, mode collapse, and advanced variants – 15 questions covering GAN fundamentals.

Difficulty: Easy 5 · Medium 6 · Hard 4
Topics: Generator · Discriminator · Adversarial · Nash equilibrium

Generative Adversarial Networks (GANs): the art of generation

GANs consist of two networks: a generator that creates fake data and a discriminator that distinguishes real from fake. They are trained simultaneously in a zero‑sum game, aiming for a Nash equilibrium. This MCQ test covers architecture, loss functions, training challenges (mode collapse), and popular variants like DCGAN, conditional GANs, and WGAN.
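The zero-sum game described above is usually written as the minimax objective from Goodfellow et al. (2014), where the discriminator D maximizes the value function and the generator G minimizes it:

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```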

Why GANs?

GANs can generate realistic synthetic data – images, text, audio – and have revolutionized unsupervised and semi-supervised learning.

GAN glossary – key concepts

Generator

Takes random noise (z) and generates fake samples. Aims to fool the discriminator.

Discriminator

Binary classifier that tries to distinguish real samples from fake ones. Outputs the probability that a sample is real.

Latent space (z)

Input noise vector to the generator, usually sampled from Gaussian/uniform distribution.

Adversarial loss

Typically binary cross‑entropy: the discriminator minimizes its classification error, while the generator tries to maximize it (in practice, via the non‑saturating variant that maximizes log D(G(z))).
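The two losses can be sketched in plain Python, assuming the discriminator outputs probabilities in (0, 1); `d_loss` and `g_loss` are illustrative names, and `g_loss` uses the non‑saturating form:

```python
import math

def d_loss(d_real: list[float], d_fake: list[float]) -> float:
    """Discriminator BCE loss: push D(real) -> 1 and D(fake) -> 0."""
    n = len(d_real) + len(d_fake)
    return -(sum(math.log(p) for p in d_real)
             + sum(math.log(1 - p) for p in d_fake)) / n

def g_loss(d_fake: list[float]) -> float:
    """Non-saturating generator loss: push D(fake) -> 1."""
    return -sum(math.log(p) for p in d_fake) / len(d_fake)

# A near-perfect discriminator on these batches has a loss close to zero:
print(round(d_loss([0.99, 0.98], [0.01, 0.02]), 3))  # 0.015
```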

Mode collapse

Generator produces limited variety (collapses to few modes). A common GAN training failure.
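One crude way to spot collapse is to measure sample diversity; real detection uses metrics like Inception Score or FID, but the idea can be illustrated with a hypothetical `distinct_modes` helper that counts distinct rounded values:

```python
def distinct_modes(samples: list[float], precision: int = 1) -> int:
    """Crude diversity proxy: number of distinct samples after rounding.
    A collapsed generator emits many near-identical outputs."""
    return len({round(s, precision) for s in samples})

healthy   = [0.1, 0.9, 0.5, 0.3, 0.7]       # spread-out samples
collapsed = [0.50, 0.51, 0.49, 0.50, 0.50]  # everything near one mode

print(distinct_modes(healthy), distinct_modes(collapsed))  # 5 1
```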

DCGAN

Deep Convolutional GAN – uses convolutional layers, batch norm, and specific architectural guidelines.

Conditional GAN (cGAN)

Both generator and discriminator receive additional conditioning (e.g., class labels).
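Conditioning is commonly implemented by concatenating a one-hot (or embedded) label onto the generator's noise input; a minimal sketch, with `conditioned_input` as an illustrative name:

```python
def one_hot(label: int, num_classes: int) -> list[float]:
    """Encode a class label as a one-hot vector."""
    return [1.0 if i == label else 0.0 for i in range(num_classes)]

def conditioned_input(z: list[float], label: int, num_classes: int) -> list[float]:
    """cGAN generator input: noise vector concatenated with a one-hot label."""
    return z + one_hot(label, num_classes)

x = conditioned_input([0.3, -1.2], label=2, num_classes=4)
print(x)  # [0.3, -1.2, 0.0, 0.0, 1.0, 0.0]
```

The discriminator receives the same conditioning alongside its real/fake input, so it learns to judge "real *and* consistent with the label".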

WGAN

Wasserstein GAN – uses the Earth Mover's (Wasserstein‑1) distance, with weight clipping (original WGAN) or a gradient penalty (WGAN‑GP) to enforce the critic's Lipschitz constraint for stable training.
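Under the Wasserstein formulation the critic outputs unbounded scores rather than probabilities, and the losses reduce to differences of means; a stdlib sketch with illustrative function names:

```python
def critic_loss(scores_real: list[float], scores_fake: list[float]) -> float:
    """WGAN critic loss: E[critic(fake)] - E[critic(real)] (minimized)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(scores_fake) - mean(scores_real)

def generator_loss_w(scores_fake: list[float]) -> float:
    """WGAN generator loss: -E[critic(fake)] (minimized)."""
    return -sum(scores_fake) / len(scores_fake)

print(critic_loss([2.0, 3.0], [0.0, 1.0]))  # -2.0
```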

# Simplified GAN training loop (PyTorch style)
# Assumes generator, discriminator, sample_real, and the
# optimizers opt_G / opt_D are already defined.
for epoch in range(num_epochs):
    # --- train discriminator on real + fake ---
    real_data = sample_real()
    noise = torch.randn(batch_size, latent_dim)
    fake_data = generator(noise).detach()   # don't backprop into G here
    loss_D = -torch.mean(torch.log(discriminator(real_data))
                         + torch.log(1 - discriminator(fake_data)))
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # --- train generator to fool the discriminator ---
    fake_data = generator(torch.randn(batch_size, latent_dim))
    loss_G = -torch.mean(torch.log(discriminator(fake_data)))  # non-saturating loss
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()

Interview tip: Be ready to explain the minimax game, why training GANs is unstable, and techniques to mitigate mode collapse (e.g., minibatch discrimination, unrolled GANs). This MCQ covers these foundational topics.

Common GAN interview questions

  • Explain the generator and discriminator roles in GANs.
  • What loss function is typically used in the original GAN?
  • What is mode collapse and how can it be detected?
  • How does Wasserstein GAN (WGAN) improve training stability?
  • What is a conditional GAN?
  • Why is Nash equilibrium important in GAN training?