Color Spaces: 20 Essential Q&A
How we represent color numerically—RGB vs perceptual models, conversions, and practical preprocessing choices.
~11 min read · 20 questions · Beginner
Tags: RGB/HSV, LAB, gamma, OpenCV cvtColor
1. What is a color space? (⚡ easy)
Answer: A coordinate system for representing colors as numeric tuples (e.g., three numbers for a trichromatic display). Different spaces emphasize different properties: device RGB for screens, HSV for intuitive hue/saturation edits, LAB for perceptual distance.
2. Describe the RGB additive model. (⚡ easy)
Answer: Red, green, and blue primary lights are added together to reproduce colors on displays, with each channel spanning 0–255 in 8-bit. Values are device-dependent unless tied to a standard like sRGB.
import cv2
bgr = cv2.imread("image.png")               # any image path; OpenCV decodes to BGR order
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)  # convert BGR to HSV
3. Why mention BGR separately from RGB? (⚡ easy)
Answer: Libraries like OpenCV store channels as B, G, R. Algorithms behave identically as long as you stay consistent, but visualization tools and pre-trained weights that expect RGB need an explicit swap.
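A minimal sketch of the swap, assuming an image loaded with OpenCV (the file path is hypothetical):
import cv2
bgr = cv2.imread("photo.jpg")               # hypothetical path; OpenCV returns BGR
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # explicit copy in RGB order
rgb_view = bgr[..., ::-1]                   # or a zero-copy reversed-channel view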
4. What do H, S, V represent? (📊 medium)
Answer: Hue (color tint on a wheel), Saturation (colorfulness vs. gray), Value/Brightness (intensity). The cylindrical geometry separates chromatic from achromatic changes more intuitively than RGB for some tasks.
5. Interview: when would you preprocess in HSV? (📊 medium)
Answer: Segmenting by hue ranges (e.g., colored objects), thresholding saturation/value to handle shadows more gracefully than raw RGB splits, and augmentations that tweak hue/saturation while preserving object identity.
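A hue-range segmentation sketch, assuming OpenCV's 8-bit HSV where hue lives in [0, 180); the blue-ish bounds are illustrative:
import cv2
import numpy as np
bgr = cv2.imread("scene.png")          # hypothetical path
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
lower = np.array([100, 80, 50])        # illustrative blue range; loose S/V gates
upper = np.array([130, 255, 255])      # tolerate lighting changes
mask = cv2.inRange(hsv, lower, upper)  # 255 where the pixel falls in range
segmented = cv2.bitwise_and(bgr, bgr, mask=mask)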
6. Why is LAB used in vision and graphics? (🔥 hard)
Answer: L* is lightness; a* and b* are color-opponent dimensions. Euclidean distance in LAB approximates perceptual difference better than RGB distance does. Useful for color transfer, quality metrics, and some clustering tasks.
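A sketch of perceptual distance in LAB: the classic ΔE*76 is plain Euclidean distance, and float32 input in [0, 1] makes OpenCV return true L*a*b* scales:
import cv2
import numpy as np
def delta_e76(bgr1, bgr2):
    # Per-pixel Euclidean distance in LAB (Delta E 1976).
    lab1 = cv2.cvtColor(bgr1.astype(np.float32) / 255.0, cv2.COLOR_BGR2Lab)
    lab2 = cv2.cvtColor(bgr2.astype(np.float32) / 255.0, cv2.COLOR_BGR2Lab)
    return np.linalg.norm(lab1 - lab2, axis=-1)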
7. What is YCbCr? (📊 medium)
Answer: It separates luma (Y) from chrominance (Cb, Cr). Used in JPEG and video codecs because human vision is more sensitive to brightness than to color, which enables chroma subsampling.
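In OpenCV the conversion flag orders the channels Y, Cr, Cb; a quick split (hypothetical path):
import cv2
bgr = cv2.imread("frame.png")                   # hypothetical path
ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)  # OpenCV channel order: Y, Cr, Cb
y, cr, cb = cv2.split(ycrcb)                    # luma plane plus two chroma planes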
8. Where does CMYK appear? (⚡ easy)
Answer: Subtractive printing (cyan, magenta, yellow, key/black). Less common in core CV training; relevant for print QA, packaging inspection, and prepress, not for typical RGB camera pipelines.
9. Is grayscale a “color space”? (⚡ easy)
Answer: It is a single-channel intensity representation, often derived from RGB via a weighted sum. It discards chrominance, which is good for edge detection and speed when color is irrelevant.
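The usual weighted sum is BT.601 luma, which is what cv2.COLOR_BGR2GRAY applies (the path is hypothetical):
import cv2
import numpy as np
bgr = cv2.imread("image.png")                 # hypothetical path
gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)  # 0.299 R + 0.587 G + 0.114 B
# The same sum by hand; OpenCV channel order is B, G, R:
gray_manual = (bgr @ np.array([0.114, 0.587, 0.299])).astype(np.uint8)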
10. What does linear RGB mean vs. sRGB? (🔥 hard)
Answer: Sensors measure roughly linear light; stored sRGB values are gamma-encoded (the sRGB transfer function) so quantization matches perception. Photometric algorithms (deblurring, relighting) need linearization via the inverse transfer to be physically correct.
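A sketch of the inverse sRGB transfer (the exact piecewise form), assuming input already scaled to [0, 1]:
import numpy as np
def srgb_to_linear(srgb):
    # Linear segment near black, power law elsewhere.
    srgb = np.asarray(srgb, dtype=np.float64)
    return np.where(srgb <= 0.04045,
                    srgb / 12.92,
                    ((srgb + 0.055) / 1.055) ** 2.4)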
11. What is gamma correction? (📊 medium)
Answer: A nonlinear mapping between stored values and displayed intensity, matching human brightness perception and legacy CRT behavior. Getting gamma wrong can skew color statistics and degrade blur or threshold results.
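A simple power-law adjustment via a lookup table; adjust_gamma is a hypothetical helper name:
import cv2
import numpy as np
def adjust_gamma(img_u8, gamma=2.2):
    # Build a 256-entry LUT once, then remap every pixel in one call.
    lut = (((np.arange(256) / 255.0) ** (1.0 / gamma)) * 255).astype(np.uint8)
    return cv2.LUT(img_u8, lut)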
12. What is a color gamut? (📊 medium)
Answer: The range of colors a device or space can represent. Wide-gamut displays (P3) differ from sRGB; out-of-gamut colors clip or get remapped during conversion, which matters for medical imaging and professional color work.
13. What is a white point / illuminant? (🔥 hard)
Answer: The reference neutral light (e.g., D65) used to interpret RGB values. Different cameras and auto white balance settings change apparent colors; robust pipelines account for illumination via white balance or learning.
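A gray-world white balance sketch; it assumes the scene averages to neutral gray, which is the method's known weakness:
import numpy as np
def gray_world_wb(bgr_u8):
    # Scale each channel so its mean matches the global mean.
    img = bgr_u8.astype(np.float32)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    img *= channel_means.mean() / channel_means
    return np.clip(img, 0, 255).astype(np.uint8)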
14. What is 4:2:0 chroma subsampling? (📊 medium)
Answer: Full luma resolution but quarter-resolution chroma planes, exploiting our lower acuity for color. It can cause color fringing on sharp edges after decoding; relevant for video compression pipelines.
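A rough way to see the effect (not a codec-accurate implementation): keep luma, then downsample and restore the chroma planes by 2x:
import cv2
def simulate_420(bgr):
    # Full-resolution Y; halve then restore Cr/Cb to mimic 4:2:0 loss.
    ycrcb = cv2.cvtColor(bgr, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    h, w = y.shape
    down_up = lambda c: cv2.resize(
        cv2.resize(c, (w // 2, h // 2), interpolation=cv2.INTER_AREA),
        (w, h), interpolation=cv2.INTER_LINEAR)
    return cv2.cvtColor(cv2.merge([y, down_up(cr), down_up(cb)]), cv2.COLOR_YCrCb2BGR)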
15. Should you normalize each RGB channel separately? (⚡ easy)
Answer: Sometimes, for model input (zero mean / unit variance per channel). For photometric consistency, consider normalization that preserves color ratios, or work in a space suited to the task (e.g., the LAB L channel only).
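A typical per-channel normalization sketch; the mean/std here are the widely used ImageNet statistics, an assumption you should replace with your dataset's own:
import numpy as np
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)  # ImageNet stats, RGB order
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)
def normalize_rgb(rgb_u8):
    x = rgb_u8.astype(np.float32) / 255.0  # to [0, 1]
    return (x - MEAN) / STD                # zero mean / unit variance per channel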
16. How do augmentations interact with color space? (📊 medium)
Answer: Random brightness/contrast is often applied in RGB or HSV; hue jitter in HSV. Extreme hue shifts can push colors out of gamut or break class semantics, so keep augmentations label-safe.
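A label-safe hue jitter sketch, assuming OpenCV's 8-bit HSV where hue wraps at 180; jitter_hue is a hypothetical helper:
import cv2
import numpy as np
def jitter_hue(bgr_u8, max_shift=10):
    # Shift hue around the wheel with wraparound; leave S and V untouched.
    hsv = cv2.cvtColor(bgr_u8, cv2.COLOR_BGR2HSV)
    shift = np.random.randint(-max_shift, max_shift + 1)
    h = hsv[..., 0].astype(np.int16)
    hsv[..., 0] = ((h + shift) % 180).astype(np.uint8)
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)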
17. Why is thresholding harder in RGB than in grayscale? (📊 medium)
Answer: RGB thresholding needs rules in 3D (per-channel ranges or distance to a reference color). HSV lets you gate on a hue range while loosening S/V to tolerate lighting, though it still struggles under colored illumination.
18. Compare histogram equalization per RGB channel vs. on luminance only. (⚡ easy)
Answer: Applying it independently to R, G, B shifts color balance (color cast). Often better: convert to LAB and equalize L only, or use CLAHE on the luminance channel to preserve chroma.
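A sketch of CLAHE on the LAB luminance channel only; clip limit and tile size are illustrative defaults:
import cv2
def clahe_on_luminance(bgr):
    # Equalize contrast on L only so chroma (a, b) is preserved.
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]), cv2.COLOR_Lab2BGR)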
19. Mention one approach to illumination invariance. (🔥 hard)
Answer: Retinex-style methods, white balance, homomorphic filtering (separating illumination from reflectance in the log domain), or learning-based approaches. Interviews reward naming tradeoffs (artifacts vs. compute).
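A single-scale Retinex-style sketch in the log domain, one instance of the illumination/reflectance split named above (sigma is illustrative):
import cv2
import numpy as np
def single_scale_retinex(gray_u8, sigma=30):
    # Subtract a blurred log image to suppress slowly varying illumination.
    img = gray_u8.astype(np.float32) + 1.0  # avoid log(0)
    log_r = np.log(img) - np.log(cv2.GaussianBlur(img, (0, 0), sigma))
    return cv2.normalize(log_r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)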
20. Typical order: decode → color convert → resize? (📊 medium)
Answer: Often: load image → ensure correct channel order → optional white balance/gamma fix → resize/crop with good interpolation → normalize to a tensor. Order matters: resize after linearization for photometric tasks, though many DL pipelines keep it simple in sRGB uint8.
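A minimal sketch of that order for a standard DL pipeline in sRGB uint8 (path and size are hypothetical):
import cv2
import numpy as np
def load_for_model(path, size=224):
    bgr = cv2.imread(path)                      # decode (BGR)
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)  # fix channel order
    rgb = cv2.resize(rgb, (size, size), interpolation=cv2.INTER_AREA)
    x = rgb.astype(np.float32) / 255.0          # normalize to [0, 1]
    return np.transpose(x, (2, 0, 1))           # HWC to CHW tensor layout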
Color Spaces Cheat Sheet
Display / capture
- RGB / BGR
- sRGB gamma
- Gamut limits
Analysis
- HSV for ranges
- LAB for distance
- YCbCr in video
Pitfalls
- BGR vs RGB
- Histogram per channel
- AWB / lighting
💡 Pro tip: Name your target space and whether distances should be perceptual or device-raw.