Gamma Correction vs. Premultiplied Pixels

Pixels with 8 bits per channel are normally sRGB encoded because that allocates more bits to darker colors, where human vision is the most sensitive. (Actually, it’s really more of a historical accident, but sRGB nevertheless remains useful for this reason.) The relationship between sRGB and linear RGB is that you get an sRGB pixel by raising each component of a linear pixel to the power of $1/2.2$.
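
In code, that relationship looks roughly like this. This is a minimal sketch using the pure $1/2.2$ power approximation; the real sRGB transfer function also has a small linear segment near black, which is ignored here just as it is in the rest of this post:

```c
#include <math.h>

/* Idealized sRGB <-> linear conversion for one component in [0, 1],
 * using the 1/2.2 power approximation rather than the exact piecewise
 * sRGB transfer function. */
static float
linear_to_srgb (float linear)
{
    return powf (linear, 1.0f / 2.2f);
}

static float
srgb_to_linear (float srgb)
{
    return powf (srgb, 2.2f);
}
```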

It is common for graphics software to perform alpha blending directly on these sRGB pixels, using alpha values that are linearly coded (i.e., an alpha value of 0 means no coverage, 0.5 means half coverage, and 1 means full coverage). Because alpha blending is best done with premultiplied pixels, such systems store pixels in this format:

$\left[\,\alpha,\enspace\alpha \cdot \text{R}^{1/2.2},\enspace\alpha \cdot \text{G}^{1/2.2},\enspace\alpha \cdot \text{B}^{1/2.2}\,\right]$,

that is, the alpha channel is linearly coded, while the R, G, and B channels are first sRGB coded, then premultiplied with the linear alpha. This works well as long as you are happy with blending in sRGB. And if you discard the alpha channel of such pixels and display them directly on a monitor, it will look as if the pixels were alpha blended (in sRGB space) on top of a black background, which is the desired result.
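
For reference, here is a sketch of why premultiplied pixels make blending cheap: the “over” operator becomes the same expression for every channel, alpha included. The `pixel_t` type and the [0, 1] component range are just assumptions for illustration; with the format above, the color components are sRGB coded, so the blend happens in sRGB space.

```c
/* A pixel with linearly coded alpha and premultiplied color channels,
 * all components in [0, 1]. */
typedef struct { float a, r, g, b; } pixel_t;

/* Premultiplied "over" compositing: result = src + (1 - src.a) * dst,
 * for the alpha channel and the color channels alike. */
static pixel_t
over (pixel_t src, pixel_t dst)
{
    pixel_t out;

    out.a = src.a + (1.0f - src.a) * dst.a;
    out.r = src.r + (1.0f - src.a) * dst.r;
    out.g = src.g + (1.0f - src.a) * dst.g;
    out.b = src.b + (1.0f - src.a) * dst.b;

    return out;
}
```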

But what if you want to blend in linear RGB? If you use the format above, some expensive conversions will be required. To convert to premultiplied linear, you have to first divide by alpha, then raise each color to 2.2, then multiply by alpha. To convert back, you must divide by alpha, raise to $1/2.2$, then multiply by alpha.
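
As a sketch, assuming the hypothetical `pixel_t` layout from above (and `#include <math.h>`), those conversions look something like this. Note the divide and multiply by alpha around every `powf`, and the special case needed when alpha is zero:

```c
/* Premultiplied sRGB -> premultiplied linear: unpremultiply, decode,
 * re-premultiply. */
static pixel_t
premul_srgb_to_premul_linear (pixel_t p)
{
    pixel_t out = { p.a, 0.0f, 0.0f, 0.0f };

    if (p.a > 0.0f)
    {
        out.r = powf (p.r / p.a, 2.2f) * p.a;
        out.g = powf (p.g / p.a, 2.2f) * p.a;
        out.b = powf (p.b / p.a, 2.2f) * p.a;
    }
    return out;
}

/* Premultiplied linear -> premultiplied sRGB: the same dance in reverse. */
static pixel_t
premul_linear_to_premul_srgb (pixel_t p)
{
    pixel_t out = { p.a, 0.0f, 0.0f, 0.0f };

    if (p.a > 0.0f)
    {
        out.r = powf (p.r / p.a, 1.0f / 2.2f) * p.a;
        out.g = powf (p.g / p.a, 1.0f / 2.2f) * p.a;
        out.b = powf (p.b / p.a, 1.0f / 2.2f) * p.a;
    }
    return out;
}
```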

Those conversions can be avoided if you store the pixels linearly, i.e., keeping the premultiplication, but coding red, green, and blue linearly instead of as sRGB:

$\left[\,\alpha,\enspace\alpha \cdot \text{R},\enspace\alpha \cdot \text{G},\enspace\alpha \cdot \text{B}\,\right]$.

This makes blending fast, but 8 bits per channel is no longer good enough. Without the sRGB encoding, too much resolution is lost in the darker tones. And to display these pixels on a monitor, they have to first be converted to sRGB, either manually or, if the video card can scan them out directly, by setting the gamma ramp appropriately.
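
To get a feel for how much resolution is lost, take the smallest non-zero 8-bit linear value, $1/255 \approx 0.004$. Under the $1/2.2$ approximation it corresponds to an sRGB value of roughly $(1/255)^{1/2.2} \approx 0.08$, i.e., about level 20 out of 255, so the twenty or so darkest sRGB shades all collapse into a single linear 8-bit code.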

Can we get the best of both worlds? Yes. The format to use is this:

$\left[\,\alpha,\enspace (\alpha \cdot \text{R})^{1/2.2},\enspace (\alpha \cdot \text{G})^{1/2.2},\enspace \left(\alpha \cdot \text{B}\right)^{1/2.2}\,\right]$,

that is, the alpha channel is stored linearly, and the color channels are first premultiplied with the linear alpha, then raised to $1/2.2$.

With this format, 8 bits per channel is sufficient. Discarding the alpha channel and displaying the pixels directly on a monitor will look as if the pixels were alpha blended (in linear space) against black, as desired.
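
A sketch of how such a pixel might be produced from a straight (non-premultiplied) linear RGBA color, reusing the hypothetical `pixel_t` and `math.h` from the earlier sketches; the point is that the color channels are premultiplied before the $1/2.2$ encoding:

```c
/* Build a pixel in the format above from a straight (non-premultiplied)
 * linear RGBA color, all components in [0, 1]: premultiply first, then
 * gamma encode. Alpha stays linear. */
static pixel_t
make_premul_gamma_pixel (float a, float r, float g, float b)
{
    pixel_t out;

    out.a = a;
    out.r = powf (a * r, 1.0f / 2.2f);
    out.g = powf (a * g, 1.0f / 2.2f);
    out.b = powf (a * b, 1.0f / 2.2f);

    return out;
}
```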

You can convert to linear RGB simply by raising the R, G, and B components to 2.2, and back by raising to $1/2.2$. Or, if you feel like cheating, use an exponent of 2 so that the conversions become a multiplication and a square root respectively.
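
In code (again with the hypothetical `pixel_t` and `math.h` from above), the round trip touches only the color channels, so no division by alpha is needed; swapping the exponent 2.2 for 2 turns the calls into a square and a square root:

```c
/* Recommended format -> premultiplied linear: just decode the color
 * channels; alpha is already linear and no unpremultiply is needed. */
static pixel_t
to_premul_linear (pixel_t p)
{
    pixel_t out = { p.a,
                    powf (p.r, 2.2f),
                    powf (p.g, 2.2f),
                    powf (p.b, 2.2f) };
    return out;
}

/* Premultiplied linear -> recommended format. With an exponent of 2
 * instead of 2.2, these become p.r * p.r and sqrtf (p.r), and so on. */
static pixel_t
from_premul_linear (pixel_t p)
{
    pixel_t out = { p.a,
                    powf (p.r, 1.0f / 2.2f),
                    powf (p.g, 1.0f / 2.2f),
                    powf (p.b, 1.0f / 2.2f) };
    return out;
}
```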

This is also the pixel format to use with texture samplers that implement the OpenGL sRGB extensions for textures and framebuffers. These extensions specify exactly this behavior: the R, G, and B components are raised to 2.2 before texture filtering, and raised to $1/2.2$ after the final raster operation.
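
For example, with OpenGL the pixels above could be uploaded as an sRGB texture and rendered with sRGB framebuffer encoding enabled, so the GPU performs the 2.2 and $1/2.2$ conversions around filtering and blending. This is a rough sketch, assuming a current GL context with GL 3.0-level support (`EXT_texture_sRGB` / `ARB_framebuffer_sRGB`); `texture`, `width`, `height`, and `pixels` are placeholders and error handling is omitted:

```c
/* Upload premultiplied, gamma-encoded pixels as an sRGB texture. The GL
 * converts R, G and B (but not alpha) to linear before filtering and back
 * to sRGB after blending. */
glBindTexture (GL_TEXTURE_2D, texture);
glTexImage2D (GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8,
              width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* Premultiplied "over" blending, now done on linear values. */
glEnable (GL_BLEND);
glBlendFunc (GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

/* Encode back to sRGB when writing to the framebuffer. */
glEnable (GL_FRAMEBUFFER_SRGB);
```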