Our current color theory has been wrong for 100 years, and getting it right could have huge implications for electronics, textiles, paints, and even the planet: a better model could save millions of dollars and kilowatt-hours of energy on storage and internet bandwidth alone.

For the last century, scientists believed that the most accurate way to represent how our brain distinguishes colors was a 3D model developed by Nobel Prize-winning physicist Erwin Schrödinger (yes, the cat-in-the-box guy), following an idea by Bernhard Riemann, a 19th-century mathematician who created a new type of geometry that carries his name.

Unfortunately, both Schrödinger and Riemann were wrong.

## How we mapped color

Even before Schrödinger and Riemann showed up, the very first color space was described mathematically by Hermann Grassmann, a 19th-century German polymath and physicist who developed it using vectors in a flat space. Grassmann later published his theory of color mixing, which is still taught today as Grassmann's laws.

The accomplishment was considerable: Grassmann seemed to have codified the eyes of painters, offering a mathematical palette for color relationships and mixing. Variations on his Euclidean color space are still used everywhere today, from CMYK printing to RGB displays. Why? As Roxana Bujack, a computer scientist and mathematician who works at the Los Alamos National Laboratory, tells me over video chat, it's often "just simpler" to use a flat space.

Simpler, maybe, but Grassmann's representation of color had holes. It was Schrödinger who, in the 1920s, figured out that Grassmann's extraordinary color space didn't accurately describe how humans distinguish colors because it couldn't take into account many biological and psychological processes. For example, humans aren't equally sensitive to all colors (we can distinguish more hues of green than of any other color).

The new Riemannian color model, however, accurately accounted for how humans perceive those color differences by using a curved three-dimensional space rather than flat Euclidean vectors.
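To make the contrast concrete: in a Riemannian model, the perceived difference between two nearby colors is measured by a line element, and the difference between distant colors is the length of the shortest path between them. This is the standard textbook form of the idea, not Bujack's specific formulation:

```latex
ds^2 = \sum_{i,j=1}^{3} g_{ij}(c)\, dc^i\, dc^j,
\qquad
D(c_A, c_B) = \min_{\gamma:\, c_A \to c_B} \int_\gamma ds
```

Here the metric $g_{ij}$ encodes local sensitivity to color changes, and the distance $D$ between two colors is found by integrating tiny perceived differences along the best path connecting them.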

Or so we thought, until Bujack published a new study demonstrating that the Riemannian model was actually incorrect. She explains that while Riemann suggested that a curved 3D space would better describe how we perceive colors, the mathematical model built on his idea didn't perform perfectly at first.

“Then [researchers] came and performed experiments with actual people and adjusted the math to better fit the experimental data,” Bujack says. That math is what is now accepted as the best model for describing color perception. But it’s still imperfect.

## Bad math, bad

Bujack discovered that Riemann and his peers were wrong only when she started to investigate how to make scientific visualizations more accurate.

“The idea was to develop algorithms to automatically improve color maps for data visualization, to make them easier to understand and interpret,” she says. When Bujack and her colleagues tried to use the Riemannian math to create these algorithms, they discovered that it didn't work. The reason: Riemannian math doesn't match how humans actually perceive color.

“If you add up the values assigned to small differences along a path between two very different colors, that sum is much larger than the perceived difference between the two extremes of that path,” she says.

Simply put, humans are more sensitive to little changes in color than big changes, and that’s not acknowledged within our current 3D color space. You might think of it as a form of “diminishing returns” in our color perception, Bujack says.
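As a toy illustration of that additivity failure, consider a made-up one-dimensional "color" axis with a hypothetical concave difference function (the square root is an assumption chosen only to produce diminishing returns; it is not Bujack's data or model):

```python
# Toy sketch: why summed small differences can overshoot the
# perceived difference between two distant "colors".

def perceived_diff(a, b):
    """Hypothetical perceptual distance: concave in the physical gap,
    so small changes count for relatively more than big ones."""
    return abs(a - b) ** 0.5

start, end = 0.0, 1.0
steps = 10

# Sum the perceived differences over many small steps along the path...
path_sum = sum(
    perceived_diff(i / steps, (i + 1) / steps) for i in range(steps)
)

# ...and compare with the perceived difference between the endpoints.
direct = perceived_diff(start, end)

print(f"sum of small steps: {path_sum:.3f}")  # 3.162
print(f"direct difference:  {direct:.3f}")    # 1.000
```

In a genuinely Riemannian space the two numbers would agree, because distance is defined as the integral of small differences along a path; the mismatch is the "diminishing returns" effect Bujack describes.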

Yet while they’ve spotted the problem, scientists don’t really know how to describe this new color space accurately. Below you can see a visualization of how the Riemannian space needs adjustment—with the knots in the paths representing how your eye actually distinguishes one color from another—but there is no mathematical model yet to construct this vision. And we probably won’t have one for decades to come.

## What this means

If you think that debating tiny differences in color perception is futile, I don't blame you. I had a tough time grasping why this was so important, especially when color seems to work pretty well just about everywhere. Each year, our displays get more realistic and detailed. It's hard to imagine how you can improve on the experience of watching a movie on a top-of-the-line OLED screen running on Dolby Vision.

However, as Bujack explains, the key benefit of developing a new model is not just more accurate color but greater efficiency: “If you figure out a mathematical model that can figure out when a human really makes a distinction between colors, you can throw a lot of data out. You can also make displays that better represent reality.”

And that has obvious implications for things like image and video compression. If you know which colors are not worth coding into a frame because a human won't be able to distinguish them from other colors, you can save a lot of bandwidth. The savings can be very big.

“We are talking more than 15%,” Bujack says. That figure may seem small, but if you add up all the color information coded in all the videos streaming worldwide, even the smallest change can add up to huge savings. As of 2019, Netflix alone streamed an estimated 500 million GB of video each day.
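Taking the two figures in this section at face value, the scale of a 15 percent reduction is easy to work out. This is a rough back-of-the-envelope calculation, not an official estimate:

```python
# Rough arithmetic using the numbers quoted above.
netflix_daily_gb = 500_000_000  # estimated GB streamed per day by Netflix (2019)
savings_rate = 0.15             # "more than 15%" reduction, per Bujack

daily_savings_gb = netflix_daily_gb * savings_rate
yearly_savings_gb = daily_savings_gb * 365

print(f"Daily savings:  {daily_savings_gb / 1e6:.0f} million GB")   # 75 million GB
print(f"Yearly savings: {yearly_savings_gb / 1e9:.1f} billion GB")  # 27.4 billion GB
```

And that is one streaming service: the same percentage applied across all video traffic worldwide would be far larger still.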

Bujack says that they have some ideas about how to fix the color model, but it will cost millions of dollars to develop, requiring intensive human testing and mathematical research: “[We are] at least 20 years from figuring this out completely and coming up with a model.”