#488: CSS Color: lab(), lch()
Discussions
Comment by @dbaron Mar 22, 2020 (See Github)
Do you mean lch() rather than lhr()?
I'd also cc @svgeesus here as the primary contact for the spec.
Comment by @felipeerias Mar 24, 2020 (See Github)
Do you mean lch() rather than lhr()?
Sorry, yes, of course :)
Comment by @dbaron Mar 25, 2020 (See Github)
I think one thing we should perhaps look at is whether other parts of the Web platform (e.g., canvas) need things to prepare for colors that are outside of the sRGB gamut.
Comment by @felipeerias Mar 26, 2020 (See Github)
There is an ongoing discussion at the intent-to-prototype thread about which other parts of the spec still need some work:
https://groups.google.com/a/chromium.org/d/topic/blink-dev/iwsT-jkCQcI/discussion
And there are open issues at the CSSWG with different levels of consensus for topics such as:
- working color spaces (https://github.com/w3c/csswg-drafts/issues/300)
- rendering colors outside the sRGB gamut (https://github.com/w3c/csswg-drafts/issues/4646)
- interpolation in non-sRGB color spaces (https://github.com/w3c/csswg-drafts/issues/4647)
- etc.
My main question at the moment is whether it would make sense to implement Lab-like selection and interpolation using the browser's existing color infrastructure (i.e. with clamped outputs), or whether it would be necessary to spec and implement general support for wider gamuts first.
A related question is whether there could be a path from those clamped lab() and lch() functions to their wide-gamut versions (if/when they are supported by the browser).
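The "clamped" path can be sketched end-to-end: convert a Lab color to XYZ, then to linear-light sRGB, gamma-encode, and clamp each channel to [0, 1]. The sketch below is illustrative rather than spec-exact: it uses D65-relative Lab for brevity, whereas CSS Color 4 specifies a D50 white point plus chromatic adaptation.

```python
import math

# D65 reference white (CSS Color 4 actually specifies D50; D65 keeps this short)
XN, YN, ZN = 0.95047, 1.0, 1.08883

def lab_to_xyz(L, a, b):
    """CIE Lab -> XYZ, inverting the standard f() companding."""
    fy = (L + 16) / 116
    fx = fy + a / 500
    fz = fy - b / 200
    def f_inv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    return XN * f_inv(fx), YN * f_inv(fy), ZN * f_inv(fz)

def xyz_to_linear_srgb(x, y, z):
    """XYZ (D65) -> linear-light sRGB via the standard matrix."""
    r = 3.2404542 * x - 1.5371385 * y - 0.4985314 * z
    g = -0.9692660 * x + 1.8760108 * y + 0.0415560 * z
    b = 0.0556434 * x - 0.2040259 * y + 1.0572252 * z
    return r, g, b

def gamma_encode(c):
    """Apply the sRGB transfer function to one linear channel."""
    c = max(c, 0.0)
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def lab_to_clamped_srgb(L, a, b):
    """Lab -> sRGB with out-of-gamut components clamped to [0, 1]."""
    rgb = xyz_to_linear_srgb(*lab_to_xyz(L, a, b))
    return tuple(min(1.0, max(0.0, gamma_encode(c))) for c in rgb)

# Lab white maps (almost exactly) to sRGB white...
print(lab_to_clamped_srgb(100, 0, 0))
# ...while a highly chromatic green lands outside the gamut and gets clamped.
print(lab_to_clamped_srgb(60, -100, 60))
```

Clamping per channel like this is the cheapest gamut-mapping strategy and distorts hue; that tradeoff is exactly what the CSSWG gamut-mapping issues linked above discuss.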
Comment by @svgeesus Mar 28, 2020 (See Github)
@dbaron, thanks for cc'ing me into this issue.
I think one thing we should perhaps look at is whether other parts of the Web platform (e.g., canvas) need things to prepare for colors that are outside of the sRGB gamut.
Yes, they do. This was discussed previously in TAG review although that issue stalled on waiting for responses from the original poster.
The Color on the Web Community Group is working on a draft report for HDR on the Web; although the focus is on HDR, Wide Color Gamut is also being examined and all parts of the web platform will need some changes to get outside the sRGB gamut while maintaining both backwards compatibility and future extensibility.
@felipeerias wrote:
My main question at the moment is whether it would make sense to implement Lab-like selection and interpolation using the browser's existing color infrastructure (i.e. with clamped outputs), or whether it would be necessary to spec and implement general support for wider gamuts first.
Good question. From discussions with @tabatkins I understood that Chromium infrastructure had an internal datatype, a 16-bit scRGB, which (because it can go below 0.0 and above 1.0) could in principle be used to express a wider gamut than sRGB?
Comment by @svgeesus Mar 28, 2020 (See Github)
In case it helps TAG review, CIE Lab (the rectangular form) and LCH (the polar form) were standardized in 1976 by the International Commission on Illumination (CIE). It then became an International Standard and was jointly published with ISO. The latest edition of that reference is
ISO/CIE 11664-4:2019(E): Colorimetry - Part 4: CIE 1976 L*a*b* colour space. 2019. Published. URL: http://cie.co.at/publications/colorimetry-part-4-cie-1976-lab-colour-space-1
Lab is widely used in the paint, printing, and film industries. It is used as an interchange space, and commercial instruments exist to measure it. For example, making a color profile for a particular printer, ink and paper combination consists of printing a large number of swatches with combinations of the inks used, then measuring the Lab values with a spectrophotometer. These measurements are then used to construct an inverted 4D (for CMYK) lookup table to calculate in the reverse direction - given a Lab color, what combination of inks will give the closest measured result?
So Lab and LCH are well proven in both standardization and in industry practice. The question for the TAG is how best to integrate these with the Web platform.
In terms of implementer interest, Apple is currently implementing in WebKit for Safari, using their existing ColorSync architecture (an implementation of ICC profiles). BFO is implementing for their CSS/HTML to PDF product, and I see that @felipeerias is now evaluating whether and how to do this for Chromium. I'm not aware of any public signals from Mozilla.
Comment by @felipeerias Mar 31, 2020 (See Github)
Thank you, @svgeesus
In Chromium, Blink parses CSS color values into a 32-bit ARGB data type, which is eventually handed over to the Skia library to carry out the actual drawing (using its own 32-bit ARGB type).
Interestingly, Skia does support wider color gamuts (see "Color Correct Skia") and many of its methods can optionally take high precision colors (four floats plus a color space).
The problem that I am working on at the moment is that very little code in Chromium uses Skia's high precision colors, and Blink does not use them at all. In other words, the code that actually draws Web pages uses 32-bit ARGB colors throughout.
As far as I could see, Chromium's support for wider gamuts seems limited to selecting a specific color profile (scRGB linear, P3, etc.) instead of the one specified by the operating system. There isn't support at the moment for using a wider color gamut for individual Web elements.
I'm still studying the code and doing experiments, looking for a path forward that could be tackled (or at least started) by a one-volunteer effort.
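The practical difference between the two pipelines Felipe describes can be shown with a toy example. The channel values below are hypothetical, and the two helper functions are sketches of the general idea, not Chromium or Skia APIs: an 8-bit-per-channel ARGB pipeline must clamp and quantize, discarding out-of-gamut information, while a float pipeline preserves it for later gamut mapping.

```python
# A wider-than-sRGB color expressed in linear-light sRGB coordinates:
# channels can legitimately fall outside [0, 1] (hypothetical values).
wide = (1.22, -0.04, 0.20)

def to_argb8(rgb):
    """What an 8-bit ARGB pipeline keeps: clamp, then quantize to 0..255."""
    return tuple(round(255 * min(1.0, max(0.0, c))) for c in rgb)

def to_float_color(rgb):
    """What a float pipeline (like Skia's four-float colors) keeps: everything."""
    return tuple(float(c) for c in rgb)

print(to_argb8(wide))        # (255, 0, 51) - out-of-gamut information is gone
print(to_float_color(wide))  # (1.22, -0.04, 0.2) - recoverable later
```

This is why "very little code uses Skia's high-precision colors" matters: any single 8-bit stage in the pipeline silently destroys wide-gamut data for everything downstream.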
Discussed
Apr 6, 2020 (See Github)
David: I was reasonably happy with how the blink-dev thread evolved
... I think people eventually agreed that there's more work to be done
... what is the TAG supposed to be doing about this? Fundamentally, lab and lch are simple, except that there's all this color gamut stuff, and I don't know how coordination will happen if we don't do it, but I don't know if the TAG is particularly good at that
Peter: It is something we should do, whether or not we're good at it. Who should we reach out to?
David: I'm not entirely sure. I don't completely understand some of the color profile stuff. The other questions are how does it interact with canvas and compositing.
... does it make certain compositing optimizations more observable now?
Tess: I can run this by Simon and Dean
David: we should be making sure that the right people are looking at this
... it would be nice if some of the chromium folks working on this were also doing that
... seems the person working on this on the chromium side is new to colors
Peter: are there any other places on the platform where color is exposed
Tess: HTML legacy (color attributes, input element), CSS, SVG, all different canvas contexts (2d, webgl, webgpu), media (video, images)
David: videos may be particularly interesting
Peter: WebXR?
David: the media stuff probably already has a bunch of color space stuff that isn't quite right
Tess: what's the next step here?
David: what are we concerned about? if we ask people to review, what do we want them to keep an eye out for?
... 1a. do you have the ability to specify all the colors you want to specify in this place
... 2. do you get interoperable results (lack of interop could be things like different working color spaces; different clamping to device gamuts; other things)
Peter: at some point things that expose raw bitmap data will want to do something more than 24 bits
David: 1b. so it's not just specify, it's also can you get them out
... another part of 2. making sure we don't create a bunch of corners... avoiding things being "whatever you want" rather than defined... e.g. untagged images in the past (now treated as sRGB). we want to avoid new things like that.
Tess: This plus the list of places to look at earlier create a review matrix
David: another part of 2: the definitions of color-interpolation
and color-interpolation-filters
properties in svg may have an interesting list of things that could be affected
[rossen returns]
Tess: could design-principles come out of this?
Rossen: this could be a good set of design principles, lots of cross-cutting concerns
David: I'll try to write a comment in the issue, summarizing the conversation we just had, and we should all try to rope in experts to do the review
... maybe Chris Lilley
Tess: when should we push this out to?
[push out two weeks]
Comment by @dbaron Apr 7, 2020 (See Github)
So we had a bit of a brainstorm in the TAG's breakout B today as to what issues we think it would be useful to review for platform consistency here. What we came up with is the following list, which is almost certainly incomplete:
First, there's a set of things that deal with colors (many of which are probably fine, some of which probably need some new features, and some of which might need some deeper adjustments):
- CSS and SVG
- syntax for specifying colors
- the list of things whose behavior is changed by the color-interpolation property and the color-interpolation-filters property is a good list of things that do math on colors (more generally, I think it comes down to compositing, gradients, animations, and filters, most of which have CSS and SVG pieces that might be a little different)
- HTML
- legacy color attributes
- <input type=color>
- Canvas
- 2D context
- WebGL context
- webgpu context
- ImageBitmapRenderingContext
- including readback APIs (e.g., the format of raw bitmap data that you get back)
- WebXR?
- images (consider different formats (GIF/JPEG/PNG) and also whether or not they contain color profile information (tagged or untagged))
- videos (which might be different from images)
Then there's the set of things we're concerned about:
- specifying (1a) and reading back (1b) colors: can you specify the range of colors that you want in all of these places
- do things produce interoperable results? Some things to think about might be: a. different working color spaces b. different ways of clamping to device gamuts c. other sources of clamping d. places where things might be undefined (e.g., like untagged images in the old days before we said they were all sRGB)
I suspect other folks could expand on this list.
Comment by @imkremen Apr 12, 2020 (See Github)
Why is Lab proposed as the main perceptually uniform color space for Color 4? Why not the more modern CAM16 or JzAzBz (especially for HDR)?
And why is Lab proposed instead of Luv? Here is a reference to Ross Ihaka:
Ihaka et al. (2003): The two perceptually based spaces introduced by the CIE in 1976 are the CIELUV and CIELAB spaces. The CIELUV space is generally preferred by those who work with emissive colour technologies (such as computer displays) and the CIELAB space is preferred by those working with dyes and pigments (such as in the printing and textile industries),
see https://www.r-project.org/conferences/DSC-2003/Proceedings/Ihaka.pdf
Comment by @felipeerias Apr 14, 2020 (See Github)
@dbaron : Thank you very much for your work.
Sorry for the delay in following up. I've been having a number of conversations about this topic in the past days.
The main value of functions like lab() and lch() would be to enable authors to express colors in CSS that are outside the sRGB gamut (but within the gamut of modern devices). A precondition is to have support in the browser for handling those richer CSS colors: at the moment, Chromium uses a 32-bit ARGB format for these, so it would be necessary to add support for a higher-precision representation of colors throughout the rendering pipeline.
As this looks like more than could be tackled by a one-volunteer effort, I'm stopping this implementation work for now.
I am still very interested in this topic and would like to help in any way I can to move it forward; perhaps researching the concerns that @dbaron listed could be a first step.
I'm also wondering whether there could be other pieces of the Color spec that might be more easily implemented. For example, adding support for LAB interpolation (even if the colors may be clamped to the sRGB gamut for the time being).
Comment by @imkremen Apr 17, 2020 (See Github)
The main problem with "LCH" is that from source to source it refers to different color spaces (CIELab or CIELuv); here is a good comment from @jrus
And what if some day we want to add support for polar CAM16 or polar JzAzBz?
Comment by @tabatkins Apr 17, 2020 (See Github)
"RGB" can similarly refer to a multitude of spaces, but the spec nails down its precise meaning just fine. For most people the exact details don't matter; for those that do, reading the spec tells them exactly what they need to know.
(Note as well the line in that comment "Nowadays it also doesn't really make sense to use CIELUV for anything.")
And what if some day we want to add support for polar CAM16 or polar JzAzBz?
If we decide those are reasonable to add as top-level functions, we can figure out names at that point. If not, they can be smuggled in via color(), possibly as predefined colorspaces. I don't see this as a future-compat issue.
Discussed
Apr 20, 2020 (See Github)
Alice: would be nice to have a discussion of how you'd choose which one to use.
Alice: also curious if we could see explainers for CSS specs, which would help people who weren't in the meetings understand them. More examples might be useful; conversion description seems very underspecified (no links).
Alice: WCAG specifies color contrast using an algorithm based on YCbCr. Could this be useful in computing color contrast ratio values? Could we give people something that would let them do math on their CSS values for color contrast?
Rossen: ...
Alice: If we can get the computed values in lab() and then map the luminance there to something used in the existing color contrast calculation?
Alice: I could look at comparing lab() lightness ratios with WCAG contrast ratios.
David: equations
Alice: But the main question I don't know: can we get computed values in lab()/lch()/etc.
David: part of a different spec?
Rossen: (looks through w3c/csswg-drafts#4647, then looks for a different issue)
Peter: could be handled by Typed OM, but I don't see anything there about colors at all
David: should we try to come back to this in a week or two?
Comment by @dbaron Apr 20, 2020 (See Github)
The other thing I would say about the list I drew up is that I don't think all of those things necessarily need to be addressed before shipping some parts of this. Features can be rolled out gradually. However, it's probably good to at least have a plan to get them into some reasonable state of completeness in a reasonable amount of time.
Comment by @dbaron Apr 23, 2020 (See Github)
Also, a few examples of things that might need fixing (or new alternatives) that would be found by auditing the list of things to audit to ensure that the platform has good support throughout for out-of-sRGB colors:
- (this one I'm pretty sure about) everything to do with canvas pixel manipulation, including the ImageData interface, to have variants that allow getting canvas data that's outside of the sRGB gamut
- the algorithm for serializing bitmaps, to ensure that it's possible to serialize any possible canvas accurately
- the <feColorMatrix> and <feComponentTransfer> filter primitives, to ensure that there are sensible ways to do those operations on colors outside of the sRGB gamut
Comment by @LeaVerou Apr 26, 2020 (See Github)
@imkremen CAM16 is a color appearance model, not a color space.
A color appearance model (CAM) is a mathematical model that seeks to describe the perceptual aspects of human color vision, i.e. viewing conditions under which the appearance of a color does not tally with the corresponding physical measurement of the stimulus source. (In contrast, a color model defines a coordinate space to describe colors, such as the RGB and CMYK color models.)
While CAMs result in more accurate color specification, they are not feasible for the Web platform, as we do not know the viewing conditions (and just setting them to defaults renders the whole point of using a CAM moot).
Comment by @imkremen Apr 26, 2020 (See Github)
@LeaVerou
@imkremen CAM16 is a color appearance model, not a color space.
As I understand it, that's why CAM16-UCS was developed.
While CAMs result in more accurate color specification, they are not feasible for the Web platform, as we do not know the viewing conditions (and just setting them to defaults renders the whole point of using a CAM moot).
Definitely, the user (developer) shouldn't have to care about viewing conditions. Theoretically, in some future they could be provided by the system (light sensors/display settings). But as an intermediate stage, default values can be set.
Comment by @imkremen Apr 26, 2020 (See Github)
@tabatkins
(Note as well the line in that comment "Nowadays it also doesnāt really make sense to use CIELUV for anything.")
According to this paper:
CIELAB (L*, a*, b*) is recommended for the colorant industries (surface colors) while CIELUV (L*, u*, v*) for the display (self-luminous colors) industries.
As I previously wrote (see the reference to Ross Ihaka), for the same reason polar Luv (HCL) is widely used in the R language community.
Comment by @jrus Apr 26, 2020 (See Github)
It is still a good practice to use the u' v' chromaticity diagram (in preference to the xy chromaticity diagram), but what I can remember reading from multiple color science sources is that CIELAB has equal or better performance than CIELUV for essentially all applications, including for comparing colors from emissive displays, gamut mapping, etc. I'm not doing a good job finding a reference in 2 minutes here though. "Use CIELUV for emissive media" is basically a 40-year-old heuristic that still persists because people keep citing it.
If you want something better than CIELAB there has been work on this more recent than the 1970s. Just because the R community uses something doesn't mean it should be baked into every web browser. Browser specs should err on the side of adding fewer rather than more miscellaneous features. Anyone who really cares can do the simple arithmetic in their own code.
Discussed
Apr 27, 2020 (See Github)
David: 3 threads of discussion in this issue - one was the one we talked about last week - useful for contrast ratio - we have settled. The other is someone who says lab & lch aren't very good. A bunch of people disagree. The third thread is what other parts of the web platform need to be adjusted. Maybe we should bump it...
Dan: maybe close off the first 2...
David: [bumps 2 weeks and leaves a comment]
Comment by @dbaron Apr 27, 2020 (See Github)
I think so far we've had four threads of discussion in this issue:
- discussion about alternative color spaces to lab() and lch() (entirely in this issue), for which the TAG really doesn't have the experts, so it's probably best raised in the w3c/csswg-drafts repo (if anywhere)
- discussion (in last week's breakout and plenary and between them) on whether the L component of lab() and lch() would be useful for color contrast calculations: our conclusion was that it's not, but only because it differs by a gamma correction from a value that would be useful (see the conversion from y to L in XYZ_to_Lab(), particularly the Math.cbrt). (It differs from the value currently used by both that gamma correction and by the difference between the D50 and D65 whitepoints, but the whitepoint difference seems much less significant.) (I also noticed w3c/wcag21#815 on the way.)
- discussion about auditing what other pieces of the web platform would need to change given the presence of colors outside of sRGB's gamut, in https://github.com/w3ctag/design-reviews/issues/488#issuecomment-610107610 and https://github.com/w3ctag/design-reviews/issues/488#issuecomment-616863174 and https://github.com/w3ctag/design-reviews/issues/488#issuecomment-618105500
- the state of APIs for getting a color in one of the different color spaces, particularly for computed style; it would be good to be able to read colors in the preferred space
We discussed this briefly in a TAG breakout today, and I think the main reason I want to leave the issue open further is to monitor progress on the third one. Though we want to also get an issue filed on the fourth.
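The gamma-correction point in the second thread is easy to see numerically: L* is a cube-root companding of white-relative luminance Y, so it cannot be dropped directly into a formula (like WCAG's) that expects linear-light luminance. A minimal sketch of the Y-to-L leg of the spec's XYZ_to_Lab conversion:

```python
def y_to_lstar(y):
    """Relative luminance Y (0..1, white-relative) -> CIE L* (0..100)."""
    eps = 216 / 24389    # (6/29)**3, the linear/cube-root crossover
    kappa = 24389 / 27
    if y > eps:
        return 116 * y ** (1 / 3) - 16   # the Math.cbrt step dbaron mentions
    return kappa * y

# 18% "mid gray" luminance lands near the perceptual midpoint of L*,
# whereas a linear scale would put the midpoint at Y = 0.5:
print(y_to_lstar(0.18))
```

So a tool that wants WCAG-style contrast from a Lab color has to invert this cube root to recover Y first; L* differences alone measure something different.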
Comment by @svgeesus Apr 28, 2020 (See Github)
@imkremen wrote:
Why Lab proposed as main perceptual uniformity color space for color 4?
Because Lab is the industry standard. It is widely implemented; it is the Profile Connection Space for International Color Consortium (ICC) profiles, it is used in most books and articles
And why proposed to use Lab instead of Luv?
Because no-one uses Luv. No commercial instrument reports it. It has a terrible chromatic adaptation built in (XYZ scaling) and does not have a distance metric. Most books mention it once, in passing, when they introduce Lab.
The 1976 quote (from when the CIE standardized both Luv and Lab) is classical standards-body equivocation and relates more to the backgrounds of the input to the standards body; but in the 40-odd years since, everyone uses Lab unless they also have the data to construct a color appearance model.
Why not more modern CAM 16 or JzAzBz (especially for HDR)?
It's interesting that you mention the CIECAM color appearance models (97, 02, 16), and it is reasonable to ask why CSS Color 4 and 5 are defined in terms of colorimetry (Lab and LCH) instead of color appearance. The reason color appearance (which was considered) could not be used in CSS is the inherent nature of CSS. Style rules from various origins (author, reader, user agent) are combined via specificity and cascading to yield an eventual result. Thus, all colors are specified at a very granular level on individual elements in the document tree. There is thus no notion of the overall visual field or surroundings in which colors will take on an appearance. Certain aspects of that (the background color or image for a specific element, the colors of nearby elements) could in theory be tractable to analysis by a CSS processor. Others (the colors of other windows that are visible in addition to the browser window) are not available (and must not be, for security and privacy reasons); the overall room luminance, the current white point and the degree of user adaptation to that white point are unknown to a CSS processor and thus cannot be used as input to a color appearance model.
CIECAM is often used in the literature for perceptual gamut mapping of images, but in that case all the colors are available and the (assumed homogeneous, monochromatic) values for the immediate and distant surround and the lighting conditions for the print can be measured or specified and input to the model.
Comment by @svgeesus Apr 28, 2020 (See Github)
@dbaron wrote:
discussion (in last week's breakout and plenary and between them) on whether the L component of lab() and lch() would be useful for color contrast calculations: our conclusion was that it's not, but only because it differs by a gamma correction from a value that would be useful (see the conversion from y to L in XYZ_to_Lab(), particularly the Math.cbrt). (Although it differs from the value currently used by both that gamma correction and by the difference between the D50 and D65 whitepoints, but the whitepoint difference seems much less significant.) (I also noticed w3c/wcag21#815 on the way.)
I agree with this conclusion. The L in Lab is certainly useful for many things (unlike the L in HSL, which is almost meaningless) but for color contrast calculations one needs a linear-light-intensity color space. The Y in XYZ (which is a step along the way to calculating Lab) is luminance, and this is used in the current WCAG contrast calculation. Which means that, when WCAG is ready to add this, the color contrast between, say, a foreground color in display-p3 and a background color in sRGB can easily be specified.
There is also a very interesting and detailed study on color contrast which may be used in WCAG 3, replacing the simpler formula in WCAG 2; but again, starting from the luminance of the foreground and background colors.
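The point that contrast wants linear-light luminance can be illustrated with the WCAG 2.x formula: gamma-expand each channel, take relative luminance (the Y of the color's XYZ form), and ratio the results. Once luminance is in hand, the source space no longer matters; this sketch handles the sRGB case.

```python
def srgb_to_linear(c):
    """Undo the sRGB transfer function for one channel value in 0..1."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(r, g, b):
    """WCAG relative luminance: the Y row of the sRGB-to-XYZ matrix."""
    return (0.2126 * srgb_to_linear(r)
            + 0.7152 * srgb_to_linear(g)
            + 0.0722 * srgb_to_linear(b))

def contrast_ratio(rgb1, rgb2):
    """WCAG 2.x contrast ratio: (L_lighter + 0.05) / (L_darker + 0.05)."""
    l1, l2 = relative_luminance(*rgb1), relative_luminance(*rgb2)
    lighter, darker = max(l1, l2), min(l1, l2)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum possible ratio, 21:1.
print(contrast_ratio((0, 0, 0), (1, 1, 1)))
```

Supporting display-p3 foregrounds would only change the per-channel step (different primaries, hence different luminance coefficients); the ratio formula itself is space-agnostic.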
Comment by @svgeesus Apr 28, 2020 (See Github)
see https://www.r-project.org/conferences/DSC-2003/Proceedings/Ihaka.pdf
That is a very introductory paper, not presented at a color science conference, with a brisk overview of color spaces followed by an example of generating color scales for bar graphs. The main principles are those of Munsell, which is correct. The author then states:
Although Munsell's colour balance recommendations are specific to his colourspace, they apply equally to any perceptually uniform space based on correlates of hue, value and chroma. In particular, reasonable results are obtained by applying Munsell's principles to the CIELUV and CIELAB spaces.
It then goes on to utilize Luv without any explanation why it was picked. I suspect it is because Luv was supported by their programming environment, so it was a choice of least resistance.
Comment by @svgeesus Apr 28, 2020 (See Github)
@dbaron wrote:
First, there's a set of things that deal with colors (many of which are probably fine, some of which probably need some new features, and some of which might need some deeper adjustments):
I would like to thank the TAG for their work on identifying gaps in the current Open Web Platform, arising from the current sRGB-centric model of the early Web, which would need to be filled to deploy Lab and LCH.
A similar gap analysis has been ongoing in the Color on the Web CG since fall last year, and a draft report High Dynamic Range and Wide Gamut Color on the Web identifies similar gaps to those noted by the TAG. While the report is incomplete, I think a good way forward is to merge any new items noted by the TAG, which are not in the current CG report, into that report. There is a good community of relevant Color Science and industry experts in that CG, which ensures good technical review.
Comment by @simontWork May 5, 2020 (See Github)
A very interesting discussion. Could I just ask what operations you're intending the colour representation chosen to be used for?
I have seen issues with large colour volumes being reduced for display on practical monitors. If the scaling occurs within the colour space, then there may be a need to investigate how hue-invariant desaturation and brightness changes are.
For example, Braun, Fairchild and Ebner show experimental results for CIELAB desaturation: https://www.researchgate.net/publication/2514272_Color_Gamut_Mapping_in_a_Hue-Linearized_CIELAB_Color_Space
There are more recent colour representations that can be used and which have significantly lower hue-distortion on desaturation, but care needs to be taken in understanding the limitations of each representation's design, e.g. some are designed to be implemented in the hardware currently prevalent in televisions, some are not easily realised in current consumer hardware.
Discussed
May 11, 2020 (See Github)
David: I emailed Chris Lilley, he responded but I haven't processed his response fully yet.
Tess: David wrote a comment two weeks ago... 4 distinct threads of conversation in the issue, main reason was to monitor progress on what other parts of the web platform would need to change.
David: Chris filed 4 issues on w3c/ColorWeb-CG on that.
... We might want to bump this to a longer time interval...
Tess: Why don't we add TAG-tracking labels to all the issues on other specs, and track those issues that way?
Rossen: One other concern... harmonising colours and colour spaces across the platform is definitely important, but the other issue was computed values. Can we request computed values in these colour spaces from the OM? What is the implication on how these colour functions become more useful for things like colour contrast and other a11y features?
Alice: Would be nice to get computed values, but David calculated that the L* value isn't useful for that. Might be worth asking the question directly of whether we could have APIs to get computed luminance directly, although there is work in progress (?) on a new contrast ratio computation. https://github.com/w3c/wcag/issues/695
Rossen: Looking at current state of CSS Typed OM, whether or not we are defining typed colour values in a way that we can provide an API to simply compute the contrast ratio. Currently no, simply mapping to CSS Color (rgba) value.
Alice: Would be nice to be able to provide results of colour math specifically for the purposes of computing contrast ratio, but I think that's mostly orthogonal to this issue. For this issue we're probably alright with just adding trackbacks as Tess suggested, and closing the issue.
Comment by @atanassov May 15, 2020 (See Github)
Another thread of discussion has been about providing the ability to use the new color spaces to compute contrast ratios. I opened the issue w3c/wcag#989 to continue this discussion at CSS Typed OM, since typed color values look like a great way to address this.
Comment by @dbaron May 28, 2020 (See Github)
@svgeesus, regarding the issues above in the Color on the Web CG -- one of the things that presumably will need to happen at some point is that issues get filed against HTML, CSS, etc. Is the goal of the Color on the Web CG to try to do that? (And if so, is it waiting on developing more concrete proposals to address the issues, or do you feel that it makes sense to file the underlying issues even before you've developed proposals to address them?)
Comment by @svgeesus Jun 6, 2020 (See Github)
Currently that CG is tracking existing issues, to get a sense of the overall readiness of the Open Web Platform for Wide Color Gamut and High Dynamic Range.
I do expect new issues to be raised as we discover them.
Comment by @Myndex Jun 11, 2020 (See Github)
.... The L in Lab is certainly useful for many things .... but for color contrast calculations one needs a linear-light-intensity color space.........There is also a very interesting and detailed study on color contrast which may be used in WCAG 3, replacing the simpler formula in WCAG 2; but again, starting from the luminance of the foreground and background colors.
@svgeesus Hi Chris!
Sorry I'm just now commenting, things have been nuts in LA/Hollywood... I just wanted to mention a couple things in passing that you might be interested in.
Readability
The first note is that the contrast values are intended to be more consistent in terms of functional readability than perceived contrast levels. I.e. actual readability is the goal.
Perceptual
The contrast is reported as a percentage similar to Weber or Michelson, not a ratio. The test values are linearized, and then have perceptual curves applied based on estimated adaptation. L* difference is sometimes used this way, though we are using slightly different curves (RLAB does something similar using different curves for foreground and background), and this has the advantage of a more accurate contrast/readability prediction.
Adaptation
The eventual final version is intended to have a module that will estimate the immediate surround on screen in addition to the foreground and background, as that predicts the global adaptation level, which is used to provide a more accurate contrast prediction.
Color/Hue
Also in the final (not yet shown) the color/hue module will make some adjustments based on color saturation. This is mostly for red for those with protan CVD, and blue due to the non-intuitive way that blue affects perceived luminance, and therefore contrast.
sRGB
As a side note, this is being designed for sRGB. Eventually a module could add-in support for other spaces (in fact, the algorithm is designed to allow a module for dynamic environmental response).
That said, in the interim sRGB is the "ideal monitor for accessibility" for a few reasons.
- Common Standard It is the common space now, and expected to be available far into the future.
- sRGB allows for a consistent display type while other assistive technologies are being developed.
- Color Vision Deficiency: sRGB is nearly ideal for helping those with CVD. In particular, for those with Protanopia the red primary is of a short enough wavelength that a saturated red is still visible and presents only a perceived 15% to 30% luminance loss. P3 is slightly more of a loss. But the new UHD Rec2100 and Rec2020 have a red primary of such a long wavelength that it would likely appear black (not perceived) for those with protanopia.
As such, for some vision types, Rec2100 et al will need content to be "Daltanized" for display. (Daltanizing is a best practice for CVD... but requires some assistive tech to remap the content colors).
Please let me know if you have any questions!
Thank You!
– Andy
Comment by @svgeesus Jun 15, 2020 (See Github)
I have seen issues with large colour volumes being reduced for display on practical monitors. If the scaling occurs within the colour space, then there may be a need to investigate how hue-invariant desaturation and brightness changes are.
Yes, using for example the Hung and Berns dataset, Lab/LCH has a well known hue curve towards purple in the blue region. CIECAM02 has, unfortunately, a similar bend. Jzazbz has a correction for this, but when I implemented this colorspace, I found it had an equally problematic curve towards cyan.
(I would upload examples, but GitHub doesn't support SVG)
Comment by @svgeesus Jun 15, 2020 (See Github)
Okay, here is an example for Lab: the gradient shows sRGB blue (#0000FF) being progressively desaturated by reducing LCH chroma until it meets the neutral axis. Below it are shown the (linear-light, not gamma-corrected) R, G, and B values; it can be seen that R rises faster, thus accounting for the purple shift.
https://drafts.csswg.org/css-color-4/images/lab.svg
Here is the same thing in Jzazbz, sRGB blue is progressively reduced in JzCzHz chroma until it gets to the neutral axis. Of note: the green now rises faster, giving a cyan shift. Also the RGB channels do not fully converge on the gray axis, which is very odd.
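The experiment behind these figures can be reproduced with standard CIELAB math. The following Python sketch (not the code used for the figures; constants are the usual sRGB/D65 definitions) shows the purple shift numerically for the Lab case:

```python
import math

# Desaturate sRGB blue by reducing LCH chroma at constant lightness
# and hue, then convert back and inspect the linear-light channels.
# Standard CIELAB / sRGB math with a D65 white point.

WHITE = (0.95047, 1.0, 1.08883)  # D65 reference white

def linear_srgb_to_xyz(r, g, b):
    return (0.4124 * r + 0.3576 * g + 0.1805 * b,
            0.2126 * r + 0.7152 * g + 0.0722 * b,
            0.0193 * r + 0.1192 * g + 0.9505 * b)

def xyz_to_lab(x, y, z):
    def f(t):
        return t ** (1 / 3) if t > (6 / 29) ** 3 else t / (3 * (6 / 29) ** 2) + 4 / 29
    fx, fy, fz = (f(v / w) for v, w in zip((x, y, z), WHITE))
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def lab_to_linear_srgb(L, a, b):
    def finv(t):
        return t ** 3 if t > 6 / 29 else 3 * (6 / 29) ** 2 * (t - 4 / 29)
    fy = (L + 16) / 116
    x, y, z = (w * finv(ft) for w, ft in zip(WHITE, (fy + a / 500, fy, fy - b / 200)))
    return (3.2406 * x - 1.5372 * y - 0.4986 * z,
            -0.9689 * x + 1.8758 * y + 0.0415 * z,
            0.0557 * x - 0.2040 * y + 1.0570 * z)

# sRGB blue #0000FF is also (0, 0, 1) in linear light
L, a, b = xyz_to_lab(*linear_srgb_to_xyz(0.0, 0.0, 1.0))
C, H = math.hypot(a, b), math.atan2(b, a)

# Halve the chroma at constant lightness and hue, then convert back
r, g, b_lin = lab_to_linear_srgb(L, (C / 2) * math.cos(H), (C / 2) * math.sin(H))
print(f"half-chroma blue, linear RGB: {r:.3f} {g:.3f} {b_lin:.3f}")
# Linear-light R ends up above G: the purple shift toward the neutral axis
```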
Comment by @Myndex Jun 16, 2020 (See Github)
Yes, using for example the Hung and Berns dataset, Lab/LCH has a well known hue curve towards purple in the blue region. CIECAM02 has, unfortunately, a similar bend. Jzazbz has a correction for this, but when I implemented this colorspace, I found it had an equally problematic curve towards cyan.
Hey @svgeesus
Not sure if you have played with this, but Bruce Lindbloom had made a perceptually uniform LAB space. I've not spent a lot of time with it, but thought I'd mention as it seems relevant to these last two posts.
http://www.brucelindbloom.com/UPLab.html
That said, I've been looking into blue in terms of readability and finding the instability and context sensitivity frustrating. For example, blue adds to or subtracts from perceptual contrast depending on the values of the other primaries involved, not to mention the effect of rod intrusion at low luminance (under 8 nits).
Comment by @jrus Jun 16, 2020 (See Github)
This should better be called "Munsell renotations as an ICC profile". It's basically a way of adapting the 1943 lookup table produced by the OSA so that some modern software can make use of it. It's not really comparable to CIELAB as an analytical model. (But it's reasonably close, because CIELAB and later color models were explicitly designed to match Munsell renotation data.)
Discussed
Jun 22, 2020 (See Github)
Rossen: Not much to talk about here, maybe. Lots of comments on the TAG review issue though.
David: Was trying to encourage getting issues filed against HTML and CSS. https://github.com/w3ctag/design-reviews/issues/488#issuecomment-635010629
Peter: Chris is saying the color on the web CG will file issues as they find them.
David: I think we decided last time we were happy to close this, encouraging the CG to file issues on the relevant WGs sooner rather than later.
Rossen: We discussed figuring out how we can do color contrast in terms of computed values... can we create a Typed OM color object to easily answer some of these questions. Lea mentioned some kind of color space agnostic color object. They are working on trying to release as a library soon-ish.
... Then, how do we standardise color, what is color, getting very philosophical. https://github.com/w3c/css-houdini-drafts/issues/989
... I also recall us discussing closing the TAG issue, allowing progress to continue in the CG, HTML and CSS.
... Main feedback was about the color object, having an API to compare colors. These things are all on their radar.
Alice: How would this work in other parts of the platform, per David's original feedback?
Rossen: Color object will be color space agnostic... can you translate colors outside of the sRGB gamut .. for canvas?
David: CG is doing a bunch of that work. It's more than just canvas - things like filters and interpolation, etc.
dbaron: Possible closing comment:
We're looking at this in a TAG breakout today, and I think we're happy closing this issue at this point, which we propose to do in this week's plenary. We've provided a few pieces of feedback above that seem to have been dealt with (see previous summary), or are being actively worked on:
- One of those is the review of sRGB dependencies in other parts of the platform, which it seems the Color on the Web CG is working on. We'd strongly encourage getting those issues filed against the relevant specs sooner rather than later, so that the working groups involved can get started on fixing them.
- Another seems to be the desire for better color conversion APIs, for which a color object proposal seems to be forthcoming.
Comment by @dbaron Jun 22, 2020 (See Github)
We're looking at this in a TAG breakout today, and I think we're happy closing this issue at this point, which we propose to do in this week's plenary. We've provided a few pieces of feedback above that seem to have been dealt with (see previous summary), or are being actively worked on:
- One of those is the review of sRGB dependencies in other parts of the platform, which it seems the Color on the Web CG is working on. We'd strongly encourage getting those issues filed against the relevant specs sooner rather than later, so that the working groups involved can get started on fixing them.
- Another seems to be the desire for better color conversion APIs, for which a Color object proposal seems to be forthcoming.
Comment by @Myndex Nov 24, 2020 (See Github)
Hi David @dbaron,
I just now noticed this:
- discussion (in [last week's breakout]) ... on whether the L component of lab() and lch() would be useful for color contrast calculations: our conclusion was that it's not, but only because it differs by a gamma correction from a value that would be useful ...
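The "gamma correction" referred to in that conclusion is the CIE L* curve, which is an invertible function of linear-light luminance Y; a short sketch using the standard CIE formulas (my code, not from the thread):

```python
# CIE lightness L* as a function of relative luminance Y (with Yn = 1),
# and its inverse: the perceptual curve separating Lab's L from the
# linear-light value that contrast formulas need.
def lstar_from_y(y: float) -> float:
    f = y ** (1 / 3) if y > (6 / 29) ** 3 else y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

def y_from_lstar(lstar: float) -> float:
    f = (lstar + 16) / 116
    return f ** 3 if f > 6 / 29 else 3 * (6 / 29) ** 2 * (f - 4 / 29)

print(round(lstar_from_y(0.184), 1))               # roughly 50 (mid gray)
print(round(y_from_lstar(lstar_from_y(0.5)), 6))   # round-trips to 0.5
```

Because the curve is invertible, a contrast formula that needs linear-light Y can always recover it from L.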
I wanted to address this: the new contrast method in Silver/WCAG 3 is APCA (Advanced Perceptual Contrast Algorithm). It does use perceptually-based power curves, though with different exponents depending on the context and polarity.
The problem with using just ΔL* (Lab difference) is that color difference is not contrast.
Like color, "contrast" is not physically real: it is a perception, not an absolute quantity. Contrast is extremely context dependent, and in some ways our perception of contrast is more non-linear than our perception of a color or of luminance. Our perception of visual contrast is much more than a color difference: for instance, spatial frequency is critical, and at high spatial frequencies it has a stronger effect on contrast perception than luminance does (for the web, this translates to very thin, small fonts).
Here is my favorite demonstration of context. Both yellow dots, and the squares they are on, have the same absolute color coming out of your monitor:
And the yellow of course is not yellow, it is separate red and green that stimulates the L and M cones in a similar way as spectral yellow does (which is between red and green). The red and green do not mix in the air like paint: they "mix" in your neurological system, starting with the ganglion cells behind the retina ( 'opponent color'), and then through the multistage filtering and processing of the visual cortex.
What you see all day is not exactly reality: it is your brain's filtered and deconstructed perception, which can be quite different from absolute values. Context needs to be accounted for or estimated to predict a perception as complicated as contrast.
APCA is focused on luminance contrast for fluent readability: getting whole words and letter-pairs into the Visual Word Form Area (VWFA), which needs ample luminance contrast. But other stimuli (for spot reading, object recognition, state change...) are processed via different routes and in some different areas of the brain, and have different requirements.
Andy, A Guy Too Obsessed with Color
Comment by @mysteryDate Oct 26, 2022 (See Github)
Shipping lab and lch in blink soon:
https://groups.google.com/a/chromium.org/g/blink-dev/c/r0QATT8-kOw
Followed this draft spec: https://www.w3.org/TR/css-color-4/#lab-colors.
We're trying to pass all of the interoperability tests here, along with WebKit and Gecko.
Comment by @dbratell Oct 29, 2022 (See Github)
Also oklab and oklch, per https://drafts.csswg.org/css-color-4/#ok-lab
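For reference, the Oklab conversion from linear-light sRGB is compact enough to sketch. The constants below are my transcription of Björn Ottosson's published definition, which css-color-4 references; they are worth checking against the spec before relying on them:

```python
# Oklab from linear-light sRGB: a linear map to a cone-like LMS space,
# a cube root, then a second linear map to (L, a, b).
def linear_srgb_to_oklab(r: float, g: float, b: float):
    l = 0.4122214708 * r + 0.5363325363 * g + 0.0514459929 * b
    m = 0.2119034982 * r + 0.6806995451 * g + 0.1073969566 * b
    s = 0.0883024619 * r + 0.2817188376 * g + 0.6299787005 * b
    l_, m_, s_ = l ** (1 / 3), m ** (1 / 3), s ** (1 / 3)
    return (0.2104542553 * l_ + 0.7936177850 * m_ - 0.0040720468 * s_,
            1.9779984951 * l_ - 2.4285922050 * m_ + 0.4505937099 * s_,
            0.0259040371 * l_ + 0.7827717662 * m_ - 0.8086757660 * s_)

print(linear_srgb_to_oklab(1.0, 1.0, 1.0))  # white: L near 1, a and b near 0
```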
Opened Mar 22, 2020
Hola TAG!
I'm requesting a TAG review of: lab() and lch() CSS functions for colors.
We'd prefer the TAG provide feedback as:
💬 leave review feedback as a comment in this issue and @-notify