#646: Canvas 2D color management
Discussions
Comment by @LeaVerou Jun 10, 2021 (See Github)
Hi @ccameron-chromium!
Fantastic to see a proposal to color manage Canvas, and extend it beyond sRGB. 👍🏼 It's unfortunate that sRGB has to be the default, but completely understandable for web compat.
Here are some questions that occurred to me on initial reading.
- You say the target is Chrome 92. However, to my knowledge there are no plans to implement `color(display-p3)`, `lab()`, or `lch()` in Chrome 92. Without those, it would be impossible to draw graphics that utilize P3 colors, and thus the only value-add of this implementation would be the ability to paint P3 images on canvas. Is that the case? Does Chromium plan to implement `color(display-p3)` but only for Canvas? Something else?
- I imagine eventually we'd want to extend this to the Paint API's `PaintRenderingContext2D`. Given that the context for that is pre-generated, how would that look?
- Is this intended to become 10-bit by default once 10 bits per component are supported? Could this introduce web compat problems?
- > Input colors (e.g, fillStyle and strokeStyle) follow the same interpretation as CSS color literals, regardless of the canvas color space.

  What happens when someone paints an e.g. `rec2020` color on an sRGB or Display P3 canvas? Is the result gamut mapped? If so, how?
- If I understand the explainer correctly, this means that the first script that calls `getContext()` gets to define the color space the canvas is in. What happens on any subsequent `getContext()` calls, either without a colorSpace argument, or with a different one? Do they produce an error or silently return the existing context, color managed with a different color space than the one the author specified? Do they clear the contents? Not sure any of these options is better than making `colorSpace` be mutable (which would also address `PaintRenderingContext2D`). It is not that unheard of to change the color space of color-managed graphics contexts, e.g. it's possible in every color-managed graphics application I know of, and there are several reasonable ways to do it.
- Am I reading it correctly that `getImageData()` will return sRGB data even in a P3 canvas, unless P3 data is explicitly requested? What's the rationale for not defaulting to the current color space?
- > The color space is then an immutable property of the CanvasRenderingContext2D.

  Unless I missed it, none of your Web IDL snippets include this readonly attribute. I assume in unsupported color spaces this attribute will be `"srgb"`?
Comment by @ccameron-chromium Jun 10, 2021 (See Github)
Thank you so much for the quick look!
Something I should have emphasized is that CanvasColorSpaceProposal.md is what was brought to WhatWG, and then the WhatWG PR is what came out of that review. It may be that I should update CanvasColorSpaceProposal.md to reflect those changes.
> Hi @ccameron-chromium!
>
> Fantastic to see a proposal to color manage Canvas, and extend it beyond sRGB. 👍🏼 It's unfortunate that sRGB has to be the default, but completely understandable for web compat.
>
> Here are some questions that occurred to me on initial reading.
>
> - You say the target is Chrome 92. However, to my knowledge there are no plans to implement `color(display-p3)`, `lab()`, or `lch()` in Chrome 92. Without those, it would be impossible to draw graphics that utilize P3 colors, and thus the only value-add of this implementation would be the ability to paint P3 images on canvas. Is that the case? Does Chromium plan to implement `color(display-p3)` but only for Canvas? Something else?
Indeed Chrome 92 will not have `color(display-p3)` et al. WCG (wide color gamut) content can be drawn to a 2D canvas via Images and via ImageData.
When we were trying to decide which pieces to pick off first (CSS color vs 2D canvas), the balance came out in favor of canvas, for applications that wanted to ensure that their images weren't crushed to sRGB (even if all CSS colors were still limited to sRGB). Ultimately both are much more useful with each other.
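A minimal sketch of that Image/ImageData path (illustrative only; it assumes the `colorSpace` members this proposal adds to `getContext()` and `ImageData`, and a `<canvas>` and `<img>` already on the page):

```js
// Illustrative sketch: drawing wide-gamut content without CSS P3 colors,
// using the proposed colorSpace options on getContext() and ImageData.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });

// A wide-gamut image keeps its gamut when drawn into the P3 canvas.
const img = document.querySelector('img');
ctx.drawImage(img, 0, 0);

// Pixel values written via ImageData are interpreted as Display P3.
const pixels = new ImageData(1, 1, { colorSpace: 'display-p3' });
pixels.data.set([255, 0, 0, 255]); // a full-saturation P3 red
ctx.putImageData(pixels, 0, 0);
```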
> - I imagine eventually we'd want to extend this to the Paint API's `PaintRenderingContext2D`. Given that the context for that is pre-generated, how would that look?
For PaintRenderingContext2D, the actual output color space is not observable by JavaScript (getImageData isn't exported). This is unlike CanvasRenderingContext2D, where the color space is observable (and has historically been a fingerprinting vector). Because of that, my sense is that the user agent should be able to select the best color space for the display device (just as it does for deciding the color space in which `<img>` elements are drawn and composited), and potentially change that space behind the scenes. Having the application specify a color space for PaintRenderingContext2D feels like an unnatural constraint.
Similarly, ImageBitmap and ImageBitmapRenderingContext don't want color spaces -- one should just be able to create an ImageBitmap from a source and send it to ImageBitmapRenderingContext and, by default, have it appear the same as the source would have if drawn directly as an element. (Of note is that we will likely add a color space to ImageBitmapOptions to allow asynchronous-ahead-of-time conversion for when uploading into a WebGL/GPU texture, but that is outside of the 2D context).
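For illustration, the ImageBitmap path described above might look roughly like this (a sketch; note that no color space parameter is involved):

```js
// Sketch: the bitmap is displayed as the source would be if drawn directly
// as an element; no color space is specified anywhere.
async function present(canvas, imgElement) {
  const bitmap = await createImageBitmap(imgElement);
  canvas.getContext('bitmaprenderer').transferFromImageBitmap(bitmap);
}
```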
> - Is this intended to become 10-bit by default once 10 bits per component are supported? Could this introduce web compat problems?
Indeed for non-srgb-or-display-p3 spaces, we may want to default to something more than 8 bits per pixel. That's part of why we decided not to include rec2020 in the spec (the other part being disputes about its proper definition!!).
For srgb and display-p3, the overwhelming preference is for 8 bits per pixel, and so the default of 8 bits per pixel will be what we will want to stay with (using more than 8 bits per pixel comes with substantial power and memory penalties, for almost no perceptual gain). As you noted, in the HDR spec, we may want to make a selection of color space imply a particular pixel format (I'm still on the fence about that -- fortunately we're avoiding being affected by how that decision lands -- display-p3 is the most requested space).
> > Input colors (e.g, fillStyle and strokeStyle) follow the same interpretation as CSS color literals, regardless of the canvas color space.
>
> What happens when someone paints an e.g. `rec2020` color on an sRGB or Display P3 canvas? Is the result gamut mapped? If so, how?
The input colors (like other inputs) are converted from the input's color space to the canvas's color space using relative colorimetric mapping, which is the "don't do anything fancy" mapping. In your example, the rec2020 color can always be transformed to some pixel in sRGB, but that pixel may have RGB values outside of the 0-to-1 interval. Relative colorimetric intent just clamps the individual color values to 0-to-1.
This is what happens today in all browsers if the browser, e.g, loads a rec2020 image that uses the full gamut and attempts to display it on a less capable monitor.
(Somewhat relatedly, one thing that came up in a separate review is that it might be useful for developer tools to have a "please pretend I have a less capable monitor than I do" mode).
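A rough sketch of the clamping being described (a hypothetical helper; the real mapping first converts between the color spaces via their transfer functions and matrices, then clips):

```js
// Hypothetical illustration of the "don't do anything fancy" behavior: after a
// color is converted into the canvas's space, each out-of-range component is
// simply clipped to the 0..1 interval.
function clampToGamut([r, g, b]) {
  const clip = (c) => Math.min(1, Math.max(0, c));
  return [clip(r), clip(g), clip(b)];
}
```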
> - If I understand the explainer correctly, this means that the first script that calls `getContext()` gets to define the color space the canvas is in. What happens on any subsequent `getContext()` calls, either without a colorSpace argument, or with a different one? Do they produce an error or silently return the existing context, color managed with a different color space than the one the author specified? Do they clear the contents?
The current behavior is that the subsequent call to getContext('2d') will return the previously created context, even if it has different properties than what was requested the second time around. This applies to all of the settings (alpha, etc).
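In other words, a sketch of the observable behavior (assuming `getContextAttributes()` exposes `colorSpace`, as discussed later in this thread):

```js
const a = canvas.getContext('2d', { colorSpace: 'display-p3' });
const b = canvas.getContext('2d', { colorSpace: 'srgb' }); // settings ignored
console.log(a === b);                             // true: same context object
console.log(b.getContextAttributes().colorSpace); // "display-p3"
```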
> Not sure any of these options is better than making `colorSpace` be mutable (which would also address `PaintRenderingContext2D`). It is not that unheard of to change the color space of color-managed graphics contexts, e.g. it's possible in every color-managed graphics application I know of, and there are several reasonable ways to do it.
Yes, this was another tricky area. There was some discussion around making the colorSpace be a mutable attribute, but there were a few things pushing against it. One was that there were indeed many reasonable things to do (clear the canvas, reinterpret_cast the pixels, convert the pixels?), and no single option was a clear winner. Another was that this matched the behavior for alpha (which will likely match the future canvas bit depth). Another was that it felt conceptually like a bad fit (especially in comparison with, e.g, WebGPU, where the GPUSwapChainDescriptor is the natural spot, and can be changed on frame boundaries).
So that's how we ended up landing where we did. Does that feel reasonable to you too?
In practice, if one wants to swap out a canvas for a differently-configured canvas, one can create the new element (or offscreen canvas) and drawImage the previous canvas into it (which will achieve the "convert" behavior).
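A sketch of that swap-out pattern (hypothetical helper name):

```js
// Hypothetical helper: "convert" a canvas by drawing it into a new canvas
// created with the desired color space.
function withColorSpace(oldCanvas, colorSpace) {
  const next = document.createElement('canvas');
  next.width = oldCanvas.width;
  next.height = oldCanvas.height;
  next.getContext('2d', { colorSpace }).drawImage(oldCanvas, 0, 0);
  return next;
}
```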
We also briefly discussed if it was possible for the canvas to automatically update its color space to support whatever is drawn into it (turns out it's not, at least not without scrutinizing every pixel of every texture that gets sent at it, and even then that may not be desirable).
> - Am I reading it correctly that `getImageData()` will return sRGB data even in a P3 canvas, unless P3 data is explicitly requested? What's the rationale for not defaulting to the current color space?
Yes, this is a good point -- the WhatWG review changed this behavior (again, sorry I wasn't more clear about that earlier).
The text that landed is what you suggest (getImageData returns the canvas's color space). Critically, getImageData, toDataURL, and toBlob have the property that if one exports a canvas into an (ImageData, blob, URL) and then draws the result back onto the same canvas, the operation is a no-op (no data is lost ... unless you choose lossy compression).
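A sketch of that round-trip property (assuming the `colorSpace` member on `ImageData` from this proposal):

```js
const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });
// ...draw something...
const data = ctx.getImageData(0, 0, canvas.width, canvas.height);
console.log(data.colorSpace); // "display-p3", matching the canvas by default
ctx.putImageData(data, 0, 0); // round trip is a no-op: no conversion, no loss
```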
> > The color space is then an immutable property of the CanvasRenderingContext2D.
>
> Unless I missed it, none of your Web IDL snippets include this readonly attribute. I assume in unsupported color spaces this attribute will be `"srgb"`?
Following alpha's pattern, it's query-able using getContextAttributes (it will be in the returned CanvasRenderingContext2DSettings).
When creating a context, the color space for the context is set to the color space in the attributes, so all enum values that get past the IDL must be supported for 2D canvas and for ImageData. (Also, the proposal document advertised a feature detection interface, which was nixed in WhatWG review).
If the browser doesn't support this feature at all, then there will be no colorSpace entry in CanvasRenderingContext2DSettings, so the feature may be detected through that mechanism.
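So feature detection might look roughly like this (a sketch):

```js
const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });
// In user agents without the feature, the unknown dictionary member is ignored
// and the returned settings simply lack a colorSpace entry.
const supportsCanvasColorSpace = 'colorSpace' in ctx.getContextAttributes();
```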
Thank you again for the quick feedback!
Discussed
Jun 14, 2021 (See Github)
Assigned to next week
Comment by @ccameron-chromium Jun 18, 2021 (See Github)
I've updated the listed explainer to reference this document. This is the best place to look for a concise description of the formalizations and changes being proposed in this feature.
This is a revised and streamlined version of the initial proposal, reflecting the changes made during WhatWG review.
Comment by @LeaVerou Jun 19, 2021 (See Github)
> What happens when someone paints an e.g. `rec2020` color on an sRGB or Display P3 canvas? Is the result gamut mapped? If so, how?
>
> The input colors (like other inputs) are converted from the input's color space to the canvas's color space using relative colorimetric mapping, which is the "don't do anything fancy" mapping. In your example, the rec2020 color can always be transformed to some pixel in sRGB, but that pixel may have RGB values outside of the 0-to-1 interval. Relative colorimetric intent just clamps the individual color values to 0-to-1.
>
> This is what happens today in all browsers if the browser, e.g, loads a rec2020 image that uses the full gamut and attempts to display it on a less capable monitor.
Relative Colorimetric is essentially a set of rules for how gamut mapping should happen, not a gamut mapping algorithm. The per-component clamping you describe does conform to RC, but is a very poor implementation of it. E.g. consider the sRGB color `rgb(100% 200% 400%)`. Using per-component clamping, it would just be converted to achromatic white.
That said, Canvas is not the place to define how gamut mapping happens in the Web platform, and there are plans to flesh this out more in CSS Color 4. Meanwhile, please avoid prose that renders implementations non-conformant if they don’t use naïve clamping in the spec (in case there was any).
But beyond how gamut mapping happens, there's also the question of whether it happens. The current behavior of restricting everything on a canvas to the gamut of the color space it's defined on is reasonable. Using the `colorSpace` argument to just specify a working color space, and allowing both in-gamut and out-of-gamut colors on the canvas, also seems reasonable. What was the rationale of going with the first, rather than the second, option? Did you find it satisfies more use cases?
Comment by @ccameron-chromium Jun 19, 2021 (See Github)
> Relative Colorimetric is essentially a set of rules for how gamut mapping should happen, not a gamut mapping algorithm. The per-component clamping you describe does conform to RC, but is a very poor implementation of it. E.g. consider the sRGB color `rgb(100% 200% 400%)`. Using per-component clamping, it would just be converted to achromatic white.
Yes, good point. And yes, particularly when extended into HDR, per-component clamping can create pretty poor-looking results.
> That said, Canvas is not the place to define how gamut mapping happens in the Web platform, and there are plans to flesh this out more in CSS Color 4. Meanwhile, please avoid prose that renders implementations non-conformant if they don’t use naïve clamping in the spec (in case there was any).
Thanks for the heads-up. We can be softer on the language with respect to the particular gamut mapping algorithm in the canvas section (I had been trying to get that variable nailed down, but if that's getting taken care of in a more central effort, that would be better).
FYI, a related topic, HDR tonemapping (mapping from a larger luminance+chrominance range down to a narrower one), comes up periodically in the ColorWeb CG HDR discussions.
> But beyond how gamut mapping happens, there's also the question of whether it happens. The current behavior of restricting everything on a canvas to the gamut of the color space it's defined on is reasonable. Using the `colorSpace` argument to just specify a working color space, and allowing both in-gamut and out-of-gamut colors on the canvas, also seems reasonable. What was the rationale of going with the first, rather than the second, option? Did you find it satisfies more use cases?
With respect to Display P3, most (perhaps all?) users and use cases we encountered wanted the gamut capability of Display P3, rather than having Display P3 as a working space (they didn't mind having Display P3 as the working space -- it's "sRGB-like" enough that it comes with no surprises compared to the default behavior, but that wasn't the part of the feature they were most after).
Allowing in-gamut and out-of-gamut colors requires having >8 bits per pixel of storage. That isn't much for a moderately-powerful desktop or laptop, but it is quite a burden (especially with respect to power consumption) for small battery-powered devices, and so most (I'm again tempted to say all?) users that I've encountered wanted Display P3 with 8 bits per pixel.
(The rest of this might be getting a bit ramble-y, but it also might be some useful background on how we ended up where we did):
In some of the very early versions of the canvas work we tried to separate the working color space from the storage color space. That ended up becoming unwieldy, and we discarded it -- it ended up being much more straightforward to have the storage and working space be the same. In practice, having a separate working space meant having an additional pass using that working space as a storage space, and so having the two not match ended up being downside-only. (There was one sort-of-exception, sRGB framebuffer encoding, which is useful for physically based rendering engines, but is very tightly tied to hardware texture/renderbuffer formats, and so we ended up moving it to a separate WebGL change, and those formats will also eventually find their way to WebGPU's GPUSwapChainDescriptor).
We also discussed having some way to automatically allow arbitrary-gamut content that "just works", without having to specify any additional parameters, and without any performance penalties. One of the ideas was to automatically detect out-of-gamut inputs and upgrade the canvas. This one was discarded because it would add performance cliffs, would have a complicated implementation, and might not be what an application wants (e.g, if just one pixel is one bit outside of the gamut, they may prefer it to be clipped rather than pay a cost). Another idea could be to use the output display device's color space, but that would then become a fingerprinting vector (and would also have the issue that the output display device is a moving target).
Discussed
Jun 21, 2021 (See Github)
Lea: left a few comments -
[discussion of p3 color spaces]
[reviewing comments]
Lea: pretty satisfied with all of his answers. One small thing - getImageData returns sRGB data even with a p3 canvas - don't think that's a good idea but he said this has already been fixed. One of the concerns I have - consistency with getContext. I guess it's fine. you set the color space when getContext is first called - subsequent calls return context already created even if the options are different - he said that's consistent with existing context attributes like alpha. Not happy 8bit by default forever - but you can get away with 8bits in p3 though - there will be worse banding than sRGB but not super bad - so i think that's OK.
Lea: colorspace attribute mutable ? arguments pro and against. right now it's immutable. if you wanted to extend to paint API we can't. however we don't really need it there.
Rossen: need to further review. They would have to recreate only the predefined color space and 2d settings context?
Lea: you'd need to create a new canvas and paint the previous canvas on the new canvas.
Rossen: from the use case - how often do you change color space... either user initiated - moving to a different device - or changing the color settings...
Lea: It's unrelated to the output device. You could be working on a P3 canvas, on an sRGB device. You obviously wouldn't be able to see the non-sRGB colors in that case, but their coordinates would be unchanged.
Rossen: trying to identify the use case - when you would want to change the color space - it seems like it's pretty rare. based on that the immutable principle will reduce complexity downstream.
Lea: Agreed, also it can be made mutable in the future if there are enough use cases.
Yves: if you're changing the color space you're better off creating a new canvas anyway...
Rossen: probably cheaper from a compute point of view.
Peter: could happen a lot by accident... e.g. some library might assume sRGB...
Lea: that's what my concern was... You get back a P3 context not expecting it.
Peter: might be safer if it throws an exception ... If I create a canvas in p3 and get rendering context in srgb - it could give me a context that is srgb and does the math - or it converts the backing image store.. losing data.
Peter: mutable would be problematic. could potentially destroy data. returning an unexpected color space could cause you to draw wrong color.
Lea: silent failure worries me.
Rossen: a long transition for libraries to get onboard with color management.
Peter: safer: getContext gives you whatever space you asked for....
Lea: we don't want getContext to change what's displayed on the canvas - it should not be destructive.
Rossen: but how would the conversion happen?
Dan: what should we ask them.
Rossen: if the getContext has the explicit color space - create a context and call getContext 2d - that assumes sRGB. From this usage point of view you can throw - and teach using a stick.
Lea: he made the point that browsers who don't know about the feature would not throw.
Rossen: but for them it will be sRGB anyway... Once the feature is there and supported - you expect the library to respect it or you don't support it at all which is fine because everything is in sRGB... So teach [libraries] with a stick or carrot? carrot would be don't throw an exception but do some kind of magic that converts the colors...
Lea: what about the argument that this is the way alpha works already, so consistency. If you call getContext() with alpha: true and you have previously called getContext() with alpha: false on the same canvas, you'll silently get a context without alpha. No errors thrown. 2nd: we need to consider the case of 2 libraries working on the same context object... if the canvas changes the 1st library wouldn't know that - that would be messy. Breaks expectations of existing code...
Tess: In the current design the 2nd call to getContext returns the same context - we're concerned about the case where the 2nd caller doesn't know about new feature. Is it inspectable?
Lea: queryable using getContextAttributes - it's unclear to me whether an unsupported color space becomes sRGB or ...
Tess: I think Peter's solution of returning a new context that does the conversion - a proxy - that would break the fewest sites. But color space conversion, though not computationally expensive, is non-zero. Ethical Web Principle of sustainability - making lots of extra calculation is not great. Libraries don't get updated so throwing sucks. Visible bugs - color space conversion errors - visible bugs might cause someone to update the library. Library authors should check the returned color space - best practice.
Dan: that's where they're headed anyway...
[yes]
Dan: let's put it to proposed close and close at the plenary if appropriate.
ACTION: Lea to draft comment before the Plenary
Proposed comment:
Hi @ccameron-chromium,
We reviewed this proposal this week and overall we are happy with the direction. We were initially troubled by some of the design decisions, but after discussing them further, we came to the same conclusions.
Therefore, we are going to close this issue. We are looking forward to seeing this feature evolve further
Comment by @LeaVerou Jun 21, 2021 (See Github)
> Following alpha's pattern, it's query-able using getContextAttributes (it will be in the returned CanvasRenderingContext2DSettings).
>
> When creating a context, the color space for the context is set to the color space in the attributes, so all enum values that get past the IDL must be supported for 2D canvas and for ImageData. (Also, the proposal document advertised a feature detection interface, which was nixed in WhatWG review).
Just noticed this -- so if I'm reading this right, the colorSpace from the attributes will be `srgb` in that case?
Comment by @ccameron-chromium Jun 21, 2021 (See Github)
> > Following alpha's pattern, it's query-able using getContextAttributes (it will be in the returned CanvasRenderingContext2DSettings). When creating a context, the color space for the context is set to the color space in the attributes, so all enum values that get past the IDL must be supported for 2D canvas and for ImageData. (Also, the proposal document advertised a feature detection interface, which was nixed in WhatWG review).
>
> Just noticed this -- so if I'm reading this right, the colorSpace from the attributes will be `srgb` in that case?
Sorry, I might not have understood the context of the question (let me know if I miss it again here!). WRT the question "in unsupported color spaces will this attribute be `"srgb"`?", there can be two meanings of "unsupported":

- A string that isn't a valid `PredefinedColorSpace`. This will throw an invalid enum exception.
- A string that is a valid `PredefinedColorSpace`, but isn't supported by the implementation. The intent of the language of the spec is for this category to not exist (e.g, there is no "supported versus not" in the context creation algorithm). In some earlier versions of the WhatWG PR, there was this category of "a valid but not supported color space that falls back to sRGB", but this was considered too complicated (see discussion here for more details).

There's also the case of a user agent that hasn't implemented this feature. In that case, there will be no `colorSpace` entry in `CanvasRenderingContext2DSettings`.
Comment by @LeaVerou Jun 23, 2021 (See Github)
To clarify my question further:
I suppose user agents will implement this proposal by first implementing the `srgb` and `display-p3` color spaces. However, you plan to eventually extend this enum with more values, so there will be a transitional period where authors may try to use e.g. `colorSpace: "rec2020"` in user agents that only support `srgb` and `display-p3`.
In that case, if I'm reading your message correctly, it will throw an invalid enum exception?
Comment by @annevk Jun 23, 2021 (See Github)
Yeah, that's correct.
Comment by @LeaVerou Jun 23, 2021 (See Github)
> Yeah, that's correct.

Thank you. Is it correct to assume it would throw with the same error in subsequent calls to `getContext()`?
I.e.

```js
let ctx = canvas.getContext("2d", {colorSpace: "display-p3" });
let ctx2 = canvas.getContext("2d", {colorSpace: "flugelhorn" }); // throws?
```
Another question that came up in a breakout this week. I do see that some examples in the explainer use a media query to decide which color space to use. I assume however that the canvas color space and the display device color space are entirely decoupled, and therefore it's entirely possible to work on a P3 canvas on a less capable (e.g. sRGB) display device. You would obviously not see the non-sRGB colors, but the underlying numbers would be unaffected. Is my assumption correct?
Comment by @annevk Jun 23, 2021 (See Github)
Yeah (IDL enum validation happens prior to executing the method steps). And yeah, that's correct, the canvas color space and computations are its own thing and not impacted by any kind of global state.
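Given that, one way a page might cope during the transitional period described above is a try/catch fallback (a sketch using a not-yet-supported value):

```js
let ctx;
try {
  // Throws a TypeError in user agents whose PredefinedColorSpace enum does not
  // yet include "rec2020"; since validation happens before the method steps,
  // no context is created by the failed call.
  ctx = canvas.getContext('2d', { colorSpace: 'rec2020' });
} catch (e) {
  ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });
}
```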
Comment by @LeaVerou Jun 23, 2021 (See Github)
Hi @ccameron-chromium,
We reviewed this proposal this week and overall we are happy with the direction. We were initially troubled by some of the design decisions, but after discussing them further, we came to the same conclusions.
Therefore, we are going to close this issue. We are looking forward to seeing this feature evolve further.
Comment by @ccameron-chromium Jun 23, 2021 (See Github)
Thank you for the review! Please feel free to reach out if there are any follow-up questions or related topics.
Opened Jun 10, 2021
I'm requesting a TAG review of Canvas 2D color management.
This was developed in the W3C's ColorWeb CG, and has been reviewed and updated in WhatWG review. I would like TAG to put their eyes on it too!
Summary: This formalizes the convention of 2D canvases being in the sRGB color space by default, that input content be converted to the 2D canvas's color space when drawing, and that "untagged" content is to be interpreted as sRGB. This adds a parameter whereby a 2D canvas can specify a different color space (with Display P3 being the only value exposed so far). Similarly, this formalizes that ImageData is sRGB by default, and adds a parameter to specify its color space.
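For context, the basic shape of the API being formalized (a sketch):

```js
// Default remains sRGB; a 2D canvas can opt into Display P3.
const ctx = canvas.getContext('2d', { colorSpace: 'display-p3' });

// ImageData defaults to sRGB and can likewise specify its color space.
const data = new ImageData(100, 100, { colorSpace: 'display-p3' });
```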
Further details:
We'd prefer the TAG provide feedback as: 💬 leave review feedback as a comment in this issue and @-notify ccameron-chromium