#977: Accessibility Conformance Testing (ACT) Rules Format 1.1

Opened Jul 24, 2024

Hello, TAG!

I'm requesting a TAG review of Accessibility Conformance Testing (ACT) Rules Format 1.1.

The purpose of the Accessibility Conformance Testing (ACT) effort is to establish and document rules for testing the conformance of web content to accessibility standards, such as Web Content Accessibility Guidelines (WCAG). These test rules address automated, semi-automated, and manual testing. ACT makes accessibility testing more transparent, and thus reduces confusion caused by different interpretations of accessibility guidelines.

Further details:

  • I have reviewed the TAG's Web Platform Design Principles
  • Relevant time constraints or deadlines: Ideally you can provide comments within two months from now.
  • The group where the work on this specification is currently being done: Accessibility Conformance Testing Task Force
  • The group where standardization of this work is intended to be done (if current group is a community group or other incubation venue): Accessibility Guidelines Working Group
  • Major unresolved issues with or opposition to this specification:
  • This work is being funded by:

You should also know that...

[please tell us anything you think is relevant to this review]

Discussions

Discussed Oct 7, 2024 (See Github)

Matthew: APA is looking into this, haven't finished yet. No concerns so far.

Discussed Oct 21, 2024 (See Github)

Matthew: this is an FPWD of 1.1... changes are highlighted from 1.0. APA is looking at this. From an architectural perspective I don't think we should be concerned about it. Parts do raise interesting questions... one thing some in the TAG might have experience with. A few small questions that APA will ask for clarifications on. A number of people from different a11y consultancies and other places are involved - a good mix of people. I think from a TAG PoV it's good.

... One thing that is related: got me thinking - a lot of a11y is subjective and dependent on context. But some is very mechanical. All of these rules are plain-language rules... but to express that e.g. some custom control has been implemented correctly, there are ways to express that... Could we represent those by patterns? Just wondering if there is anything people have come across that expresses the relationship between DOM structures...

Martin: I think selectors would do it... :has() is a big change.

Matthew: I think the question is how applicable ... sort of tangential to what ACT is doing...

Matthew to draft a closing comment and we close by end of week

Discussed Oct 28, 2024 (See Github)

Matthew: Jeffrey & I were discussing... we do think there might be something worth talking about here... Automatable? I've just seen some additional comments.

Jeffrey: this specification is not a technical spec.. It's a guide for how to write specs. I feel like this should not be on the REC track... They don't have a format...

Dan: a guidelines doc...

Yves: WCAG is on REC...

Jeffrey: it's saying "here's how you write a rule to test a web page against"

Matthew: Some parts of WCAG can be tested mechanically .. we have a plurality of different rule sets ... some proprietary. This is an attempt to harmonize... This document is a tech specification for writing those rules... Perhaps it's a novel sort of document for the REC track, though it does have MUST, SHOULD, MAY... but you couldn't write code to lint these rules as they are allowed to be written in file formats that lack structure (though in practice, many seem to be written in Markdown, which opens the possibility for linting and migration between rule format versions)... I do understand why it's REC track... but also it could be done in a different way. It's a good piece of work.

we discuss potential comment - Matthew to work with Jeffrey

Comment by @matatk Nov 25, 2024 (See Github)

Thank you for sending us this review. We see a few different ways that your work could be applied and tested. As this document is on the REC track, certain criteria need to be met in order for it to advance. The main one is having "at least two independent implementations" of the thing being specified.

There are a number of possible avenues here—we're wondering which you are intending for this document. Some possibilities:

  • The status quo: this document is specifying a format for writing ACT rules—in which case, the ACT rules documents themselves would count as implementations. We encourage you to ensure that the specification is precise enough to allow tools to be written that could verify, or 'lint', ACT rules against this spec—these linting tools would then form a test suite that ACT Rules authors could use to verify the rules documents they're writing.

  • We also encourage you to publish a REC track document that aggregates the ACT rules as written by the CG—in which case the requirement for them to mature on the REC track would be that accessibility testing tools embodying (or supporting) the rules would be required.

  • If you have a goal that manual testing tools should be able to load ACT rules, for humans to use in test procedures (which we encourage) then implementations would be accessibility testing tools that support the loading of ACT rules.

  • Some of the ACT rules could be checked entirely mechanically. It's possible that ways could be developed to achieve this that could be used (by a machine) directly, as part of projects such as Playwright, Cypress, and/or Web Platform Tests (which has a somewhat different focus). This would allow the rules (or rather the mechanical checks underpinning them) to be more widely adopted, so we encourage you to investigate this approach in future.

Discussed Feb 3, 2025 (See Github)

Matthew: Jeffrey and I came up with a comment... Framed their problem where any of this should be normative, if it will solve particular problems, and could the rules be integrated into mainstream testing projects (headless and the like), and we haven't heard back. We didn't say "we want to hear back from you", but expected feedback.

We're not concerned; it's a good thing they're doing this. There is a lot of inconsistency in interpretations of WCAG etc. Consensus here would be good. But the comment we made was about getting the maximum possible impact from it. Make sure they were pursuing the right track in W3C. We weren't asking them to do something, just to get their ideas for where they wanted to go.

Hadley: should we explicitly ask?

Matthew: I'll draft something. Wouldn't be terrible to just close it, but I'm curious about the plans and whether we could help. Will prod and will ask Jeffrey to help.

Comment by @matatk Feb 6, 2025 (See Github)

Hi @daniel-montalvo, we are following up on the ACT Rules review this week. We were wondering if any of the avenues suggested above align with the group's plans for this work? If there is any advice on how to proceed along those directions that we can offer, please let us know.

Discussed Feb 10, 2025 (See Github)

Matthew: no update on this... Last week we talked about it and realized we didn't hear anything back... I explicitly asked them 4 days ago.. no reply.

we will circle back to it at the plenary this week

Comment by @daniel-montalvo Feb 21, 2025 (See Github)

Sorry @matatk this slipped through.

I'll respond below.

  • The status quo: this document is specifying a format for writing ACT rules—in which case, the ACT rules documents themselves would count as implementations. We encourage you to ensure that the specification is precise enough to allow tools to be written that could verify, or 'lint', ACT rules against this spec—these linting tools would then form a test suite that ACT Rules authors could use to verify the rules documents they're writing.

In our view, commonly used accessibility checkers are already doing this job, although it would be desirable for more vendors to join.

Currently, the process is as follows:

  • The rules are considered "implementations" of this format specification.
  • Everybody can write rules that conform with the format specification.
  • There is a set of rules written by the ACT Task Force. Some of them have already been approved by AGWG, others are proposed by the Task Force and will eventually be approved by the Working Group.

Although implementation guidance is always welcome, wouldn't it be out of scope for the rules format document itself?

  • We also encourage you to publish a REC track document that aggregates the ACT rules as written by the CG—in which case the requirement for them to mature on the REC track would be that accessibility testing tools embodying (or supporting) the rules would be required.

I am not clear on this point. Doesn't the set of rules mentioned above cover this already? In addition, there are also Implementation pages that cover how different automated, semi-automated, and manual tools have indeed implemented the rule set.

It's true these are not on the REC track, because back in the day it was decided that they would be more impactful if published as resources on the WAI website.

  • If you have a goal that manual testing tools should be able to load ACT rules, for humans to use in test procedures (which we encourage) then implementations would be accessibility testing tools that support the loading of ACT rules.
  • Some of the ACT rules could be checked entirely mechanically. It's possible that ways could be developed to achieve this that could be used (by a machine) directly, as part of projects such as Playwright, Cypress, and/or Web Platform Tests (which has a somewhat different focus). This would allow the rules (or rather the mechanical checks underpinning them) to be more widely adopted, so we encourage you to investigate this approach in future.
  • This is certainly something that could be developed in the future. We have not yet explored how the rules could be used in the context of Playwright or Cypress.
  • The group is currently exploring how test cases within rules can contribute to the current accessibility Web Platform Tests. For the moment it's mostly automated, manual, and semi-automated accessibility checkers which are implementing the test cases in the rules.
  • There is also work on manual rules (rules that automated checkers cannot currently check by themselves), but these are not yet approved by the Task Force, hence not published as Proposed to the AGWG.

The Task Force would appreciate any comments if you think these additional pages and resources still don't cover the points you raise.

Discussed Mar 17, 2025 (See Github)

Matthew: If the rules themselves are implementations of this document, then our recommendations about how to make the format more precise would kick in?

Jeffrey: A linter to check that the rules comply with the spec.

Matthew: I can comment. We knew in general it wasn't precisely defined.

Jeffrey: We should recommend to pick one but perhaps don't need to object. All of the rules are in one format so no reason not to standardise on that.

Matthew: IIRC, the headings/values weren't specified. Can pick out some of those examples. They can ask APA for input and we (TAG) can help out.

Jeffrey: Sounds reasonable.

Matthew: They're looking at WPT. Work on manual rules as well. Some are not objective. All seem reasonable.

Matthew: Is our preference to close this? Mention examples of things to be more precise?

Jeffrey: Do you think we'd be unsatisfied because the spec isn't tight enough?

Matthew: Technically I think we are?

Jeffrey: Ask to iterate.

Matthew: Ack.

Jeffrey: I'm happy to review. Anyone else can review too if they like.

Matthew posted comment

Comment by @matatk Mar 17, 2025 (See Github)

Thanks for your detailed and helpful reply @daniel-montalvo - this gives us a clear picture of the goals, which in turn gives rise to some things we'd like you to consider.

As the rules are implementations of the spec (the rules format), it's important that the format be made sufficiently precise that the rules themselves can be automatically checked against it. We imagine a linting program would be very helpful to those in the community writing the rules. Here are a few examples of how that could be achieved:

  • Prescribing a single format in which the rules are written would simplify the process of checking them, and would mean that probably only one linting program would need to be written.

  • Here are some examples of things that would need to be tightened up in the spec in order to make automated linting possible (this isn't an exhaustive list, but hopefully provides a helpful starting point):

    1. The exact names for section headings.

    2. Possibly also the ordering for section headings (though this would be more to keep output consistent for readers).

    3. Specifying the exact strings to use as enum values. E.g. for rule types, what is the exact text string that indicates whether a rule is atomic or composite?

    4. Specifying the exact structure - and permitted values - of the accessibility requirements mapping - i.e. how are the 5 fields identified, and what are the allowed values (exact text strings)?

    5. Could expectations (example expectations section) be more tightly specified, such that they could be (in future) loaded into an automated test framework? It seems that this would be possible, and could have significant benefits later on.
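To illustrate the kind of linting the examples above point toward, here is a minimal hypothetical sketch in Python. The heading names, ordering, and `rule_type` values are assumptions for illustration only - the real lists would come from a tightened version of the rules format spec:

```python
import re

# Hypothetical requirements; the actual section names and enum values
# would be taken from a more precisely specified ACT Rules Format.
REQUIRED_HEADINGS = ["Applicability", "Expectations", "Assumptions",
                     "Accessibility Support", "Test Cases"]
RULE_TYPES = {"atomic", "composite"}

def lint_rule(markdown: str) -> list[str]:
    """Return a list of problems found in a Markdown ACT rule document."""
    problems = []
    headings = re.findall(r"^#{2,3}\s+(.+)$", markdown, flags=re.MULTILINE)
    # Check that required section headings are present, in the prescribed order.
    positions = []
    for name in REQUIRED_HEADINGS:
        if name not in headings:
            problems.append(f"missing section: {name}")
        else:
            positions.append(headings.index(name))
    if positions != sorted(positions):
        problems.append("sections are out of order")
    # Check that the rule type is one of the exact permitted strings.
    m = re.search(r"^rule_type:\s*(\S+)$", markdown, flags=re.MULTILINE)
    if not m or m.group(1) not in RULE_TYPES:
        problems.append("rule_type must be one of: " + ", ".join(sorted(RULE_TYPES)))
    return problems
```

A checker along these lines could run in the rules repository's CI, so that authors get immediate feedback when a rule document drifts from the format.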

Your explanation as to why you are not pursuing REC track for the rules themselves makes sense. You could always pursue it later if it does make sense for the group.

Regarding the possibility of integrating the rules into general testing tools later, including WPT, we're pleased that you're exploring this. Let us know if we can help.

We hope this helps clarify things - we are keen to hear your thoughts, and provide any further clarification or advice that we can on this, so we will keep this thread open.

Comment by @daniel-montalvo Apr 14, 2025 (See Github)

Thanks for your suggestions about this, @matatk.

This is indeed a great idea. We already have a good number of tests in the ACT-rules.github.io repo. Going with strings only as a way to make these normative may raise some i18n concerns. It seems the only way to address that would be to define the normative content semantically, for example through an RDF schema, which is quite an undertaking given current group resources.

We are assuming this is not a blocker for the current 1.1 version. This overall sounds like a good idea to pick up in a future ACT Rules Format version, and we’d be willing to explore potential approaches for this when the time comes for a 2.0 version.

Comment by @WilcoFiers May 13, 2025 (See Github)

@matatk Have you had a chance to look at Daniel's response? We would like to have confirmation that TAG is on board with this approach before we proceed to CR.

Discussed May 19, 2025 (See Github)

Jeffrey: we sent them things .. they replied .. we just need to tell them it's not a blocker...

Matthew: Daniel the staff contact for this group asked me to chat - and I clarified the meaning of our comment. They were concerned about making it harder to contribute rules but they understood the value of proper linting, etc... If they want to go to Rec ... they need to be more precise. I don't think it's a problem if it's a note and not be precise...

Jeffrey: I think we can tell them it's OK to go to CR.

Matthew: I don't want to get in the way of it ... but the consensus we had is that if they are doing a format then they need to specify it more tightly ... and what are the acceptance tests for meeting CR.

Matthew: ... are we happy for our feedback to be included in 2.0?

Jeffrey: we could say "we think this should either be a Note or have automated tests for the rules. You don't have to follow our advice, and you can ask for AC review over our concerns".

Dan: we could give a "satisfied with concerns" <- where the concerns are that they need more tests.

Matthew: they said they would explore it for 2.0 so I'm happy with that.

Jeffrey: who can draft?

Matthew: I can draft - but I'm away so don't let me block.

Jeffrey to pick up after Matthew drafts...

Comment by @jyasskin May 22, 2025 (See Github)

We talked about this in our breakout this week: We are supportive of this work, and we are happy for the 1.1 version to go to CR.

We're pleased that you're keen to explore the specificity issues that we raised in a future revision (ideally the next after 1.1). As the spec you're writing is about the format of the rules, the rule documents themselves don't constitute tests of the spec: a linter that can check any given rule document against the format would constitute a test for the format's spec.

We agree that it's important to ensure there are no internationalization concerns. There are ways this could be achieved whilst still fulfilling the need to make the format's spec more precise, and we'd be happy to advise on that later.

We're closing this review with the satisfied with concerns label in order to confirm that we support the work you're doing, and we hope to work with you in future to address the specificity issues we raised.

Thank you again for your review request, and for the important work your group is doing.