So there is the magic string [no important files changed] we can put in commit messages to avoid rerunning tests unnecessarily. (post announcing the feature) One concern that has been raised is that adding this flag is easily forgotten. I can confirm this is a problem: I have personally caused all submissions on the Rust track to be retested unnecessarily. Personally, I'm also always anxious about mistyping it, so I have to go around searching for some place where I can copy-paste the string.
@MatthijsBlom suggested automating part of this process. I want to pick that suggestion up, so I made a little proof-of-concept workflow on the Rust track.
Personally I'd like the workflow to post a comment for all changes that would trigger a rerun of tests, not just when tests are updated.
The comment can then simply point out what to consider: adding [no important files changed], or, if a rerun is preferred, limiting the number of exercises being updated in a single PR.
Yes, agreed. I'm not 100% confident how to determine that. The documentation has some material on this, and I believe it has to do with the "files" property in config.json. (Both the track config and the exercise configs have it, though.)
The point of that would be to batch the test reruns? So they don't all happen at once and clog up the system?
Yes, if I understand the docs correctly, a rerun is triggered if any of the files listed in files.editor, files.test or files.invalidator in the exercise's .meta/config.json have been touched.
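In other words, the check could be sketched like this. (This is just to illustrate the logic; the config contents and file paths below are hypothetical, and the field semantics are my reading of the docs, not the actual implementation.)

```python
import json

# Hypothetical .meta/config.json contents for one exercise.
config = json.loads("""
{
  "files": {
    "solution": ["src/lib.rs"],
    "test": ["tests/anagram.rs"],
    "editor": [],
    "invalidator": ["Cargo.toml"]
  }
}
""")

def triggers_rerun(changed_files, config):
    """Return True if any changed file is listed in files.test,
    files.editor or files.invalidator (i.e. would trigger a rerun)."""
    files = config["files"]
    invalidating = set(
        files.get("test", [])
        + files.get("editor", [])
        + files.get("invalidator", [])
    )
    return any(path in invalidating for path in changed_files)

print(triggers_rerun(["tests/anagram.rs"], config))       # test file touched
print(triggers_rerun([".docs/instructions.md"], config))  # docs only
```

A docs-only change would not trigger a rerun under this reading, while touching a test or invalidator file would.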
Yes, that's the point indeed.
One thing that I'm unsure of is when we actually want solutions to be re-tested automatically after making changes to a track. What is a typical use-case where automatically rerunning tests is preferred @iHiD/@ErikSchierboom?
I think rerunning tests is the default. The website shows community solutions based on whether or not they pass the latest tests. Assume the test files change because a test was added. We want the tests to be rerun so we can show the community solutions which still pass the new test. The ones that don't pass the new test should not be shown, because presumably those were incorrect implementations all along.
We only want to override this and avoid test reruns if we know that the testing logic did not change, e.g. running a formatter over test files where the changes are only whitespace. Or in my case, I'm changing the test files of all Rust exercises because I'm incrementally applying the test case generator I wrote to all exercises. Since that usually doesn't change the test logic, assuming the previous test cases were already up-to-date with problem-specifications, we don't want to rerun the tests in this case.
Given that an exercise's .meta/config.json is authoritative for triggering test reruns, I guess it should be quite feasible to make the workflow language-track independent. Maybe this could then be upstreamed to the repo exercism/github-actions, and maintainers who would like to enable this automation can copy a very small workflow file that simply uses this action into their track repo.
The way it works is that when we sync an exercise, we calculate a hash of its "important" files. These files consist of:
.docs/instructions.md
.docs/instructions.append.md
.docs/introduction.md
.docs/introduction.append.md
.docs/hints.md
Any files in the .files.test[] array in the exercise's .meta/config.json file
Any files in the .files.invalidator[] array in the exercise's .meta/config.json file
Any files in the .files.editor[] array in the exercise's .meta/config.json file
The primary goal of this "important files hash" is to determine whether to present the student with the option to update an exercise to the latest version.
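The idea of such a hash could be sketched roughly like this. (This is not Exercism's actual implementation; the hashing scheme and the sample file contents are assumptions, purely to show that any change to an important file yields a different hash.)

```python
import hashlib

def important_files_hash(file_contents):
    """Hash the paths and contents of an exercise's "important" files.
    file_contents: dict mapping relative path -> file contents (str)."""
    digest = hashlib.sha256()
    for path in sorted(file_contents):  # sort for a stable, order-independent hash
        digest.update(path.encode())
        digest.update(file_contents[path].encode())
    return digest.hexdigest()

# Hypothetical before/after states of an exercise's important files.
before = important_files_hash({
    ".docs/instructions.md": "Detect anagrams.",
    "tests/anagram.rs": "#[test] fn basic() {}",
})
after = important_files_hash({
    ".docs/instructions.md": "Detect anagrams, with examples.",  # docs edited
    "tests/anagram.rs": "#[test] fn basic() {}",
})
print(before != after)  # any important-file change produces a new hash
```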
When determining whether or not to re-run the tests, we only consider:
Any files in the .files.test[] array in the exercise's .meta/config.json file
Any files in the .files.invalidator[] array in the exercise's .meta/config.json file
Any files in the .files.editor[] array in the exercise's .meta/config.json file
We consider just these files as they have the potential to affect the test outcome (which the docs should never do). Now if a maintainer knows that they're only making changes to these files that won't influence the test outcome (e.g. by only changing the formatting of the files), the [no important files changed] marker can be added to the commit.
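Putting the two rules together, the rerun decision might look like this. (A sketch of my reading of the behaviour, not the actual implementation; the function name and inputs are hypothetical.)

```python
MARKER = "[no important files changed]"

def should_rerun(changed_rerun_relevant_files, commit_message):
    """Decide whether to re-test solutions: a change to a test/editor/
    invalidator file normally triggers a rerun, unless the merge commit
    message carries the magic marker."""
    if MARKER in commit_message:
        return False  # maintainer asserted the test logic didn't change
    return len(changed_rerun_relevant_files) > 0

print(should_rerun(["tests/anagram.rs"], "Regenerate tests"))  # True
print(should_rerun(["tests/anagram.rs"],
                   "Reformat tests\n\n[no important files changed]"))  # False
```

Note that the marker must end up in the commit that lands on the default branch, which is why the squash-merge commit message matters later in this thread.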
Alright, I will derp around on the Rust track to make the workflow based on the invalidating files specified in .meta/config.json and post again here when I have presentable results.
Little devil pops up over shoulder: "Write an exercise with tests that read from the documentation files!"
I now have a version which I think should be language independent. It reads files.test, files.invalidator and files.editor from an exercise's config file. I was not yet able to test invalidator and editor, as the Rust track generally doesn't have these two fields.
The script also assumes exercises are always located at exercises/{practice,concept}/$SLUG. I believe this is the case for all language tracks, right?
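That path assumption boils down to something like the following (a hypothetical helper, not the actual script):

```python
import re

# Assumes the exercises/{practice,concept}/$SLUG layout mentioned above.
EXERCISE_PATH = re.compile(r"^exercises/(practice|concept)/([^/]+)/")

def exercise_slug(changed_path):
    """Return (kind, slug) for a path inside an exercise, else None."""
    match = EXERCISE_PATH.match(changed_path)
    return (match.group(1), match.group(2)) if match else None

print(exercise_slug("exercises/practice/anagram/tests/anagram.rs"))
print(exercise_slug("docs/SNIPPET.txt"))  # not inside an exercise -> None
```

If a track deviated from this layout, the mapping from changed files to exercises (and thus to .meta/config.json) would break, which is why the question matters.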
The following PRs are only meant as a demonstration and a basis for upstream work in the exercism/github-actions repo.
I have not done so, but if someone wanted to produce a git "pre-commit" hook that adds this to messages based on the requirements, that might be a thing.
I'm not sure that would be the right place. (I assume by "adds this to messages" you mean commit messages?) The magic string needs to be in the commit message of the squash commit when the PR is merged. So I think the PR itself on GitHub is the right place to have the automation. It's also not possible to apply the magic string automatically, because really only a human can determine if the test file has changed in a way that warrants rerunning tests or not.
So I think the best we can do is to remind people to consider adding the string themselves before they merge.