In the Java track I occasionally have to remind people that they can’t introduce new dependencies in an exercise because Exercism’s test runner runs in an offline environment, so it doesn’t support any third-party libraries not already shipped with the test runner image.
It just occurred to me that we might be able to make this process less error-prone by re-using a track’s test runner as a CI runner on that track’s repository, so that compatibility between an exercise and the Exercism test runner is verified at build time instead of relying on student feedback.
Are there maintainers out there who have had a similar idea or think this would be a good idea? @ErikSchierboom what do you think?
8th does this. There’s some configuration needed for the workflow that I think Erik needs to do.
Thanks! I had a look and that is exactly what I had in mind.
Actually because the test runner interface is shared across tracks the test script used in 8th is 99% reusable for other tracks. It might even be possible to extract the whole setup into a shared GitHub Actions workflow by the looks of it. Might be worth checking out!
Well, 8th was special in that it requires docker build args, which meant that I had to do some secrets stuff. I think in general it should “just work.” I’m almost certain there are more tracks that do this.
edit: I remembered, Unison also does this: https://github.com/exercism/unison/blob/main/bin/test
Thanks for the feedback! Based on all of the examples, I was able to update the Java track to use a similar approach.
In the meantime I’ve been playing around with building a custom GitHub Action inspired by all of this: GitHub - sanderploegsma/exercism-test-runner-action. Apart from it being a lot of fun and adding cool stuff like job annotations and summaries, I also realized that it might be possible to use this setup the other way around:
The test runner repositories could clone their respective track repositories in CI and verify that no regression is introduced when changes are made to the test runner itself.
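For what it’s worth, the annotations I mentioned fall out of the runner’s `results.json` almost directly: its `tests` entries (each with a `name`, `status`, and optional `message`) map onto GitHub’s `::error` workflow commands. A rough sketch, with a made-up sample result for illustration:

```python
import json


def annotations(results: dict) -> list[str]:
    """Turn failing tests from a test runner's results.json into
    GitHub Actions workflow commands that surface as job annotations."""
    commands = []
    for test in results.get("tests", []):
        if test.get("status") != "pass":
            # Workflow commands are single-line; newlines must be escaped as %0A.
            message = test.get("message", "test failed").replace("\n", "%0A")
            commands.append(f"::error title={test['name']}::{message}")
    return commands


# Hypothetical results.json content, in the shape the test-runner spec defines.
sample = json.loads("""{
  "version": 2,
  "status": "fail",
  "tests": [
    {"name": "adds two numbers", "status": "pass"},
    {"name": "handles negatives", "status": "fail", "message": "expected -1, got 1"}
  ]
}""")

for command in annotations(sample):
    print(command)
```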
I’ve actually got that on my to-do list for Python, I just have other things I need to get done first. We run all the exercise example/exemplar files through the test runner when anyone PRs to the content repo, and we have a bunch of golden tests for the test runner itself, but we don’t re-test the content repo when the runner changes (yet!)
Leaving a small update regarding the GitHub Action I’ve been working on: I tested it on a bunch of tracks and so far the results are pretty promising!
There are a couple of tracks that make it harder to find a solution that fits everything, though. Most of it comes down to the fact that it’s hard to figure out how the example files map to the solution files.