I wanted to share with you my latest open source project, the Universal Test Runner!
When I’m switching between languages (especially common when working on 12in23), I never remember how to run that language’s unit test suite. Sometimes it’s `npm test`, other times it’s `cargo test` or `go test` or… you get the idea.
Instead of losing my muscle memory every month, I wrote a program that uses ~~an advanced AI~~ a series of if-else statements to determine which command to run (and then runs it).
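The heart of it is just marker-file checks. Sketched here in Go rather than the project’s actual language, with an illustrative (not exhaustive) file-to-command mapping:

```go
package main

import "fmt"

// pickCommand maps the marker files present in a project root to a
// test command; first match wins. The mapping is illustrative.
func pickCommand(present map[string]bool) string {
	switch {
	case present["package.json"]:
		return "npm test"
	case present["Cargo.toml"]:
		return "cargo test"
	case present["go.mod"]:
		return "go test ./..."
	case present["pyproject.toml"], present["pytest.ini"]:
		return "pytest"
	default:
		return "" // unknown project layout
	}
}

func main() {
	fmt.Println(pickCommand(map[string]bool{"Cargo.toml": true})) // cargo test
}
```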
Mine isn’t nearly as well developed, but I accomplished something similar-ish in my editor. I use vim for all my editing and I can run tests directly from vim. I set up :E to save-and-test across languages:
```vim
" save the buffer, then run that filetype's test command
autocmd FileType go command! E w|!go test
autocmd FileType python command! E w|!pytest
```
I think I want to keep the main project where it is (since I’m planning things that are out of scope for exercism, such as django & rails support).
That said, I’d be happy to fork it into your org and release a version that’s more Exercism-focused. Since that’s a much more finite problem space, I could probably support every language that Exercism already supports for local development. It could be released as `exercism-test-runner` or something.
Happy to chat more if that’s something you’re interested in.
I’d love to ship this as part of the CLI, but it would need to be an executable without dependencies to achieve that. So maybe we do a Nim or Go port (most of our tooling is in one of those two languages!).
Let’s see what @ErikSchierboom thinks when he’s back off holiday in ~10 days!
The cons of downloading Docker and test-runner images (often multi-GB) are quite big in my eyes. If people want to avoid installing tooling locally, we have the editor for that; but if people want to dig into a language, they’re going to need to install some stuff locally anyway.
I also don’t think we’d want automatic unskipping of tests, so we’d need to find a way to stop all the Docker images from doing that.
On the flip side, if we were trying to achieve “run tests before upload” as part of the CLI workflow, then I’d be more pro-Docker for the auto-unskipping and the consistency. But I don’t think that’s what we’re trying to achieve here (at least in this phase).
What Jeremy said. People developing locally usually have the local commands set up. Setting up Docker to run tests seems like a big ask to me.
As far as unskipping tests, in many tracks that’s handled via an environment variable. Those can be set via the tool when forking the child process that will run the tests. I vaguely recall other tracks may have other approaches but I’m not familiar with the details.
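Setting an environment variable when forking the child process is straightforward from a Go CLI. A sketch — the variable name `EXERCISM_RUN_SKIPPED` is made up here; each track documents its own:

```go
package main

import (
	"os"
	"os/exec"
)

// runTests forks the track's test command with an extra environment
// variable appended; EXERCISM_RUN_SKIPPED is a hypothetical name.
func runTests(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	// Inherit the parent environment, plus the unskip variable.
	cmd.Env = append(os.Environ(), "EXERCISM_RUN_SKIPPED=1")
	return cmd.Run()
}

func main() {
	_ = runTests("echo", "running tests")
}
```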
I agree on going local-first: folks doing local dev will probably have (or be interested in) the requisite platform tooling.
Re: skipping. Some tracks, like Rust, instruct the user to edit the test file to remove the skip annotation. This approach has the widest compatibility, at the risk of a user accidentally changing or breaking a test file.
A safer option is ensuring wide support for environment and flag passthrough. This is how universal-test-runner already works, and it brings a lot of compatibility for free. For instance, Rust supports the `--include-ignored` flag to `cargo test`. Each language has its own version of this, so we could amend the instructions to say:
run `exercism test -- --include-ignored` to run all your Rust tests
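The passthrough itself is cheap to implement: split argv at the first `--` and forward the tail verbatim to the child command. A sketch in Go (function name is mine):

```go
package main

import "fmt"

// splitPassthrough separates the tool's own arguments from anything
// after the first "--", which gets forwarded verbatim to the
// underlying test command (cargo test, pytest, etc.).
func splitPassthrough(args []string) (own, passthrough []string) {
	for i, a := range args {
		if a == "--" {
			return args[:i], args[i+1:]
		}
	}
	return args, nil
}

func main() {
	own, extra := splitPassthrough([]string{"test", "--", "--include-ignored"})
	fmt.Println(own, extra) // [test] [--include-ignored]
}
```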
Lastly, configuration. I think we should launch the initial version of this feature without it. While some people will want per-track configuration, the core value prop (IMO) is that It Just Works. Shipping an unconfigurable version is still a huge win for most users. We can follow up with some of the discussed configuration options in a later release; I think we should start with (and really nail) the most common case.
If that all sounds good, I can clean up the PR and add more tracks. I’ll likely need help writing the unit tests though, since mocking the filesystem and system calls in go isn’t something I’m particularly familiar with.
I think for a lot of languages it wouldn’t be a simple thing to do. For C#/F# for example, it would require me to create a custom attribute. It’s possible, but might be more work than you envision.
That said, I do like the idea. I would make the command be something like:
`exercism test --include-skipped` (a positive assertion) or something like that.
But for now, let’s focus on getting the most simple version merged.
One complication that you might not have considered yet is that the command to run might differ across OSes, for example having to run `test.ps1` vs. `test.sh`.
I think we need to support this at the very least.
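In a Go implementation that’s cheap to branch on via `runtime.GOOS`; a sketch, with the script names taken from the example above:

```go
package main

import (
	"fmt"
	"runtime"
)

// scriptFor returns the platform-appropriate test script name.
func scriptFor(goos string) string {
	if goos == "windows" {
		return "test.ps1"
	}
	return "test.sh" // macOS, Linux, and other Unix-likes
}

func main() {
	fmt.Println(scriptFor(runtime.GOOS))
}
```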
Fair! I was using cargo as an example of a situation in which it would be useful to pass through flags to the underlying command. I think we could get that mostly for free, but given that some tracks require a local user to edit the test file, it’s fine to have that be the focus for everyone. Let’s keep it simple to start.
Agree! Do you have examples of languages that have platform-specific test commands? I would guess the two major cases are “Windows” and “macOS/Linux”, right?