Introducing: The Universal Test Runner!

I think most are a couple of hundred megs, but many are indeed very big.

Maybe so. I’m not necessarily in favor of one option or the other, but I did want to point it out.

What Jeremy said. People developing locally usually have the local commands set up. Setting up Docker to run tests seems like a big ask to me.

As far as unskipping tests, in many tracks that’s handled via an environment variable. Those can be set via the tool when forking the child process that will run the tests. I vaguely recall other tracks may have other approaches but I’m not familiar with the details.
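For context, setting an environment variable on the forked child process is straightforward with Go’s os/exec. A minimal sketch (the variable name EXERCISM_RUN_ALL_TESTS and the runTests helper are hypothetical; each track defines its own variable):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runTests forks the track's test command with an extra environment
// variable that tells the test harness not to skip tests.
func runTests(cmdName string, args []string) error {
	cmd := exec.Command(cmdName, args...)
	// Inherit the parent environment and append the track-specific variable.
	cmd.Env = append(os.Environ(), "EXERCISM_RUN_ALL_TESTS=1")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	// Demo: the child shell sees the variable and prints 1.
	if err := runTests("sh", []string{"-c", "echo $EXERCISM_RUN_ALL_TESTS"}); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```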

I can’t name one that does it like that :smiley: But we’re probably doing different tracks.

Yeah, I think this is true. Let’s go with local.

  1. agree, let’s go local
  2. skipping tests: I think that’s a personal preference that people can implement on their own
    • it will get tricky quickly if we try to do this on multiple OSes
  3. think about how to structure the CLI source code for it with *nix/Win/Mac in mind.
  4. I’ve done this a lot, so I’ll have contributions, or just look at the various __exercism__test__*.fish scripts in https://github.com/glennj/exercism-cli-fish-wrapper/tree/main/functions.

bash and Tcl do it.

I agree on going local first; folks doing local dev will probably have (or be interested in) the requisite platform tooling.

Re: skipping. Some tracks, like Rust, instruct the user to edit the test file to remove the skip instruction. This approach has the widest compatibility, at the risk of a user accidentally changing or breaking a test file.

A safer option is ensuring wide support for environment and flag passthrough. This is how universal-test-runner already works, and it brings a lot of compatibility for free. For instance, Rust supports cargo test -- --include-ignored to also run tests marked as ignored. Each language has its own version of this, so we could amend the instructions to say

run exercism test -- --include-ignored to run all your Rust tests
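For reference, --include-ignored is the real flag libtest accepts after cargo test’s own -- separator; the exercism test syntax here is the proposal, not shipped behavior. A tiny sketch of the separator convention the CLI would rely on:

```shell
# Convention: everything after a bare `--` is forwarded verbatim to the
# underlying test command (e.g. `cargo test -- --include-ignored`).
# Minimal demo of splitting an argument string at the separator:
args="test --verbose -- --include-ignored"
passthrough="${args#*-- }"   # keep only what follows the bare `--`
echo "$passthrough"
```

This prints --include-ignored, the portion that would be handed to the track’s test runner.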

Lastly, configuration. I think we should launch the initial version of this feature without it. While some people will want per-track configuration, the core value prop (IMO) is that It Just Works. If we ship an unconfigurable version, that’s still a huge win for most users. We can follow up with some of the discussed configuration options in a later release; I think we should start with (and really nail) the most common case.

If that all sounds good, I can clean up the PR and add more tracks. I’ll likely need help writing the unit tests though, since mocking the filesystem and system calls in Go isn’t something I’m particularly familiar with.

I think for a lot of languages it wouldn’t be a simple thing to do. For C#/F# for example, it would require me to create a custom attribute. It’s possible, but might be more work than you envision.

That said, I do like the idea. I would make the command be something like:

exercism test --include-skipped (positive assertion) or something like that.

But for now, let’s focus on getting the most simple version merged.
One complication that you might not yet have considered is that the command to run might differ across OSes, for example having to run test.ps1 on Windows vs. test.sh elsewhere.
I think we need to support this at the very least.

Fair! I was using cargo as an example of a situation in which it would be useful to pass through flags to the underlying command. I think we could get that mostly for free, but given that some tracks require a local user to edit the test file, it’s fine to have that be the focus for everyone. Let’s keep it simple to start.

Agree! Do you have examples of languages that have platform-specific test commands? I would guess the two major cases are “Windows” and “macOS/Linux”, right?

Correct. COBOL is an example: cobol/exercises/practice/collatz-conjecture at main · exercism/cobol · GitHub

Awesome! I’ve done a quick audit of all of the tracks to see what sort of edge cases there are:

Most tracks use a simple, static command. A few have platform-specific instructions, but fewer than I would have expected.

Some have more complex setups, including multiple commands or additional requirements. We should be able to handle some of them, but I’m willing to ship without some of the trickier cases (don’t let perfect be the enemy of good, etc).
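For the multi-command setups, one hedged sketch (not necessarily how the PR handles it) is to run the commands in sequence and stop at the first failure:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// runAll executes each command in order (e.g. a compile step before the
// test step), stopping at the first one that fails.
func runAll(cmds [][]string) error {
	for _, c := range cmds {
		cmd := exec.Command(c[0], c[1:]...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			return fmt.Errorf("%s failed: %w", c[0], err)
		}
	}
	return nil
}

func main() {
	err := runAll([][]string{
		{"true"},
		{"echo", "tests passed"},
	})
	if err != nil {
		os.Exit(1)
	}
}
```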

Finally, a few get run via an IDE or similar; I think those are out of scope for now.

I’ve pushed an update to my PR that covers basic platform-specific features. I’ll flesh out the other simple commands either tonight or next week.

Yeah. Later on, we can output a message that such a track doesn’t support exercism test.

Do you have an example of the configuration file?

I’ve got both the “not supported” message and examples of track configurations in my PR: Add `test` command to run any unit test by xavdid · Pull Request #1092 · exercism/cli · GitHub

Or do you mean a different configuration file?

I’ve reviewed the PR (though I don’t know Go, so someone else will have to review the code itself).

:open_mouth: OMG the first crack in the facade! :wink:

@iHiD this functionality was just released in 3.2.0 of the CLI as exercism test! It’s a full re-implementation in Go and covers nearly every track (all but 5, I think).

I’ll be updating my standalone tool to defer to the exercism CLI if present. I’ll also write up a post on my blog about the feature and post it here when it’s ready.

Thank you for the idea and Erik for working with me on the implementation and code review! Excited for people to use this.

Amazing! That’s very exciting :slight_smile:

I’m off for a few days, but I’ll shout about it when I’m back!

Don’t worry, I’ve just posted: Version 3.2.0 of the CLI has been released

All right, blog post about the project is live!

Lovely! Let’s definitely do a video with you, me and Erik this week about this! I’ll ping you on Discord :slight_smile:
