This error appears when the test runner dies without producing an expected result.
This is normally down to one of three things:
1. Some underlying infrastructure issue (generally an outage of some kind)
2. A test runner that performs unpredictably, or that runs close to the timeout window and sometimes trickles over it
3. Some issue in the submitted code itself
We fundamentally don’t know which of these three things is causing the issue, so it’s nigh on impossible to improve the error message.
What we can do is try to write better software so that (1) and (2) happen less, and maybe improve our software so that (3) is reported better too.
Some assorted information/thoughts below.
About 1:
I spend a lot of my programming time working on (1). Sadly, for the last year that’s been very little time, because I’ve been doing a lot of other things on Exercism. I’m changing things now so I have more time to code, which hopefully means (1) happens less often. The reality is, though, that the infrastructure works great and processes millions of submissions a month. When it breaks, it always breaks in a new, unexpected way, which means we then have to add code to stop that from happening again. As it’s just me working on this, sometimes we have outages for longer than I’d like, as it takes me time to get back to a computer to debug and fix things. But normally we don’t get the same type of outage more than once.
As an example of this, this week this PR was merged. That took down the whole test runner infrastructure: it reran all Haskell solutions and, due to a bug in the Haskell test runner, all the machines ran out of HDD space and collapsed. Some code that was intended to catch this didn’t work (because I made a mistake and didn’t test something properly), so rather than magically fixing itself (by the machine killing itself and being replaced), it just sat there hoping to get well again.
These three PRs solve this issue:
That last PR has had probably 30 hours of work go into it so far.
As you can see, when we do have issues they tend to be complex, multi-faceted and difficult to predict in advance.
About 2:
Maintainers do do this all the time, and @ErikSchierboom is currently going through and reviewing all the test runners to see where problems are occurring. We’re intending to add more CI to the test-runner repos, with benchmarking and some meta-testing, to check for regressions. We have over 100 different pieces of production tooling running code in 65 languages though, so this is also complex and time-consuming.
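To make the meta-testing/benchmarking idea a bit more concrete, here’s a rough sketch of the kind of check that could live in a test-runner repo’s CI. None of this is how our CI is actually set up: the image name, paths, time budget, and the exact arguments and results.json shape are all placeholders. The idea is just to run the runner’s Docker image against a known-good solution, then fail the build if the results aren’t what we expect or the run takes noticeably longer than a baseline budget:

```python
#!/usr/bin/env python3
"""Hypothetical CI meta-test for a test runner (all names/values are placeholders)."""

import json
import pathlib
import subprocess
import sys
import time

IMAGE = "exercism/example-test-runner"            # hypothetical image tag
EXERCISE = "two-fer"                              # slug of a known-good exercise
SOLUTION_DIR = pathlib.Path("tests/two-fer/solution").resolve()
OUTPUT_DIR = pathlib.Path("tests/two-fer/output").resolve()
TIME_BUDGET_SECONDS = 10.0                        # fail CI if slower than this

def main() -> int:
    OUTPUT_DIR.mkdir(parents=True, exist_ok=True)
    start = time.monotonic()
    # Assumes the runner takes a slug, an input dir and an output dir,
    # and writes a results.json into the output dir.
    subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",
            "-v", f"{SOLUTION_DIR}:/solution",
            "-v", f"{OUTPUT_DIR}:/output",
            IMAGE, EXERCISE, "/solution", "/output",
        ],
        check=True,
        timeout=60,  # hard ceiling so CI itself never hangs
    )
    elapsed = time.monotonic() - start

    results = json.loads((OUTPUT_DIR / "results.json").read_text())
    if results.get("status") != "pass":
        print(f"Expected status 'pass', got {results.get('status')!r}")
        return 1
    if elapsed > TIME_BUDGET_SECONDS:
        print(f"Regression: run took {elapsed:.1f}s (budget {TIME_BUDGET_SECONDS}s)")
        return 1
    print(f"OK in {elapsed:.1f}s")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

Something along these lines, run per exercise, would catch both functional regressions and the gradual timing creep described in (2) before a runner starts trickling over the timeout window in production.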
About 3:
We could maybe do a better job of detecting things like student code timing out within the test runners themselves, which would then allow us to provide better messages. We could also report infinite loops and other such things. But this needs to happen within the test runners for it to work, which again means working across all 65 of them.
Maintainers could help with (or do) this on many of the bigger languages though, and maybe this is a good piece of work we should ask people to consider doing.
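For anyone considering picking this up, here’s a very rough sketch of what that could look like inside a runner. Everything here is hypothetical: the test command, the timeout value and the exact results.json shape would differ per track. The point is that the runner enforces its own deadline on the tests and, if it’s exceeded, writes a results file with a clear error message rather than being killed from the outside with no explanation:

```python
"""Minimal sketch of in-runner timeout detection (all names are hypothetical)."""

import json
import subprocess
import sys

TEST_COMMAND = ["./run-language-tests.sh"]  # placeholder for the track's real test command
TIMEOUT_SECONDS = 15                        # leave headroom below the platform's hard limit

def run_tests(output_path: str) -> None:
    try:
        completed = subprocess.run(
            TEST_COMMAND,
            capture_output=True,
            text=True,
            timeout=TIMEOUT_SECONDS,
        )
    except subprocess.TimeoutExpired:
        # The tests overran the budget: report it clearly instead of dying silently.
        report = {
            "version": 2,
            "status": "error",
            "message": (
                f"Your tests timed out after {TIMEOUT_SECONDS} seconds. "
                "This usually means an infinite loop or very slow code."
            ),
            "tests": [],
        }
    else:
        # A real runner would parse the test output here; this sketch only
        # records an overall pass/fail based on the exit code.
        report = {
            "version": 2,
            "status": "pass" if completed.returncode == 0 else "fail",
            "message": None,
            "tests": [],
        }

    with open(output_path, "w") as f:
        json.dump(report, f, indent=2)

if __name__ == "__main__":
    run_tests(sys.argv[1] if len(sys.argv) > 1 else "results.json")
```

With something like this in place, a timeout or infinite loop in the submitted code would surface as a readable message to the student instead of the generic error this thread is about.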