[Feature Request] Add test durations to Tests output

It would be nice if there were a way to see how long each test took. This is of particular interest for computationally intensive exercises like Alphametics. I know that for PHP this is possible if you add the --log-junit <filename> option to phpunit. Here’s an example entry:

<testcase name="testPuzzleWithEightLetters"
  file=".../exercism/php/alphametics/AlphameticsTest.php" line="80"
  class="AlphameticsTest" classname="AlphameticsTest"
  assertions="1" time="2.592573"/>

This may be possible in other languages that output a JUnit file. If that is not available, perhaps just displaying how long the entire test suite took would be good.

I imagine the specs of the machine that the tests run on would make quite a difference, especially for exercises where this is interesting to consider.

Understood, but it would still be nice to have. Maybe the specs of the test runner could be added to the output. I’m just curious: is there a significant difference in performance among the different machines that run the tests? That seems like it would lead to unreproducible test results, where a run that was just over the time limit could succeed on the next run (if some small change were made to the code, like adding a comment).

Different machines have different resources. The load on any one machine can also vary a lot, which can have a massive impact on the timing.

From a test runner perspective, this would not be that hard for some test runners to implement, but as both Isaac and Jeremy have mentioned, the result would be unstable and unpredictable, and I am unsure how much value it would give. Perhaps if the solution is way faster you can see a difference, but otherwise a fluke might occur, which makes the result useless.

> That seems like it would lead to unreproducible test results, where if a run was just over the time limit, the next run

On test runners with longer compile times, this is something that often happens, so yes, the result is sometimes unpredictable.

I did some experiments lately for setting the upper time limit for CI on PHP track. I found the online test runner runtime to be quite stable and predictable (at least for PHP) - not on a microsecond level, but a test of about 20 milliseconds never ran more than 22 or less than 18 milliseconds. Which is quite stable for a shared cloud environment.

That said: I would not show such times in the online environment as a feature. Every language offers some kind of timing measurement, and those interested in runtimes can print the values to show them.
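To illustrate that last point: most languages let you wrap a computation in a simple wall-clock timer and print the duration yourself (in PHP this would be hrtime(); the sketch below shows the same idea in Python with a hypothetical `timed` helper):

```python
import time

def timed(fn, *args):
    """Run fn with the given args, print its wall-clock duration, and return its result."""
    start = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - start
    print(f"{fn.__name__} took {elapsed:.6f}s")
    return result

# Example: time an arbitrary computation while still using its result.
total = timed(sum, range(1_000_000))
```

This keeps timing opt-in for the solver rather than a feature of the online test output.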