Ballerina test runner timeouts

Most of the exercises are hitting the timeout before finishing with the suggested solution. What is the best way to troubleshoot the issue?

How long do the tests take when you run them in Docker on your local machine? What is the underlying infrastructure of Ballerina - does it run on a VM?
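For example, something like this would give a baseline (a sketch assuming the standard Exercism test-runner interface; the image name and mount paths are illustrative):

docker build -t ballerina-test-runner .
time docker run --rm --network none \
  -v "$PWD/solution:/solution" -v "$PWD/output:/output" \
  ballerina-test-runner hello-world /solution /output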

All of the exercises take 50s to test in CI: update blurb text and config formatting · exercism/ballerina@37fa1ae · GitHub

Each exercise should take between 5 and 10s to run the tests. The test-result formatting script probably contributes to the overall time it takes to produce a result, as it is a separate Ballerina script that needs to run.

Started a PR for improvements: compile test report script and run from .jar file by vordimous · Pull Request #12 · exercism/ballerina-test-runner · GitHub
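Roughly, the idea (a sketch; report_script is a hypothetical package name, and the exact target path may differ):

# bal build emits an executable JAR under target/bin/,
# so each submission can skip recompiling the script:
bal build report_script
java -jar report_script/target/bin/report_script.jar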

Can you help with the closer bot?

@iHiD I have updated the test runner, and the “run tests in docker” GitHub action is completing 30s faster. It has been 30 minutes since the build action finished, and exercise submissions are still timing out. Is there a way to confirm the new test runner is being used, and if so, are there any other options to improve the runtime?

This is something you’ll need to get @ErikSchierboom to look at, but unless you have some unusual setup, the GitHub action isn’t related to the test runner. That’s just your internal CI. So you’ll need to ensure the actual Docker setup is as fast 🙂

It’s probably best to wait for Erik to approve changes to the Dockerfile before merging too, as he is the expert in this from our perspective. (Erik: this should probably be enforced via branch protection now anyway, as this is running on our infrastructure.)

I’ve looked into this, and it is without a doubt caused by the --code-coverage flag.
Without that flag, bal test --offline runs in about 1.5s on my (beefy) machine. With it, bal test --code-coverage --offline runs in about 6.5s, a 5-second difference. This makes sense, as calculating code coverage is often quite slow.
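For reference, the comparison (timings are from my machine and will vary):

time bal test --offline                   # ~1.5s, no coverage
time bal test --code-coverage --offline   # ~6.5s, coverage adds ~5s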

Looking at the code, I see that you’re using the file created by the --code-coverage tool to generate the results.json file.
I then checked the Ballerina docs and found that there is also a --test-report option.
Running with this option produces no noticeable delay (1.5s again) compared to the version without flags, and it produces a JSON file that looks like this:

{
  "projectName": "hello_world",
  "totalTests": 1,
  "passed": 1,
  "failed": 0,
  "skipped": 0,
  "coveredLines": 0,
  "missedLines": 0,
  "coveragePercentage": 0.0,
  "moduleStatus": [
    {
      "name": "hello_world",
      "totalTests": 1,
      "passed": 1,
      "failed": 0,
      "skipped": 0,
      "tests": [{ "name": "testFunc", "status": "PASSED" }]
    }
  ],
  "moduleCoverage": []
}

To me, it looks like there is enough information in this file to still be able to generate a nice version 2 results.json file.
My suggestion would thus be to change the test runner from using --code-coverage to using --test-report.
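A minimal sketch of that mapping (hypothetical code, not the actual test-runner implementation; it assumes the report is written to target/report/test_results.json and that the version 2 format needs a name and a pass/fail status per test):

import ballerina/io;

public function main() returns error? {
    // Read the report produced by `bal test --test-report`.
    json report = check io:fileReadJson("target/report/test_results.json");

    // Flatten every module's tests into version 2 result entries.
    json[] results = [];
    foreach json mod in <json[]>check report.moduleStatus {
        foreach json t in <json[]>check mod.tests {
            string status = <string>check t.status;
            results.push({
                "name": <string>check t.name,
                "status": status == "PASSED" ? "pass" : "fail"
            });
        }
    }

    // Overall status: pass only when nothing failed.
    int failed = <int>check report.failed;
    check io:fileWriteJson("results.json", {
        "version": 2,
        "status": failed == 0 ? "pass" : "fail",
        "tests": results
    });
}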

Thank you for looking into this. We had also discussed whether the --code-coverage flag was needed; the time delay makes the extra info not worth the wait. I will work on this change and submit a PR.
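Conceptually, it’s a one-line swap in the runner’s test command (illustrative; the actual script layout may differ):

- bal test --code-coverage --offline
+ bal test --test-report --offline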

The exercises are passing now. This and the improvements to the .json converter seem to have helped. Thank you for your assistance!


I’m starting the Ballerina track and have been seeing timeouts consistently.

Hmmm, that’s unfortunate! We’ll need to look into that.