Making a testing framework for a language that does not have one

I am one of the maintainers of the BQN track and we need to come up with a testing framework.

My main worry is this: if the track takes off, what problems will I run into down the line with a bare-minimum testing framework? I have been looking at minitest, but it has features that may be beyond the scope of this project. So far I have understood that:

  1. the file with tests should be importable in some way as data,
  2. the file should be runnable in some way to check the tests and get output on stdout,
  3. the testing framework will eventually have to link into the test runner.

What are the general recommendations for a brand new unit testing framework? It would be great to implement these early so we don’t have to rewrite the exercises en masse.

You might get some insight from the people on the Godot language track, who have a similar problem. This is one of their discussions: Create new track for GDScript - #53 by pfertyk


We just went through this process in the 8th track.

Starting from a test suite: https://github.com/exercism/8th/blob/main/exercises/practice/sieve/test.8th

  • include the student’s solution file, and the testing framework
  • set the number of tests expected to run
  • for each test, we have the test name, the expected value, the actual value, and equal? to check that the actual value matches the expected result
    • equal? also increments a counter depending on whether the result was a pass, fail, or skip
    • additionally, we have true?, false?, null? predicate functions used in other exercises: thus far, those four are all we’ve needed.
  • end-of-tests signals we’re done testing, so it emits the summary report (a rough sketch of this skeleton follows below).
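
For illustration only, here is that counter-based skeleton sketched in GDScript rather than 8th (the real thing is the linked test.8th; every name below, including check_equal, end_of_tests, and the RUN_SKIPPED environment variable, is made up for the example):

# Run skipped tests anyway when an (illustrative) environment variable is set.
var run_skipped := OS.get_environment("RUN_SKIPPED") != ""

var expected := 0
var passed := 0
var failed := 0
var skipped := 0

func expect_tests(n: int) -> void:
    expected = n

# The stand-in for equal?: compare actual vs expected and bump one counter.
func check_equal(name: String, want, got, skip: bool = false) -> void:
    if skip and not run_skipped:
        skipped += 1
        print("SKIP: %s" % name)
    elif got == want:
        passed += 1
        print("PASS: %s" % name)
    else:
        failed += 1
        print("FAIL: %s" % name)

# The stand-in for end-of-tests: emit the summary report.
func end_of_tests() -> void:
    print("%d expected / %d passed / %d failed / %d skipped" % [expected, passed, failed, skipped])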

Running the tests looks like:
[screenshot: a passing test run]

Or, setting an environment variable to override the SKIP directive:
[screenshot: the same run with the SKIP directive overridden via the environment variable]

and test failures look like:
[screenshot: a failing test]

One enhancement I’d like to make is to have test failures show the expected and actual values.


It might also be really useful to check The Test Runner Interface | Exercism's Docs. There you’ll find the data we expect the test runner to return.

Note that you don’t have to support a version 2 or 3 test runner; a version 1 test runner will work fine, but it will be less optimal for students.

GDScript is perhaps not the best example, as it is significantly different from “normal” programming languages (Godot Engine is created with gamedev in mind, and many things work differently there). However, we also found ourselves without a built-in testing framework and had to figure out a solution. The decision was between an existing addon (GUT: Godot Unit Test) and defining the test format on our own. We went with the latter ;)

The test runner is not merged yet (it needs some refactoring), but you can take a look at this PR: Implement a test runner for GDScript by pfertyk · Pull Request #2 · exercism/gdscript-test-runner · GitHub

The test file format that I came up with is as simple as it can be:

const TEST_CASES = [
	{"test_name": "Test One", "method_name": "add_2_numbers", "args": [1, 2], "expected": 3},
	{"test_name": "Test Two", "method_name": "add_2_numbers", "args": [10, 20], "expected": 30},
]

I assumed that Exercism tests don’t need any setup, mocking, etc., so making a custom test runner should be manageable. It looked like a better option than using a third-party addon (which required a lot of configuration, and which is outside of our control). But, again, that was the case for Godot Engine; it might be a bit different for BQN, so you need to decide :)
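
For illustration only (this is not the code from the PR above; the solution path and the run_tests name are assumptions), a minimal runner for that format could look roughly like this:

# Hypothetical sketch: instantiate the student's solution, run every case in
# TEST_CASES via callv(), and report pass/fail results on stdout.
const Solution = preload("res://solution.gd")  # assumed path to the student's file

func run_tests(test_cases: Array) -> int:
    var solution = Solution.new()
    var failed := 0
    for case in test_cases:
        # callv() calls the named method with the arguments packed in an array.
        var actual = solution.callv(case["method_name"], case["args"])
        if actual == case["expected"]:
            print("PASS: %s" % case["test_name"])
        else:
            failed += 1
            print("FAIL: %s (expected %s, got %s)" % [case["test_name"], case["expected"], actual])
    print("%d of %d tests failed" % [failed, test_cases.size()])
    return failed

The returned failure count can then be mapped to an exit code or to whatever results format the test runner has to produce.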

Let me know if I can help with anything else ;)