In general, we try to get the test cases from the problem specs. Each small test case in the problem specs is then converted into an independent test that can individually fail or pass. The idea is that you start out seeing a bunch of tests failing, but as you implement functionality little by little, you see more and more tests passing.
Although some tests are driven by lists of cases, most of them run each case of the list as a separate test. You can see this in the calls to t.Run() inside the test functions. That is also why most test cases have a "description" field: it is used in the call to t.Run() to name the subtest.
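To make that concrete, here is a minimal sketch of the pattern (Shout, testCases and the case fields are hypothetical names for illustration, not the exact ones used in the track's test files):

package example

import (
	"strings"
	"testing"
)

// Shout is a stand-in for a function under test.
func Shout(s string) string { return strings.ToUpper(s) }

var testCases = []struct {
	description string
	input       string
	want        string
}{
	{description: "empty input", input: "", want: ""},
	{description: "single word", input: "go", want: "GO"},
}

func TestShout(t *testing.T) {
	for _, tc := range testCases {
		// Each case becomes its own subtest, named after its description,
		// so it can pass or fail independently of the others.
		t.Run(tc.description, func(t *testing.T) {
			if got := Shout(tc.input); got != tc.want {
				t.Errorf("Shout(%q) = %q, want %q", tc.input, got, tc.want)
			}
		})
	}
}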
I’m curious what you consider meaningful feedback. For instance, if I have a partial solution for protein translation like this:
// Translate RNA sequences into proteins.
package protein

import "errors"

var (
	ErrStop        error = errors.New("stop")
	ErrInvalidBase error = errors.New("invalid codon")
)

// FromCodon performs the conversion of a codon into a protein.
func FromCodon(codon string) (string, error) {
	return "", nil
}

// FromRNA decodes the proteins in an RNA strand.
func FromRNA(strand string) ([]string, error) {
	return nil, nil
}
The online editor gives me:
And when running in the terminal, I get:
Note here that while TestCodon is just a single test function, it takes a slice with all the test cases for the FromCodon function and iterates over them, so each case in the list becomes a separate subtest. This does allow you to implement functionality little by little and see tests passing incrementally.
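As a concrete illustration (just a sketch, not the reference solution), a first increment on the partial code above could handle only the three stop codons, so the stop-codon subtests would start passing while the others keep failing:

// FromCodon performs the conversion of a codon into a protein. As a first
// increment it only recognizes the stop codons; every other codon is still
// reported as invalid.
func FromCodon(codon string) (string, error) {
	switch codon {
	case "UAA", "UAG", "UGA":
		return "", ErrStop
	default:
		return "", ErrInvalidBase
	}
}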
This is similar to what the JS track does. In which ways do you think the feedback from the tests could be improved?
When the exercises were created, most of them were indeed just a big list of test cases that didn’t separate each subtest appropriately. It was also common for exercises to call t.Fatalf() outside any t.Run(), meaning the test would fail immediately at the first failing case, without running the remaining ones.
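Roughly, the difference looks like this (a simplified sketch reusing the hypothetical Shout and testCases from the earlier snippet, not the actual test code of any exercise):

// Old style: the first failing case aborts the whole test function,
// so none of the remaining cases are reported.
func TestOldStyle(t *testing.T) {
	for _, tc := range testCases {
		if got := Shout(tc.input); got != tc.want {
			t.Fatalf("%s: Shout(%q) = %q, want %q", tc.description, tc.input, got, tc.want)
		}
	}
}

// Subtest style: a failure only fails that one subtest; the loop keeps going
// and every case is reported individually.
func TestSubtestStyle(t *testing.T) {
	for _, tc := range testCases {
		t.Run(tc.description, func(t *testing.T) {
			if got := Shout(tc.input); got != tc.want {
				t.Fatalf("Shout(%q) = %q, want %q", tc.input, got, tc.want)
			}
		})
	}
}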
But there has been a push for a long time to change this. You now see most exercises running with subtests (calling t.Run()). You can see a discussion about this here, and you can check some PRs where we converted the code to use subtests. If you see an exercise that doesn’t do this, it’s likely a bug; feel free to tell us!
On a track-by-track basis, we can also do things like have the student create more functions that incrementally build up a bigger solution, change the order of the functions, make the test case descriptions more expressive, and so on. Also feel free to tell us if you have any feedback in that regard.
I think true TDD is very difficult on Exercism, because it requires constantly changing the test code along with the solution code. While you can of course play around with the test file and make modifications to it, we ultimately want all the tests to pass with the test code as provided. But we can absolutely try to provide a good experience where you see more tests passing as you implement new features, and “borrow” that part from TDD.