TDD and the Golang track

Hey folks,

A friendly question here more than an issue. As someone who used Exercism a lot in the past, I’ve found the Golang track a bit tricky to get traction on, mostly because it seems pretty difficult to build the data structures and the APIs incrementally, testing as you build. Maybe that’s just me, but the primary blocker seems to be that you often need to write fairly complete code before you can get meaningful feedback from the tests.

It seems like there’s been a conscious decision on the part of the maintainers to prefer generated tests, and I see a lot of tests that require fairly complete APIs. This is fine if you want to build a complete-ish solution, test it, then iterate on your solution, but it makes it difficult to build up your models and APIs incrementally, enabling relevant tests as you go (more of a TDD approach).

Any history on why that decision was taken? Is this now the standard practice on Exercism more generally, or is this a cultural artifact of the Go track?

Thanks, appreciate your time.

Hi @matthewmorgan :wave:

All the Practice Exercises (so the ones that aren’t specifically designed to teach you the language) come from the central repository of Problem Specifications, which specifies the tests. So generally speaking I’d expect the tests to be the same on Go exercises as on other tracks.

In terms of Exercism in general, it’s intended for things to be TDD, but it does come down to the exercise as to how possible and/or well designed that is.

The Learning Exercises should be very granular though.

Do you maybe have an example or two of where you feel Go is different from the same exercise on other tracks you’ve worked on before?

Hi @iHiD ,

Thanks for your reply.

One example might be linked-list. Using the JS track for comparison, the first test requires you to implement new, push, and pop. But in Golang, you also have to implement the traversal methods. In addition to being more methods to write, traversal is really another concept, so the scope of the first test isn’t the same across tracks.
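
To make that concrete, here’s roughly the shape of the API the first Go test already touches. This is a sketch from memory rather than the actual stub, so treat the names as illustrative:

package linkedlist

// Hypothetical sketch of the surface the first linked-list test exercises;
// bodies are intentionally unimplemented.
type Node struct {
	Value      interface{}
	next, prev *Node
}

type List struct {
	first, last *Node
}

func NewList(elements ...interface{}) *List { panic("not implemented") }

func (l *List) Push(v interface{})        { panic("not implemented") }
func (l *List) Pop() (interface{}, error) { panic("not implemented") }

// The part that surprised me: the first test also traverses the list,
// so these need to exist (and work) before anything goes green.
func (l *List) First() *Node { panic("not implemented") }
func (l *List) Last() *Node  { panic("not implemented") }
func (n *Node) Next() *Node  { panic("not implemented") }
func (n *Node) Prev() *Node  { panic("not implemented") }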

Another trivial example is protein-translation, where to pass the first test in JS, you implement a zero-value case of the codon mapper. In Golang, you need to implement all the codons, because the first test case checks a list of codons. Of course you could go and comment out part of the codon list, but the required intervention isn’t always that obvious. Part of this is, I think, because the tests loop over many cases, calling helpers to check various bits, adding additional layers of indirection to the tests.
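
To show what I mean by the intervention not being obvious, here’s a hypothetical first TDD step in Go: wire up a single codon and let everything else error. Because the first test loops over the whole codon table, this still fails almost all of its cases:

package protein

import "errors"

var ErrInvalidBase = errors.New("invalid codon")

// FromCodon with just one codon handled: the equivalent of the JS track's
// zero-value first step, which in Go still fails most of the first test.
func FromCodon(codon string) (string, error) {
	switch codon {
	case "AUG":
		return "Methionine", nil
	default:
		return "", ErrInvalidBase
	}
}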

Anyway, I’m just wondering, mostly for my own curiosity, if the differences are intentional. Probably this is just a mental block on my part!

:+1:


@andradefr8 @junedev (or @ErikSchierboom) Any thoughts on this? :)

Perhaps you intended to tag @andrerfcsantos?

I did. Thank you and sorry!

(cc @andrerfcsantos)

@matthewmorgan I started another forum thread a few weeks ago because I found that the order of the tests in problem specifications, by design, doesn’t respect TDD as a process. That may be an additional source of TDD issues.

In general, we try to get the test cases from problem specs. Each little test case in problem specs is then converted into an independent test that can individually fail or pass. The idea is that you start out seeing a bunch of tests failing, but as you implement functionality little by little, you see more and more tests passing.

Despite the test cases being defined in lists, most of the tests run each case of the list as a separate subtest. You can see this in the calls to t.Run() inside the test functions. It’s also why most of the test cases have a “description” field: that field is used in the call to t.Run() to name the subtest.
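
As a sketch of the pattern (simplified, not the exact generated test file), the tests look something like this:

package protein

import "testing"

func TestCodon(t *testing.T) {
	cases := []struct {
		description string
		codon       string
		want        string
	}{
		{description: "Methionine RNA codon", codon: "AUG", want: "Methionine"},
		{description: "Tryptophan RNA codon", codon: "UGG", want: "Tryptophan"},
	}
	for _, tc := range cases {
		// Each call to t.Run creates an independent subtest named after
		// the description, so cases pass or fail on their own.
		t.Run(tc.description, func(t *testing.T) {
			got, err := FromCodon(tc.codon)
			if err != nil {
				t.Fatalf("FromCodon(%q) returned error: %v", tc.codon, err)
			}
			if got != tc.want {
				t.Errorf("FromCodon(%q) = %q, want %q", tc.codon, got, tc.want)
			}
		})
	}
}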

I’m curious about what you consider meaningful feedback. For instance, if I have a partial solution for protein-translation like this:

// Translate RNA sequences into proteins.
package protein

import "errors"

var (
	ErrStop        error = errors.New("stop")
	ErrInvalidBase error = errors.New("invalid codon")
)

// FromCodon converts a codon into a protein
func FromCodon(codon string) (string, error) {
	return "", nil
}

// FromRNA decodes the proteins in an RNA strand
func FromRNA(strand string) ([]string, error) {
	return nil, nil
}

The online editor gives me: [screenshot of the test results in the online editor]

And when running in the terminal, I get: [go test output listing each codon case as its own subtest]

Note that while TestCodon is a single test function that takes an array with all the test cases for FromCodon and iterates over them, each case in the list becomes a separate subtest. This allows you to implement functionality little by little and see tests passing incrementally.
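
A side benefit of subtests is that you can target a single case from the terminal with go test’s -run flag, using a slash-separated pattern. The exact subtest name depends on the case’s description field (with spaces replaced by underscores), so this is an example rather than the real name:

go test -v -run 'TestCodon/Methionine'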

This is similar to what the JS track does. In what ways do you think the feedback from the tests could be improved?

When the exercises were created, most of them were indeed just a big list of test cases that didn’t separate each subtest appropriately. It was also common for exercises to call t.Fatalf() outside any t.Run(), meaning the test would fail immediately at the first subtest failure, without running the remaining tests.

But there has been a push to change this for a long time now. You now see most exercises running with subtests (calling t.Run()). You can see a discussion about this here, and you can check some PRs where we converted the code to use subtests. If you see an exercise that doesn’t do this, it’s likely a bug, so feel free to tell us!
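
For contrast, here’s a sketch of the older style (not an actual exercise file): a single loop with t.Fatalf() outside any t.Run(), where the first mismatch hides every case after it:

package protein

import "testing"

func TestCodonOldStyle(t *testing.T) {
	cases := []struct{ codon, want string }{
		{"AUG", "Methionine"},
		{"UGG", "Tryptophan"},
	}
	for _, tc := range cases {
		got, err := FromCodon(tc.codon)
		if err != nil || got != tc.want {
			// Fatalf aborts the whole test function, so the remaining
			// cases never run and incremental progress is invisible.
			t.Fatalf("FromCodon(%q) = %q, %v; want %q", tc.codon, got, err, tc.want)
		}
	}
}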

On a track-by-track level, we can also do things like make the student create more functions that incrementally build up to a bigger solution, change the order of the functions, make the test case descriptions more expressive, etc. Feel free to tell us if you have any feedback in that regard as well.

I think true TDD is very difficult in Exercism, because it requires you to constantly change the test code along with the code of the solution. While you can of course play around with the test file and modify it, we ultimately want all the tests to pass with the test code as provided. But we can absolutely try to provide a good experience where you see more tests passing as you implement new features, and “borrow” that part from TDD.
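
For example (a personal suggestion, not an official workflow): you can temporarily add a t.Skip() at the top of tests you haven’t reached yet, and delete the skips one by one as you implement each piece:

package protein

import "testing"

func TestFromRNA(t *testing.T) {
	t.Skip("skipping until FromCodon is done") // delete this line when ready

	// The original test body stays below, untouched; it just will not
	// run until the skip above is removed.
	if _, err := FromRNA("AUGUGG"); err != nil {
		t.Fatalf("FromRNA returned error: %v", err)
	}
}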

I’ve found this as well, not just on the Go track. But to offer another perspective:

  • for some solutions (zebra-puzzle comes to mind), there’s no way to pass even the first test without a complete, bug-free solution. That’s just kind of inevitable with some of the exercises.

  • for more granular tests, that risks forcing a particular implementation on the student. What if the tests demand a particular data structure, but I think the problem would be an interesting way to explore a different one, or even a different paradigm? Recall that the canonical data for the tests is shared across all tracks, so it cannot be specific to any implementation.

When I was helping to build the Tcl track, I did struggle with the second bullet. There are some exercises where the tests require an OO solution, some where the tests require a procedural solution. I was trying to balance the student’s creativity against forcing them to explore different paradigms.

Now, the test suite for any particular exercise on the Go track could include extra Go-specific tests. If you’d like to volunteer some ideas, I’m sure those would be welcome.
