Go concept exercises that prevent TDD

I’m working through the Go syllabus, and two of the concept exercises are somewhat frustrating to develop piece by piece. lasagna-master and elons-toys both have the student write all the functions/methods. This is fantastic practice. However, the program can’t compile until all the functions exist with the right signatures and return values, so students essentially have to write the whole program before testing.

I don’t know what can be done about this. I just wanted to vent.


I have the same issue with C++. It would be nice to have the stub pass compilation, but that would mean including things like [[maybe_unused]] in the stub and then having to explain it.

A step-by-step “test and reveal” would be nice.

The tricky thing with Go is that it is a compiled language, and for the tests to run, the code needs to compile first. But for the tests to compile, any functions they call must already exist at compile time.

However, you don’t actually need to implement the functions; they just need to exist. All exercises in the Go track provide you with a stub of the functions you need. Those function stubs include a panic call, and we recommend you don’t remove that panic call until you are ready to implement the function. With panic, the compiler won’t complain, but the tests that depend on that function will fail naturally. That way you can have some tests pass and others fail while having only a small part of the exercise implemented.
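
As a minimal sketch of what such a stub looks like (the signature here is my reading of the lasagna-master tasks, not necessarily the official stub):

package lasagna

// The signature exists, so the test file compiles, but calling the
// function panics and every test that depends on it fails.
func PreparationTime(layers []string, avgPrepTime int) int {
	panic("PreparationTime not implemented")
}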

Spoilers for Go's Lasagna Master - example of just some tests failing

With the online editor: [screenshot of the test results omitted]

Locally:

$ go test -v
=== RUN   TestPreparationTime
=== RUN   TestPreparationTime/Preparation_time_for_many_layers_with_custom_average_time
=== RUN   TestPreparationTime/Preparation_time_for_few_layers
=== RUN   TestPreparationTime/Preparation_time_for_default_case
--- PASS: TestPreparationTime (0.00s)
    --- PASS: TestPreparationTime/Preparation_time_for_many_layers_with_custom_average_time (0.00s)
    --- PASS: TestPreparationTime/Preparation_time_for_few_layers (0.00s)
    --- PASS: TestPreparationTime/Preparation_time_for_default_case (0.00s)
=== RUN   TestQuantities
=== RUN   TestQuantities/few_layers
--- FAIL: TestQuantities (0.00s)
    --- FAIL: TestQuantities/few_layers (0.00s)
panic:  [recovered]
        panic:
[... truncated output ...]

Except for concept exercises - for example: go/exercises/concept/lasagna-master/lasagna_master.go at main · exercism/go · GitHub

Python has an analogous issue. It’s why you see some students here asking why they have import errors. Python has pass for the body of a function or class, which allows compilation to bytecode - but the class or function signature needs to be there, or Python and pytest can’t import it to run or test. I haven’t figured out what I want to do about that - but we also have exercises (lasagna, Ellen’s Alien Game, others) where the student needs to stub out/have stubbed out the functions or classes before the tests can run properly.

Edited to add: I wrapped the imports of our lasagna exercise to catch missing constants and function names, and modified our test runner to trim stack traces for that - but doing that for all the concept exercises seems very kludgey. It would be nice to have … something … that would allow good error messages for missing function definitions.

Ah, that one is an exception. All other concept and practice exercises should have stubs.

The reason that one is an exception is that it’s linked to the functions concept, and we figured that including the stubs for that one would mean you don’t get to write the functions themselves from scratch, which is important practice if you are learning about functions.

However, I do see how this can be a bit frustrating and leave you feeling that you have to write the full implementation before testing. I’m not sure what a good solution would be here and I’m open to suggestions. We could include some extra instructions saying that you can start by writing the signatures of all the functions and making them panic. But I’m not sure that wouldn’t create even more confusion, because now there’s this panic thing you have to know about.

I do understand why the stub file is empty, and it’s important that it is. It’s lasagna-master for Functions and elons-toys for Methods.

Maybe if there were a comment in it suggesting that the first thing to do is write the functions with the correct signatures and a “dummy” return value:

package lasagna

// TODO: define the 'PreparationTime()' function

// TODO: define the 'Quantities()' function

// TODO: define the 'AddSecretIngredient()' function

// TODO: define the 'ScaleRecipe()' function

// Your first steps could be to read through the tasks, create the
// functions with their correct parameter lists and return types,
// but only have a default return value for the function body.
// This will let you then implement the function logic using
// test-driven development.
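
For illustration, a first pass along those lines might look like this (the signatures are my reading of the tasks, so treat this as a sketch rather than the official stub):

package lasagna

// Correct signatures with placeholder return values: everything
// compiles, the tests run, and most of them fail until the real
// logic is filled in task by task.
func PreparationTime(layers []string, avgPrepTime int) int {
	return 0
}

func Quantities(layers []string) (int, float64) {
	return 0, 0.0
}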

I like this. Let me know if you’re interested in creating PRs for this, otherwise I can get it done.

I would just expand a bit more on why they should start by creating the functions:

// Your first steps could be to read through the tasks, create the
// functions with their correct parameter lists and return types,
// but only have a default return value or panic for the function body.
// 
// This will make the tests compile, but most of them will fail.
// You can then implement the function logic using test-driven 
// development (TDD), where you should see an increasing
// number of tests passing as you implement more functionality.

Or with a panic. I think this is a really good idea.

I am not sure TDD is a good thing to reference for concept exercises. The test cases for each individual task are usually not structured in a TDD style the way they are for practice exercises. I am not sure “making 5 different tests for 5 different functions pass one by one” has much relation to TDD as the term is commonly used.

I like the idea of the comment overall (at least as long as we don’t have the “compiling” exercise to explain this better upfront). But I would suggest not referring to TDD, e.g. having this for the second part of @andrerfcsantos’ suggestion:

// ...
// This will make the tests compile, but most of them will fail.
// You can then implement the functions one by one and see an increasing
// number of tests passing as you implement more functionality.

Well, “task-driven development” – read the instructions for a task, implement that function, run the tests.
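
For what it’s worth, when working locally you can scope a run to the task at hand with go test’s -run flag, which takes a regular expression matched against test names - e.g., using a test name from the output above:

$ go test -run TestPreparationTime -v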

I haven’t seen what happens in the web editor though: how does it handle non-existent functions?

@glennj If it does not compile, it does not compile, and you see the compiler error. We decided against doing any magic for this case in the web editor test results.
However, I put up a PR today that will include non-executed top-level tests in the web test results, provided the code compiles. E.g. if you have 4 functions, all with a panic inside, you will see 4 failed tests in the output once that PR is merged. (This was needed to make the version 3 test runner interface work.)


If we’re all on board with a comment in the stub file, I’ll open a PR.


A couple of checks still running, but:
