Suggestion for the track's test documentation

Hello everyone!

Having just started the C++ track, I have a suggestion concerning the “Testing on the C++ track” page in the track documentation.

It only explains how to build and run the first test(s). After submitting the “log-levels” exercise, it took me a while to understand why I had only 1 test running locally but 9 on Exercism. I finally found the reason in the test file itself (you have to move the #if defined(EXERCISM_RUN_ALL_TESTS) line from test to test).

After re-reading, I see that “Uncomment the next test” is mentioned at the beginning of the documentation, but I think it goes a little unnoticed and deserves more detail. It would be logical to also include that explanation in the “Testing on the C++ track” documentation, or at least to indicate that more details are available in the


Hello Adwing,

I’m the maintainer of the C++ syllabus, so it’s practically my fault when the exercise is not intuitive.

Everything that clears up the process is very welcome. Do you want to write a PR for the files? As you just went through the process, you probably know best what can be improved for new users.

Jumping in here - explicit instructions for how to run all the tests without editing the test file would be appreciated. I was halfway through the track before I realized I didn’t have to paste #define EXERCISM_RUN_ALL_TESTS in the test file.

explicit instructions for how to run all the tests without editing the test file would be appreciated.

I’m not a big fan of that. The current instructions try to promote TDD where you can work on one aspect at a time. I think explaining how you can pass -DEXERCISM_RUN_ALL_TESTS=1 to cmake could add some confusion, and students would have to write code that compiles for all test cases at once.
But maybe that’s a minority opinion.

It might help to use Catch2’s INFO in the test case files and show how many tests are (not) run. I want to expand on this anyhow, because currently some exercises display no usable information for failed test cases.

Where are these instructions? The closest thing I found (after some digging) was Testing on the C++ track | Exercism's Docs, which doesn’t mention TDD or incremental testing at all. In fact, I haven’t come across any documentation that says that not all the tests are run by default - I spent some time in the early days trying to figure out why my tests were passing locally but not on submission, and the solution I came up with was editing the test file to make sure I was testing everything.

(Edit: if you’re talking about the last bullet in the first list, “Uncomment the next test”, that’s an erroneous instruction, since there’s nothing to actually uncomment. It didn’t make any sense to me when I read it, and that approach (“Keep testing iteratively until all tests pass”) is, from my perspective, a waste of time, especially for exercises with lots of corner cases.)

You can find them in the file that gets downloaded via the CLI.
On GitHub it’s in the exercises/shared directory.

Wow. I never noticed that file. It’s better than the web instructions, I guess, but it’s certainly not where I would expect to find instructions on how to configure testing, especially if my tests are passing.


that approach (“Keep testing iteratively until all tests pass”) is, from my perspective, a waste of time, especially with exercises with lots of corner cases.

I can understand that; TDD is not everybody’s favorite.
But Exercism wants to promote TDD, where “one test at a time” is central.
(At least they wanted that before I joined Exercism, back when this EXERCISM_RUN_ALL_TESTS macro was introduced.)

Go doesn’t appear to do this (though they do gate benchmarks behind a flag). Rust doesn’t seem to do this either. Perhaps we should rethink.


I agree, that’s not the best user experience in the world.
And finding the documentation on the website (track page, meatballs menu, “C++ Documentation”) feels more like hiding this information than wanting to show it ;-)


It’s a known issue. @glennj and @iHiD talked about making the documentation more visible in last week’s community call. Hopefully, it will be reworked to make it shine! :sparkles:

I saw that the syllabus was indeed quite recent, thanks for all your work on it!

Yes, I’d be glad to handle the PR as soon as I can! :smile:


Just wanted to jump in here to share my experience after my first week with the C++ track, especially regarding tests:

It’s a lot of fun and can be quite challenging, and I’m really thankful for this course, but I’m struggling with the tests. I think I have spent more time understanding why tests fail than working on code. In most exercises I got errors (for later tasks), although the code for the first task is correct. I then went on to work locally using the test instructions, but I was only able to get an “All tests passed” for one exercise; the others fail, although the code gets accepted by the online editor.

Because that became quite frustrating, I’m now working completely offline and do my own little tests before pasting everything back into the online editor to get a pass.

I feel sorry for the creators of this track, because they have probably put a lot of work into all the test scripts, but I can’t get it to work as intended.

I’m not sure what you mean by this. Could you talk through exactly what breaks for you, please?

It would also be helpful if you can provide some code that passes/fails tests locally but has a different result in the online editor. Thanks!

Hey @buchnema,

I wrote that exercise, and the tests. I also thought a lot about the current iteration of the C++ test runner. There are a few things that you might be doing differently from the online test runner, which would explain the different results.

If you click the “Run Tests” button, the test runner compiles your code and the test code as a whole. The test code executes the functions for all the tasks to get the results of your code.

If you have not implemented the classes, enums, or functions yet, the compilation will fail, even if you have implemented the first task correctly.

From the look of your code, you implemented your own tests in the main function, which you probably extend for every task you face.

I have not found a way to do something similar for C++ testing. One option would be to provide partly implemented functions that would compile, but then students would never write their own complete functions. This shortcut would also fail for enums and classes, because you cannot partially define a class: the tests would fail to compile as soon as a non-existent member is accessed.

I see two ways to solve your (and probably many other people’s) problem:

  1. Make the students write the skeleton of every class, enum, and function as a first task.
  2. Search the student’s solution for the appropriate function implementations and build the tests on the fly before the compilation.

The first method requires a lot of explaining and reduces the step-by-step mentality of the exercises. The second requires a lot of programming in the test backend and would probably need hand-crafted routines for every single exercise.

Maybe a third option exists:

  • Explain to students how Exercism tests are structured and that the C++ tests can only execute correctly after they have implemented every function, enum, or class from each task in some (not necessarily correct) way.

You have spent a lot of time with the track; what do you think?

No need to feel sorry for us :). The learning mode is quite new, so there is a lot to learn for us as well.

Hi @Adwing !
Being quite new to TDD and CMake, it took me a while to understand how to work locally on the C++ track, and I share your views on this documentation suggestion.
Did you manage to make the PR? If not, I’d be happy to.

I also wanted to suggest adding clarifications to exercises/shared/.docs/ itself.
It took me a while to understand that I had to look into the someexercise_test.cpp file to move the #if defined(EXERCISM_RUN_ALL_TESTS) line across tests, for the simple reason that I was looking at the hello_world file structure, which, having just the one test, doesn’t feature that line!
So I could make a small PR for that file too if @vaeng or other maintainers approve.

Finally, thanks everyone for all the recent work on the C++ track, I love it :slight_smile:

If you want to improve the docs with an example, I would be happy to review your PR. :slight_smile:


Done!
Do you need the PR ref? If so: Update with example by apprentiLAB-Suzanne · Pull Request #712 · exercism/cpp · GitHub
Or is it best to tag you in the comments?

Thanks for the link. I have reopened your PR and added a comment.

If you mention me on GitHub, that is probably the fastest way to get my attention.
