Hi, I’m getting an error in Parallel Letter Frequency. Actually, a warning of “variable set but not used” (freqs), but -Werror is on.
It happens for test cases where freqs should be empty; there’s no check for emptiness.
Hey there, I have noticed this in the meantime myself. You’re correct in that the tests are missing the checks for empty maps. This PR is supposed to fix it: fix: missing assertions in parallel-letter-frequency test by ahans · Pull Request #830 · exercism/cpp · GitHub
Thanks for reporting and fixing this, y’all. I merged the PR.
Hi, there’s still a test case without a check for an empty map. Because of that, the compiler keeps warning about a variable set but not used.
I believe it’s just a matter of including “CHECK(freqs.empty());” below line 18 of the test file.
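To illustrate the point, here is a minimal sketch of the situation (using plain `assert` in place of Catch2’s `CHECK` macro, and a hypothetical `frequency()` stand-in, since the exercise’s real signature may differ):

```cpp
#include <cassert>
#include <cctype>
#include <map>
#include <string>
#include <vector>

// Hypothetical stand-in for the exercise's frequency() function.
std::map<char, int> frequency(const std::vector<std::string>& texts) {
    std::map<char, int> freqs;
    for (const auto& text : texts)
        for (unsigned char c : text)
            if (std::isalpha(c)) ++freqs[static_cast<char>(std::tolower(c))];
    return freqs;
}

void test_no_texts() {
    auto freqs = frequency({});
    // Without an assertion like this, `freqs` is set but never read,
    // and -Werror turns the "variable set but not used" warning into
    // a hard error.
    assert(freqs.empty());
}
```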
Man, this is really embarrassing! Sorry about this and thanks for noticing!
@vaeng Please review and merge.
@ahans earlier today I submitted a solution to this problem which I believed wouldn’t pass, as it most likely suffers from data racing. However, it passed.
I’ve just tried the same solution locally on my machine and, as I anticipated, it didn’t pass the tests. There were several small, random differences from the expected results.
My solution used the std::execution::par policy. Is it possible that the test runner isn’t properly configured for parallel algorithms? If that’s the case, my solution would be running sequentially on the test runner, which would explain why it passed there.
As the test runner uses GCC (if I’m not mistaken), the Intel TBB library might not be installed, or perhaps the “-ltbb” linker flag is missing.
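For context, the solution shape under discussion looks roughly like this (a sketch; the `frequency()` signature is an assumption, not the exercise’s actual API). It is shown with the sequential `std::transform_reduce` overload so it builds without TBB; passing `std::execution::par` as the first argument (and, on GCC, including `<execution>` and linking with `-ltbb`) is what would parallelize it:

```cpp
#include <cctype>
#include <map>
#include <numeric>
#include <string>
#include <vector>
// #include <execution>  // needed for std::execution::par (GCC: link with -ltbb)

std::map<char, int> frequency(const std::vector<std::string>& texts) {
    // With the parallel overload, the call would start:
    //   std::transform_reduce(std::execution::par, texts.begin(), ...);
    return std::transform_reduce(
        texts.begin(), texts.end(), std::map<char, int>{},
        // reduce: merge two partial counts
        [](std::map<char, int> a, const std::map<char, int>& b) {
            for (const auto& [c, n] : b) a[c] += n;
            return a;
        },
        // transform: count the letters in one chunk of text
        [](const std::string& text) {
            std::map<char, int> counts;
            for (unsigned char c : text)
                if (std::isalpha(c)) ++counts[static_cast<char>(std::tolower(c))];
            return counts;
        });
}
```

Without TBB (or with `std::execution::seq`), the same code simply runs sequentially, which matches the behavior described above.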
Yeah, the test runner falls back to sequential execution. We considered adding TBB to the runner image but figured it wasn’t worth the extra effort; this exercise makes the most sense for students to work on locally. Running a benchmark as part of the standard test run in the runner isn’t an option either. But playing with different options and seeing how their performance differs is what is most interesting about this exercise, IMHO.
I hadn’t considered data races. Having a chance of detecting those could be a reason to enable parallel execution in the runner, but even then it would be hit or miss. At the very least, we should probably add some text explaining what the online runner does.
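For the record, the typical race in this exercise is several threads updating one shared map. A minimal sketch of a race-free version, again assuming a hypothetical `frequency()` signature: each thread counts into a private map and merges under a mutex. Dropping the lock (or incrementing the shared map directly from each thread) is exactly the kind of race a local build with `-fsanitize=thread` would flag:

```cpp
#include <cctype>
#include <map>
#include <mutex>
#include <string>
#include <thread>
#include <vector>

// Build locally with -pthread; add -fsanitize=thread to let TSan
// verify there are no data races.
std::map<char, int> frequency(const std::vector<std::string>& texts) {
    std::map<char, int> freqs;
    std::mutex m;  // guards freqs; unsynchronized ++freqs[c] from
                   // multiple threads would be a data race
    std::vector<std::thread> workers;
    for (const auto& text : texts)
        workers.emplace_back([&freqs, &m, &text] {
            std::map<char, int> local;  // private to this thread
            for (unsigned char c : text)
                if (std::isalpha(c)) ++local[static_cast<char>(std::tolower(c))];
            std::lock_guard<std::mutex> lock(m);  // merge under the lock
            for (const auto& [c, n] : local) freqs[c] += n;
        });
    for (auto& t : workers) t.join();
    return freqs;
}
```

(One thread per text is only sensible for a sketch; a real solution would chunk the input.)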
You’re right, this kind of exercise is all about playing around with different options for concurrency/parallelism.
I believe the description should mention that, though: that the test runner doesn’t support parallelism, and that it’s best to work on the exercise locally to experiment with different options and to test the solution against race conditions, deadlocks, and similar issues.
It is not that much of an issue to activate TBB; it is just a few MB. But I think the best user experience will come from local testing.
I think we should not overload the exercise text. There is the option to write articles and approaches, and that is where the different strategies and benchmarking topics should go.
If you want to be thorough you can even add an article to explain asan and tsan testing.
You’re right, the text is big enough as it is.
But don’t you think it’s a reasonable expectation, for an exercise that is meant to be solved with parallelism and that even suggests this approach, that the test runner would correctly assess the use of parallelism?
I mean, it’s possible a student doesn’t even download the exercise and only tries it online. I usually do that.
Right now there’s no way to know that this wouldn’t be “advisable”. As far as I’m aware, it’s the first exercise on the cpp track where running the tests locally and online gives appreciably different results.
I’m with you. We should add it.
That feature fell off my to-do list. Thanks for being patient.
I made the PR and wrote a small test to see that all worked as intended. If we merge it, I would be happy to hear back from you to see if your solution now passes the tests on the website.
Hi, @vaeng, sorry for the delay. Was the solution merged? Do you still need me to check if my solution passes the tests?
Thanks for the reminder. I need approval from a guardian to merge, but then the test server should support it and you can try your code.
Edit: Done. You should be able to run your solutions through the test-runner now.