I’ve seen people struggle with running their C++ solutions because they contained what the compiler considers warnings and those are apparently treated as errors.
Although some warnings do indeed improve one’s knowledge of C++, some of them can feel very pedantic. And because they are treated as errors, there’s no way to see whether your code is actually correct when they occur.
Is that something that is present in other tracks? It’s a bit like if the Ruby track piped each solution through a very strict Rubocop…
Can somebody show an example where those warnings feel too pedantic?
When I discussed this with students on the C++ track, they usually just wanted the compiler to shut up and accept their sloppy or bad code. Often they were out of ideas and just throwing things at the wall, hoping to end up with a correct solution.
At first glance it looks like the final cout line belongs to the loop because it is at the same indentation level as the if/else. But that loop doesn’t have curly braces, so this cout line will be executed after the loop has finished.
With the current compiler options the compiler generates an error (“… the latter is misleadingly indented as if it were guarded by the ‘for’ …”) which I think is helpful. I don’t see how changing the compiler options to allow that piece of code to pass would help the students.
I have not done exercises on tracks with an analyzer for a while, but I had the workflow in mind like this:
if the code passes the tests, the analyzer can suggest changes
certain error levels in the analyzer hinder the students from submitting as completed
Is that correct?
I would like to have an environment on the C++ track, where the students are not hindered from seeing test results, even if their code is sloppy/wrong. If we could tame down the test runner to show more information instead of blocking with a compiler warning, while still blocking “bad” solutions from entering the community solutions pool, I would be a lot happier.
I don’t know enough C++ to have an authoritative opinion, but my Exercism-sense is that using the analyzer for this is the idiomatic approach from the website perspective. Messy solutions that pass the tests are normal - analyzers and representers (and obviously mentors) fixing them is then the secondary value add.
That said, if it takes time to get the analyzer to do this, maybe we treat them as errors in the interim?
That is the current state. Apart from certain concept exercises the test-runner fails if any warning shows up during compilation. This is also the behavior I encountered in most industry projects I coded for.
Currently the practice exercises use these compiler options: -Wall -Wextra -Wpedantic -Werror.
- -Wall and -Wextra are two fundamental warning levels; all C++ projects I’ve worked on used them.
- -Wpedantic warns about code that does not conform to the C++ specification, e.g. compiler-specific extensions.
- -Werror turns those warnings into errors, i.e. code with warnings does not compile.
Many experts recommend enabling many more warnings, so I don’t think our options are all that strict.
In my experience the recommendations in other tracks are more about “best practices” or “elegant alternatives”, e.g. in C# I get the recommendation to turn one-statement functions into expression-bodied functions.
IMHO that’s not the same as the warnings generated by -Wall -Wextra -Wpedantic.
I think we’re not doing anybody a favor by allowing objectively bad code.
But perhaps we’re talking about different things.
Can you show some reasonable examples of code where our current settings are too pedantic?
The warnings are all valid and should be addressed. But they are warnings, not errors. Alas, the test runner doesn’t currently have a great way to surface warnings; it produces a pass/fail result. Ignoring the warnings isn’t great; treating the warnings as errors isn’t great either. Thankfully someone volunteered to build out the track, which puts them in the position to decide between trade-offs. Neither option is going to make everyone happy, but an option needs picking.