I just saw that a solution in the Python queue got this automated feedback:
“People more often read code than write, so pay more attention to variable naming. In this case it’s better to call k and v variables as what this variables represent: letter and points. that would make it easier to get back later and see immediately what is that code doing.”
I think that’s excellent advice, but this particular solution already used expressive variable names (points and letters), not generic names like k and v.
Could it be that the representer ignores variable names?
Is there a way to avoid giving automated feedback that is specifically about names?
The key thing with representer feedback is that it applies to multiple solutions. Anything the representer normalizes should not be commented on, because you have no way of knowing what the original syntax was for any particular solution.
Identifier names are another example: since representers may normalize the names of variables, functions, methods, and classes, you shouldn’t comment on them. Even if your representer currently does not normalize identifier names, you still shouldn’t comment on them, as this is a normalization likely to be added to the representer later.
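To make the normalization concrete, here is a minimal sketch of how a representer might replace identifier names with numbered placeholders. This is an illustration only, not the Python track representer’s actual implementation; the `Normalizer` class and `PLACEHOLDER_n` naming are assumptions for the example.

```python
import ast


class Normalizer(ast.NodeTransformer):
    """Rename every identifier to a numbered placeholder, so
    `letter`/`points` and `k`/`v` produce the same representation."""

    def __init__(self):
        self.mapping = {}

    def placeholder(self, name):
        # Reuse the same placeholder for repeated occurrences of a name.
        if name not in self.mapping:
            self.mapping[name] = f"PLACEHOLDER_{len(self.mapping) + 1}"
        return self.mapping[name]

    def visit_Name(self, node):
        node.id = self.placeholder(node.id)
        return node


def normalize(source):
    tree = ast.parse(source)
    Normalizer().visit(tree)
    return ast.unparse(tree)


print(normalize("for letter, points in scores.items():\n    total += points"))
# Every variable becomes PLACEHOLDER_1, PLACEHOLDER_2, ... regardless
# of how expressive the original names were.
```

After this kind of normalization, a solution using `letter` and `points` is indistinguishable from one using `k` and `v`, which is exactly why feedback written against a representation must never mention variable names.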
There’s also a special REPRESENTER_NORMALIZATIONS.md file, which the Python track is currently missing (see Elixir’s file for an example). It should be added so mentors know what kinds of normalizations the representer performs, warning them not to leave comments about, for example, variable names.