The Markdown Exercise is intended as a way to practice refactoring existing code that already passes the tests but could be more idiomatic, more succinct, or generally improved.
However, a large number of students (936 and counting on the Python track, according to the representer) blow right past those instructions and submit the unaltered code.
One way to address this is to write a response (from the representer or analyzer) that says something to the effect of:

> Markdown is intended as a refactoring exercise, but you have not substantially changed the starter code. Consider what changes you could make to clarify the code or make it more idiomatic while still passing the tests.
Another way might be to write a test that fails and gives the same feedback, which would be more prominent because the student would have to ‘pass’ that test before they could mark their solution complete.
Of course, the test method would be prone to ‘gaming’, where students focus on faking out the diff. Nevertheless, I do think it is problematic that so many students miss the point (and the opportunity) to practice a refactor.
Thoughts? Suggestions? Rants of your own?
I prefer this approach as it ensures someone using the CLI will understand what’s going on.
I’m in no way concerned about people gaming the system as they’re only cheating themselves. So I’d focus on a solution that makes genuine students understand what’s wrong.
Maybe the exercise needs refactoring “hints” as part of the description too?
We could also tag the exercise as a refactoring exercise, which would add some additional info to the README automatically.
Potentially we could also add tooling to ensure that the submitted file isn’t the same as the “stub”, but that’d be a bigger project that wouldn’t get done soon.
Yes. I like that! Perhaps we do part in the README (or addendum), and part in the Hints? It could really benefit from a video approach - or even a cross-track intro video that talks about the practice of refactoring…
It would also be great to have a tag/category of exercises that are refactors. I mean - we don’t want to go nuts, but having a few more exercises around refactoring could be really interesting.
I’m trying to think of (or find) an approach that would let me diff an AST of the stub vs the student’s solution within pytest, and then ‘fail’ the ‘test’ if there is less than, say, a 5% diff. It’s still fuzzy in my head. I haven’t found anything solid yet, but that’s where my brain is headed. We’ll see. I might be forced into a straight diff.
That’s me, honestly. I felt intimidated and neglected the exercise.
I didn’t feel the challenge, just frustration, because I didn’t take the time to actually understand it better. Maybe this one could benefit from some hints.
Some hints would be nice. There’s also always the option of requesting mentoring!