The Markdown exercise is intended as a way to practice refactoring existing code that already passes the tests but could be more idiomatic, more succinct, or generally improved.
However, a large number of students (936 on the Python track & counting in the representer) blow right past those instructions and submit the unaltered code.
One way to address this is to write a response (from the representer or analyzer) that says something to the effect of:
Markdown is intended as a refactoring exercise, but you have not substantially changed the starter code. Consider what changes you could make to clarify the code or make it more idiomatic, while still passing the tests.
Another way might be to write a test that fails and gives the same feedback - which would be more prominent, because the student would have to "pass" that test before they could mark their solution complete.
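Roughly, something as simple as the following sketch could do it - I'm assuming, purely for illustration, that the starter code is shipped alongside the tests as `markdown_stub.py` and the student's file is `markdown.py`:

```python
# Illustrative sketch only - the file names are placeholders, not the real layout.
from pathlib import Path


def test_starter_code_was_changed():
    stub = Path("markdown_stub.py").read_text()
    solution = Path("markdown.py").read_text()
    assert stub != solution, (
        "Markdown is intended as a refactoring exercise, but you have not "
        "changed the starter code. Consider what changes you could make to "
        "clarify the code or make it more idiomatic, while still passing the tests."
    )
```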
Of course, the test method would be prone to "gaming" - where students focused on faking out the diff. Nevertheless, I do think it is problematic that so many students miss the point (and opportunity) to practice a refactor.
I prefer this approach as it ensures someone using the CLI will understand what's going on.
I'm in no way concerned about people gaming the system as they're only cheating themselves. So I'd focus on a solution that makes genuine students understand what's wrong.
Maybe the exercise needs refactoring "hints" as part of the description too?
We could also have a tag marking the exercise as a refactoring exercise, which would add some additional info to the README automatically.
Potentially we could also add tooling to ensure that the submitted file isn't the same as the "stub", but that'd be a bigger project that wouldn't get done soon.
Yes. I like that! Perhaps we do part in the README (or addendum), and part in the Hints? It could really benefit from a video approach - or even a cross-track intro video that talks about the practice of refactoring…
It would also be great to have a tag/category of exercises that are refactors. I mean - we don't want to go nuts, but having a few more exercises around refactoring could be really interesting.
I'm trying to think of/find an approach that would let me diff an AST of the stub vs the student's solution within pytest, and then "fail" the "test" if there is less than… 5% diff, or maybe less than that? It's still fuzzy in my head. I haven't found anything solid yet - but that's where my brain is headed. We'll see. I might be forced into a straight diff.
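To make that concrete, here is a rough sketch of where my head is at - a pytest test that compares AST dumps of the stub and the solution with `difflib`; the file names and the 95% similarity threshold are placeholder assumptions, not a settled design:

```python
# Rough sketch only: the file names and the similarity threshold are assumptions.
import ast
import difflib
from pathlib import Path


def normalized_ast_lines(path):
    """Parse a file and dump its AST so formatting and comments are ignored."""
    tree = ast.parse(Path(path).read_text())
    return ast.dump(tree, indent=1).splitlines()


def test_solution_was_actually_refactored():
    stub = normalized_ast_lines("markdown_stub.py")
    solution = normalized_ast_lines("markdown.py")
    similarity = difflib.SequenceMatcher(None, stub, solution).ratio()
    assert similarity < 0.95, (
        "Markdown is intended as a refactoring exercise, but your solution's "
        "structure is nearly identical to the starter code. Consider what "
        "changes you could make to clarify the code or make it more idiomatic."
    )
```

Comparing AST dumps rather than raw source at least ignores whitespace and comment changes, though it obviously wouldn't stop deliberate gaming.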
That's me. Honestly, I felt intimidated and neglected the exercise.
I didn't feel challenged, just frustrated, because I didn't take the time to actually understand it better. Maybe this one could benefit from some hints.
As a rookie student myself, I felt that this exercise opens too early, at least on the JavaScript track, and that can be confusing, especially for people who are fresh into coding and haven't come into contact with refactoring as a concept. That, allied with the fact that it presents you with template strings, makes it seem even more complicated than it is, and I think it might be a factor in how people (maybe) are treating it. The fact that you can just submit it as it is was, ironically, what shed some light on it for me personally, as I had a chance to look at solutions that were completely different.
Considering the color theme of the code submissions, I'd suggest having a yellow color code for unaltered submissions in general (though I imagine that's a hella big addition). For this exercise, being able to see some different solutions before needing to pass the test is also heaven-sent. Being able to do that introduced me to regular expressions, so maybe adding that concept to the exercise's intro might be nice.
P.S.: Despite the Python tag, I felt it'd be more appropriate to give my feedback here than to start a new post, but I could be corrected.