What do you all think about adding the classic refactoring exercise Gilded Rose Kata to the problem specifications?
I’d be happy to open a PR in the problem specifications, or add the kata as an exercise in the Ruby track if that’s a better starting point. First I just want to know whether it would have any chance of being accepted.
Here’s the exercise in Ruby, but it exists in many languages (see below):
It’s a classic, and probably the most popular code kata. It’s been covered in lots of talks, such as this one by Sandi Metz.
Since it already exists in many languages, adding implementations across many tracks would be straightforward. Many versions don’t include ready-made tests, though, so tests would have to be written for some tracks.
It’s a refactoring exercise… I love them, but I wonder if there’s a reason Exercism has only three, and those date back to 2016. Maybe they’re difficult to mentor, or they’re too long, or they’re outside of Exercism’s focus?
The scenario is fanciful bordering on nonsensical, unlike the practical vibe of exercises on Exercism. I’m not sure if that matters or not.
I might propose you implement it for Ruby as a trial run to get all the “kinks” worked out, and then move from that implementation to a more “generic” problem-specifications one, since the requirements and discussion for problem-specs can be involved. But honestly, there is enough data out there on the problem that it can be directly discussed in problem-specs.
There is also the Markdown exercise, which isn’t implemented for Ruby at this time. I (personally) would like to see more refactoring exercises - I think they are right in line with the TDD philosophy and can lend themselves to learning/teaching idiomatic syntax. Code starts out maybe more verbose or complex using more “generic” syntax, and then can be made less verbose/complex by using more language-specific tools and libraries (as an example).
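To illustrate the kind of progression I mean (a toy example of my own, not from any actual exercise): a first pass might lean on generic loop-and-accumulator syntax, which a refactor can collapse into Ruby’s Enumerable methods without changing behavior:

```ruby
# Verbose first pass: sum the squares of the even numbers.
def sum_of_even_squares(numbers)
  total = 0
  numbers.each do |n|
    total += n * n if n.even?
  end
  total
end

# Idiomatic refactor: same behavior, expressed with Enumerable.
def sum_of_even_squares_refactored(numbers)
  numbers.select(&:even?).sum { |n| n * n }
end

sum_of_even_squares([1, 2, 3, 4])             # => 20
sum_of_even_squares_refactored([1, 2, 3, 4])  # => 20
```

Both versions pass the same tests, which is exactly the safety net that makes the second version a refactor rather than a rewrite.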
Having “not ideal” code that passes all tests also encourages students to ask themselves “can I do better?”, and might lead them to seek out mentorship/code review and discussion - all things we want from students using Exercism.
I think it’s hard coming up with exercises that work in a wide swath of languages and aren’t over-specified. And that difficulty is compounded when it comes to creating refactoring exercises. It’s also more work to craft automated feedback and analysis - so human mentoring is very important.
I also think we need to be much much clearer with student prompting when an exercise is a refactoring exercise. On the Python track, we often see students submit virtually unaltered code, because it passes the tests and there is a lack of understanding that the goal is to change the code while keeping the tests passing. We also get students who think that the stub code is somehow the solution, so they try to PR improvements, rather than submitting a refactor as a solution on the website.
I’d argue it is no more fanciful than Annalyn’s Infiltration or DnD Character or Saddle Points. In fact, you could make the setup story a sort of extension of, or riff on, either Annalyn’s Infiltration or DnD Character by placing the Inn in the same RPG game, and having the task be that you’ve been told the code needs review and refactoring.
Thanks for being open to the addition! I like your suggestion of implementing it in Ruby first. I’ll plan on doing that.
Thanks also for mentioning the Markdown refactoring exercise, which I’d overlooked. I added it to my OP above.
Agreed! One thing about Gilded Rose that might help on that point is that it’s not purely a refactoring exercise. The student’s ultimate task is to add a feature (so, tests for that feature are failing at the beginning), but adding that feature becomes much easier if the student first refactors the existing code.
You’re right that the instructions should clearly state that refactoring is expected, but if the student nevertheless tries to add the feature without first refactoring, the hope is that they’ll feel some pain and reconsider their approach.
Perfect, then Gilded Rose will be in good company!
I think refactoring is an important skill. I would go as far as adding a feature to the (cpp) test runner to flag unaltered solutions for specific exercises, just as some tracks already implement a very special hint for solutions that print “Hello, World!”.
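One cheap way such a flag could work (a sketch only; real test runners differ per track, and these names are hypothetical) is to compare a digest of the submitted source against the known exercise stub:

```ruby
require "digest"

# Hypothetical helper: flag a submission that is byte-identical
# to the exercise stub. Sources here are illustrative strings;
# a real runner would read them from files.
def unaltered_submission?(stub_source, submitted_source)
  Digest::SHA256.hexdigest(stub_source) ==
    Digest::SHA256.hexdigest(submitted_source)
end

stub = "class GildedRose\nend\n"
unaltered_submission?(stub, stub)        # => true
unaltered_submission?(stub, stub + "#")  # => false
```

A real implementation would probably want to normalize whitespace and comments first, so trivial edits don’t slip past the check.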
I’m adapting this exercise for PowerShell atm and I have a couple of questions, since the wording of the problem might still be somewhat ambiguous in some cases.
If an item is created with a quality above its actual maximum, would it self-correct before each day’s update?
For example, if I have a normal item with quality 89 and I call update on it, would its quality the next day be 50 (corrected only after the update) or 49 (corrected to 50 before the update, then decreased by the daily 1)?
I also assume that whenever a Sulfuras is created, it always gets the maximum quality of 80, regardless of whether it is conjured. If that’s the case, then I think a test for an input quality below 80 should be included as well.
The starting code (which is from the original Gilded Rose Kata) doesn’t enforce any of the constraints that you mentioned. So if an Item were created with a quality above the maximum, or if a Sulfuras were created with any quality other than 80, the system would not correct those values.
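To make that concrete, here is a simplified sketch of the legacy-style logic for a normal item (not the exact starting code): the boundary is only guarded during the decrement, so an out-of-range starting quality is never corrected, it just decays:

```ruby
Item = Struct.new(:name, :sell_in, :quality)

# Simplified legacy-style update for a normal item: decrement
# quality only while it is above zero; nothing ever clamps an
# already-invalid value down to 50.
def update_normal(item)
  item.sell_in -= 1
  item.quality -= 1 if item.quality > 0
end

item = Item.new("Elixir of the Mongoose", 10, 89)
update_normal(item)
item.quality  # => 88, not 50 or 49: the invalid value persists
```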
I didn’t add tests for those cases because (since this is a refactoring exercise) I didn’t think I should test beyond what the starting code does, other than the new feature at the end.
Should we clarify the instructions to say that the student doesn’t need to handle invalid starting quality? Or in other words, that an item’s quality can be assumed to be valid at the beginning.