Clarification of Exercise Methodology Required

Would a solution here be to leave the rules alone and just add a note along the lines of: “you need to implement the requirements embodied by the tests; the tests do not perfectly map to a ‘complete’, ‘full’, or ‘perfect’ Pig Latin”?

I would say “satisfy the requirements” rather than “implement the requirements”. A solution will use the requirements regardless of whether it goes beyond them, but satisfying them is the threshold that defines what “good enough” looks like.

Thanks for the suggestion, but I would still find the rules confusing. Why do they mention xray but not xbox in Rule 1? Why yttria but not ylang-ylang?

I would also repeat: I’m an even worse linguist than I am a programmer. Imagine what happens when linguistics experts join Exercism :slight_smile:

The rules and tests are arbitrary and are not expected to completely represent a “correct” Pig Latin.

Is that the confusing part? Should that sentence be added to resolve any such confusion?

Actually, the word “arbitrary” isn’t really what I’d expect to describe the rules of an exercise (I went to Google Translate to make sure we’re on the same page, and yes, it gives the meaning of “arbitrary” as random, voluntary, wanton).

Maybe “arbitrary” isn’t the best word. The tests aren’t designed to perfectly model Pig Latin, though. They are designed to deliver a TDD exercise of medium difficulty which roughly lines up with the broad rules of Pig Latin. The rules could be simpler, to make for a simpler exercise, but simple is not the goal. The rules could be much, much more involved, to model Pig Latin more accurately, but an accurate Pig Latin model is not the goal.

Is the confusion around an expectation that the rules are supposed to accurately model Pig Latin and fail to do so? If so, we can add a note saying, “Yes, the rules don’t completely match the storyline/generalized goal. Please focus on the tests (see TDD) and not on what Pig Latin ought to be.” Said statement applies to most exercises on the site.

Yes, I think this has been discussed several times here - neither the rules nor the tests are supposed to be perfect. Instead, I suggest that we discuss how we can make them more meaningful for the students.

And here we jump back to my earlier suggestion to actually increase the difficulty level of the exercise. If it’s indeed of medium difficulty, perhaps it shouldn’t be marked as “easy”. I voted for “nightmare” difficulty, but I’m happy to negotiate.

Again, no. The confusion is:

Sounds and letters (“vowel/consonant sounds, not letters”) are two different approaches to modeling the language, though. One is more correct and harder to pin down; the other is less accurate and simpler to define specs for. One is a high-level goal of what ideally should happen; the other is a simpler specification that can be embodied in tests.

So if we say “Your task is to encode a subset of Pig Latin that handles certain cases using the following rules: [comprehensive list of rules that align to the tests]”, would that solve this? That seems to me to remove all of the confusion here, and removes any discrepancy between letters/sounds, since we specify the rules explicitly.
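For concreteness, here is what such a rule list looks like as code. This is just my own illustration in Python (the exercise itself is language-agnostic), and the rules it encodes (leading vowel letters, the special “xr”/“yt” clusters, “qu” moving as a unit, and “y” acting as a vowel after a consonant) are my reading of the canonical tests, not an official spec:

```python
VOWELS = set("aeiou")

def pig_latin_word(word):
    # Rule 1: a leading vowel letter, or the special "xr"/"yt" clusters,
    # counts as a vowel sound: just append "ay".
    if word[0] in VOWELS or word[:2] in ("xr", "yt"):
        return word + "ay"
    # Rules 2-4: find where the leading consonant cluster ends.
    for i, ch in enumerate(word):
        if ch in VOWELS:
            # "qu" moves as a single unit (e.g. "square" -> "aresquay").
            if ch == "u" and word[i - 1] == "q":
                i += 1
            break
        # "y" acts as a vowel once at least one consonant precedes it
        # ("my" -> "ymay", "rhythm" -> "ythmrhay").
        if ch == "y" and i > 0:
            break
    return word[i:] + word[:i] + "ay"

def translate(text):
    return " ".join(pig_latin_word(w) for w in text.split())
```

Note how “xray” is caught by the “xr” special case while “xbox” falls through to the consonant rule and becomes “oxxbay”; that asymmetry is exactly the letters-versus-sounds gap discussed above.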

If that doesn’t solve it, what would solve it?

(Sidenote: I don’t think upping the difficulty is warranted, as I solved it in a few minutes by working through the tests. I think right now it’s nightmarish if you solve to the instructions alone, and easy if you solve to the tests. But as TDD is explicitly part of Exercism, we don’t rate exercises according to the difficulty of not using the tests.)


Yes, thanks, this looks much closer to the actual task to me, and is much less confusing. I would also add something like: “you’re not supposed to map every letter and letter combination to the sounds they produce; all you’re asked to do in this task is implement the following rules: [comprehensive list of rules that align to the tests].”

Well done! It wasn’t me, however, who said this is a medium-difficulty task :sweat_smile:
