# Update canonical-data.json

Fixed some erroneous test values.

I just created a pull request to fix some test values in the Luhn exercise, but the system said I couldn’t submit anything due to an “open source pause” of the developers.

It also said that I should post something here instead. So let me say that I think the Luhn tests are wrong. For instance, let us apply Luhn to 091. The zero stays as it is, the 9 is not even doubled, but the 1 is doubled and becomes 2, so that we get 092, whose digit sum is 11, and hence the number is invalid.

There are two more errors like this, which I fixed but was unable to get pulled.


No, read the description carefully: starting from the right we double every 2nd digit.

1. `1` – not doubled – 1
2. `9` – doubled – 9 * 2 - 9 = 9
3. `0` – not doubled – 0

The sum is 10, so 091 is a valid sequence.
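The steps above can be sketched as a small Python function (a sketch of the algorithm as described, not the canonical track implementation; the name `luhn_valid` is my own):

```python
def luhn_valid(number: str) -> bool:
    """Check a digit string with the Luhn algorithm.

    Walking from the right, every 2nd digit is doubled;
    doubles above 9 have 9 subtracted. The string is valid
    if the total is a multiple of 10.
    """
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:      # every 2nd digit, counted from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

print(luhn_valid("091"))  # True: 1 + (9*2 - 9) + 0 = 10
```

With the doubling anchored at the right end, `"091"` sums to 10 and is valid, matching the list above.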

Before assuming the data is wrong, consider how likely it is that you’d be the first one to notice an error like that. There are thousands (hundreds of thousands?) of existing solutions that pass those tests using that data. If that data is wrong, all of those solutions are incorrect.


I had no idea about the scale of the project. I thought there were some tens of solutions. Also, the WP article disagrees with the description. So I thought there may be another mistake.

I don’t have actual numbers for how many people passed the tests. However, I can see how many people published solutions. (Many people might have solved the exercise without publishing.) The Python track lists 3120 published solutions. JavaScript: 991. Java: 1125. Depending on what percentage of people publish their solutions, I’d guess we’re somewhere between tens of thousands and maybe a hundred thousand. It’s definitely well beyond tens of solutions.

Well, I noticed the error when working through the Julia track.

But your colleague is, of course, right: the number is processed from the right. Unfortunately, most of the numbers to be checked happen to work both ways, so all but three tests passed for me.

By the way, the number of community solutions is actually not yet visible to me.

Perhaps it would be a good idea to display their total number (over all languages), so that people like me get intimidated by it.

Could save you a bit of work.

Many of the exercises are developed using test-driven development, and may be presented that way as well. So the tests are definitive (yet not exhaustive), and may augment the reading material (but should never contradict it).

That means that there are two sources of information, and they should complement each other.

So having all the numbers work both ways except for the three failing tests provides a progression through solving the exercise, rather than indicating a failure of the exercise.

If this is not the case, then there is indeed a problem to be solved regarding the exercise itself.

I initially made the same error with this exercise in another track. I’ve learned that poorly worded or even misleading project descriptions are the norm here, and you won’t get any sympathy for being misled. If you decide to move forward, try just skimming the instructions but examining the tests very closely.

If the instructions are unclear or misleading, we’re always open to discussing improvements! We’re hesitant in general to change exercise implementations, though, as that often means invalidating tens (hundreds?) of thousands of completed solutions.

Now, the error was my mistake, but if in doubt, one could remove the Wikipedia link (which in my case contributed to the suspicion that there might have been some mistake, which there wasn’t) and set the words “from the right” in boldface.

One could also include an odd-number-of-digits example in the instructions.

There is no “perfect description”, and with so many users there are always going to be some who misread it. I’m definitely the last one to place blame on anybody, but believe me, I’ve been hit with ENORMOUS criticism of basically everything I ever did in my life. I’m trying not to pass it on, so I hope I didn’t come across that way.


Hang on. If I perform Luhn (as given in the instructions) on 059, then all numbers remain the same, and they don’t sum to a multiple of 10, because they sum to 14.

I do now conjecture that the WP article describes the algorithm the exercise wants. And that one does work with a check digit. Is that perhaps the case?

I now implemented the algorithm from Wikipedia, and all tests pass.

Note that the “−9” method does work: doubling a digit ≥ 5 gives a two-digit number “1x”, and 1x − 9 = (1x − 10) + 1 = x + 1, which is exactly the digit sum 1 + x, where x stands for a digit.
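This identity is easy to verify exhaustively: for each digit from 5 to 9, doubling and subtracting 9 gives the same result as summing the two digits of the doubled value (a quick sanity check in Python):

```python
# For digits 5..9, doubling yields 10..18, i.e. a two-digit number "1x".
# Subtracting 9 equals the digit sum 1 + x, since 2d - 9 = (2d - 10) + 1.
for d in range(5, 10):
    doubled = 2 * d
    digit_sum = doubled // 10 + doubled % 10
    assert doubled - 9 == digit_sum
    print(d, doubled, doubled - 9, digit_sum)
```

This is why implementations can use `d * 2 - 9` instead of splitting the doubled value into its digits.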

(If you want to get doubly served: https://www.youtube.com/watch?v=TYIh4MkcfJA)

The 5 will be doubled (10, then 10 − 9 = 1), giving 9 + 1 + 0 = 10

Similarly, given 158: 8 + 1 + 1 = 10
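Both worked sums above can be reproduced with the right-to-left rule (a quick sketch; `luhn_sum` is a name of my own choosing, not a track function):

```python
def luhn_sum(number: str) -> int:
    """Luhn sum: double every 2nd digit from the right, subtracting 9
    when the doubled value exceeds 9."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d = d * 2 - 9 if d >= 5 else d * 2
        total += d
    return total

print(luhn_sum("059"))  # 10: 9 + (5*2 - 9) + 0
print(luhn_sum("158"))  # 10: 8 + (5*2 - 9) + 1
```

Both sums are multiples of 10, so both sequences are valid.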


That should be corrected. If you see something that needs clarification, please open a new thread. It is beneficial for every student to have clearly worded exercises. As an exercise creator, finding good wording for a task is often very difficult, as you already know the answer, no matter how badly the question is formed. So any insight from the outside is immensely helpful.


That may be, but if one posts about this kind of thing on the forum, likely as not they will receive a reply like one of the following. It’s much more practical to forget it and try to figure out the requirements from the tests.

I am very sorry to hear that you feel discouraged from talking about the difficulties.

I can only talk about the C++ track, but I really do care about those exercises and I do want to make them better and easier to understand. If the general communication about improvements is always blocked by “No, it’s TDD, it does not need to be understandable.”, we should as a community strive to be better. Maybe that is a topic that can be discussed with @iHiD on a community call.

Exercise requirements are test driven. But this doesn’t mean the prose should be confusing! Anything written in the prose should be clear and simple to understand. The prose should explain the general problem and generally what is required.

The prose (in most cases) is not expected to explicitly list every edge case and requirement; those details are encoded in the tests. However, lacking the full, comprehensive requirements should not impact how clear the general description is.

If there is something in the prose which is not clearly communicated (as opposed to how to approach an edge case or an implementation requirement), that should be clarified. If there is something missing from the prose which omits a requirement, that is by design.

Does that distinction make sense?