During a mentoring session for the DnD Character exercise on the Java track, I noticed a solution that passed all the tests for the ability method but didn't satisfy the example below from the instructions. I wonder if it would be worth adding a test for it?
3, 5, 3, 4: You discard the 3 and sum 5 + 3 + 4 = 12, which you assign to wisdom.
The solution’s implementation looked like this:
```java
int ability(List<Integer> scores) {
    if (allScoresAreTheSame(scores)) {
        return 3 * scores.get(0);
    }
    int smallestScore = Collections.min(scores);
    return scores
        .stream()
        .filter((score) -> smallestScore != score) // removes every occurrence of the smallest score
        .reduce(0, (total, score) -> total + score);
}
```
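To make the mismatch concrete: with the example rolls from the instructions, the filter drops both 3s, so the method returns 5 + 4 = 9 instead of 12. Here's a minimal, self-contained sketch (the allScoresAreTheSame helper wasn't shown in the solution, so its definition here is my assumption):

```java
import java.util.Collections;
import java.util.List;

public class AbilityDemo {

    // The mentored solution, with an assumed definition of its helper.
    static int ability(List<Integer> scores) {
        if (allScoresAreTheSame(scores)) {
            return 3 * scores.get(0);
        }
        int smallestScore = Collections.min(scores);
        return scores
            .stream()
            .filter((score) -> smallestScore != score)
            .reduce(0, (total, score) -> total + score);
    }

    static boolean allScoresAreTheSame(List<Integer> scores) {
        return scores.stream().distinct().count() == 1;
    }

    public static void main(String[] args) {
        // The instructions say 3, 5, 3, 4 should give 12; this prints 9.
        System.out.println(ability(List.of(3, 5, 3, 4)));
    }
}
```

A track test along these lines (calling ability with a fixed list of rolls, which the signature allows) would have flagged this solution.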
I notice the existing ability tests aren't in the problem specifications, so perhaps they are Java-specific?
The current specs only check that the abilities are within range (3 to 18).
Since ability generation is random, there's no direct way to tell from a single character whether the lowest of 4 dice was discarded or whether only 3 dice were used.
But the means differ: 3.5 × 3 = 10.5 for 3 dice, and I presume about 12 for "best 3 of 4".
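For what it's worth, both means can be checked exactly by enumerating every equally likely outcome instead of sampling; the "best 3 of 4" mean comes out to roughly 12.24, so the presumed 12 is close. A quick sketch (not part of the exercise):

```java
public class AbilityMeans {

    public static void main(String[] args) {
        // Plain 3 dice: average over all 6^3 equally likely outcomes.
        long total3 = 0;
        for (int a = 1; a <= 6; a++)
            for (int b = 1; b <= 6; b++)
                for (int c = 1; c <= 6; c++)
                    total3 += a + b + c;
        System.out.println("3 dice:           " + total3 / 216.0);   // 10.5

        // Best 3 of 4: drop exactly one copy of the lowest die, over all 6^4 outcomes.
        long total4 = 0;
        for (int a = 1; a <= 6; a++)
            for (int b = 1; b <= 6; b++)
                for (int c = 1; c <= 6; c++)
                    for (int d = 1; d <= 6; d++)
                        total4 += a + b + c + d - Math.min(Math.min(a, b), Math.min(c, d));
        System.out.println("best 3 of 4 dice: " + total4 / 1296.0);  // ~12.24
    }
}
```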
Using the [law of large numbers](https://en.wikipedia.org/wiki/Law_of_large_numbers), there's probably a way to calculate how many times we need to generate the ability to be 99.9…9% confident that the average would be above X.
I did some empirical calculations.
Given that we generate the ability 600 times (100 character generations × 6 abilities) and require the average ability value to be more than 11.25 (exactly halfway between 10.5 and 12), there's very little chance of a false positive, i.e. of failing a correct solution: I ran 2,000,000 simulations of that setup and didn't get a single false positive.
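In sketch form, the setup was roughly the following (simplified here, with the trial count scaled down so it runs in a few seconds rather than the full 2,000,000 runs):

```java
import java.util.Random;

public class FalsePositiveSimulation {

    private static final Random RANDOM = new Random();

    // One correct ability: roll 4d6 and discard exactly one copy of the lowest die.
    static int ability() {
        int sum = 0;
        int min = 7;
        for (int i = 0; i < 4; i++) {
            int die = RANDOM.nextInt(6) + 1;
            sum += die;
            min = Math.min(min, die);
        }
        return sum - min;
    }

    public static void main(String[] args) {
        int trials = 100_000;          // scaled down; the post used 2,000,000
        int abilitiesPerTrial = 600;   // 100 character generations x 6 abilities
        double threshold = 11.25;

        int falsePositives = 0;
        for (int t = 0; t < trials; t++) {
            long total = 0;
            for (int i = 0; i < abilitiesPerTrial; i++) {
                total += ability();
            }
            if (total / (double) abilitiesPerTrial <= threshold) {
                falsePositives++;      // a correct implementation would have failed this run
            }
        }
        System.out.println(falsePositives + " false positives in " + trials + " trials");
    }
}
```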
The question that remains for me: how big is the chance of actually catching a wrong implementation that way? Or would those wrong implementations still get submitted as often as they are now?
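For the simplest mistake, plain 3d6 with the fourth die ignored, a back-of-the-envelope normal approximation suggests it would essentially always be caught: a 600-sample average of 3d6 exceeding 11.25 is roughly a 6-sigma event. A sketch of that calculation (the normal approximation and tail bound are my additions, not anything from the test suite):

```java
public class CatchRateEstimate {

    public static void main(String[] args) {
        double mean = 10.5;               // exact mean of a 3d6 ability
        double variance = 3 * 35.0 / 12;  // exact variance of 3d6 (three independent d6) = 8.75
        int n = 600;                      // 100 character generations x 6 abilities
        double threshold = 11.25;

        double z = (threshold - mean) / Math.sqrt(variance / n);
        // Standard bound on the normal tail: P(Z > z) <= exp(-z^2 / 2) / (z * sqrt(2 * pi)).
        double chanceToSneakPast = Math.exp(-z * z / 2) / (z * Math.sqrt(2 * Math.PI));

        System.out.printf("z = %.2f, chance a plain-3d6 solution still passes <= %.1e%n",
                z, chanceToSneakPast);
    }
}
```

The subtler bug shown at the top is a different story: it only deviates from the correct rule when the lowest die appears more than once, so I suspect its mean drops only slightly below the correct ~12.24 and the 11.25 cut-off probably wouldn't catch it. That's exactly the kind of case where an explicit example-based test (like the 3, 5, 3, 4 one) still seems worthwhile.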