# Add to Problem Specs? Perfect Numbers unit tests don't cover an edge case

Hi there!
While receiving mentoring to improve my code in Perfect Numbers, I noticed that Exercism will accept code that passes all unit tests but actually computes some aliquot sums incorrectly.

Consider this function computing the aliquot sum:

```python
def compute_aliquot_sum(number):
    if number == 1:
        return 0
    aliquot_sum = 1
    i = 2
    while i * i < number:
        if number % i == 0:
            aliquot_sum += i + number // i
        i += 1
    # if i * i == number:
    #     aliquot_sum += i
    return aliquot_sum
```

This code (together with a separate classification function) passes all unit tests, but only by uncommenting the two commented-out lines does it compute every aliquot sum correctly (as far as I can tell). Luckily, in the range from 1 to 10,000,000 I found exactly four numbers for which the classification comes out wrong:

```text
196,     correct: abundant, aliquot sum: 203,     incorrect: deficient, incorrect aliquot sum: 189
13456,   correct: abundant, aliquot sum: 13545,   incorrect: deficient, incorrect aliquot sum: 13429
15376,   correct: abundant, aliquot sum: 15407,   incorrect: deficient, incorrect aliquot sum: 15283
1032256, correct: abundant, aliquot sum: 1032383, incorrect: deficient, incorrect aliquot sum: 1031367
```
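All four numbers are perfect squares, which explains the bug: the `while i * i < number` loop stops just before the square root, so the commented-out check is needed to count that divisor exactly once. For reference, here is the corrected function with those two lines restored (a sketch based on the snippet above):

```python
def compute_aliquot_sum(number):
    """Sum of the proper divisors of `number` (i.e. excluding `number` itself)."""
    if number == 1:
        return 0
    aliquot_sum = 1  # 1 divides every number > 1
    i = 2
    while i * i < number:
        if number % i == 0:
            # Count the divisor pair (i, number // i) together.
            aliquot_sum += i + number // i
        i += 1
    # For perfect squares such as 196 = 14 * 14, the square root pairs
    # with itself, so add it exactly once.
    if i * i == number:
        aliquot_sum += i
    return aliquot_sum
```

With this version, `compute_aliquot_sum(196)` yields 203 (abundant), matching the table above, while the commented-out original yields 189 and misclassifies 196 as deficient.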

I propose adding at least one of those four numbers to the unit tests to root out incorrect solutions.
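A sketch of what such a test could look like, using 196 as the regression case. The `classify` and `compute_aliquot_sum` names and the string labels here are my assumptions for illustration, not necessarily the track's actual API:

```python
import unittest


def compute_aliquot_sum(number):
    # Correct reference implementation (square-root divisor included).
    if number == 1:
        return 0
    aliquot_sum = 1
    i = 2
    while i * i < number:
        if number % i == 0:
            aliquot_sum += i + number // i
        i += 1
    if i * i == number:
        aliquot_sum += i
    return aliquot_sum


def classify(number):
    # Hypothetical classifier in the style of the exercise.
    s = compute_aliquot_sum(number)
    if s == number:
        return "perfect"
    return "abundant" if s > number else "deficient"


class PerfectSquareEdgeCaseTest(unittest.TestCase):
    def test_196_is_abundant(self):
        # Solutions that skip the square-root divisor report "deficient" here.
        self.assertEqual(classify(196), "abundant")


if __name__ == "__main__":
    unittest.main()
```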

Thanks so much for posting, and for finding these edge cases.

Generally, Exercism test cases aren’t intended to be exhaustive: an important part of learning is both working with a mentor to improve code and refactoring code to discover where more tests might be beneficial (you’ve done both, which is great!).

That being said, it is probably worth discussing the addition of an aliquot-sum test case for this exercise – and not just for the Python track, since what you’ve discovered is not Python-specific.

Since this is a practice exercise, its test cases are pulled from a common cross-track repository called `problem-specifications`, where multiple language tracks draw on the same canonical data to generate their tests.

I think your proposal should be discussed there.

@iHiD - can we transfer this from Python to a more general category, and make it a `problem-specs` proposal?