Exercise tests give different results from running code directly

I have completed more than 100 exercises in the Python track, but on the “Change” exercise, for the first time, the tests give different results from what I get when I run the code directly.

For example this test fails:

```python
self.assertEqual(find_fewest_coins([1, 5, 10, 25, 100], 15), [5, 10])
Lists differ: [10, 5] != [5, 10]
```

However, if I add the following line at the bottom of my program and run the code directly, I get [5, 10], the expected result.

```python
print(find_fewest_coins([1, 5, 10, 25, 100], 15))
```

Any hints as to what I might be doing wrong?

When showing code here, we can use a “code fence” to help format it. Inline, that would be a backtick, then the line of code, and an ending backtick.

If there are multiple lines of code or output, it is three backticks and the language, with the code on the following line(s), terminated by three backticks.

It would look like this (given information from your post):

```python
self.assertEqual(find_fewest_coins([1, 5, 10, 25, 100], 15), [5, 10])
Lists differ: [10, 5] != [5, 10]
```

Which I did by doing this:

```python
self.assertEqual(find_fewest_coins([1, 5, 10, 25, 100], 15), [5, 10])
Lists differ: [10, 5] != [5, 10]
```

To answer the question: perhaps a different comparison should be done for this exercise. A nickel and a dime are the same as a dime and a nickel, so I would not expect this to be a failure. The only difference is the order in which the two coins appear in the list.
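
For instance, `unittest` already has an order-insensitive assertion. A sketch of what such a check could look like (this is not the track’s actual test file, just an illustration):

```python
import unittest

from change import find_fewest_coins


class ChangeOrderInsensitiveTest(unittest.TestCase):
    def test_fifteen_cents(self):
        # assertCountEqual passes when both sequences contain the same
        # elements in any order, so [10, 5] would be accepted here.
        self.assertCountEqual(
            find_fewest_coins([1, 5, 10, 25, 100], 15), [5, 10]
        )


if __name__ == "__main__":
    unittest.main()
```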

Hi kotp,

Thanks for responding, and thanks for advising on how to format posts using a “code fence”.

I agree that the order of the two coins should not be a failure. But my confusion is that my code produces the list in one order if I run it via the tests, and in another order if I run the code directly. I would expect my code to produce the same result in both cases.

The tests could certainly do an order-agnostic compare. But the Python track is heavily TDD oriented, so the tests imply that the output must be sorted. Ergo, the solution is considered wrong here, as a correct solution must match the ordering shown in the prose (where all the expected outputs are shown in sorted order).
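
In practice that just means sorting whatever the search produces before returning it; a trivial illustration (the `result` value here is hypothetical, not from your code):

```python
result = [10, 5]       # coins in whatever order the algorithm found them
print(sorted(result))  # [5, 10], the ordering the tests expect
```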

Without access to your code, it’s rather difficult to figure out where it’s going wrong :slight_smile: You may want to share it. Or, better yet, consider requesting mentoring! Exercism is built around having a discussion with a mentor via the “Request Mentoring” button. Mentors automatically have access to your code and test results, which makes debugging significantly easier.

Thanks for your reply IsaacG.

If it were simply the case that my code produced the list in one order and the tests expected the list in sorted order, then I would happily sort my list before returning it.

The problem that’s vexing me is that my code appears to produce different results depending on whether it is run via the tests or run directly. Could it be some sort of test result caching issue? Or is there some other aspect that is different when I run the code through the tests…?

I will certainly consider requesting mentoring if no one is able to point out the problem in this forum. Thanks for that advice.

Ok, I think I figured out the problem.

I had a global variable (a dict declared external to the function being tested).

```python
fewest_cache = {}
```

The test code only imports the function being tested:

```python
from change import (
    find_fewest_coins,
)
```

and this global variable initialization was not being executed between tests.

So if I executed a single test case directly (by calling the function right below its definition in my code file), it behaved as expected. However, when my code was run via the tests, multiple test cases could run in the same process, and my global variable was not re-initialized in between.
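
To make that concrete, here is a stripped-down sketch (not my real solution; the “algorithm” is just a naive greedy stand-in) of how a module-level cache leaks state between calls in the same process:

```python
# A module-level cache is initialised once per process, not once per test.
# The test file imports find_fewest_coins a single time and then runs many
# test cases, so anything stored here survives from one test to the next.
fewest_cache = {}


def find_fewest_coins(coins, target):
    # For illustration only: the cache key ignores `coins`, so a result
    # computed for one coin set gets reused for a different coin set.
    if target in fewest_cache:
        return fewest_cache[target]
    result, remaining = [], target
    # Naive greedy stand-in for the real algorithm.
    for coin in sorted(coins, reverse=True):
        while coin <= remaining:
            result.append(coin)
            remaining -= coin
    fewest_cache[target] = result
    return result


if __name__ == "__main__":
    # Two back-to-back calls in the same process: the second one returns
    # the stale answer cached for a different coin set.
    print(find_fewest_coins([1, 5, 10, 25, 100], 15))  # [10, 5]
    print(find_fewest_coins([1, 5, 7], 15))  # still [10, 5]: 10 is not even an available coin
```

Any of the usual fixes works for this kind of leak: move the cache inside the function, key it on the coin set as well as the target, or use functools.lru_cache on a function that takes hashable arguments.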

All fixed now.

Thanks everyone for your help!

This is why you shouldn’t use global variables :wink:

I throw myself upon the mercy of the court. :bowing_man:

It’s pretty difficult for people on the forum to debug issues like that without seeing the code. With the code, a mentor might be able to spot that global and its lingering state pretty quickly. I’d suggest that, in the future, mentoring be the first choice before the forum (even if you don’t want code feedback, only an answer to “why is this happening?”).

Yes - Good point IsaacG.
Next time I think I’ll take your advice and opt for mentoring as a first choice. The mentoring feature of Exercism is a really good one.
I think I might have been a bit hesitant at first, because mentoring only becomes available in an exercise if you submit an iteration. And I somehow felt that it was inappropriate to submit code that was still failing tests.

You do need to use the CLI to submit your exercise if you want mentoring. But by no means should that stop you from doing so! If a mentor doesn’t want to mentor an exercise with failing tests, they won’t :slight_smile: They can see if it’s passing tests or not before they click the “Mentor this exercise” button. They can also see the message you left, so you can totally spell out what you are or are not looking for in the discussion, e.g. “I just want to understand why this test fails” or “I need help understanding this exercise and would like a full review”.

Understood. Thanks IsaacG.