Darts missing tests

Hi!

I’ve been mentoring for a couple of months now and have noticed that some solutions pass the tests successfully while actually being quite incorrect. For example, for the Darts problem:

let radius = (x * x + y * y).squareRoot()
switch radius {
case 0...1: return 10
case 1.01...5: return 5
case 5.01...10: return 1
default: return 0
}
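To make the gap concrete, here is the snippet wrapped in a function (the `score(x:y:)` name and signature are my own, not the track’s) and fed a radius that falls between two of the ranges:

```swift
// Hypothetical wrapper around the snippet above, just to show the gap.
func score(x: Double, y: Double) -> Int {
    let radius = (x * x + y * y).squareRoot()
    switch radius {
    case 0...1: return 10
    case 1.01...5: return 5
    case 5.01...10: return 1
    default: return 0
    }
}

// A dart at radius 1.001 matches neither 0...1 nor 1.01...5,
// so it falls through to `default` and scores 0 instead of 5.
print(score(x: 1.001, y: 0))
```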

I have an idea to add some more precise tests just for Swift, but I’m kind of stuck. I’ve found out it’s not that simple to add new test cases, because everything is linked to problem-specifications.

Is that true? Or can I add tests with new UUIDs to the .toml file and then update the tests?

Thank you.

Another problem is that sometimes the test results are awkward.

This is definitely not okay. I’ve checked the test runner, and the output XML and JSON look valid. So maybe not all the tests are running (the RUNALL env variable?) in the production environment? Any clues where to dig from here: specifically, how can I check that the RUNALL env variable is present and passed?
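One place I started digging (just a sketch; I’m guessing at how the runner consumes the variable, so the `shouldRunAll` logic here is an assumption, not the runner’s actual code):

```swift
import Foundation

// Sketch: how a test runner might decide to run the full suite
// based on a RUNALL environment variable. The comparison against
// "true" is an assumption about the runner's convention.
func shouldRunAll(_ env: [String: String] = ProcessInfo.processInfo.environment) -> Bool {
    env["RUNALL"] == "true"
}

// Quick local check: print what the current process actually sees,
// e.g. from inside a test or a small executable run in the same environment.
print("RUNALL =", ProcessInfo.processInfo.environment["RUNALL"] ?? "(unset)")
```

Dropping something like that print into the environment where the tests run would at least confirm whether RUNALL reaches the process at all.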

@Meatball Hi, could you please help with this?

A line was removed in the latest Swift test runner update. Thanks for informing me about this; I will add some test cases to make sure the CI catches this in the future.


What’s the problem with that code? Is it “what happens if radius is 1.001”?

The problem-spec tests are not meant to be completely exhaustive. There will always be edge cases that aren’t tested.

In this case, if it’s a mentoring session, I’d point this case out to the student and suggest that switch may not be the best tool to use.
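As a sketch of that suggestion (again assuming a `score(x:y:)` signature), a chain of threshold checks over the radius leaves no gaps between scoring bands:

```swift
// Gap-free alternative: compare the radius against ascending
// thresholds, so every value lands in exactly one band.
func score(x: Double, y: Double) -> Int {
    let radius = (x * x + y * y).squareRoot()
    if radius <= 1 { return 10 }
    if radius <= 5 { return 5 }
    if radius <= 10 { return 1 }
    return 0
}

// The edge case from above now scores correctly:
print(score(x: 1.001, y: 0))  // prints "5"
```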


What’s the problem with that code? Is it “what happens if radius is 1.001”?

Yes, it’s exactly this kind of edge case. Sure, it should be mentioned during review.

The problem-spec tests are not meant to be completely exhaustive. There will always be edge cases that aren’t tested.

Does that mean the Exercism tests overall are not meant to cover all edge cases?
Is it possible, and actually a common practice, to add track-specific tests for problems?

See these docs.

The tests in general (in the spec or on a per-track basis) aren’t meant to cover each and every possible case. The tests aren’t there to “grade” your work; they are there to guide you in the right direction. They tell you whether your solution does the right thing, more or less. It’s always possible to write a solution that passes all the tests but does the wrong thing with some hypothetical other input. At the extreme, a solution could have if (input == 56) return 56; as the first line :slight_smile: Tests simply cannot cover every situation, and that’s fine. They are there to make sure you’re going in the right direction, not to grade you.

If there is an issue with the tests where they fail to direct people in the right direction (e.g. multiple students are completely unaware their code has an issue), then it might make sense to add additional tests. At that point, unless the issue is language-specific, it probably makes sense to fix that gap across all tracks via the specs. If something specific to one track/language trips up multiple students, that’s when it makes sense to consider additional track-specific tests.
