Surprisingly many "An error occurred while running your tests" messages

Recently, I’ve been getting the following error surprisingly often:

An error occurred while running your tests. This might mean that there was an issue in our infrastructure, or it might mean that you have something in your code that’s causing our systems to break.

Please check your code, and if nothing seems to be wrong, try running the tests again.

Right now it’s happening on the Go track in the Election Day exercise: without editing anything, just running the tests with all methods stubbed with ‘panic’, I get this error. Is this the expected behaviour? Correctly implementing one of the methods (for the first task) doesn’t change anything.

I have a similar problem on the Elixir track with the Stack Underflow exercise. I implemented the first two steps yesterday and they work, but whatever I do to implement the third, I get the same generic error. I even unlocked the community solutions for that step and copied one of them, so the code should be fine, but I’m still getting that generic error.

Is there a way to troubleshoot this?

I’ll take a look. Thanks for reporting.


I got the same error for Java’s Flatten Array after I made a small edit and submitted, because updating the exercise had left it stuck processing forever.

I suspect it’s this commit that’s done it: it has caused a rather large number of solutions (every solution ever submitted on the Go track, so hundreds of thousands) to be rerun through the test runner.

There’s currently 248k left.

What this means is that the 19 jobs queued normally are having to wait for the currently running Go reruns to clear before they can run (as we allow a maximum of 30 simultaneous test runs). And I suspect that this is therefore causing timeouts.
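
To make the failure mode concrete, here’s a rough Go sketch of the congestion — purely illustrative, not our actual infrastructure code. One shared queue is drained by a capped pool of 30 workers, so a fresh submission enqueued behind a big batch of reruns can’t start until the backlog ahead of it drains:

```go
// Hypothetical sketch: a single shared queue with a 30-worker cap.
// All names and numbers are illustrative, not Exercism's real code.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	const maxConcurrent = 30 // the cap mentioned above

	jobs := make(chan string) // unbuffered: a send completes when a worker picks the job up
	var wg sync.WaitGroup

	// 30 workers all pull from the same queue.
	for i := 0; i < maxConcurrent; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				time.Sleep(10 * time.Millisecond) // stand-in for a 5-10s test run
			}
		}()
	}

	start := time.Now()
	for i := 0; i < 10000; i++ {
		jobs <- fmt.Sprintf("rerun-%d", i) // the big batch goes in first...
	}
	jobs <- "fresh-submission" // ...so this one waits behind all of it
	fmt.Printf("fresh submission only picked up after %v\n", time.Since(start))

	close(jobs)
	wg.Wait()
}
```

With a real 5-10s per run instead of 10ms, the wait for anything behind the batch blows straight past any sensible timeout.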

I’ll need to do some coding and ops work to fix this, which will probably take a few hours.


(To be clear, this isn’t the Go track’s fault at all - it’s just us trying to optimise peaks and troughs of usage with limited resources and here we’ve not got that balance right!)

The bump in mentor requests is kinda nice, but if it will take a few hours, we may need a banner.

I tweeted too: https://twitter.com/exercism_io/status/1632783368390574094

Interesting that it’s caused that.

Oops :speak_no_evil:

Just a handful of people asking why it fails online when it passes locally.

Thanks for the update. It makes sense. An interesting problem, for now and the future, if upgrades to tracks cause all previously completed solutions to be re-evaluated! Have fun optimising it, @iHiD

@michalporeba You can blame @bobahop for this :wink:

He got very frustrated having to manually update when an exercise changed, so we automated it: it runs the tests and auto-updates if it can, without needing any user intervention. Which is a really great feature, but maybe we needed to think it through slightly more deeply.
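
Roughly, the rule looks like this — a purely illustrative Go sketch with all names invented, not our actual code:

```go
// Hypothetical sketch of the auto-update behaviour described above: when an
// exercise changes, rerun the tests against each existing solution and only
// bump it automatically if they still pass.
package main

import "fmt"

type Solution struct {
	ID        int
	TestsPass func() bool // stand-in for enqueueing a real test run
}

func maybeAutoUpdate(s Solution) {
	if s.TestsPass() {
		fmt.Printf("solution %d: tests still pass, auto-updated\n", s.ID)
	} else {
		fmt.Printf("solution %d: tests fail, flagged for manual update\n", s.ID)
	}
}

func main() {
	// The catch: an exercise update enqueues one test run per existing
	// solution - for Go, that meant every solution ever submitted.
	solutions := []Solution{
		{ID: 1, TestsPass: func() bool { return true }},
		{ID: 2, TestsPass: func() bool { return false }},
	}
	for _, s := range solutions {
		maybeAutoUpdate(s)
	}
}
```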

We’ve just never had a situation at this scale before. I did think the infrastructure would handle it, but I was clearly wrong :slight_smile:

I will take full responsibility; however, I will do nothing to correct the situation. :smile:

If it helps, I’m also having the same issue on the Python track. Here’s the issue I opened (following the link on the Exercism troubleshooting page, which probably needs to be redirected to this forum!):

Troubleshooting Information

Version

Current: 3.1.0
Latest: 3.1.0
Installed via Chocolatey

Operating System

OS: windows
Architecture: amd64

Configuration

Home: C:\Users\Fabio
Workspace: C:\Users\Fabio\Exercism
Config: C:\Users\Fabio\AppData\Roaming\exercism
API key: e8c4*****************************682

API Reachability

GitHub:

Exercism:


I’m having iterations fail when submitted via CLI (even though they passed all tests locally), but the exact same code passes when submitted via online editor. This happened with 3 different exercises:

The problem is like Visa: it’s everywhere you want to be.


Some solutions are more ‘urgent’ to rerun. Recently submitted ones for example, and much-starred ones. Years-old solutions that no-one ever sees anymore can wait.

I don’t think that holds up, actually. If Bob has completed 100 solutions per year for the last 7 years, and only the latest ones update, he’ll have 600 of his 700 solutions to go through and fix manually. I don’t think time-since-solving is really an indication of the value of the feature; time since the user last used the site might be.


Everyone - I think I’ve cleaned things up a bit. Could you check please?

I did not mean that these solutions need not be updated, rather that updating them has very low priority. That way you can, in theory, stretch a large batch far into the future, reducing the burden in the short term.

Thanks! My Java Flatten Array went through.

No need to :smiley: but it’s good to know there is an option :rofl:

I can confirm that on both the Go and Elixir tracks the unwanted side effects have disappeared! Thank you! :pray:


That’s already what we do: background jobs are processed after foreground jobs. But when you’re talking millions of jobs, each of which takes 5-10 seconds to run, it’s quite a challenge to orchestrate.
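
For the curious, the scheduling rule itself is simple; the scale is the hard part. A tiny illustrative Go sketch of “foreground before background” (made-up names, not our real queueing code):

```go
// Hypothetical sketch: a worker only pulls from the background (rerun)
// queue when nothing is waiting in the foreground queue.
package main

import "fmt"

func nextJob(foreground, background <-chan string) string {
	// Take a foreground job if one is immediately available.
	select {
	case job := <-foreground:
		return job
	default:
	}
	// Otherwise block until a job arrives on either queue. The non-blocking
	// check above is what gives foreground jobs priority in practice.
	select {
	case job := <-foreground:
		return job
	case job := <-background:
		return job
	}
}

func main() {
	fg := make(chan string, 8)
	bg := make(chan string, 8)

	bg <- "rerun-old-go-solution"
	bg <- "rerun-ancient-go-solution"
	fg <- "fresh-submission"

	for i := 0; i < 3; i++ {
		fmt.Println("running:", nextJob(fg, bg)) // fresh-submission comes out first
	}
}
```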