There’s a similar issue with ‘y’, though it is more niche; it could be exposed by adding the test word ‘mystery’. I’m unsure whether to open a pull request for that, though, when this addition hasn’t actually taken effect yet.
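To make the ‘y’ edge concrete, here is a minimal sketch of the Pig Latin rules as I understand them (the function name and structure are mine, not the track’s reference solution); ‘mystery’ exercises the rule that ‘y’ acts as a vowel once at least one consonant precedes it, and ‘liquid’ exercises the rule that only a *leading* consonant cluster’s “qu” moves:

```python
VOWELS = "aeiou"

def pig_latin(word):
    # Words starting with a vowel sound ("xr"/"yt" included) just get "ay".
    if word[0] in VOWELS or word[:2] in ("xr", "yt"):
        return word + "ay"
    # Walk past the leading consonant cluster.
    i = 0
    while i < len(word) and word[i] not in VOWELS:
        if word[i:i + 2] == "qu":     # "qu" travels with the cluster
            i += 2
            break
        if word[i] == "y" and i > 0:  # "y" acts as a vowel after a consonant
            break
        i += 1
    return word[i:] + word[:i] + "ay"

# "mystery" -> "ysterymay": the "y" stops the cluster after "m".
# "therapy" -> "erapythay": the trailing "y" is untouched.
# "liquid"  -> "iquidlay": the inner "qu" must NOT be moved.
```

A naive solution that scans the whole word for “qu”, or that treats ‘y’ as a consonant everywhere, passes the current tests but gets these words wrong.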
Do we just need to wait for some automatic periodic build?
Not many. I was reviewing someone’s code that should have failed in a couple of different ways, yet it passed all the tests. This is one of the cases that failed their code. (Your “liquid” also failed it, but it’s not in the test set yet.)
It’s actually kind of spectacular that the code I reviewed worked for “therapy” at all.
Looking through the 10 most recent solutions, I think 2 of them are bugged; those do fail ‘liquid’, I believe. I have a hard time parsing this code in my head, so I can’t read many more than 10.
For the Python track, we don’t sync to problem specs more than once a month. It’s not a lot of effort – but it is effort, so we try to group the changes we pull in.
We have not done this month’s sync yet, and we might wait until sometime in January, since we’re also doing some tooling and other maintenance at the moment.
Beyond that, track maintainers also have the option of rejecting a test case if they feel that it doesn’t fit with language idioms or other objectives on the track.
Fair. A de facto waiting period of 1-2 months isn’t too bad. It was confusing for me, but I’ll understand the situation next time.
I don’t see how adding an English-language word to the Pig Latin test cases could clash with any language idioms or other objectives, but I respect the maintainers’ authority to reject test cases for reasons of their own. It might require effort or other resources.
Trying to think about this exercise more systematically, I believe a word matching the regex `.*y.*qu.*` (found via a regex dictionary word search) would be most likely to expose flaws in ‘solutions’.
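A quick way to run that search, sketched in Python (the word-list path in the comment is an assumption; `/usr/share/dict/words` is just a common Unix location):

```python
import re

# Words containing a "y" somewhere before a "qu" cluster: these stress
# both the y-as-vowel rule and the qu-moves-with-the-cluster rule at once.
PATTERN = re.compile(r".*y.*qu.*")

def hard_words(words):
    """Return the candidate words most likely to trip up buggy handling."""
    return [w for w in words if PATTERN.fullmatch(w)]

# On a Unix system one might feed in the system dictionary, e.g.:
# with open("/usr/share/dict/words") as f:
#     print(hard_words(line.strip().lower() for line in f))
```

Note that `fullmatch` is used so the pattern must cover the whole word; with `match`, the trailing `.*` would make that equivalent anyway, but `fullmatch` states the intent plainly.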