In the CoffeeScript Bob exercise, test 6, “forceful questions”, checks this:
result = bob.hey 'WHAT THE HELL WERE YOU THINKING?'
expect(result).toEqual 'Whoa, chill out!'
According to the instructions, questions that are yelled should get the response “Calm down, I know what I’m doing!”
For example, the JS equivalent test is:
const result = hey('WHAT THE HELL WERE YOU THINKING?');
expect(result).toEqual("Calm down, I know what I'm doing!");
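For context, here is a minimal sketch in JavaScript of the response logic the tests exercise (the function body and helper names are illustrative, not the track’s actual example solution):

```javascript
// Illustrative sketch of Bob's response logic, not the track's example solution.
function hey(message) {
  const trimmed = message.trim();
  const isQuestion = trimmed.endsWith('?');
  // Shouting: contains at least one letter, and is entirely upper-case.
  const isShouting = /[A-Z]/.test(trimmed) && trimmed === trimmed.toUpperCase();

  if (isShouting && isQuestion) return "Calm down, I know what I'm doing!";
  if (isShouting) return 'Whoa, chill out!';
  if (isQuestion) return 'Sure.';
  if (trimmed === '') return 'Fine. Be that way!';
  return 'Whatever.';
}

console.log(hey('WHAT THE HELL WERE YOU THINKING?'));
```

Under this logic a yelled question hits the first branch, which is why the disputed test expecting ‘Whoa, chill out!’ contradicts the instructions.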
I’ve opened a PR to fix the test.
If this is accepted, we can update the example solution.
Update: my bad, this is more complicated than I originally thought.
All 337 published solutions conform to the current tests; updating the tests would invalidate all of them.
Might it be simpler to change the instructions alone?
If the tests get updated, they should be updated to adopt what is in the problem specs. New versions of tests should match the spec unless there is a really good reason why they shouldn’t.
You mean we should keep the same question?
This is the problem specs test:
"description": "forceful question",
"heyBob": "WHAT'S GOING ON?",
"expected": "Calm down, I know what I'm doing!"
However, my question is: should we update the tests, or would it be easier to update the instructions? As far as I know, we use .append to modify problem-specifications exercise instructions, but in this case we would have to cut out this line.
I would go with updating the tests to match the problem specs, even if that means breaking existing solutions. However, I’m not a maintainer on this track.
Could you open the PR? I’ve pushed the commit modifying the example solution, but perhaps it needs to be reopened for the new commit to show up on the PR.
I’m not sure who the CoffeeScript maintainer is. Most of the recent PRs seem to be merged by Erik.
It’s currently unmaintained.
That link gives me a 404 error. Is it perhaps available only to maintainers?
That link is only available to maintainers.
My bad, I had assumed you were a Python maintainer since you’ve been pretty active there. For what it’s worth, these are the launched but now-unmaintained tracks I see using PyGithub.
There was an unterminated string in the fixed test case. I created a PR for this little fix:
@ladokp could you relay the error message you encountered that led you to find this out?
@safwansamsudeen I didn’t save the error message, and since the error is fixed now and the tests run smoothly again, it’s difficult to reproduce. I’ll try, but basically the test run threw an error about an unterminated string and pointed to the line of the “forceful question” test.
The error was like this:
error: missing '
result = bob.hey 'WHAT'S GOING ON?'
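For reference, the same quoting problem reproduces in JavaScript, and the fix is the same as the one applied to the CoffeeScript test: use double quotes around a string containing an apostrophe, or escape the apostrophe (the variable names below are illustrative):

```javascript
// Broken: the apostrophe in WHAT'S terminates the single-quoted string early.
// const input = 'WHAT'S GOING ON?';  // SyntaxError: unterminated string

// Fixed: wrap the string in double quotes, or escape the apostrophe.
const doubleQuoted = "WHAT'S GOING ON?";
const escaped = 'WHAT\'S GOING ON?';

console.log(doubleQuoted === escaped); // both spellings yield the same string
```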