and while this is technically the right solution and it passes the tests, I don’t think it’s the intended way to solve this. Of course we can have some automated feedback telling students that they shouldn’t do that, but it seems that they rarely read it, so I came up with a test that would enforce a solution that isn’t hardcoded:
import * as fs from 'node:fs';

test('no hardcoded solutions!', () => {
  // The '.' on either side of the alternation is meant to match the quote
  // characters around a hardcoded answer string.
  const regex = /\s*return\s*.(Norwegian|Japanese).\s*/gim;
  // Named 'solution' rather than 'test' to avoid shadowing the global test().
  const solution = fs.readFileSync(`${__dirname}/zebra-puzzle.js`, {
    encoding: 'utf-8',
  });
  expect(solution).not.toMatch(regex);
});
I don’t think that this is perfect, for instance it relies on Node and it won’t catch solutions like this one:
zebraOwner() {
  let zebra = 'Japanese';
  return zebra;
}
but I feel like it’s at least worth discussing if something like that is worth investing time and effort into.
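One cheap way to also catch the variable-assignment variant above would be to flag any quoted occurrence of the answer strings anywhere in the file, not just inside a return statement. A minimal sketch (the helper name and standalone form here are my own, not part of the actual test suite):

```javascript
// Flag any quoted literal of the two answers, regardless of where it appears.
const containsAnswerLiteral = (source) =>
  /['"`](Norwegian|Japanese)['"`]/.test(source);

console.log(containsAnswerLiteral("let zebra = 'Japanese'")); // true
console.log(containsAnswerLiteral('return solvePuzzle().zebraOwner')); // false
```

The trade-off is even more false positives: a genuine solver will almost certainly list the nationalities as data somewhere, so this check would flag honest solutions too.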
Consider who is harmed by cheating: it’s the person submitting the shortcut solution. If someone chooses not to solve the exercise in the “right” way, that’s their choice. What do they gain? Perhaps a badge or a medal on the website.
I expected that the policy on these kinds of solutions would be non-enforcement, but I thought I’d have a go at the problem nevertheless.
Then I realized that I have no idea how to write a test that checks for a specific implementation, which shouldn’t come as a surprise, since in the real world the actual implementation wouldn’t matter.
Spending time dealing with hardcoded solutions isn’t worth the effort. There are better things we can do that add value to the site than trying to catch those who prefer shortcuts.
@Cool-Katt This solution isn’t intended. Is there any chance that those people who used shortcuts haven’t understood the assignment? Or is it clear that they are trying to “cheat”?
I think it’s pretty clear that they’re trying to cheat the exercise. The instructions clearly say:
Obviously, you could simply write two single-statement functions if you peek at the test program to see the expected solution. But the goal is to develop an algorithm which uses the given facts and constraints for the puzzle and determines the two correct answers.
In the case of JavaScript specifically, there are about 50 thousand representations in the queue awaiting auto feedback, and I’ve gotten through maybe 100 so far. This specific exercise alone yields an entire page of representations.
Not that this isn’t expected, but it is overwhelming to the point where I would rather do anything other than deal with representations.
And from what I’ve seen on this track, people rarely even notice the auto feedback. Even with the dialogue at the end before submitting, most would just skip the check, because there’s an option for that.
In addition to those hurting themselves by “cheating”, there are those hit by false positives of automatic cheat detection. Maybe I have invested lots of energy in the “best ever” solution, one that contains return Norwegian.leftNeighbour(); or such. I would not be amused.
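This false positive is easy to demonstrate against the regex test proposed earlier: the '.' wildcard on the left of the alternation matches the space before 'Norwegian' just as happily as it matches a quote character (flags dropped here for a one-off check):

```javascript
// Same pattern as in the proposed test, without the g/i/m flags.
const pattern = /\s*return\s*.(Norwegian|Japanese).\s*/;

// A perfectly legitimate expression gets flagged as a hardcoded answer.
console.log(pattern.test('return Norwegian.leftNeighbour();')); // true
```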
Perhaps they are simply focused on quickly increasing their reputation or collecting medals. Trying to catch them might slow them down but likely won’t achieve much else.
It’s worth noting that there are many other ways to cheat. Even if specific implementations are restricted through tooling, they can easily bypass this by copying and pasting existing solutions. My preference is to simply add an appendix file with a suggestion if the instructions are insufficient, but I won’t take any further action to actively pursue cheaters. I prefer spending my time on those who are genuinely interested in learning rather than wasting it on those who are not.
It doesn’t. But the analyzer does. That is where this sort of test belongs. It will still stop the student from progressing (to the same degree that failing tests do).
But yes, I don’t think it’s needed. Still, if you wanted to learn how the analyzer works and get into it, this is maybe a straightforward one to get started with.