Prevent hardcoded solutions for `zebra-puzzle` and similar exercises

I noticed that a lot of the submitted solutions are something similar to

waterDrinker() {
  return 'Norwegian'
}

zebraOwner() {
  return 'Japanese'
}

and while this technically is the right solution and it passes the tests, I don’t think it’s the intended way to solve the exercise. Of course we can have automated feedback telling students that they shouldn’t do that, but it seems they rarely read it, so I came up with a test that would enforce a solution that isn’t hardcoded:

import * as fs from 'node:fs'

test('no hardcoded solutions!', () => {
  // Flag any `return 'Norwegian'` / `return 'Japanese'` in the submission.
  const regex = /\s*return\s*.(Norwegian|Japanese).\s*/gim;
  const source = fs.readFileSync(`${__dirname}/zebra-puzzle.js`, {
    encoding: 'utf-8',
  });
  expect(source).not.toMatch(regex);
});

I don’t think this is perfect: for one, it relies on Node, and it won’t catch solutions like this one:

zebraOwner() {
  let zebra = 'Japanese'
  return zebra
}

but I feel like it’s at least worth discussing whether something like this is worth investing time and effort into.
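
For what it’s worth, the net could be widened a little to also catch the assignment form above. This is only a sketch in the same test file (reusing the `fs` import from the test above); it’s still easy to defeat, and it could false-positive if a legitimate solution happens to bind one of those strings, so treat it as illustrative rather than a fix:

test('no hardcoded solutions, wider net', () => {
  // Reuses the `fs` import from the test above.
  // Flags both `return 'Japanese'` and `let zebra = 'Japanese'`,
  // but still misses e.g. arrow-function bodies.
  const source = fs.readFileSync(`${__dirname}/zebra-puzzle.js`, {
    encoding: 'utf-8',
  });
  expect(source).not.toMatch(/(?:return|=)\s*['"`](Norwegian|Japanese)['"`]/);
});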

This topic comes up frequently.

Consider who is harmed by cheating: it’s the person submitting the shortcut solution. If someone chooses not to solve the exercise in the “right” way, that’s their choice. What do they gain? Perhaps a badge or a medal on the website.


I expected that the policy on these kinds of solutions would be non-enforcement, but I thought I’d have a go at the problem nevertheless.

I realized that I have no idea how to write a test that checks for a specific implementation, which shouldn’t come as a surprise, since in the real world the actual implementation wouldn’t matter.

Spending time dealing with hardcoded solutions isn’t worth the effort. There are better things we can do that add value to the site than trying to catch those who prefer shortcuts.

@Cool-Katt This solution clearly isn’t the intended one. Is there any chance that the people who used shortcuts haven’t understood the assignment? Or is it clear that they are trying to “cheat”?

And there already exist mechanisms that could be used to catch this (a rough sketch of that kind of check follows the list):

  • track representers and automated feedback
  • track analyzers
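
To be concrete, here is the kind of rule an analyzer-style check could apply. This is not the actual Exercism analyzer API, just a sketch assuming the third-party `acorn` and `acorn-walk` packages: parse the submission and flag any function that directly returns one of the expected answer strings. Unlike a regex, it isn’t fooled by spacing or quoting tricks, though it still won’t follow a value assigned to a variable first without extra data-flow tracking.

// Not the real Exercism analyzer, just a sketch of the kind of rule one
// could apply, assuming the third-party `acorn` and `acorn-walk` packages.
import * as fs from 'node:fs';
import * as acorn from 'acorn';
import * as walk from 'acorn-walk';

const ANSWERS = new Set(['Norwegian', 'Japanese']);

export function looksHardcoded(path) {
  const source = fs.readFileSync(path, { encoding: 'utf-8' });
  const ast = acorn.parse(source, { ecmaVersion: 'latest', sourceType: 'module' });
  let hardcoded = false;
  walk.simple(ast, {
    ReturnStatement(node) {
      // Flags `return 'Japanese'` regardless of formatting, but does not
      // follow values returned indirectly through a variable.
      if (node.argument?.type === 'Literal' && ANSWERS.has(node.argument.value)) {
        hardcoded = true;
      }
    },
  });
  return hardcoded;
}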

I think it’s pretty clear that they’re trying to cheat the exercise. The instructions clearly say:

Obviously, you could simply write two single-statement functions if you peek at the test program to see the expected solution. But the goal is to develop an algorithm which uses the given facts and constraints for the puzzle and determines the two correct answers.
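
For contrast, the intended shape is roughly “enumerate candidate assignments, reject the ones that violate a constraint, read the answers off the survivor”. Here is a deliberately scaled-down three-house toy with made-up constraints (not the real puzzle’s facts, and the names below are mine), just to illustrate that shape:

// Toy version: 3 houses, 2 attributes, 3 invented constraints.
const NATIONALITIES = ['Norwegian', 'Ukrainian', 'Japanese'];
const DRINKS = ['water', 'tea', 'coffee'];

// All permutations of an array.
function permutations(items) {
  if (items.length <= 1) return [items];
  return items.flatMap((item, i) =>
    permutations([...items.slice(0, i), ...items.slice(i + 1)]).map(
      (rest) => [item, ...rest]
    )
  );
}

function solve() {
  for (const nations of permutations(NATIONALITIES)) {
    // Constraint: the Norwegian lives in the first house.
    if (nations[0] !== 'Norwegian') continue;
    for (const drinks of permutations(DRINKS)) {
      // Constraint: coffee is drunk in the middle house.
      if (drinks[1] !== 'coffee') continue;
      // Constraint: the Ukrainian drinks tea.
      if (drinks[nations.indexOf('Ukrainian')] !== 'tea') continue;
      return { nations, drinks };
    }
  }
}

const solution = solve();
console.log(solution.nations[solution.drinks.indexOf('water')]); // -> Norwegian

The real exercise does the same thing with five houses, more attributes, and the full list of given facts, plus whatever pruning is needed to keep the search fast, but the answers fall out of the search rather than being typed in.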

In the case of JavaScript specifically, there are about 50 thousand representations in the queue awaiting automated feedback, and I’ve gotten through maybe 100 so far. This specific exercise alone yields an entire page of representations.
Not that this isn’t expected, but it is overwhelming to the point where I would rather do anything other than deal with representations.

And from what I’ve seen on this track, people rarely even notice the automated feedback. Even with the dialogue at the end before submitting, most will just skip the check, because there’s an option to do that.