New Exercise Proposal: Mazy Mice

Drive-by comment: seeds don’t transfer between two PRNGs, such as the built-in generators of two different languages.

Everything else: yeahhhh

Drive-by comment: seeds don’t transfer between two PRNGs, such as the built-in generators of two different languages.

I explicitly specify a PRNG, to obtain the same results in any language. Like many Exercism test suites, I keep to numbers below 2^31; this maximizes language support.

Always using a fixed scheme to generate the decisions makes it hard to find a seed that produces a maze that’s well suited for testing a specific edge case.

That is a one-off challenge during test case selection. One could instrument an example solution to report whether each edge case is encountered, and search for a suitable seed. (In a similar way, I randomly searched for game-of-life input that covered 18 cases: dead and live cells with 0…8 live neighbors.)
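
To make that concrete, here is a minimal Rust sketch of such a seed search, using the game-of-life example. It reuses the LCG suggested below in this thread; the grid size, the 50/50 fill, and the all-in-one-grid requirement are arbitrary choices of mine, not part of any spec.

```rust
use std::collections::HashSet;

const N: usize = 12; // grid size, an arbitrary choice for this sketch

struct Lcg(u64);

impl Lcg {
    // The LCG suggested below in this thread.
    fn next(&mut self) -> u64 {
        self.0 = (8121 * self.0 + 28411) % 134456;
        self.0
    }
}

// Which (cell is live, live-neighbor count) cases does this grid exercise?
fn coverage(grid: &[[bool; N]; N]) -> HashSet<(bool, usize)> {
    let mut seen = HashSet::new();
    for y in 0..N {
        for x in 0..N {
            let mut live = 0;
            for dy in -1i32..=1 {
                for dx in -1i32..=1 {
                    if (dx, dy) == (0, 0) {
                        continue;
                    }
                    let (nx, ny) = (x as i32 + dx, y as i32 + dy);
                    if (0..N as i32).contains(&nx)
                        && (0..N as i32).contains(&ny)
                        && grid[ny as usize][nx as usize]
                    {
                        live += 1;
                    }
                }
            }
            seen.insert((grid[y][x], live));
        }
    }
    seen
}

fn main() {
    for seed in 0..134456 {
        let mut rng = Lcg(seed);
        let mut grid = [[false; N]; N];
        for row in grid.iter_mut() {
            for cell in row.iter_mut() {
                // High half of the range = live. Don't use X % 2: the low
                // bit of this LCG strictly alternates.
                *cell = rng.next() >= 134456 / 2;
            }
        }
        if coverage(&grid).len() == 18 {
            println!("seed {seed} covers all 18 cases");
            return;
        }
    }
    println!("no single seed suffices; spread the cases over several tests");
}
```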

“generates an unbiased sample from the uniform distribution over all mazes” (if the decisions made are actually random).

My suggested PRNG

X_{n+1} = (8121 * X_n + 28411) mod 134456

can be used to obtain an unbiased choice y in [0, K), as follows:

L = (134456 div K) * K    # largest multiple of K not exceeding the modulus
do
  obtain next X           # draw from the PRNG above
while X >= L              # reject the top sliver that would bias the result
y = (X * K) div L         # scale the accepted draw down to [0, K)
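
For concreteness, here is a direct translation of that pseudocode into Rust (the language and the names Lcg/choice are my choices for this sketch, not part of the spec):

```rust
const M: u64 = 134456; // modulus of the LCG above

struct Lcg(u64);

impl Lcg {
    // X_{n+1} = (8121 * X_n + 28411) mod 134456
    fn next(&mut self) -> u64 {
        self.0 = (8121 * self.0 + 28411) % M;
        self.0
    }

    // Unbiased choice in [0, k) via the rejection scheme above.
    fn choice(&mut self, k: u64) -> u64 {
        let limit = (M / k) * k; // largest multiple of k not exceeding M
        loop {
            let x = self.next();
            if x < limit {
                return x * k / limit; // scale the accepted draw down to [0, k)
            }
        }
    }
}

fn main() {
    let mut rng = Lcg(42); // arbitrary seed
    let directions = ["up", "down", "left", "right"];
    for _ in 0..5 {
        // K = 4: pick a direction for a random-walk step.
        println!("{}", directions[rng.choice(4) as usize]);
    }
}
```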

My earlier suggestion evaded this complexity by always having K = 4, so L = 134456 and all X are below L. But I certainly agree that Wilson’s is more interesting.

OK, that seems doable… But what’s the advantage? You still have to tell users exactly how to use the numbers generated by this PRNG so they can implement the algorithm correctly. What’s the difference compared to just giving them those decisions as a list?

I suppose it would make the data structure in canonical-data.json more uniform. I don’t value that too highly. I use templating on the Rust track to generate tests and I think one more if-statement in the template to distinguish two different types of input is not a big deal.

The list of decisions as input also makes it easier for students to understand why exactly a test case covers a specific edge case. And it forces exercise authors to choose an exercise design that makes it easy for students to define their own test cases if they’re having trouble with an edge case the test suite designer didn’t consider.

It seems to me the biggest impact is just making it harder for the test suite designer. If they want to go that route, I’m not opposed to it, but I’m not seeing the benefit yet.

Yes, this is a good idea. Sorry, I saw that and read it as “a PRNG” such as “what I wrote”, instead of “We should use a PRNG, namely this one: …”.

Is your proposal not to generate numbers, but instead to give people which sides of each box should be a wall / which path to take from start to finish? What exactly is the “decisions” proposal?

Yeah, pretty much. Whatever you would get from an RNG, you get from a predefined list instead. For Wilson’s algorithm, that’s two things:

  • pairs of arbitrary cells between which to perform a random walk (coordinates)
  • steps along the random walk (up, down, left, right)

So that could translate to an input that looks like this:

input: {
    cells: [[10,16], [8,23], [12,11], /* ... */],
    steps: "udlrrldrlurddudrld..."
}

To match the vibes of a PRNG, tracks should probably implement a design that supplies these decisions through an iterator-style interface, e.g. a function you can call to get the next decision of the algorithm.
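
As a sketch of what such an interface could look like, assuming the cells/steps input from the previous post (all type and method names here are illustrative, not a track convention):

```rust
#[derive(Clone, Copy, Debug)]
enum Step {
    Up,
    Down,
    Left,
    Right,
}

// Wraps the predefined decisions from the test input and hands them out
// one at a time, the way successive calls to an RNG would.
struct Decisions {
    cells: std::vec::IntoIter<(u32, u32)>,
    steps: std::vec::IntoIter<Step>,
}

impl Decisions {
    fn new(cells: Vec<(u32, u32)>, steps: &str) -> Self {
        let steps: Vec<Step> = steps
            .chars()
            .map(|c| match c {
                'u' => Step::Up,
                'd' => Step::Down,
                'l' => Step::Left,
                'r' => Step::Right,
                _ => panic!("invalid step: {c}"),
            })
            .collect();
        Decisions {
            cells: cells.into_iter(),
            steps: steps.into_iter(),
        }
    }

    // The next cell decision (a coordinate pair).
    fn next_cell(&mut self) -> Option<(u32, u32)> {
        self.cells.next()
    }

    // The next step of the current random walk.
    fn next_step(&mut self) -> Option<Step> {
        self.steps.next()
    }
}

fn main() {
    let mut decisions = Decisions::new(vec![(10, 16), (8, 23), (12, 11)], "udlrrldr");
    println!("{:?} {:?}", decisions.next_cell(), decisions.next_step());
}
```

A solution would then call next_cell wherever it would otherwise ask an RNG for a cell, and next_step inside each walk.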

Thanks for the elaboration, that really helps my understanding.

Picking one or the other probably means deciding how much teaching/learning value there is in implementing a decision maker based on a number.

This can go one of four ways, I think:

  • add a comment in the problem spec explaining how the seed value should be used, and only supply seed values. Optionally add tests that verify the implementation of the PRNG, if that is a teaching goal of this exercise; otherwise note in the comment that tracks should make the PRNG available as a function in the stub/lib/require/import chain.
  • add a comment to each test case noting which seed was used to generate its list of decisions, then use the actual decisions in the test case. In this case the teaching goal should be building the maze, nothing else.
  • have two inputs per test case: one the PRNG seed, one the resulting decisions. Add a comment for each (see the previous two points), and note that tracks are free to choose which one they use.
  • same as the previous, but with separate test cases: one with the seed as input, one with the decisions.

My preference differs based on what the teaching goals are. The first one is the purest, and at a glance, implementing the random number → decision mapping can have value; I think I’d like to solve that in an exercise. If that’s less important in some tracks, then I would prefer the third. If it holds no value, then we should do the second.

Remember that test cases are optional, and thus option 4 works in all cases. I also agree that abusing the input, with two properties for tracks to choose from instead of two inputs for the function under test, is fine; but not everyone may agree, so consensus may be a bit harder.

To clarify, my idea is that the explicitly provided decisions don’t necessarily have to be derived from a random seed. While designing the test suite, one can just think of an edge case to test, then arbitrarily pick decisions that will lead to that edge case. No random seed involved.

Yes, I did understand this. I took it as “relatively trivial” to use the random seeds, based on @keiraville here.

Do any of these 4 sound okay to move forward with this idea, @senekor @mk-mxp? I’d also love to hear your opinion, @rabestro.

I’d love to have more randomness-based exercises now that we are adding the concept.

Yeah, sounds good to me.

@SleeplessByte I currently don’t have time / capacity to think through ideas. Sorry.
