Robot Name exercise doesn't test randomness (at least in Go)

my non-random solution

I struggled with this exercise for quite a while. I unlocked community solutions and puzzled over how simple (and brute force) some of the solutions were for generating newly initialized robot names 75,000 times without collision.

Then I noticed that they were mutating the state of the package namespace instead of an instance of the Robot struct, thereby passing state between different instances of said struct.
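
For illustration, here's a minimal sketch of that pattern, with names generated via math/rand and tracked in a package-level map (identifiers like usedNames are my own; the shared package-level state is the point):

```go
package robotname

import (
	"errors"
	"fmt"
	"math/rand"
)

// usedNames lives at package level, so every Robot instance shares it.
var usedNames = map[string]bool{}

// Robot only remembers its own name; uniqueness comes from the shared map.
type Robot struct {
	name string
}

// Name returns the robot's current name, drawing a new unique one if needed.
func (r *Robot) Name() (string, error) {
	if r.name != "" {
		return r.name, nil
	}
	// Crude retry loop: give up after enough failed attempts to treat the
	// namespace as exhausted.
	for attempts := 0; attempts < 1_000_000; attempts++ {
		candidate := fmt.Sprintf("%c%c%03d",
			'A'+rand.Intn(26), 'A'+rand.Intn(26), rand.Intn(1000))
		if !usedNames[candidate] {
			usedNames[candidate] = true
			r.name = candidate
			return r.name, nil
		}
	}
	return "", errors.New("namespace exhausted")
}

// Reset forgets the current name; the next Name call draws a fresh one.
func (r *Robot) Reset() {
	r.name = ""
}
```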

The exercise became trivial as a result, but I had already written a generally good cycler for my “registers” that could cycle through all possible values non-randomly until namespace exhaustion occurred.
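
Roughly, the cycler idea looks like this (just a sketch; nextSequentialName and the index layout are illustrative, not the exercise API):

```go
package robotname

import (
	"errors"
	"fmt"
)

// nextIndex is still package-level state, but nothing about it is random:
// it simply walks the AA000..ZZ999 namespace in order.
var nextIndex int

const totalNames = 26 * 26 * 1000 // 676,000 possible names

// nextSequentialName hands out names in a fixed order until exhaustion.
func nextSequentialName() (string, error) {
	if nextIndex >= totalNames {
		return "", errors.New("namespace exhausted")
	}
	i := nextIndex
	nextIndex++
	return fmt.Sprintf("%c%c%03d", 'A'+i/26000, 'A'+(i/1000)%26, i%1000), nil
}
```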

I then removed all calls to math/rand and re-ran the tests. They still passed. As a result, I can assert that, contrary to the test description, this is not a test about randomness. This test teaches you about package-level state.

Perhaps that was the intent, since every solution must employ this technique. If the intent is to exercise this concept, then I believe the README should be changed to reflect it. If the intent is to ensure randomization, then a test should be written that makes at least a rough attempt at predicting the next value. Even a naive one that checks for a consistent difference between consecutive names would do.
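
For example, a naive check along these lines (a sketch that assumes the usual two-letters-plus-three-digits format and the track's Robot API) would catch a fixed-step generator:

```go
package robotname

import "testing"

// nameToIndex maps a name like "AB123" onto a single integer so consecutive
// names can be compared numerically. Assumes two letters plus three digits.
func nameToIndex(name string) int {
	letters := int(name[0]-'A')*26 + int(name[1]-'A')
	digits := int(name[2]-'0')*100 + int(name[3]-'0')*10 + int(name[4]-'0')
	return letters*1000 + digits
}

// TestNamesAreNotEquallySpaced is the naive check described above: if every
// consecutive pair of names differs by the same amount, the generator is
// almost certainly just stepping through the namespace.
func TestNamesAreNotEquallySpaced(t *testing.T) {
	const n = 50
	indices := make([]int, 0, n)
	for i := 0; i < n; i++ {
		var r Robot
		name, err := r.Name()
		if err != nil {
			t.Fatal(err)
		}
		indices = append(indices, nameToIndex(name))
	}
	constantStep := true
	for i := 2; i < n; i++ {
		if indices[i]-indices[i-1] != indices[1]-indices[0] {
			constantStep = false
			break
		}
	}
	if constantStep {
		t.Error("every consecutive pair of names differs by the same amount; names look sequential, not random")
	}
}
```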

In either case, the proposed changes would do a better job of ensuring that the concept the exercise intends to teach is actually taught, and might even get people to a solution (and that knowledge) more quickly, making better use of the time spent on the exercise.

Additionally, the test for namespace exhaustion is a tad bit naive, as a comment even admits. A couple of lines make it more robust, and I’ve tested them myself to ensure this.

I’d be happy to submit a PR for any of these suggestions, but community contributions are closed at the moment.

Is this the right space to discuss it? Do these things seem like a non-issue? Thanks in advance for any comments and opinions!

The exercise is supposed to be solved using random values and a state tracker.

Testing that the names are random can be tricky; once people decide to take a certain approach (e.g. generating sequential values), they tend to stick with it and work their way around any tests thrown their way. If you check that names are not sequential, they can just iterate by 2 values at a time, or bump both a letter and a number.

It might be naive but it gets the job done :slight_smile: If it isn’t broken, does it need fixing?

Robot Name seems to be a source of frustration across many tracks.

@ErikSchierboom Any thoughts site-wide?
@junedev @andrerfcsantos Any thoughts from a Go perspective?

I’m not against leaving it unchanged if it’s delivering the intended experience. I would personally consider it broken, though: the test exhausts the namespace, creates a new instance of Robot, and then checks whether the namespace is exhausted. There is a single codepath that isn’t tested here, and that’s “What happens when you exhaust the namespace and then try to pull another name from it without creating a new Robot?”

I wouldn’t advocate for it if it weren’t something I could write myself in a few lines; I try to prioritize value per line for things like this. Here’s how I modified the test locally to exercise my code: diff for the testing on my personal GitHub
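
In rough terms, the added check looks something like this; this is a sketch of the idea rather than the exact diff, and it assumes the track's Robot/Name/Reset API:

```go
package robotname

import "testing"

// TestExhaustedNamespaceWithoutNewRobot exercises the codepath described
// above: once every name is taken, asking an *existing* robot for another
// name (via Reset) should still report exhaustion rather than looping or
// handing out a duplicate.
func TestExhaustedNamespaceWithoutNewRobot(t *testing.T) {
	const totalNames = 26 * 26 * 1000

	var first Robot
	if _, err := first.Name(); err != nil {
		t.Fatal(err)
	}

	// Use up the rest of the namespace with other robots.
	for i := 1; i < totalNames; i++ {
		var r Robot
		if _, err := r.Name(); err != nil {
			t.Fatalf("unexpected error after %d names: %v", i, err)
		}
	}

	// Now pull another name without constructing a new Robot.
	first.Reset()
	if _, err := first.Name(); err == nil {
		t.Error("expected an error when requesting a name from an exhausted namespace")
	}
}
```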

I spun my wheels on randomness detection and concede that it’s basically impossible to test: a pattern can’t be confirmed unless it repeats at least twice, and the naive approaches would throw false positives. People are here to learn, and I’m sure it’s valuable to them to do things that are in the prompt but not tested. Thanks for the insight!

While I have people here talking about Robot Name, I had one more thing that didn’t seem very consistent in messaging with regard to side effects.

Another exercise I did, Kindergarten Garden, intentionally checks for mutating state outside of a given stack frame, going so far as to comment in the tests that it’s “bad practice” to do so.

I’ve got no problem with that; I like reducing side effects.

But then I come to this exercise, and the core mechanism in every community solution I’ve read is inducing side effects. I think it’s important to somehow let the reader know that they are going to need to do something (generate side effects) that previous exercises have explicitly told them is a bad thing to do. Maybe that’s a good meta-lesson in pragmatism too.

Edit: Reflecting on this post, I believe I was wrong about it being side-effects. I think it’s more aptly described as mutating state beyond the local scope. I think I was conflating that concept with “the implicit, secondary effects of some operation”.

As a result, pretty much disregard all the words in the post. Leaving it for posterity.


Most of my solutions generate the names in sequence, shuffle them, and then take from the shuffled names in sequence. No collisions, so it is fast. It uses a lot of memory up front, but if you’re keeping track of state for all 676,000 names, you’ll end up using that memory anyway, with the added performance hit of a lot of collisions. I would be aggrieved at any track change that makes such an approach fail the tests.
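
A rough sketch of that approach (names, init, and takeName are just illustrative; the pre-shuffled slice is the point):

```go
package robotname

import (
	"fmt"
	"math/rand"
)

// names holds every possible name, pre-shuffled; next is a cursor into it.
// This costs memory up front, but handing out a name is O(1) and collisions
// are impossible.
var (
	names []string
	next  int
)

func init() {
	names = make([]string, 0, 26*26*1000)
	for a := 'A'; a <= 'Z'; a++ {
		for b := 'A'; b <= 'Z'; b++ {
			for n := 0; n < 1000; n++ {
				names = append(names, fmt.Sprintf("%c%c%03d", a, b, n))
			}
		}
	}
	rand.Shuffle(len(names), func(i, j int) { names[i], names[j] = names[j], names[i] })
}

// takeName returns the next pre-shuffled name, or ok=false once exhausted.
func takeName() (name string, ok bool) {
	if next >= len(names) {
		return "", false
	}
	name = names[next]
	next++
	return name, true
}
```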

To me it has seemed strange to have “production runs” use random names. It seems they should be in sequence so that you could report something like “units GR232 through GS123 had a manufacturing defect.”

To be fair it is a bit strange that an exercise that is so explicit in requiring randomness doesn’t test for deterministic solutions (given different seeds).

If testing is difficult, then other requirements could take its place, e.g. some manufacturers follow a pattern in serial numbers to lessen the consequences of typos (each instantiated robot must differ by at least two characters, skipping values when necessary). One could also prohibit ambiguous characters (like 0 and O, or 8 and B), and so on.
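
For example, restricting the alphabet might look something like this (which characters count as ambiguous is just an illustration here):

```go
package robotname

import "math/rand"

// Alphabets with easily-confused characters removed (here: no O or B among
// the letters, no 0 or 8 among the digits).
const (
	safeLetters = "ACDEFGHIJKLMNPQRSTUVWXYZ"
	safeDigits  = "12345679"
)

// unambiguousName builds a two-letter, three-digit name from the restricted
// alphabets; an illustrative helper, not the exercise API.
func unambiguousName() string {
	return string([]byte{
		safeLetters[rand.Intn(len(safeLetters))],
		safeLetters[rand.Intn(len(safeLetters))],
		safeDigits[rand.Intn(len(safeDigits))],
		safeDigits[rand.Intn(len(safeDigits))],
		safeDigits[rand.Intn(len(safeDigits))],
	})
}
```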

100%. This is far more efficient than any “check whether I’ve already generated that name” approach.

I agree too, the story of Robot Name has always been fairly problematic. We should probably come up with improvements to the exercise, maybe streamlining things by just having it be about generating random names?

In abstract, this exercise encourages understanding of namespace collisions. That’s super valuable knowledge to have worked through and integrated into any programmer’s repertoire.

Maybe the focus on random could be shifted to more explicitly challenging the knowledge of this principle.


I don’t think I can reply to multiple people here, but this next bit is just a general part of the discussion.

The reason I gave up on detecting randomness was due to the following thought:

  • Patterns can be the result of random generation. Detecting a pattern (for which there is no generalized test anyway) is not the same as detecting a lack of randomness.

It tells me that you’d want to approach it from a different angle. I saw that @superfastjellyfish mentioned seed manipulation as a possible tool that helps here, but I gave up when I got to “what if they are intentionally using a different source to seed their solution?” It would follow that they can use a PRNG to generate the seed for another PRNG, leaving detection by changing a global seed ineffective.
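
For example (a sketch; the details don't matter, only that the inner generator never looks at the global seed):

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

func main() {
	// Suppose a test pins the global generator to a known seed...
	rand.Seed(1)

	// ...a solution can still derive its own source from something else
	// entirely, so the pinned global seed never influences its output.
	seeder := rand.New(rand.NewSource(time.Now().UnixNano()))
	private := rand.New(rand.NewSource(seeder.Int63()))

	fmt.Println(private.Intn(1000)) // varies between runs despite rand.Seed(1)
}
```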

I think people that want to exercise that will do it of their own volition, and probably shouldn’t be learning about whether or not something is truly random from an exercise about naming robots (lol).

That’s not a problem :slightly_smiling_face: In fact, the solution would have to be automatically or manually seeded to pass the test. The problem statement I suggested would be something like:

“if the program is restarted, it should not produce the exact same sequence”

That is what I would consider a minimal test for something being random. Even though the output is still deterministic, one cannot simply run the program once, print out all the names, and then know the exact sequence that will be generated the next time the program starts.

That being said, I don’t actually think randomness should be necessary for the exercise. I only meant that if randomness is written as an explicit requirement then I would also expect such a test. I would actually advocate for removing randomness from the requirement, or substitute it for another one. Hopefully that makes sense :smiley: