Educational value of accidental complexity - input validation case


It might be an opinion piece, but I’ll do my best to look at it constructively and objectively, as soon as I get this off my chest first: input validation in programming exercises can be really frustrating!

OK. Now, let’s look at the problem and potential solutions.

Exercism is great at teaching new programming languages, especially when exercises are grouped in syllabi, so you can explore specific topics one by one. It is so good, in my opinion, because you can focus on a concept and try to explore, understand and practice it. Any accidental complexity not only distracts from learning and detracts from the effect, but can be really frustrating. Input validation is particularly bad in that regard.

Here is an anecdote to illustrate it. It is my own experience from the last two days, but it happened to me many times before, and I’d like to think I’m not the only one so challenged.

I was doing the All Your Bases exercise, which deals with integers, on the Elixir track. A possible solution, the one I followed, is a recursive one. Some things were working, but then at one step all I was getting were timeouts; clearly I had an infinite recursion somewhere. I spent a lot of time trying to figure out how the rem and div functions in Elixir could possibly break normal maths rules in a way that stopped my recursion-ending guard from triggering, only to eventually realise that the test I had been working on passed, and the suite had moved on to the next one, which was feeding the function rubbish data.
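For illustration only, here is a sketch of that recursive shape in Haskell rather than Elixir (the function name and layout are mine, not from the exercise). With a rubbish base such as 1, the quotient never shrinks, so the base case is never reached and the tests time out:

```haskell
-- Base conversion by repeated division: a sketch of the recursive
-- approach described above (Haskell here, Elixir in the anecdote).
-- With base 1, n `div` 1 equals n, so the recursion never reaches
-- the base case and spins forever -- exactly the timeout that the
-- rubbish-input test produced.
toDigits :: Int -> Int -> [Int]
toDigits _    0 = []
toDigits base n = toDigits base (n `div` base) ++ [n `mod` base]
```

For valid input the guard works fine; it is only the unexpected invalid data that turns a working solution into an apparent infinite loop.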

Of course, there is the argument that real programs have to take care of the input data. I know. Input validation is an important concept, but is it something we should be practicing with every exercise? Wouldn’t it be better to focus on the topic being taught? After all, we are not writing real-life applications; we are engaging in educational exercises. I have some 20 years of programming experience and a number of languages I’m familiar with, and yet I still get caught by things like that when I try to learn something new. If something is not working and I get a generic error message, I spend a lot of time and eventually get annoyed with myself, the platform, the language. Is the problem that I misunderstood the syntax? That I did something wrong? Or is it that there is a concept I haven’t considered yet, or that the tests feed in some incorrect data? And in real applications, isn’t it good practice to handle validation ahead of time, and definitely outside of the business logic?

I think “fixing” this would make learning much easier. Here are a few, non-exclusive, options to make it better, and I think they should apply to all the tracks. I would like to know what you think about them.

  1. Have tests that deal with input validation first, so input is handled before the main exercise begins in earnest. This alone will provide much better user/learner experience without changing the exercises themselves.
  2. Clearly state at the top of the exercise whether input validation is necessary or not. The current descriptions often lead learners to focus on the algorithm rather than on input validation first, which ends in errors.
  3. Remove tests that feed incorrect data from most exercises, and keep them only in exercises that teach input validation.
  4. Create input validation concepts in syllabi to focus on good, idiomatic ways of handling input validation.

While I would love to see points 3 and 4 above implemented, I think that option 1 is simple enough to do with little effort and huge benefit to people new to languages. And I’d be happy to make the changes as I work through the exercises.

What do you think? Has this been discussed before?


Great post. Appreciate your frustration and how you both vented and suggested solutions! So thank you :slight_smile:

To provide some historic context here, pretty much all of Exercism’s Practice Exercises are in many ways “legacy” to Exercism, in that they were created long (nearly a decade!) before the Concept Exercises. They were initially designed to be a “bucket of exercises” - a set of things to do to challenge you and help hone skills. One of the challenges as Exercism has evolved is that they now need to neatly slot into a syllabus and are partially about learning a Concept (in the sense that you’re practicing something you’ve just learnt, rather than just doing a random exercise that could contain any random things). This often means they’re not suitable for the task, but have been shoe-horned into the syllabus at a relevant slot.

One of the things we’re doing this year (soon to be announced in fact!) is going through all of the exercises in Problem Specifications (our main corpus of Practice Exercises) and refining them all. We’re starting with a pass on all the instructions to standardise and refine them (something @kytrinyx has already started working on), and then continuing with other sets of improvements as the year progresses. So this post is very timely in terms of giving us some things we need to address.

Onto the meat, firstly I agree conceptually that input validation shouldn’t get in the way of the key concepts in the exercises. However, I do think input validation is a useful skill to practice (I’ll say more in the bullet points below).

In terms of your concrete suggestions:

  1. Have tests that deal with input validation first: Conceptually great, but we’d need to work out whether this works in practice across all exercises. It might, or we might find places where it doesn’t work very well. (We’d need to check, and if that’s a piece of work you’d like to do to try to prove or disprove this assumption, it would be valuable and appreciated, and we could work out how best to achieve it.) Even if it didn’t always hold true, I could see this becoming a “soft standard”: something that is desirable but that we acknowledge might not always be possible.
  2. Clearly state at the top of the exercise whether input validation is necessary or not: Similar feelings to (1). I like this in principle, but there may be variations in different languages that we need to account for. One challenge we have is that languages can opt in and out of different tests (or add their own), so we have to be careful putting statements in the instructions that may not always hold true across all languages. But there are things we could maybe do to mitigate this. This again feels like it would need a bit of research, and definitely feedback from maintainers across a variety of languages.
  3. Remove tests that feed incorrect data from most exercises: I love this less. I think input validation is often a useful part of TDD and is also useful practice for developers more junior than yourself. I also don’t think it would be necessary if (1) and/or (2) were implemented. It would also involve deeper surgery on the exercises than I feel we want to do.
  4. Create input validation concepts in syllabi: Absolutely love this idea. These are the sorts of exercises that I feel add real value beyond the language “basics”. If you could come up with a good Concept Exercise for one language (maybe pairing with a senior maintainer), then it’s something we could reuse across tracks (we have a copy/paste/edit approach to Concept Exercises, rather than a canonical set as per Practice Exercises).

I don’t think this has been widely discussed before, and we’ve certainly not approached it as a project. But I feel like it might be something that could be well structured into a project and that @kytrinyx might like to lead with you alongside her other improvements to problem specifications. She can chip in next week.

In the meantime, I’m interested in both maintainers’ opinions (knowing the exercises and their implementations better than I do), and also everyone else’s opinions as people solving the exercises.



I think this is a straightforward solution, with the caveat @iHiD mentioned.

On JS/TS we defaulted to “no input validation is necessary unless stated otherwise”. I would personally prefer that across all of Exercism, because input validation is almost always done in the wrong place.

That, combined with point 4, which I will happily champion if no one else does.


I notice I am a bit confused. What exactly do we mean by «input validation» here? To get an even better grip on it: are there cases where it is unclear whether the term applies? (Which ones?)

I regularly wish for my input data to be dirtier. Only on the Haskell track though. Sometimes I feel test cases allowed/suggested by the scaffolding are missing. So here too there is language variation.

Hi @MatthijsBlom

You said

Could you expand a little to help me understand? Do you miss those as a contributor or as a learner? Why do you wish for dirtier data in exercises? Why only on the Haskell track?


As a learner, I do not want to have to guess whether I’m doing it right. I want to be able to check. The obvious way is by looking at the spec; here the spec consists of the tests.

Protein Translation is a good example. The stub prescribes a certain type. That Maybe there seems sensible. However, when exactly should it be used? The tests do not contain any dirty input (search for Nothing), leaving me wondering. This has confused at least two supermentors and at least two of my students.

It is the only track that I have had this problem with yet.

Haskell makes it easy to express distinctions that other languages find too subtle. Most/all exercises here were not designed for Haskell.

Edit: I should add that Haskell also makes error handling very easy, with its various combinators.
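To make the Maybe ambiguity concrete, here is a hypothetical, heavily simplified stub in the spirit of Protein Translation (the codon table is truncated and the names are mine; this is not the track’s actual stub):

```haskell
-- Hypothetical, simplified version of the exercise (illustrative
-- names; the real codon table is far larger and has stop codons).
codon :: String -> Maybe String
codon "AUG" = Just "Methionine"
codon "UGG" = Just "Tryptophan"
codon _     = Nothing

-- The Maybe in the signature suggests invalid RNA should yield
-- Nothing -- but with no dirty input in the tests, nothing
-- confirms that reading.
proteins :: String -> Maybe [String]
proteins = traverse codon . chunks
  where
    chunks [] = []
    chunks s  = take 3 s : chunks (drop 3 s)
```

The type invites error handling that the test suite never exercises, which is exactly the confusion described above.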


While I find the specific error message requirements sometimes a bit cumbersome, having invalid inputs and forcing input validation builds really, really good practice. I agree it may get in the way for Concept Exercises, but for Practice Exercises, I think it’s high value.


For exercises such as gigasecond, I don’t think calling it with non-date inputs or invalid dates adds any value to the exercise.

I don’t think we can say that it’s a good idea in general. The decision not to add it to an exercise was not made lightly, and over the years, additions to problem specs were often rejected for making the exercise unnecessarily harder. Input validation should, in most cases, only occur at input boundaries, which is “reading i/o”. You can keep the data in an unvalidated state and defer, but defensive programming is one of the causes of churn and rigid code.

I don’t think we will benefit from adding input validation in general, and in fact, I think you’ll push people to do this in their own code as well, which more often than not is an objectively bad idea.

In the same category: output validation is generally reserved for “writing i/o”, i.e. the output boundaries. Yes, you may validate (assert) that your generated output is valid (this can be smart to do if you’re uncertain the algorithm does what you want it to do), but it’s not a practice you’ll want to follow in general.

Why? Because the majority of people advocating for this will make an arbitrary decision about what should be validated and what shouldn’t.


As a “protector” of this exercise in the case of Ruby, I can echo what is said in this message.

Indeed, if this were a Decasecond exercise rather than Gigasecond, it would make sense in a language where duck typing comes into play: instead of a moment in time, I might give a moment in motion, supplying some seconds as the base unit of a Degree class rather than a Time class, and expect it to work intuitively.

In Ruby, for instance, the Time class was used as a convenience, so as not to give away the simplest solution to this, which is the simple addition of an integer.

It has other learning value as well, for using that one class over the other during mentoring.

Sometimes when an exercise is over-specified, we lose learning opportunities and the freedom of how to solve the exercise, and of what the results are. Practice exercises should give the most freedom to solve.

I think some input validation can be really useful.
For example, I’m doing the Gleam track and they are very big on dealing with errors, with many of the standard library methods also returning Result values, which encode an error/success state.
Having some exercises that required me to do input validation was really helpful and I expect this to also be the case for other exercises.

In general, validating input is a key skill that real-world code is often chock-full of, so being able to practice it is great.
That said, I definitely don’t want all exercises to have input validation.
There are several reasons for that:

  1. In many cases, input validation is not what the exercise is about, so in that sense it detracts from the “core” of the exercise.
  2. It adds to the difficulty of exercises. I feel that especially some of the less difficult exercises really benefit from not requiring input validation.
  3. This is subjective, but I don’t necessarily enjoy writing input validation logic. To me, it often feels like it gets in the way of actually trying to solve the exercise.

This could work, but it might also be a non-optimal workflow for a student.
I expect students would probably prefer working on the non-validation parts first.
Many exercises have input validation as the last thing the student does, which I personally quite like.

As iHiD mentioned, this can be greatly dependent on tracks. We’d have to do some research, but I expect this to be hard to do.

IIRC we’ve done this in the past for some exercises. There might be a couple of exercises where it could still make sense (I’m looking at you, space-age).

Lovely idea, but it would only work for tracks that have a syllabus, which is a minority (at the moment).


I will still argue that the arbitrary choice of which input to validate against is problematic. Languages that have “railway” design to deal with errors often don’t need to deal with this because an error will bubble up as… an error, instead of being thrown as an exception.

That said, those tracks, like Gleam, should probably add extra tests if it’s core to the language. I still believe it would be mostly detrimental to do this en masse or as default behaviour.
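For readers unfamiliar with the “railway” shape mentioned above, here is a minimal sketch in Haskell using Either (all names are illustrative, not from any exercise): each step returns success or failure, and the do-block short-circuits on the first failure, so errors bubble up as values instead of being thrown.

```haskell
-- Railway style: each step may fail; do-notation short-circuits
-- on the first Left, so the error bubbles up as a value.
parseAge :: String -> Either String Int
parseAge s = case reads s of
  [(n, "")] | n >= 0 -> Right n
  _                  -> Left ("invalid age: " ++ s)

classify :: Int -> Either String String
classify n
  | n < 18    = Right "minor"
  | n < 130   = Right "adult"
  | otherwise = Left "implausible age"

-- The pipeline never mentions the error cases explicitly.
pipeline :: String -> Either String String
pipeline s = do
  n <- parseAge s
  classify n
```

The business logic stays free of explicit error checks; the caller decides what to do with a Left, which is why these languages rarely need scattered defensive validation.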

Extra idea on top of the syllabus exercise: we can make various practice exercises that are clearly designed to deal with user input or output, right? That would solve the “no-learning track” issue.

Yep, that would be a lovely option.

Input validation is a cross-cutting concern with major implications for security. I feel it should be elevated to a greater programming concern.

Regardless, I found it a frustrating aspect of solving the exercises that many of the tests revolve around input validation rather than assertions around the exercise’s primary logic.

As is the programmer’s wont, I found that I first solved the algorithmic aspect of the exercises and then worked on input validation. This often had the unintended consequence of requiring a significant refactor later.

After I learnt Raku grammars, I rewrote many exercises to use them for upfront input validation. Grammars appear to be an intentional language design approach for dealing directly with input validation:

grammar ISBN {
    rule TOP { <digit> ** 9 [ <digit> | X ] }
    token ws { <:Dash_Punctuation> ?        }
}


Certain exercises, like the ISBN Verifier, proved ideal grounds for focus on input validation.

This is an interesting take, and completely opposite to my experience. Is there already a mechanism in Exercism to do some user research on a wider scale than pitting two subjective opinions against each other?

For what it’s worth, my preference is to deal with validation up front, both in exercises and in day-to-day programming. I want to know the constraints of my input before I implement the algorithm or the business logic. I find that this approach results in better separation of concerns and a simpler implementation - I can safely assume that the data I’m processing is correct, as I have just validated it.

So, could we ask tens or hundreds of people and see what the prevailing opinion is?
Or would it be possible to give people the choice, in exercises with validation, of which order to do it in?

I’ve been looking at a number of exercises recently and I think this type of research might be an interesting idea. @kytrinyx I’d be happy to help with problem specifications.

(FYI, we discussed this on the community call this week. You can find the link to watch it back here (available this week only). I asked people to post on this thread so hopefully we’ll hear more thoughts next week.)

(Thanks for sharing. You haven’t scared me away. Quite the opposite. It’s just that life sometimes gets in the way, as it did last week, but I’m back.)

The points raised in the call are great, even more perspectives! It would be good to see them written up here too to keep the record and help get to a conclusion.


As I said on the community call, this is a topic that I have been thinking about for a long time as well. I just didn’t take the time to raise it so far, so I’m happy it came up.

I currently see two main problems:

  • Input validation usually has to do with errors/exceptions. To get to a point in a concept tree where you can properly understand those, you need to learn a lot of other concepts first. E.g. in JavaScript errors are instances of a class, so you need to know classes; for those you need to know simple objects and the functions concept, etc. Now when we imagine that most practice exercises need errors to solve them, that means all of these can only be unlocked very far down in the concept tree.
  • I feel that the benefit of practicing input validation over and over again is limited.
    • Practice exercises are about practicing the concepts of the language to get fluent in them. From that point of view, if 20-30% of the practice exercises (spread over various difficulties) included input validation, that would be plenty of practice. For the remaining 70% of exercises, input validation would just be an add-on to the actual concepts needed for the exercise that does not lead to more learning. So students might perceive it as an annoyance only.
    • Regarding “input validation makes it more real-world like”: I thought about this a bit more and I don’t think it is a strong argument. In most real-world applications that have some structure to them, there would be specific parts of the application that do the input validation. E.g. you would validate some input you got in an HTTP request at the beginning, but as the value travels through various functions in your application you would usually not validate it over and over again in every function you write as part of your business logic. (There are some exceptions, of course, as always.)

I don’t feel “a track can just not implement the input validation related test cases” is a practical solution. Imagine a track maintainer who wants to have practice exercises for the top of the concept tree and wants to tune input validation down to the ~30% mentioned above. Reasons:

  • It would fall on the track maintainer to sort out for which exercises input validation is a good fit.
  • The exercises cannot be auto-updated via configlet sync anymore because for every new/re-implemented test case the maintainer manually needs to check whether it is about input validation and needs to be excluded.

The point above about errors being far down the concept tree can be mitigated by saying “if the input is invalid, return some value” instead of saying “an error should happen”. But that still does not address the other point about the limited value of input validation for the learning experience.

My personal ideal target state:

  • Exercise description clearly states whether all inputs will be valid or whether to guard against invalid inputs.
  • Around 70-80% (which you could call “most”) of practice exercises state that the input will always be a valid value so that students can focus on practicing all the language concepts they came here to practice.
  • The remaining exercises that have input validation should be mostly exercises where the input validation adds value to the exercise or makes sense from the context of the exercise.
  • If an exercise is commonly an easy exercise in tracks and should be solvable early in the concept journey, but really requires input validation for some reason, the problem spec should not require errors but state that some fixed value should be returned, so the exercise can be unlocked early in the learning journey without a lot of extra effort for the maintainers.

I do not have strong feelings about whether input validation test cases should come first or last. If only some exercises require it, it probably does not matter that much anymore.


So I keep thinking about it and I think I will need to explore a few more languages to have a better perspective (I’m not quite the polyglot Erik is).

Can you recommend languages where error or exception handling (or just validation) is novel or less common? Haskell was mentioned above already. I think Rust is also important to look at. But what else should I add to my list?


I quite like how Gleam does error handling, in that anything that can go wrong returns a Result(a, b) type. It’s also a fairly easy language to start out with.

This document explains what Railway-Oriented Programming is, but you do have to know a bit of F# to understand it.


Would it be fair to say that all of F#, Haskell, PureScript, Elm, Gleam, and Rust handle this in basically the same way?