Since “array” is unfortunately in the exercise name, I think we should use “array” in the description for consistency and add instruction appends to clarify what’s being used. Pyret already has an append because arrays and lists are handled very differently by its standard library. Arturo currently needs an append because lists don’t exist and we use lazily evaluated blocks. If we switch to just saying arrays, I can easily change the blocks to eagerly evaluated arrays and no append is needed.
Just re-read these instructions. I think they need more than “array” normalization. The code example implies arbitrary nesting depth and heterogeneous data — which may or may not be how arrays/vectors/lists/whathaveyou are implemented across programming languages.
For example:
input: [1,[2,3,null,4],[null],5]
output: [1,2,3,4,5]
The example, to me, implies a fixed nesting depth instead. A tweak like [1,[2,3,null],[[null,[4]],5]] indicates the arrays won’t have a fixed nesting depth.
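To make the arbitrary-depth point concrete, here’s a minimal Python sketch of what a solution handling both examples would look like (the function name `flatten` and the use of Python lists/`None` are my assumptions, not anything prescribed by the exercise):

```python
def flatten(data):
    """Recursively flatten arbitrarily nested lists, dropping null (None) values."""
    result = []
    for item in data:
        if isinstance(item, list):
            # Recurse to any depth rather than assuming one level of nesting.
            result.extend(flatten(item))
        elif item is not None:
            result.append(item)
    return result

print(flatten([1, [2, 3, None, 4], [None], 5]))       # [1, 2, 3, 4, 5]
print(flatten([1, [2, 3, None], [[None, [4]], 5]]))   # [1, 2, 3, 4, 5]
```

Both the original example and the tweaked one produce the same output, but only the tweaked input actually exercises the recursion past one level.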
Whelp. Not sure if we should tweak it or not. The canonical data does seem to mostly stick to numeric data. But this test case has 5 levels of nesting.
This one has 6 levels with nulls (which the notes say not all languages implement).
I’m tempted to suggest this moves in the other direction and adds, say, a mixed-numbers-and-strings data … but that might not be feasible in some tracks :D
For Python, we may want to add both an addendum and additional test cases that include other types of nested data, since that’s how lists are often used in the language.
But for canonical data, I don’t think that will work for many languages, depending on their array implementation.
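For the Python addendum idea, here’s a rough sketch of what mixed-type test data would force solutions to handle. One wrinkle worth noting: strings are themselves iterable in Python, so a naive “flatten anything iterable” approach would explode them into characters (or recurse forever on single-character strings). The `flatten` name and this particular guard are my assumptions for illustration:

```python
from collections.abc import Iterable

def flatten(data):
    """Flatten nested iterables, treating strings/bytes as atomic values."""
    result = []
    for item in data:
        if isinstance(item, Iterable) and not isinstance(item, (str, bytes)):
            result.extend(flatten(item))
        elif item is not None:
            result.append(item)
    return result

print(flatten([1, ["two", [3, None]], "four"]))  # [1, 'two', 3, 'four']
```

This is exactly the kind of language-specific behavior that fits a track addendum but would be awkward to encode in canonical data shared by tracks whose arrays are homogeneous.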