YAMLScript Language Track

OK sounds good.
I’m not really optimizing for anything at this point except for automating the correct generation of new exercises for my track.

Regarding the base Docker image, you might find this interesting: yamlscript-test-runner/Dockerfile at main · exercism/yamlscript-test-runner · GitHub

and wow, node:20-bookworm-slim starts at 199MB (not that I’m optimizing :wink: )

Slim is ~60 MB (https://hub.docker.com/layers/library/node/20-bookworm-slim/images/sha256-d95b0e1893404d4402e6000827682efa63d3d319403b751537cc64b34d759deb), but Ubuntu’s base image is, I think, only ~25 MB?

That said, Debian is what I know, and I know their security process, so I didn’t optimise further than “grab the slim one” ;)

I was looking at these numbers:

The Idris track has scripts I am finding helpful:

https://github.com/exercism/idris/blob/main/bin/create-exercise.sh

https://github.com/exercism/idris/blob/main/generators/generate.py

Various other tracks have something comparable.

As a starting point when choosing difficulty, I estimate the median from other tracks.

Yeah, I’m thinking I’ll scrape all the metadata I can and take medians.
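For example, taking those medians over the track metadata could be a short script along these lines (a rough sketch; the tracks/ directory and the config.json fields it reads are assumptions about how the cloned track repos are laid out):

    # Sketch: median difficulty of each practice exercise across all track
    # repos, assumed to be cloned side by side under ./tracks.
    import json
    from collections import defaultdict
    from pathlib import Path
    from statistics import median

    difficulties = defaultdict(list)

    for config in Path("tracks").glob("*/config.json"):
        data = json.loads(config.read_text())
        for exercise in data.get("exercises", {}).get("practice", []):
            # Deprecated entries tend to carry little or stale metadata.
            if exercise.get("status") == "deprecated":
                continue
            if "difficulty" in exercise:
                difficulties[exercise["slug"]].append(exercise["difficulty"])

    # One line per exercise: track count, median difficulty, slug.
    for slug, values in sorted(difficulties.items()):
        print(f"{len(values):3d}  {median(values):4.1f}  {slug}")

The first column here would be the number of tracks that include the exercise, which lines up with the format described further down the thread.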

Since all the YAMLScript exercises’ tests can be run by the user with make test, I realized I can use different layouts for different exercises.
For simple ones I’ll go with foo.ys and foo-test.ys.
For others I might go with lib/foo.ys and test/foo-1.ys.
I’ll document the possible file layouts and the user will be able to choose any valid layout they want.
The Makefile will recognize the setup and DTRT (do the right thing).
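To illustrate what “recognize the setup” might involve, the check can be as simple as seeing which files exist. Here is a sketch of that logic in Python (the actual Makefile isn’t shown in this thread, so the layout names and file patterns below are assumptions based on the two layouts described above):

    # Sketch of the layout detection the Makefile could perform; in make this
    # would be wildcard/ifneq checks, written here in Python for readability.
    from pathlib import Path

    def detect_layout(exercise_dir: str = ".") -> str:
        root = Path(exercise_dir)
        if list(root.glob("*-test.ys")):
            return "flat"   # foo.ys and foo-test.ys at the top level
        if (root / "lib").is_dir() and (root / "test").is_dir():
            return "split"  # lib/foo.ys with test/foo-1.ys, test/foo-2.ys, ...
        raise SystemExit("unrecognized exercise layout")

    print(detect_layout())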

Could they be set up to always use the same layout? That would make things simpler for users who are working locally.

You don’t need practices, prerequisites, or topics unless you’re building Learning Mode (which I presume you’re not atm).

I’m not sure yet what makes the most sense as I’ve only finished 8 simple exercises.
I was thinking that some exercises might have multiple test files, but some quick research shows that they all seem to have exactly one test file per exercise.
Only these languages have a test directory:

  • clojure
  • cpp
  • dart
  • elixir
  • erlang
  • gleam
  • haskell
  • lfe
  • perl5
  • purescript
  • raku

I was also thinking that some exercises might be best solved with multiple source code files. I don’t know if that’s the case.
If it’s always one code file and one test file, then yeah I can just go with that.
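For reference, the test-directory list above can be reproduced with a scan roughly like this (the paths are hypothetical; it assumes the active track repos are cloned side by side under ./tracks and that practice exercises live in exercises/practice/):

    # Sketch: print every track that has at least one practice exercise
    # containing a test/ directory.
    from pathlib import Path

    for track in sorted(Path("tracks").iterdir()):
        practice = track / "exercises" / "practice"
        if not practice.is_dir():
            continue
        if any((exercise / "test").is_dir() for exercise in practice.iterdir()):
            print(track.name)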

It feels like, currently, the main point of Exercism is implementing exercises and passing tests, not so much setting up a real-world software project, where things would be much different.

I believe this is accurate :slight_smile:

It’s not always one code file, but there should only be one test file. See java/exercises/concept/remote-control-competition/src/main/java at main · exercism/java · GitHub as an example where the student has multiple files pre-defined in the browser. Otherwise, a student working locally can submit multiple files making up their solution through the CLI.

Good to know.
I have all 71 active language repos cloned and I’m writing scripts to find out interesting things like this.
I’ll share some of my findings here.

Here’s the average difficulty of all the practice exercises for all the active language tracks:

The first number is the number of tracks with that exercise.

The median could also be worth looking at. Averages tend to get skewed by outliers and default values.
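As a toy illustration of that skew (made-up numbers): one outlier pulls the mean well above what most tracks actually chose, while the median stays put.

    from statistics import mean, median

    difficulties = [1, 1, 1, 2, 10]   # hypothetical difficulties for one exercise
    print(mean(difficulties))    # 3.0 -- dragged up by the single outlier
    print(median(difficulties))  # 1   -- what most tracks chose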

The interesting thing is we have a single occurrence of pythagorean-triplets, when the expected canonical slug is pythagorean-triplet. I traced that back to the Euphoria track (see euphoria/exercises/practice/pythagorean-triplets at adca22b8c582ce3559cb2c117124ea470aee5e70 · exercism/euphoria · GitHub). This exercise seems to have identical content to pythagorean-triplet, I think. @ErikSchierboom, do we want to fix this apparent discrepancy?

Here it is with medians too: difficulty.txt · GitHub

I made a mistake when detecting deprecated exercises. Fixed here: difficulty.txt · GitHub

This is 100% accurate.

@ErikSchierboom - PR for review
