Hello,
I thought it would be nice to have a benchmark for the knapsack exercise, so I added one, inspired by the parallel letter frequency exercise.
The benchmark implementation is a simple brute-force.
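Roughly something along these lines (just a minimal sketch of the idea, not the exact code in the PR; the `Item` struct and the function name here are placeholders):

```rust
/// Placeholder item type for illustration; the exercise's actual data model may differ.
struct Item {
    weight: u32,
    value: u32,
}

/// Brute force: try every subset of items (assumes a small item count)
/// and keep the best total value that still fits in `max_weight`.
/// Exponential: O(2^n) subsets.
fn knapsack_brute_force(max_weight: u32, items: &[Item]) -> u32 {
    (0u64..(1u64 << items.len()))
        .filter_map(|mask| {
            // Sum the weight and value of the items selected by this bitmask.
            let (weight, value) = items.iter().enumerate().fold((0, 0), |(w, v), (i, item)| {
                if mask & (1 << i) != 0 {
                    (w + item.weight, v + item.value)
                } else {
                    (w, v)
                }
            });
            (weight <= max_weight).then_some(value)
        })
        .max()
        .unwrap_or(0)
}
```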
If you think it can be useful, I have created this PR (I created it as a draft, hoping it would survive the autoclose).
So I got around to this, and I’m currently not seeing the value of this benchmark. The instructions point to Wikipedia, which has pseudocode for the optimal implementation.
I think there can be value in a benchmark if it leads the student on a path of learning. But copy-pasting from Wikipedia and translating pseudocode into Rust syntax is not something we want to encourage students to do. Maybe knapsack is just not the right exercise to add a benchmark for.
Let me know if you have other thoughts on why this might be valuable to students!
You probably looked at my solution and not at the actual benchmark linked in the message above: the benchmark doesn’t contain a Wikipedia link, and it implements a simple brute force, not the optimal solution.
The benchmark is similar to the one included in parallel letter frequency, and in my eyes it has value in showing how much more performant dynamic programming is than brute force at solving the knapsack problem.
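For comparison, the dynamic programming version looks roughly like this (again just a sketch, reusing the placeholder `Item` type from the sketch above):

```rust
/// 0/1 knapsack via dynamic programming over capacities:
/// dp[c] = best value achievable with total weight at most c.
/// Runs in O(items.len() * max_weight) instead of O(2^n).
fn knapsack_dp(max_weight: u32, items: &[Item]) -> u32 {
    let mut dp = vec![0u32; max_weight as usize + 1];
    for item in items {
        // Walk capacities downwards so each item is counted at most once.
        for capacity in (item.weight as usize..=max_weight as usize).rev() {
            dp[capacity] = dp[capacity].max(dp[capacity - item.weight as usize] + item.value);
        }
    }
    dp[max_weight as usize]
}
```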
My concern is that knapsack is not an interesting problem to optimize. (Parallel letter frequency is!) By adding a benchmark, we’re essentially communicating to our users that they should consider optimizing their solution for performance, which, in my opinion, we should only do if there’s something to be learned there.
> knapsack is not an interesting problem to optimize.
I really don’t understand the reasoning behind your opinion. Why is parallel letter frequency interesting and knapsack not?
There is quite a lot of research on knapsack optimization, and it is a good example of the power of dynamic programming. The benchmark allows the student to assess the performance gain over brute force.
I didn’t write the exercise, and if you believe it doesn’t bring value, remove it entirely (I wouldn’t), but I don’t understand why that would be a reason not to have a benchmark.
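To make it concrete, the kind of benchmark I mean is the same criterion setup used by parallel letter frequency, roughly like this (the crate path, names, and input are illustrative, reusing the two sketches above):

```rust
// benches/knapsack.rs (hypothetical): criterion harness comparing the two approaches.
use criterion::{criterion_group, criterion_main, Criterion};
// Assumed crate/module path for the functions sketched above.
use knapsack::{knapsack_brute_force, knapsack_dp, Item};

fn bench_knapsack(c: &mut Criterion) {
    // A modest input is already enough to separate O(2^n) from O(n * max_weight).
    let items: Vec<Item> = (1u32..=20)
        .map(|i| Item { weight: i, value: (i * 3) % 7 + 1 })
        .collect();

    c.bench_function("brute force", |b| b.iter(|| knapsack_brute_force(50, &items)));
    c.bench_function("dynamic programming", |b| b.iter(|| knapsack_dp(50, &items)));
}

criterion_group!(benches, bench_knapsack);
criterion_main!(benches);
```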
@Marcondiro If we’re to have a benchmark, I’d want it to be in the form of an article (e.g. via an article like this). That’s the right way for us to do this.
For that, we’d need the various implementations worth benchmarking, and some interesting differences to show off.
If you have 3 or 4 different approaches and benchmarks for those, that would be great to see and would allow us to move forward with this. So far the conversation has been more theory than practical data, so if you can give us some good data, that will demonstrate the value really well.