Please consider including the `tailcall` crate in Exercism

I think the tailcall crate provides no trick for solving problems; it just lets you practice a more (tail-)recursive style. Because of that, it may be worth including in the allowed crate list for the purpose of practicing that style.

I’m gonna say no to that. It’s not that popular, at 50k all-time downloads. More importantly, it’s just a performance optimization. You can write the exact same code without this crate. And if you do, LLVM probably even optimizes the tail call; it’s just not guaranteed. I think this is not useful in the context of Exercism.

I’m really just learning Rust, so I probably don’t know what I’m talking about (and certainly nothing about the tailcall crate).

But exactly because we don’t know whether LLVM will optimize it or not, I have refrained from writing recursive code to solve exercises, even when that solution would be wonderfully natural. Writing code that is (I think) blatantly not performant, even if elegant, is not my goal in Rust. The point being: even if I can write the exact same code, I wouldn’t.

I cannot judge whether it’s useful in the context of Exercism. It could be, if tailcall were reasonably stable, maintained, and widely used in significant projects. Having said that, it doesn’t look like a common thing to do in Rust, and therefore I think I’m better off not relying on tail recursion in this track.

For reference, the output of cargo-expand for the example in tailcall’s docs is the following:

```rust
use tailcall::tailcall;
fn factorial(input: u64) -> u64 {
    fn factorial_inner(accumulator: u64, input: u64) -> u64 {
        let mut tailcall_trampoline_state = (accumulator, input);
        'tailcall_trampoline_loop: loop {
            let (accumulator, input) = tailcall_trampoline_state;
            return {
                if input > 0 {
                    {
                        tailcall_trampoline_state = (accumulator * input, input - 1);
                        continue 'tailcall_trampoline_loop;
                    }
                } else {
                    accumulator
                }
            };
        }
    }
    factorial_inner(1, input)
}
```

It’s a very simple loop. If I had to code review the addition of this library to a project I work on, I would likely ask for the dependency to be removed in favor of the loop. So it’s not clear to me that this library even enables more “idiomatic” code.
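To make that concrete, here is a sketch of the hand-written loop I would ask for instead: the same accumulator logic as the trampoline above, with no dependency and no macro.

```rust
// Iterative factorial, equivalent to the expanded trampoline:
// the accumulator/input pair simply becomes two mutable locals.
fn factorial(input: u64) -> u64 {
    let mut accumulator: u64 = 1;
    let mut input = input;
    while input > 0 {
        accumulator *= input;
        input -= 1;
    }
    accumulator
}

fn main() {
    // 5! = 120
    assert_eq!(factorial(5), 120);
    println!("5! = {}", factorial(5));
}
```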

As far as I know, the uncertainty comes from destructors. Rust’s semantics dictate that destructors are run after the last function call, which makes tail call optimization harder. LLVM essentially has to prove that it’s ok to reorder the destructors, which it cannot always do.

So, if you want tail call optimization via LLVM, you can just call all destructors manually before your tail call (or use scopes to achieve the same) to tickle the optimization out of LLVM.
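A minimal sketch of that pattern (the function and its names are hypothetical, not from the thread): drop any value with a destructor before the recursive call, so that nothing is pending after the call and it sits in true tail position. LLVM may then turn the call into a jump, though this is still not guaranteed.

```rust
// Tail-recursive sum 1..=n, with a local that owns heap memory.
fn sum_to(acc: u64, n: u64) -> u64 {
    // `label` has a destructor (it frees its heap allocation on drop).
    let label = format!("n = {n}");
    if n == 0 {
        println!("done at {label}");
        return acc;
    }
    // Without this, `label` would be dropped only *after* the recursive
    // call returns, which keeps the call out of tail position.
    drop(label);
    sum_to(acc + n, n - 1)
}

fn main() {
    // 1 + 2 + 3 + 4 + 5 = 15
    assert_eq!(sum_to(0, 5), 15);
}
```

Using an inner scope (`{ let label = ...; }`) before the call achieves the same thing as the explicit `drop`.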

Fundamentally, this is the world we live in: We rely on compiler optimizations for our programs to go fast and few of them are guaranteed. If you really need maximum performance, you’re always gonna have to know a thing or two about how compilers operate, how to inspect the produced assembly and write benchmarks.


I’m not arguing against that. I’m saying that rather than risk writing code that has a good chance of performing poorly (because, at least as a beginner, I don’t know “a thing or two about how [this] compiler will operate etc”), I’d rather write a simple loop. Deeply recursive code that isn’t optimized can overflow the stack.

I was just trying to slightly amend what you said: “You can write the exact same code without this crate.” – I can, but shouldn’t.

I hope it makes sense.

Most beginners should worry about getting the code to work and be somewhat clean before worrying too much about optimization :grin: If the solution runs before the test runner times out, it’s fast enough. People generally don’t (or shouldn’t) avoid writing code because it “might” be slower. Write the simple solution first and see if performance is an issue. You only need to worry about optimizing the solution if performance is actually an issue.

I really don’t know why we continue this given we all basically agree. Anyway – sorry I brought it up, apparently.