Part 1 was straightforward: walk down the list, accumulate the severity, and do not forget to update the list when on an empty field.
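A minimal sketch of that part 1 walk, assuming the firewall is a list of `{depth, range}` pairs (the input shown is the AoC example, not my actual input): a scanner with range `r` is back at the top whenever `rem(time, 2 * (r - 1)) == 0`, and the packet reaches layer `depth` at time `depth`.

```elixir
firewall = [{0, 3}, {1, 2}, {4, 4}, {6, 4}]

# A layer catches the packet iff its scanner is at position 0 when we arrive.
severity =
  firewall
  |> Enum.filter(fn {depth, range} -> rem(depth, 2 * (range - 1)) == 0 end)
  |> Enum.map(fn {depth, range} -> depth * range end)
  |> Enum.sum()

IO.puts(severity)  # 24 for the AoC example input
```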
Part 2 is easy as well, I thought… Well, in fact the theory is easy: generate a stream of all possible firewall configurations, attach the delay that fits each config, reject those where we were caught on the first field, and then find one with a severity of 0 using the walker from part 1.
This would probably have worked, but it didn't get me any result after about 10 minutes (I went downstairs with co-workers and we talked while they were having a cigarette). So I cancelled the run and added a second walker, which does not keep track of severity but bails out as soon as it is caught, returning false in that case and true if it reaches the end. I used that to filter the stream of all firewalls, took the first one, and returned its delay.
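A hedged sketch of that second walker, again on the AoC example input (the helper name `passes?` is mine, not from the actual code): it reports only whether the packet gets through uncaught, and the delay search takes the first delay that passes.

```elixir
firewall = [{0, 3}, {1, 2}, {4, 4}, {6, 4}]

# true iff no layer catches the packet when it starts after `delay` picoseconds
passes? = fn delay ->
  not Enum.any?(firewall, fn {depth, range} ->
    rem(delay + depth, 2 * (range - 1)) == 0
  end)
end

delay =
  Stream.iterate(0, &(&1 + 1))
  |> Enum.find(passes?)

IO.puts(delay)  # 10 for the AoC example input
```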
PS: my current solution to part 2 still takes 25 to 30 seconds on my office VM with MIX_ENV=prod. My laptop at home, running the code natively and on a current OTP (20 vs. 19), does it in 20 to 25 seconds.
NB: the office host has an i5-6600K @ 3.50GHz, the laptop at home an i5-4210M @ 2.60GHz. I'm wondering whether the OTP version or running natively has the bigger impact on those times…
I'm quite happy with the simplicity of my solution for day 13. The running time of ~20 seconds makes me somewhat unhappy, although the desired delay is quite large. I wonder how much I would gain by replacing streams with recursion, but I don't have the time to try it out.
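For reference, the stream-free variant would look roughly like this hand-rolled recursion (a sketch under the same assumed `{depth, range}` input format, not the actual solution): no `Stream` machinery at all, just plain function calls in the inner loop.

```elixir
defmodule Tight do
  # Try delays 0, 1, 2, ... until one gets through uncaught.
  def first_safe(firewall, delay \\ 0) do
    if caught?(firewall, delay), do: first_safe(firewall, delay + 1), else: delay
  end

  defp caught?([], _delay), do: false

  defp caught?([{depth, range} | rest], delay) do
    rem(delay + depth, 2 * (range - 1)) == 0 or caught?(rest, delay)
  end
end

IO.puts(Tight.first_safe([{0, 3}, {1, 2}, {4, 4}, {6, 4}]))  # 10
```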
Thanks @sasajuric! Based on your code, I was able to get down to 6 to 8 seconds on my office VM, and 4.5 to 5.5 at home…
An important step, in addition to using your representation of the firewall, was to replace `s |> Enum.take_while(&not_empty?/1) |> Enum.count()` with `s |> Enum.reduce_while(0, &(if not_empty?(&1), do: {:cont, &2 + 1}, else: {:halt, &2}))` (rough transcriptions of the actual code).
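A minimal demo of that rewrite (with a stand-in `not_empty?` and a toy list, not the real data): both count the leading non-empty elements, but `reduce_while/3` does it in a single fold with an early halt instead of building the `take_while` prefix and counting it.

```elixir
not_empty? = fn list -> list != [] end
s = [[1], [2], [], [3]]

# Variant 1: materialize the prefix, then count it.
n1 = s |> Enum.take_while(not_empty?) |> Enum.count()

# Variant 2: one pass, halting at the first empty element.
n2 =
  Enum.reduce_while(s, 0, fn x, acc ->
    if not_empty?.(x), do: {:cont, acc + 1}, else: {:halt, acc}
  end)

IO.inspect({n1, n2})  # {2, 2}
```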
Just using `take_while/2` and `count/1` was even slower on my office VM than my previous code.
That's strange. I just tried your solution locally, and I get no performance improvement. In other words, I observe no significant perf difference between `Stream.take_while |> Enum.count` and `Enum.reduce_while`. In both cases, I get about 20 seconds of execution time.
For me (only tested at the office), after applying the following diff the calculation took ~34 to 36 seconds, 5 times in a row… Before the patch it was in the range of 6 to 8…
Using this monstrosity, I was able to shave off some 40%. The code is quite ugly, but the idea is fairly simple and based on sieves: I generate failed positions in sorted order, and then stop when the difference between two consecutive positions is > 1. I'm not sure the code is 100% correct, but it gives the same output for my input.
The trick I used is to sort the firewall so I try smaller ranges first. They disqualify a given delay more often, so short-circuiting the instant one fails skips a lot of work. I feel the code is still pretty easy to follow, since I just used `Enum.any?/2` to handle the checks.
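A sketch of that ordering trick, again assuming the `{depth, range}` input format rather than the actual code: small ranges have short scanner periods, so they catch most delays and let `Enum.any?/2` short-circuit early.

```elixir
firewall = [{0, 3}, {1, 2}, {4, 4}, {6, 4}]

# Check the layers most likely to disqualify a delay first.
sorted = Enum.sort_by(firewall, fn {_depth, range} -> range end)

caught? = fn delay ->
  Enum.any?(sorted, fn {depth, range} ->
    rem(delay + depth, 2 * (range - 1)) == 0
  end)
end

delay = Enum.find(Stream.iterate(0, &(&1 + 1)), &(!caught?.(&1)))
IO.puts(delay)  # 10 for the AoC example input
```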
This one is quite beautiful and to the point. Great job!
The funny thing is that my published solution uses the same approach, but done lazily through streams. I did expect some penalty for that, but not this much (more than 10x on my machine).
It would be interesting to see how much faster it would be with `any?` instead of `filter`. I think it should help a lot, since for this problem only 1 out of 4M iterations goes through the whole list.
The problem is not in the filter: I'm using `Stream.filter` with `Enum.empty?`, so this stops as soon as we find the first severity for the given delay. The number of inner comparisons should in fact be the same. However, my solution has three indirections in the inner loop (`Stream.filter`, `Stream.map`, and `Enum.empty?`), and this is where I guess the problem is. With the loop being tight and long, and the inner calculation very simple, the overhead of streams seems to dominate.
Those who know about Streams, and especially the fact that one can create them from thin air without needing an existing `Enumerable.t()` to feed them, should be done with this in no time.
What bugs me a little is that simply generating this many numbers takes a lot of time.
Part a takes about 30 to 35 seconds and Part b 14 to 16.
I'm wondering whether handcrafting generator functions would speed everything up at least a little. It would remove some of Stream's indirections, but introduces its own…
By generator function I mean a function that returns a tuple with the next value and a function to generate the next tuple, roughly like this one:
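The original snippet is not included here, so the following is a hedged reconstruction of what such a generator might look like (module and function names are mine): each call yields the current value plus a closure that produces the next `{value, generator}` pair.

```elixir
defmodule Gen do
  # A generator yields {current_value, fun_that_builds_the_next_pair}.
  def counter(n), do: {n, fn -> counter(n + 1) end}

  # Helper: collect the first `count` values by repeatedly calling `next`.
  def take(gen, count, acc \\ [])
  def take(_gen, 0, acc), do: Enum.reverse(acc)
  def take({value, next}, count, acc), do: take(next.(), count - 1, [value | acc])
end

IO.inspect(Gen.take(Gen.counter(0), 5))  # [0, 1, 2, 3, 4]
```

The upside is that the inner loop is just a function call per element; the downside, as noted above, is that every step still allocates a fresh closure.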