Advent of Code 2017


Here’s my day 6 solution. It was nice how part 2 fell into place.


Still learning Elixir, but here is my repo:


I also started playing with it and chose to do it in Elixir, so I can learn it a bit more. Here are my solutions:

I particularly like how the solution for Day 6 turned out so simple:


Here’s my day 7 solution, and my day 8 solution.

I’m not super happy with my day 7 solution, but it works. I spent a lot of time wringing my hands trying to figure out the best way to construct a tree from a list of edges, but eventually just decided to use a Map of edges to model my solution.

That said, I did learn about Erlang’s digraph and digraph_utils modules, which are pretty awesome for building a tree backed by ETS.


And here’s my day 9 solution. I feel like this problem was especially well suited to solving with Elixir.


I really enjoyed day 10.

We’re a bunch of people at work doing it and three of us are trying to solve them as quickly as possible.


I had fun with day 9, implementing a quick and dirty ad-hoc parser combinator. I feel this helped me describe the input grammar clearly.


Sounds like fun :wink: I actually just used a stack-based FSM with 3 states (:init, :garbage and :group) and 2 symbols on the stack (:group and bottom/⊥) to calculate the group score; :init | ⊥ served as both the initial state and the acceptor. I cheated a bit, though, and passed around the current depth as well as the overall score; those were basically side effects of the automaton…

For the second part I just used a simple 2-state FSM (with the count as a “side effect” again). Since I didn’t see any dependency between the nesting of groups and the garbage count, I simply ignored the grouping and only scanned for garbage.
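A compressed sketch of that machine (names are mine; the current depth and the two totals are threaded through as accumulators, standing in for the stack and the “side effects” described above):

```elixir
defmodule Day9 do
  # Returns {group_score, garbage_count} for the day 9 stream.
  def score(input), do: scan(String.graphemes(input), :group, 0, 0, 0)

  # :group state: track nesting depth, add depth to score on close
  defp scan(["{" | rest], :group, depth, score, gc),
    do: scan(rest, :group, depth + 1, score, gc)
  defp scan(["}" | rest], :group, depth, score, gc),
    do: scan(rest, :group, depth - 1, score + depth, gc)
  defp scan(["<" | rest], :group, depth, score, gc),
    do: scan(rest, :garbage, depth, score, gc)
  defp scan([_ | rest], :group, depth, score, gc),
    do: scan(rest, :group, depth, score, gc)

  # :garbage state: "!" cancels the next char, ">" ends the garbage,
  # everything else counts towards part 2
  defp scan(["!", _ | rest], :garbage, depth, score, gc),
    do: scan(rest, :garbage, depth, score, gc)
  defp scan([">" | rest], :garbage, depth, score, gc),
    do: scan(rest, :group, depth, score, gc)
  defp scan([_ | rest], :garbage, depth, score, gc),
    do: scan(rest, :garbage, depth, score, gc + 1)

  defp scan([], :group, _depth, score, gc), do: {score, gc}
end
```

On the puzzle’s examples, `Day9.score("{{{}}}")` gives `{6, 0}` and `Day9.score("<random characters>")` gives `{0, 17}`.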


Day 10 wanted us to implement a circular list, which we cannot represent on the BEAM without some trickery. Even worse, we were required to slice and dice this circular list while remembering where the start is. That last requirement in particular made me cry, and I had to think for about 2 hours about how to do it best. In the end I implemented it using a map from index to value, extracting the slice and putting it back into the map with Enum.into/2.
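A minimal sketch of that representation (module and function names are hypothetical, not the poster’s code): the circular list is a map from index to value, and a wrapped slice is reversed by pairing each source index with its mirror and merging the updated entries back with Enum.into/2.

```elixir
defmodule Circular do
  # Reverse `len` elements starting at `start`, wrapping past the end.
  def reverse_slice(map, _start, 0), do: map

  def reverse_slice(map, start, len) do
    size = map_size(map)
    # absolute indices of the slice, wrapped around the "loop"
    indices = Enum.map(start..(start + len - 1), &rem(&1, size))

    # pair each source index with its mirror position in the slice,
    # then merge the moved values back into the original map
    indices
    |> Enum.zip(Enum.reverse(indices))
    |> Enum.map(fn {from, to} -> {to, Map.fetch!(map, from)} end)
    |> Enum.into(map)
  end
end
```

For example, `Circular.reverse_slice(%{0 => 0, 1 => 1, 2 => 2, 3 => 3, 4 => 4}, 3, 3)` wraps around and yields `%{0 => 3, 1 => 1, 2 => 2, 3 => 0, 4 => 4}`.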

Part 2 cost me another hour of debugging until I realised that I had forgotten to append the constant to the end of the input, but after that the solution was instant.

PS: I also had some weird results, because l |> Enum.with_index() |> Enum.into(%{}) isn’t quite what I wanted it to be :wink:
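For anyone else bitten by this: Enum.with_index/1 emits {value, index} tuples, so piping straight into a map keys it by value rather than by index. Swapping each tuple gives the index => value map instead:

```elixir
list = ["a", "b", "c"]

# {value, index} tuples become a value => index map:
by_value = list |> Enum.with_index() |> Enum.into(%{})
# %{"a" => 0, "b" => 1, "c" => 2}

# swapping each tuple yields the index => value map instead:
by_index = list |> Enum.with_index() |> Map.new(fn {v, i} -> {i, v} end)
# %{0 => "a", 1 => "b", 2 => "c"}
```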


I did it without using a map, but I had to do the reverse shift at the end of each step. Here’s the crucial bit. I think I could have avoided that reverse shift by keeping each element as {original_pos, value}, but I was a bit lazy this morning :slight_smile:


I had a lot of fun with the Day 9, and the solution turned out so simple in the end!

Until now I struggled the most with day 7. Well, not really struggled, but I think that the solution I came up with could be improved.


Day 11 seemed almost too easy :slight_smile: I cheated a bit, and the code works only for the supplied input, since I saw that there are a lot more north moves. It would be easy to make it work for the general case, though.


That code doesn’t work for my input, and I’m not quite sure how you would generalize it, since you completely ignore se/nw movements. I’d be curious to see the generalized solution.

I never worked with hex grids before, so after some pondering, I concluded it’s better to search for the algorithm :slight_smile: I was able to find an excellent site here, with distances explained here. My solution based on that is available here.
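For readers who skip the links: the cube-coordinate trick maps each of the six directions to a unit cube vector, sums them up in a single pass, and takes half the sum of the absolute coordinates as the distance. A sketch (the direction table is the usual one for flat-top hex grids; it is my assumption, not necessarily what the linked solution uses):

```elixir
defmodule Hex do
  # each move is a unit step in cube coordinates (x + y + z == 0)
  @moves %{
    "n"  => {0, 1, -1},  "s"  => {0, -1, 1},
    "ne" => {1, 0, -1},  "sw" => {-1, 0, 1},
    "nw" => {-1, 1, 0},  "se" => {1, -1, 0}
  }

  # Walk the path once, keeping only the current position as state,
  # then take half the sum of absolute coordinates as the distance.
  def distance(path) do
    {x, y, z} =
      Enum.reduce(path, {0, 0, 0}, fn move, {x, y, z} ->
        {dx, dy, dz} = Map.fetch!(@moves, move)
        {x + dx, y + dy, z + dz}
      end)

    div(abs(x) + abs(y) + abs(z), 2)
  end
end
```

On the puzzle’s examples, `Hex.distance(["ne", "ne", "ne"])` is 3 and `Hex.distance(["se", "sw", "se", "sw", "sw"])` is 3.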


Thanks for the link @sasajuric! The provided information about using “cube coordinates” on a hex grid helped me a lot.

I think I was able to use the information to build my solution in a robust way which works with any input:


@sasajuric Does everyone get a different input? I didn’t know that :grin:

I’ll take some time and make a general solution in the afternoon. The gist of it is this:

  1. The order of the moves does not matter. n,n,n,s,s,s is identical to n,s,n,s,n,s, and this holds even if other moves are mixed in. Only the number of moves in each particular direction matters.
  2. You can always reduce n and s moves to only n or only s moves. For example n,s,n,s,n is identical to n. So you just subtract one count from the other to get how many moves remain. This is why I have this line.
  3. You can always reduce ne and sw moves to only ne or only sw moves. For example ne,sw,ne,sw,ne is identical to ne. Again, you just subtract the number of occurrences of the two. This is why I have this line.
  4. Just as in 3, you can do the same for nw and se. I’m not using this at all, but that is particular to my input; you would need to handle it as well. Why I’m not using it is explained below.

With the above rules you are left with only 3 kinds of moves. For my input these three were n, nw and ne. In general it could be any combination.

But you can also cancel out nw and ne like this:

  1. ne,nw,ne,nw,ne is identical to n,n,ne. As you can see, the total number of moves in the shortened case is exactly the number of ne moves. In general, the number of remaining moves is equal to the max of the two occurrence counts.
  2. You can do something similar for all the possible combinations of the three moves that you are left with. My input is easy because everything went north, but you can do the same for other directions. For example:
    sw,nw,sw,nw,sw is identical to w,w,sw.
  3. You are now left with just moves that you cannot shorten, and you can simply sum them up, since they are in some way orthogonal to each other. This is why I have this line.
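The whole scheme above could be sketched as a fixpoint over move counts (the direction tables and names are my own, not the linked code): cancel opposite pairs, blend each “two apart” pair into the direction between them, and repeat until nothing changes; the remaining counts sum to the distance.

```elixir
defmodule HexCancel do
  # opposite directions cancel outright (rules 2-4 above)
  @opposites [{"n", "s"}, {"ne", "sw"}, {"nw", "se"}]
  # directions two apart blend into the one between them, e.g. ne + nw = n
  @blends [
    {"ne", "nw", "n"}, {"n", "se", "ne"}, {"s", "ne", "se"},
    {"se", "sw", "s"}, {"sw", "n", "nw"}, {"nw", "s", "sw"}
  ]

  def distance(moves) do
    moves |> Enum.frequencies() |> simplify() |> Map.values() |> Enum.sum()
  end

  # iterate until the counts stop changing
  defp simplify(counts) do
    next = counts |> cancel_opposites() |> blend_pairs()
    if next == counts, do: counts, else: simplify(next)
  end

  defp cancel_opposites(counts) do
    Enum.reduce(@opposites, counts, fn {a, b}, acc ->
      k = min(Map.get(acc, a, 0), Map.get(acc, b, 0))
      acc |> Map.update(a, 0, &(&1 - k)) |> Map.update(b, 0, &(&1 - k))
    end)
  end

  defp blend_pairs(counts) do
    Enum.reduce(@blends, counts, fn {a, b, c}, acc ->
      k = min(Map.get(acc, a, 0), Map.get(acc, b, 0))
      acc
      |> Map.update(a, 0, &(&1 - k))
      |> Map.update(b, 0, &(&1 - k))
      |> Map.update(c, k, &(&1 + k))
    end)
  end
end
```

Leftover adjacent pairs (such as sw and nw) neither cancel nor blend, so their counts simply add up, which matches the orthogonality argument in point 3.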

PS: Can you send me your input, it’ll help me write the general solution


As far as I know, everyone gets a different input from a set of possible inputs, as each input needs to be validated to be solvable without any ambiguity.

For some days this set is larger, for others it is smaller, and for some days it may even be completely random data for everyone, because it’s easily verifiable.

Mine is available in my repo.


You can get my input here.

I started out with the same idea as you described (which is IMO significantly more involved than the code you linked, where you merely cancel out n/s and ne/sw pairs), and I do feel that this might lead to the solution. However, while thinking about the problem, my impression was that it would end up being somewhat complicated, because it requires iterative substitutions: once you blend two instructions, you need to recheck the blended instruction against all the other ones (even the previously checked ones). I do feel that this should converge to the outcome in minimal steps, but it felt like too much work. Perhaps it was too early in the morning for me to properly think it through :slight_smile:.

So instead, I did some research and found the linked article. I find the approach in that article quite elegant, as we need to pass through the input once, keeping only the current position as the state, and we don’t need to do any post-processing. Based on that property, I was able to build a purely streaming, forward-only solution with constant memory usage :slight_smile:


I did the same for day 10. The restriction that each length can never surpass the length of your loop meant that we could pretty easily implement the knot algorithm with some well-placed splits, reverses, and concatenations.
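One way such a pinch could look (a hypothetical sketch, assuming a plain list and pos < length(list)): rotate the slice start to the head, split, reverse, concatenate, and rotate back.

```elixir
defmodule Knot do
  # One "pinch": reverse `len` elements starting at `pos`, wrapping
  # around the end of the list.
  def pinch(list, pos, len) do
    size = length(list)
    # rotate so the slice starts at the head of the list
    {slice, rest} = list |> rotate(pos) |> Enum.split(len)
    # reverse the slice, reattach the rest, and rotate back
    rotate(Enum.reverse(slice) ++ rest, size - pos)
  end

  # move the first n elements to the back
  defp rotate(list, n) do
    {head, tail} = Enum.split(list, rem(n, length(list)))
    tail ++ head
  end
end
```

For example, `Knot.pinch([0, 1, 2, 3, 4], 3, 3)` wraps around the end and yields `[3, 1, 2, 0, 4]`.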

I kludged my way through day 11. I didn’t realize you could do a Manhattan-style distance on hexagonal grids, so I retraced the shortest path to the final destination and counted its length. TIL.


I made a generalized version of my Day 11 solution. You can find it here.


Yeah! Today is day 12, and again the exercise felt relatively simple.

No new things to learn, just a map used as a unidirectional graph (the text promised that each side of a pipe reports it in the input!).

For the first part I simply started at pid 0 and did a depth-first search, inserting all children of the current node into a set, removing the current node, and then continuing the search. I cancelled the depth search when a given link pointed to a node that had already been removed, and continued searching in breadth instead. At the end I counted the elements in the resulting set.

The second part worked that way as well, but ignoring the set and only using the updated map in each iteration until it was empty. Each iteration incremented a counter by 1. I used to actually get a pid to start the search from.
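A sketch of both parts under those assumptions (names are hypothetical, and it uses a seen set instead of deleting nodes from the map, which amounts to the same traversal):

```elixir
defmodule Pipes do
  # graph is a map from pid to its list of directly connected pids,
  # e.g. %{0 => [2], 2 => [0, 3], ...}

  # Part 1: all pids reachable from `start` (a flood fill over the map).
  def component(graph, start), do: reach(graph, [start], MapSet.new())

  defp reach(_graph, [], seen), do: seen

  defp reach(graph, [pid | rest], seen) do
    if MapSet.member?(seen, pid) do
      reach(graph, rest, seen)
    else
      reach(graph, Map.get(graph, pid, []) ++ rest, MapSet.put(seen, pid))
    end
  end

  # Part 2: count the connected components by repeatedly flood-filling
  # from any pid not yet seen.
  def count_components(graph) do
    graph
    |> Map.keys()
    |> Enum.reduce({MapSet.new(), 0}, fn pid, {seen, n} ->
      if MapSet.member?(seen, pid),
        do: {seen, n},
        else: {MapSet.union(seen, component(graph, pid)), n + 1}
    end)
    |> elem(1)
  end
end
```

On the day 12 example graph this yields a component of size 6 for pid 0 and 2 components overall.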

Solution | Input