Advent of Code 2020 - Day 15

After struggling with the part 2s for the past couple of days, this felt a little weird. Like when the heroes think they've won, but it turns out the villain is about to burst through a wall and take one of them out.

Stay frosty. Day 16 is coming for us! :japanese_ogre:

defmodule Day15 do

  @input [15, 5, 1, 4, 7, 0]

  def part_1 do
    @input
    |> start(%{}, 1)
    |> play(2020)
    |> IO.inspect()
  end

  def part_2 do
    @input
    |> start(%{}, 1)
    |> play(30_000_000)
    |> IO.inspect()
  end

  # `seen` maps each number to the turn it was last spoken on, or to a
  # {latest, previous} tuple once it has been spoken more than once.
  def play({_, turn, last}, max) when turn > max, do: last

  def play({seen, turn, last}, max) do
    number = get_last(seen, last)
    seen = update_last_seen(seen, number, turn)
    play({seen, turn + 1, number}, max)
  end

  # Seed `seen` with the starting numbers. The hardcoded 0 works because 0
  # happens to be the last starting number in @input.
  def start([], seen, turn), do: {seen, turn, 0}

  def start([h | t], seen, turn) do
    start(t, update_last_seen(seen, h, turn), turn + 1)
  end

  # If `key` has been spoken twice, the next number is the gap between those
  # two turns; otherwise it is 0.
  def get_last(seen, key) do
    case Map.get(seen, key) do
      {f, s} -> f - s
      _ -> 0
    end
  end

  def update_last_seen(seen, key, turn) do
    Map.update(seen, key, turn, fn
      {f, _} -> {turn, f}
      x -> {turn, x}
    end)
  end
end

FWIW, I tried with :ets and noticed a slight gain compared to the process dictionary.

On my machine, the process dictionary takes ~13s and :ets takes ~8s for part two.
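For comparison, here is a minimal sketch of what the process-dictionary variant might look like (the module and function names are mine, not the poster's). It leans on the fact that `Process.put/2` returns the previous value under the key, so looking up the last turn and recording the new one happens in a single call:

```elixir
defmodule Day15Pdict do
  # Memory game using the process dictionary: the key is the number,
  # the value is the last turn it was spoken on.
  def run(input, max_turns) do
    # Seed the dictionary with the starting numbers (turns are 1-based).
    input
    |> Enum.with_index(1)
    |> Enum.each(fn {num, turn} -> Process.put(num, turn) end)

    next_turn(List.last(input), length(input), max_turns)
  end

  defp next_turn(last, max_turns, max_turns), do: last

  defp next_turn(last, turn, max_turns) do
    next =
      case Process.put(last, turn) do
        nil -> 0                       # first time spoken
        prev_turn -> turn - prev_turn  # gap since it was last spoken
      end

    next_turn(next, turn + 1, max_turns)
  end
end

# Worked example from the puzzle text: [0, 3, 6] speaks 436 on turn 2020.
Day15Pdict.run([0, 3, 6], 2020)
```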


    defp next_turn(_hist, last, max_turns, max_turns), do: last

    defp next_turn(hist, last, turn, max_turns) do
      case :ets.lookup(hist, last) do
        [] ->
          true = :ets.insert(hist, {last, turn})
          next_turn(hist, 0, turn + 1, max_turns)

        [{_, prev_turn}] ->
          true = :ets.insert(hist, {last, turn})
          next_turn(hist, turn - prev_turn, turn + 1, max_turns)
      end
    end

    def run(input, max_turns) do
      hist = :ets.new(:hist, [])
      true = :ets.insert(hist, Enum.with_index(input, 1))
      next_turn(hist, List.last(input), length(input), max_turns)
    end
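To try the :ets snippet above, the clauses need a module around them; here is the same code wrapped (the module name `Day15Ets` is my own), checked against the worked example from the puzzle text, where the starting numbers [0, 3, 6] produce 436 on turn 2020:

```elixir
defmodule Day15Ets do
  # The :ets clauses from the post above, wrapped so they compile.
  def run(input, max_turns) do
    hist = :ets.new(:hist, [])
    true = :ets.insert(hist, Enum.with_index(input, 1))
    next_turn(hist, List.last(input), length(input), max_turns)
  end

  defp next_turn(_hist, last, max_turns, max_turns), do: last

  defp next_turn(hist, last, turn, max_turns) do
    case :ets.lookup(hist, last) do
      [] ->
        true = :ets.insert(hist, {last, turn})
        next_turn(hist, 0, turn + 1, max_turns)

      [{_, prev_turn}] ->
        true = :ets.insert(hist, {last, turn})
        next_turn(hist, turn - prev_turn, turn + 1, max_turns)
    end
  end
end

Day15Ets.run([0, 3, 6], 2020)
```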

Had lots of fun building it, not so much running it (~40s for part 2). There is a version using the process dictionary which runs in ~15s, but I love the non-process code more :slight_smile:

defmodule AdventOfCode.Y2020.Day15 do
  def run_1, do: [6, 19, 0, 5, 7, 13, 1] |> prepare() |> speak(2020)
  def run_2, do: [6, 19, 0, 5, 7, 13, 1] |> prepare() |> speak(30_000_000)

  # State is {last_spoken, memory, current_turn}, where memory maps each
  # number to the list of turns it was spoken on, most recent first.
  def prepare(dat) do
    {List.last(dat), for({v, i} <- Enum.with_index(dat, 1), into: %{}, do: {v, [i]}), length(dat)}
  end

  defp speak({val, _, stop}, stop), do: val

  defp speak({val, mem, turn}, stop) do
    case {Map.get(mem, val), turn + 1} do
      # Spoken only once before: the next number is 0.
      {[_], t} -> speak({0, Map.update(mem, 0, [t], &[t | &1]), t}, stop)
      # Spoken at least twice: the next number is the gap between those turns.
      {[a, b | _], t} -> speak({a - b, Map.update(mem, a - b, [t], &[t | &1]), t}, stop)
    end
  end
end

I also tried converting the num => {round_a, round_b} format into two entries: {num, 1} -> round_a and {num, 2} -> round_b. My thinking was that this would produce only two tuples for each number, compared to n tuples (where n is the number of times that number is spoken). I'm not sure whether duplicate tuples are handled efficiently under the hood, though. Anyway, it didn't provide any performance improvement. :man_shrugging:
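For concreteness, here is my own reconstruction of what that two-entry :ets layout could look like (the module and function names are hypothetical, not from the post): the latest turn lives under the composite key `{num, 1}` and the previous one under `{num, 2}`:

```elixir
defmodule TwoEntry do
  # Store the two most recent turns for each number under composite keys
  # {num, 1} (latest) and {num, 2} (previous), instead of one {a, b} tuple.
  def record(table, num, turn) do
    case :ets.lookup(table, {num, 1}) do
      [] -> :ok
      [{_, latest}] -> :ets.insert(table, {{num, 2}, latest})
    end

    :ets.insert(table, {{num, 1}, turn})
  end

  # Gap between the two most recent turns, or 0 if spoken fewer than twice.
  def gap(table, num) do
    with [{_, latest}] <- :ets.lookup(table, {num, 1}),
         [{_, previous}] <- :ets.lookup(table, {num, 2}) do
      latest - previous
    else
      _ -> 0
    end
  end
end

table = :ets.new(:two_entry_demo, [])
TwoEntry.record(table, 0, 3)
TwoEntry.record(table, 0, 7)
TwoEntry.gap(table, 0) # 7 - 3 = 4
```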

I cut my runtime in half, from 30s to 15s, by using :ets instead of a map :slight_smile:

Yeah, I got that too. Can anyone explain to a relative newbie why that's so much faster? :grin:


I’m no expert, please correct me if I’m wrong.

I think it's mostly because :ets is not really a data structure like a map or a list. With a map, every time you overwrite a value the old term eventually has to be garbage-collected (once the heap hits the threshold), and we likely cross the initial process heap size along the way. An :ets table, on the other hand, is mutable and lives outside process memory, so GC is no longer relevant and updates are true in-place updates. A map update creates a new map (even though it's optimized and doesn't actually copy the whole thing, there is still non-zero overhead).
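The map behaviour described above can be seen directly: "updating" a map returns a new map and leaves the original untouched, so the old version becomes garbage once nothing references it.

```elixir
# A map "update" in Elixir returns a new map; the original is unchanged
# and becomes garbage as soon as nothing references it.
old = %{0 => 6, 15 => 1}
new = Map.put(old, 0, 7)

IO.inspect(old[0]) # 6 — the old map still holds the old value
IO.inspect(new[0]) # 7
```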

Anyway, today I realised that, without measuring and proving it, it's hard to make predictions or guesses about performance (like the one I made above :)). I tried multiple things today for part two to improve on the 30s map-based version, guessing at what the bottleneck might be, and none of them improved performance much (the biggest improvement I got was 5-6s).

Mainly I tried:

• Since access to the keys of the "spoken" map is not evenly distributed (some keys, such as 0, 1, 2, and 3, are accessed far more often than the rest), I tried creating a cache for those keys to reduce lookups and updates to the map
• Map.get_and_update/3, so we can look up and update in one pass (I know access is constant time, but I still wanted to reduce potential overhead)
• an Erlang :array
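For reference, the Map.get_and_update/3 idea from the second bullet, applied to a memory that maps each number to the last turn it was spoken on, might look like this (a sketch; the module and function names are mine). The callback receives the current value (or nil if the key is absent) and returns both the value to hand back and the new value to store, so lookup and update share one traversal:

```elixir
defmodule OnePass do
  # Look up when `num` was last spoken and record the new turn in a single
  # pass over the map, via Map.get_and_update/3.
  # Returns {gap_since_last_spoken, updated_memory}.
  def speak(mem, num, turn) do
    Map.get_and_update(mem, num, fn
      nil -> {0, turn}                # never spoken before: gap is 0
      latest -> {turn - latest, turn} # gap since it was last spoken
    end)
  end
end

{0, mem} = OnePass.speak(%{}, 5, 3)  # 5 first spoken on turn 3
OnePass.speak(mem, 5, 7)             # {4, %{5 => 7}} — gap of 7 - 3
```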

I dislike using :ets for solving these sorts of puzzles, because it's an escape hatch. I prefer solving them with a proper functional data structure (that's part of the appeal) :slight_smile:


Thanks a lot and nice to hear about the other approaches you tried.

Completely agree with you about solving with functional data structures; that's the solution I will save. But I've learnt a lot from reading through other people's solutions and musings today.


A solution using :atomics runs in under 3s on my machine.
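That speedup makes sense: every number spoken after the seed is a gap between turns, so it is always smaller than the turn count, and the num => last_turn map can become a flat mutable integer array. Here is my own sketch of the :atomics approach, not the poster's code (:atomics indices are 1-based, so slot `num + 1` holds the last turn `num` was spoken; it assumes the starting numbers are also below `max_turns`, which holds for the puzzle inputs):

```elixir
defmodule Day15Atomics do
  # :atomics gives a fixed-size mutable array of 64-bit integers that lives
  # outside the process heap. Slot num + 1 holds the last turn `num` was
  # spoken; 0 means "never spoken" (the default initial value).
  def run(input, max_turns) do
    ref = :atomics.new(max_turns, signed: false)

    input
    |> Enum.with_index(1)
    |> Enum.each(fn {num, turn} -> :atomics.put(ref, num + 1, turn) end)

    next_turn(ref, List.last(input), length(input), max_turns)
  end

  defp next_turn(_ref, last, max_turns, max_turns), do: last

  defp next_turn(ref, last, turn, max_turns) do
    # exchange/3 writes the new turn and returns the previous one in one call.
    prev = :atomics.exchange(ref, last + 1, turn)
    next = if prev == 0, do: 0, else: turn - prev
    next_turn(ref, next, turn + 1, max_turns)
  end
end

# Worked example from the puzzle text: [0, 3, 6] speaks 436 on turn 2020.
Day15Atomics.run([0, 3, 6], 2020)
```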


A brute-force version without Enum.reduce, using tuple pattern matching, seems even faster (~20%).