# Advent of Code 2020 - Day 15

It’s a bit verbose but I prefer it this way, since the puzzle was one big off-by-one index-error trap.

My solution

I’m guessing Elixir, with its lists and recursion, encourages us to implement this efficiently from the start. I wonder if the assumption was that people would implement part 1 by storing all the turns and indexing into them, or something like that?
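For illustration, that naive store-everything approach might look like the sketch below (hypothetical code, deliberately quadratic: the whole history is kept in a list that gets scanned every turn):

```elixir
defmodule NaiveDay15 do
  # Keep every spoken number (newest first) and scan the tail each turn
  # to find the previous occurrence of the last number - O(n^2) overall,
  # workable for part 1's 2020 turns but hopeless for part 2.
  def play(starting, target) do
    history = Enum.reverse(starting)

    length(starting)..(target - 1)
    |> Enum.reduce(history, fn _turn, [last | rest] = hist ->
      next =
        case Enum.find_index(rest, &(&1 == last)) do
          # Never spoken before: say 0.
          nil -> 0
          # Spoken idx + 1 turns ago: say the gap.
          idx -> idx + 1
        end

      [next | hist]
    end)
    |> hd()
  end
end

NaiveDay15.play([0, 3, 6], 2020)
# => 436, the answer for the puzzle's example input
```

Even at 2020 turns this does noticeably more work than a map-based version, which is presumably why everyone reached for a map (or something mutable) straight away.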

Part 2 was solved with the same solution for me too, after a slightly uncomfortable wait… I’m wondering if using a mutable data structure would speed it up.

I’m certain it would!
I tried to replace my Elixir map with an Erlang array, but it made things even worse. I think the solution would be to use a NIF-backed mutable data structure.


I used a map of two-element tuples to store previous state data.

```elixir
defmodule Day15 do
  def input do
    # example: [0, 3, 6]
    [13, 0, 10, 12, 1, 5, 8]
  end

  def part1 do
    run(input(), 2020)
  end

  def part2 do
    run(input(), 30000000)
  end

  def run(input, target) do
    Enum.with_index(input, 1)
    |> Enum.reduce(%{}, fn {number, turn}, turns -> Map.put(turns, number, {turn, -1}) end)
    |> loop(List.last(input), length(input), target)
  end

  def loop(_turnsmap, number, turn, turn), do: number

  def loop(turnsmap, prevspoke, turn, endturn) do
    {say, newmap} =
      Map.get(turnsmap, prevspoke, {-1, -1})
      |> nextsay(turnsmap, prevspoke, turn)

    loop(newmap, say, turn + 1, endturn)
  end

  # number does not exist, add it
  def nextsay({-1, -1}, turnsmap, prevspoke, turn) do
    {0, Map.put(turnsmap, prevspoke, {turn, -1})}
  end

  # number was said once before (possibly a starting number?)
  def nextsay({turn, -1}, turnsmap, _prevspoke, turn) do
    {z1, _} = Map.get(turnsmap, 0, {-1, -1})
    {0, Map.put(turnsmap, 0, {turn + 1, z1})}
  end

  # usual case: number was said at least twice
  def nextsay({prev1, prev2}, turnsmap, _prevspoke, turn) do
    newsay = prev1 - prev2
    {s1, _} = Map.get(turnsmap, newsay, {-1, -1})
    {newsay, Map.put(turnsmap, newsay, {turn + 1, s1})}
  end
end
```

My original brute force solution for part two took about 41 seconds to run (using a map to keep the necessary history). I couldn’t come up with a clever solution, so I did the thing we are not supposed to do and I used the process dictionary for my data structure and got an impressive speed boost using the same algorithm!
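For anyone curious what the process-dictionary trick looks like, here is a sketch of the general idea (not the poster's actual code; `PdictDay15` and its internals are made up). It leans on the fact that `Process.put/2` returns the previous value, so each turn is a single dictionary operation:

```elixir
defmodule PdictDay15 do
  # Same algorithm as a map-based version, but last-seen turns live in
  # the (mutable) process dictionary under {:seen, number} keys.
  def run(input, target) do
    # Record the turn each starting number was spoken, except the last one.
    input
    |> Enum.drop(-1)
    |> Enum.with_index(1)
    |> Enum.each(fn {n, turn} -> Process.put({:seen, n}, turn) end)

    loop(List.last(input), length(input), target)
  end

  defp loop(last, turn, turn), do: last

  defp loop(last, turn, target) do
    # Process.put/2 returns the old value, so the lookup and the
    # update happen in one pass.
    next =
      case Process.put({:seen, last}, turn) do
        nil -> 0
        prev -> turn - prev
      end

    loop(next, turn + 1, target)
  end
end

PdictDay15.run([0, 3, 6], 2020)
# => 436 on the puzzle's example input
```

One caveat: the `{:seen, _}` entries outlive the call, so running it twice in the same process needs a cleanup (or a fresh `Task`) in between.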

```
MAP:
Name           ips        average  deviation         median         99th %
1           2.03 K      0.00049 s    ±23.90%      0.00047 s      0.00093 s
2        0.00002 K        41.78 s     ±0.00%        41.78 s        41.78 s

PROCESS DICTIONARY:
Name           ips        average  deviation         median         99th %
1           6.28 K      0.00016 s    ±27.22%      0.00014 s      0.00029 s
2        0.00007 K        14.87 s     ±0.00%        14.87 s        14.87 s
```

I’m guessing the creation of all the tuples is the other half of the problem - so using mutable structures for those would probably make it even faster.

Had to try swapping a map for `Process.get/put` and saw an improvement from ~15s to ~5s (which seems in line with the improvement you saw as well).

I was hoping it would run sub 10s on the M1!

Today’s (the 16th) problem was just a big off-by-one trap. Nothing fancy to see here.

```elixir
defmodule Aoc2020.Day15 do
  def part1(input) do
    input
    |> setup()
    |> play_until(2020)
    |> elem(0)
  end

  def part2(input) do
    input
    |> setup()
    |> play_until(30_000_000)
    |> elem(0)
  end

  def play_until({_, turn, _} = state, target_turn) when turn == target_turn + 1, do: state

  def play_until(state, target_turn) do
    state
    |> play()
    |> play_until(target_turn)
  end

  def setup(numbers), do: setup(numbers, 1, %{})

  defp setup([number | []], turn, map), do: {number, turn + 1, map}

  defp setup([number | numbers], turn, map) do
    setup(numbers, turn + 1, map |> put_number(number, turn))
  end

  defp put_number(map, number, turn), do: Map.put(map, number, turn)

  def play({last_number, turn, seen}) do
    case Map.fetch(seen, last_number) do
      :error ->
        {0, turn + 1, seen |> Map.put(last_number, turn - 1)}

      {:ok, previous_turn} ->
        new_value = turn - previous_turn - 1
        {new_value, turn + 1, seen |> Map.put(last_number, turn - 1)}
    end
  end

  def input(path) do
    path
    |> File.read!()
    |> String.trim()
    |> String.split(",")
    |> Enum.map(&String.to_integer/1)
  end
end

input = Aoc2020.Day15.input("input.txt")

Aoc2020.Day15.part1(input)
|> IO.inspect(label: "part1")

Aoc2020.Day15.part2(input)
|> IO.inspect(label: "part2")
```

I actually converted my tuples to function arguments… but I could not see any difference in performance (I might have screwed something up, though).

After struggling with the part 2’s for the past couple of days, this felt a little weird. Like when the heroes think they’ve won, but it turns out the villain is about to burst through a wall and take one of them out.

Stay frosty. Day 16 is coming for us!

```elixir
defmodule Day15 do
  @input [15, 5, 1, 4, 7, 0]

  def part_1 do
    @input
    |> start(%{}, 1)
    |> play(2020)
    |> IO.inspect()
  end

  def part_2 do
    @input
    |> start(%{}, 1)
    |> play(30000000)
    |> IO.inspect()
  end

  def play({_, turn, last}, max) when turn > max, do: last

  def play({seen, turn, last}, max) do
    number = get_last(seen, last)
    seen = update_last_seen(seen, number, turn)
    play({seen, turn + 1, number}, max)
  end

  def start([], seen, turn), do: {seen, turn, 0}

  def start([h | t], seen, turn) do
    start(t, update_last_seen(seen, h, turn), turn + 1)
  end

  def get_last(seen, key) do
    case Map.get(seen, key) do
      {f, s} -> f - s
      _ -> 0
    end
  end

  def update_last_seen(seen, key, turn) do
    Map.update(seen, key, turn, fn
      {f, _} -> {turn, f}
      x -> {turn, x}
    end)
  end
end
```

FWIW, I tried with `ets` and noticed a slight gain compared to the process dictionary.

On my machine, process dictionary takes ~13s and ets takes ~8s for part two.

```elixir
defp next_turn(_hist, last, max_turns, max_turns), do: last

defp next_turn(hist, last, turn, max_turns) do
  case :ets.lookup(hist, last) do
    [] ->
      true = :ets.insert(hist, {last, turn})
      next_turn(hist, 0, turn + 1, max_turns)

    [{_, prev_turn}] ->
      true = :ets.insert(hist, {last, turn})
      next_turn(hist, turn - prev_turn, turn + 1, max_turns)
  end
end

def run(input, max_turns) do
  hist = :ets.new(:hist, [])
  true = :ets.insert(hist, Enum.with_index(input, 1))
  next_turn(hist, List.last(input), length(input), max_turns)
end
```

Had lots of fun building it, not so much running it (~40s for part 2). There is a version using the process dictionary which runs in ~15s, but I love the non-Process code more.

```elixir
defmodule AdventOfCode.Y2020.Day15 do
  def run_1, do: [6, 19, 0, 5, 7, 13, 1] |> prepare() |> speak(2020)
  def run_2, do: [6, 19, 0, 5, 7, 13, 1] |> prepare() |> speak(30_000_000)

  def prepare(dat) do
    {List.last(dat), for({v, i} <- Enum.with_index(dat, 1), into: %{}, do: {v, [i]}), length(dat)}
  end

  defp speak({val, _, stop}, stop), do: val

  defp speak({val, mem, turn}, stop) do
    case {Map.get(mem, val), turn + 1} do
      {[_], t} -> speak({0, Map.update(mem, 0, [t], &[t | &1]), t}, stop)
      {[a, b | _], t} -> speak({a - b, Map.update(mem, a - b, [t], &[t | &1]), t}, stop)
    end
  end
end
```

I also tried converting the `num => {round_a, round_b}` format to two entries: `{num, 1} -> round_a` and `{num, 2} -> round_b`. My thinking was that this would only produce 2 tuples for each number, compared to n tuples (where n is the number of times that number is mentioned). I’m not sure if duplicate tuples are handled efficiently under the hood, though. Anyway, it didn’t provide any performance improvement.

I cut my runtime in half, from 30s to 15s, when using `:ets` instead of a map.

Yeah, got that too. Can anyone explain to a relative newbie why that’s so much faster?


I’m no expert, please correct me if I’m wrong.

I think it’s mostly because ets isn’t really a data structure like a map or a list. With a map, every time you overwrite a value the old term has to be garbage-collected (once the threshold is hit), and we also likely outgrow the initial process heap size along the way. ETS, on the other hand, is mutable and lives outside process memory, so GC is no longer relevant and updates are true in-place updates. With a map, an update creates a new map; even though that’s optimized and doesn’t actually copy the whole thing, there is still non-zero overhead.
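To make the difference concrete, a tiny illustration (my own sketch): `Map.put/3` leaves the original map untouched and hands back a new one, while `:ets.insert/2` mutates one shared table in place:

```elixir
# Immutable map: the update produces a new map; the old binding is unchanged.
m1 = %{0 => 1}
m2 = Map.put(m1, 0, 2)
1 = m1[0]   # m1 still sees the old value
2 = m2[0]   # m2 is a new (structurally shared) map

# ETS: a single mutable table living outside the process heap;
# the insert overwrites the row, and every later lookup sees the new value.
tab = :ets.new(:demo, [])
true = :ets.insert(tab, {0, 1})
true = :ets.insert(tab, {0, 2})
[{0, 2}] = :ets.lookup(tab, 0)
```

Thirty million of those discarded intermediate maps is a lot of garbage for one process to collect, which fits the timings people are reporting here.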

Anyway, today I realised that, without measuring and proving it, it’s hard to make predictions or guesses about performance (like the one I made above :)). I tried multiple things today to improve part two from 30s (using a map), guessing at what the bottleneck might be, and none of them improved performance much (the biggest improvement I got was 5-6s).

Mainly I tried:

• Since access to the keys of the “spoken” map is not evenly distributed (some keys, such as 0, 1, 2, 3, are accessed more often than the rest), I tried creating a cache for those keys to reduce lookups and updates to the map
• `Map.get_and_update/3`, so we can look up and update in one pass (I know it’s constant time, but I still wanted to reduce potential overhead)
• an Erlang array

I dislike using ets for solving these sorts of puzzles, because it’s an escape hatch. I prefer solving with proper functional data structures (that’s part of the appeal).


Thanks a lot and nice to hear about the other approaches you tried.

Completely agree with you about solving with functional data structures; that’s the solution I will save. But I’ve learnt a lot from reading through other people’s solutions and musings today.


Solution using atomics is under 3s on my comp.
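I haven’t seen that code, but an `:atomics`-based version could look roughly like the sketch below (`AtomicsDay15` is a hypothetical name). It uses the spoken number itself as an index into a fixed-size mutable array, which works because every number spoken is smaller than the turn limit:

```elixir
defmodule AtomicsDay15 do
  # Slot n+1 of the atomics array holds the last turn `n` was spoken
  # (:atomics indices are 1-based); 0 means "never spoken", which is
  # safe because turns start at 1.
  def run(input, target) do
    ref = :atomics.new(target + 1, signed: false)

    # Record the turn each starting number was spoken, except the last one.
    input
    |> Enum.drop(-1)
    |> Enum.with_index(1)
    |> Enum.each(fn {n, turn} -> :atomics.put(ref, n + 1, turn) end)

    loop(ref, List.last(input), length(input), target)
  end

  defp loop(_ref, last, turn, turn), do: last

  defp loop(ref, last, turn, target) do
    # exchange/3 writes the current turn and returns the previous one,
    # so each step is a single in-place operation.
    prev = :atomics.exchange(ref, last + 1, turn)
    next = if prev == 0, do: 0, else: turn - prev
    loop(ref, next, turn + 1, target)
  end
end

AtomicsDay15.run([0, 3, 6], 2020)
# => 436 on the puzzle's example input
```

Unlike ets there is no term copying at all here, just word-sized reads and writes, which would explain the extra speedup over an ets table.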


Brute force, not using `Enum.reduce` and tuple pattern matching, seems even faster (~20%).