Not the prettiest but it works
Stuck in Part 2. My algo works for the sample input, but not the real input. Any help?
It took me a long time to find an off-by-one error in part 2. The combined runtime for both parts is 0.4 seconds.
Finally found the problem. I should not swap a file with a space behind that file.
My solution takes 11 seconds: adventofcode/lib/solutions/2024/day09.ex at main · lud/adventofcode · GitHub
I have an entry in the disk (a map) for each block. I will see if I can keep all contiguous file parts as a single map entry.
I am not proud of the code I wrote today.
I totally misunderstood the part 2 problem and came up with some code that somehow gave the right answer anyway, but it was too slow. So I rewrote it after actually understanding what I was supposed to do, but it was even slower. I scratched my head a bit and then rewrote it again.
For such an innocuous-sounding puzzle, that did my head in.
Runtime:
Name                 ips      average  deviation      median     99th %
day 09, part 1      9.26    108.03 ms     ±0.66%   107.88 ms  111.50 ms
day 09, part 2      7.28    137.37 ms     ±0.68%   137.22 ms  139.75 ms
Very slow solution today. I might come back to optimize it.
Again just scrolling down ultra fast so as not to see any spoilers… is there a bug in the day 9 description?
The first example above, 2333133121414131402, represents these individual blocks:
00...111...2...333.44.5555.6666.777.888899
but:
left: “00...111...2...333.44.5555.6666.777.8888...99”
right: “00...111...2...333.44.5555.6666.777.888899”
I just made the function by spec…
For an extremely simple algo that fails, see here: advent_of_code/lib/2024/day_9/Part1.ex at master · jarlah/advent_of_code · GitHub
I’ll eat my hat if it isn’t something to do with the pattern matching in the parse function… I have even added more tests that prove the simpler examples are parsed correctly.
No. The example ends with 402, so there are 4 eights, 0 gap, and 2 nines.
What does your code here (advent_of_code/lib/2024/day_9/Part1.ex at master · jarlah/advent_of_code · GitHub) do if n2 is zero?
Your 90909 example is also wrong: it should be 000000000111111111222222222. The description says:
A disk map like 90909 would represent three nine-block files in a row (with no free space between them).
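Spelled out as code, that alternating file-length / free-length rule expands like this. A minimal sketch (the module and function names are my own invention, not from any solution in this thread):

```elixir
# Sketch: expand a dense disk map into one entry per block.
# Digits alternate file-length, free-length; the pair index is the file id.
defmodule DiskMapSketch do
  def expand(dense) do
    dense
    |> String.graphemes()
    |> Enum.map(&String.to_integer/1)
    |> Enum.chunk_every(2)
    |> Enum.with_index()
    |> Enum.flat_map(fn
      {[used, free], id} -> List.duplicate(id, used) ++ List.duplicate(nil, free)
      # the trailing free-length digit may be missing for the last file
      {[used], id} -> List.duplicate(id, used)
    end)
  end
end
```

So "90909" expands to nine 0s, nine 1s and nine 2s, and the trailing 402 of the first example contributes four 8s, no gap, then two 9s.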
Ofc, duh! Thanks
https://www.reddit.com/r/adventofcode/comments/1h9hiuk/is_there_an_error_in_the_assignment/
There’s been a bug in the problem on only one day that I can remember, in ten years of AoC. There was a mad uproar about that one, and it was fixed within an hour.
<3 Btw, it just boils down to implicit requirements and understanding the problem. Thanks again!
I’m glad to see it was not just me that was struggling / writing… less than pretty code for this one.
I didn’t complete part 2: although it worked on the sample, like @Aetherus I hadn’t accounted for the fact that moving a file that’s next to a space leaves a bigger space behind, and it’ll need a completely different approach to fix.
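For anyone hitting the same wall: one way to account for that is to track free space as {start, length} spans and merge a newly vacated span with any adjacent span. A rough sketch (names are mine, this is not code from any of the solutions in this thread):

```elixir
# Sketch: free space tracked as {start, length} spans, kept sorted by start.
# When a file is moved away, its old blocks become a free span that may
# touch an existing span on either side and must be coalesced with it.
defmodule FreeSpans do
  def vacate(spans, {start, len}) do
    (spans ++ [{start, len}])
    |> Enum.sort()
    |> Enum.reduce([], fn
      # previous span ends exactly where this one starts: coalesce them
      {s, l}, [{ps, pl} | rest] when ps + pl == s -> [{ps, pl + l} | rest]
      span, acc -> [span | acc]
    end)
    |> Enum.reverse()
  end
end
```

Vacating blocks 7..9 between existing free spans {5, 2} and {10, 3} then yields the single span {5, 8} rather than three small ones.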
It should be easy to remember where to search:
If you move a file that starts at block 123, then from that point on you never need to search for free space after block 123.
Even better: for a file that starts at block 100 and has size 5, you only need to look for free space starting in blocks 0..(100-5).
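As a sketch of that bound (span shape and function names are my own): a free span {start, length} can only take a file of size s that starts at block f if it is big enough and the moved file would still end before f, i.e. start + s <= f.

```elixir
# Sketch: find the leftmost free span that can hold the file entirely to
# its left. Free spans are {start, length} tuples sorted by start.
defmodule SlotSearch do
  def find_slot(free_spans, file_start, file_size) do
    Enum.find(free_spans, fn {start, len} ->
      # big enough, and the moved file would still end before file_start
      len >= file_size and start + file_size <= file_start
    end)
  end
end
```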
This is my new version using only lists; the version with maps was too much of a chore for my taste. This one was fun! (And the best run was 33ms.)
defmodule AdventOfCode.Solutions.Y24.Day09 do
  alias AoC.Input

  def parse(input, _part) do
    input
    |> Input.read!()
    |> String.trim()
    |> String.graphemes()
    |> Enum.map(&String.to_integer/1)
  end

  def part_one(problem) do
    rev_blocks = build_disk(problem)
    blocks = :lists.reverse(rev_blocks)
    blocks = compress(blocks, Enum.filter(rev_blocks, fn {_, v} -> v != :free end), [])
    hash(blocks)
  end

  # Expands the dense map into a reversed list of {block_index, file_id | :free}.
  defp build_disk(problem) do
    build_disk(problem, :file, 0, 0, [])
  end

  defp build_disk([0 | t], :file, block_id, file_id, acc) do
    build_disk(t, :free, block_id, file_id + 1, acc)
  end

  defp build_disk([0 | t], :free, block_id, file_id, acc) do
    build_disk(t, :file, block_id, file_id, acc)
  end

  defp build_disk([h | t], :file, block_id, file_id, acc) do
    build_disk([h - 1 | t], :file, block_id + 1, file_id, [{block_id, file_id} | acc])
  end

  defp build_disk([h | t], :free, block_id, file_id, acc) do
    build_disk([h - 1 | t], :free, block_id + 1, file_id, [{block_id, :free} | acc])
  end

  defp build_disk([], _, _, _, acc) do
    acc
  end

  # Walks the disk left to right, filling free blocks with file blocks taken
  # from the right; stops once the two cursors meet.
  defp compress([{bid, :free} | blocks], [{fbid, file_id} | movables], acc) when bid <= fbid do
    compress(blocks, movables, [{bid, file_id} | acc])
  end

  defp compress([{bid, file_id} | blocks], [{fbid, _} | _] = movables, acc) when bid <= fbid do
    compress(blocks, movables, [{bid, file_id} | acc])
  end

  defp compress(_blocks, _movables, acc) do
    acc
  end

  defp hash(blocks) do
    Enum.reduce(blocks, 0, fn
      {i, f}, acc when is_integer(f) -> acc + i * f
      {_, :free}, acc -> acc
    end)
  end

  def part_two(problem) do
    rev_blocks = build_disk(problem)

    {free_chunks, files_chunks} =
      rev_blocks
      |> :lists.reverse()
      |> Enum.chunk_by(fn {_, v} -> v end)
      |> Enum.map(fn [{_, fid_or_free} | _] = list ->
        {Enum.map(list, &elem(&1, 0)), fid_or_free}
      end)
      |> Enum.split_with(fn {_, id} -> id == :free end)

    free_chunks = Enum.map(free_chunks, fn {bids, :free} -> {bids, length(bids)} end)
    rev_files_chunks = :lists.reverse(files_chunks)
    blocks = defrag(rev_files_chunks, free_chunks, [])
    hash2(blocks)
  end

  # Tries to move each file (highest id first) into the leftmost free chunk
  # that can hold it; stops early once no free chunk is left of the file.
  defp defrag(
         [{[low_bid | _] = bids, file_id} = h | rev_files_chunks],
         [{[high_free | _], _} | _] = free_chunks,
         acc
       )
       when high_free < low_bid do
    space = take_space(free_chunks, bids)
    [bid | _] = bids

    case space do
      {[free_bid | _] = free_bids, free_chunks} when free_bid < bid ->
        defrag(rev_files_chunks, free_chunks, [{free_bids, file_id} | acc])

      _ ->
        defrag(rev_files_chunks, free_chunks, [h | acc])
    end
  end

  defp defrag(rest, _, acc) do
    rest ++ acc
  end

  # Returns {blocks_used, remaining_free_chunks}, or nil if no chunk fits.
  defp take_space(free_chunks, bids) do
    take_space(free_chunks, length(bids), [])
  end

  defp take_space([{vids, larger_len} | rest], len, skipped) when len < larger_len do
    {vids_used, vids_rest} = Enum.split(vids, len)
    {vids_used, :lists.reverse(skipped, [{vids_rest, larger_len - len} | rest])}
  end

  defp take_space([{vids, same_len} | rest], same_len, skipped) do
    {vids, :lists.reverse(skipped, rest)}
  end

  defp take_space([skip | rest], len, skipped) do
    take_space(rest, len, [skip | skipped])
  end

  defp take_space([], _, _) do
    nil
  end

  defp hash2(blocks) do
    Enum.reduce(blocks, 0, fn
      {bids, f}, acc when is_integer(f) -> Enum.reduce(bids, acc, fn b, acc -> acc + b * f end)
    end)
  end
end
Not a very impressive golf today. There’s a lot of essential complexity here I failed to circumvent.
LOC: 39
defmodule Aoc2024.Day09 do
  def part1(file) do
    {blocks, frees, _} = parse(file)
    n = Enum.reduce(blocks, 0, fn {_, range}, sum -> sum + Range.size(range) end)
    blocks = Enum.flat_map(blocks, fn {id, range} -> Enum.map(range, &{id, &1}) end)
    {left, right} = Enum.split_while(blocks, fn {_, index} -> index < n end)
    right = Enum.zip(Enum.map(right, &elem(&1, 0)) |> Enum.reverse(), Enum.flat_map(frees, & &1))
    Enum.reduce(left ++ right, 0, fn {a, b}, sum -> sum + a * b end)
  end

  def part2(file) do
    {blocks, frees, _} = parse(file)

    Enum.reduce(Enum.reverse(blocks), {frees, 0}, fn {id, br}, {fs, checksum} ->
      s = Range.size(br)
      {l, r} = Enum.split_while(fs, fn fr -> s > Range.size(fr) or fr.first > br.first end)

      case r do
        [fits | rest] ->
          {o, p} = Range.split(fits, s)
          {l ++ if(p.step == 1, do: [p | rest], else: rest), checksum + id * Enum.sum(o)}

        [] ->
          {fs, checksum + id * Enum.sum(br)}
      end
    end)
    |> elem(1)
  end

  def parse(file) do
    s = file |> File.stream!(1) |> Stream.reject(&(&1 == "\n"))
    chunks = s |> Enum.map(&String.to_integer/1) |> Enum.chunk_every(2, 2) |> Enum.with_index()

    Enum.reduce(chunks, {[], [], 0}, fn {[nb | rest], id}, {blocks, frees, c} ->
      new_free = if((nf = List.first(rest) || 0) > 0, do: (c + nb)..(c + nb + nf - 1), else: ..)
      {blocks ++ [{id, c..(c + nb - 1)}], frees ++ [new_free], c + nb + nf}
    end)
  end
end
I liked your solution. I failed part 2 today and used your code as a basis to understand how you solved it. The gist of it, which I really need to get into my head, is Enum.chunk_every. That was perfect for this problem, coupled with with_index. I will see if I can refactor this another day and rewrite it around the core idea of blocks and ranges etc., probably with structs: a) to make it more readable and understandable, and b) just because I like structs.
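For anyone else filing that away, here is the Enum.chunk_every/2 plus Enum.with_index/1 combination on its own: each chunk is [file_length, free_length], and the chunk index is exactly the file id.

```elixir
# The dense map pairs up as (file length, free length); the position of
# each pair in the input is the file id.
digit_pairs =
  "2333133121414131402"
  |> String.graphemes()
  |> Enum.map(&String.to_integer/1)
  # group digits two by two; the last chunk may hold a single digit
  |> Enum.chunk_every(2)
  |> Enum.with_index()

Enum.take(digit_pairs, 3)
# => [{[2, 3], 0}, {[3, 3], 1}, {[1, 3], 2}]
```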
One reason I’ve started doing more of this year’s puzzles in Livebook is that I can easily copy and paste my part 1 solution into a new cell and start hacking away at it to solve part 2. That came in handy for this one.
Part 1:
defmodule Fragmenter do
  def parse_disk_map(map) do
    map
    |> String.to_charlist()
    |> Stream.map(&(&1 - 48))
    |> Stream.chunk_every(2)
    |> Stream.with_index()
    |> Enum.flat_map(fn
      {[used, free], id} ->
        (for _ <- Stream.cycle([0]) |> Enum.take(used), do: id) ++
          (for _ <- Stream.cycle([0]) |> Enum.take(free), do: nil)

      {[used], id} ->
        for _ <- Stream.cycle([0]) |> Enum.take(used), do: id
    end)
  end

  @doc """
  ## Examples

      iex> "2333133121414131402" |> Fragmenter.parse_disk_map() |> Fragmenter.defrag() |> Fragmenter.checksum()
      1928
  """
  def defrag(disk) do
    cl_disk = disk |> Stream.with_index()
    # file cells from the right, to be pulled into gaps on the left
    bak = cl_disk |> Enum.reverse() |> Enum.reject(&(elem(&1, 0) == nil))

    cl_disk
    |> Enum.reduce({[], bak}, fn
      {c, i}, {fwd, [{_, j} | _] = bak} when c != nil and i <= j ->
        {[c | fwd], bak}

      {c, i}, {fwd, [{_, j} | _] = bak} when c != nil and i > j ->
        {[nil | fwd], bak}

      {nil, i}, {fwd, [{c, j} | rest]} when i <= j ->
        {[c | fwd], rest}

      {nil, i}, {fwd, [{_, j} | rest]} when i > j ->
        {[nil | fwd], rest}
    end)
    |> elem(0)
    |> Enum.reverse()
  end

  def checksum(disk) do
    disk
    |> Stream.with_index()
    |> Stream.reject(&(elem(&1, 0) == nil))
    |> Enum.reduce(0, fn {c, i}, acc -> acc + c * i end)
  end
end
Part 2:
defmodule Fragmenter2 do
  def parse_disk_map(map) do
    map
    |> String.to_charlist()
    |> Stream.map(&(&1 - 48))
    |> Stream.chunk_every(2)
    |> Stream.with_index()
    |> Stream.flat_map(fn
      {[used, free], id} ->
        [Stream.cycle([id]) |> Enum.take(used), Stream.cycle([nil]) |> Enum.take(free)]

      {[used], id} ->
        [Stream.cycle([id]) |> Enum.take(used)]
    end)
    |> Stream.reject(&(&1 == []))
  end

  def defrag(disk) do
    files = disk |> Enum.into([])
    # set of file ids that have not been moved yet
    x =
      disk
      |> Stream.flat_map(&Function.identity/1)
      |> Stream.reject(&(&1 == nil))
      |> Enum.into(MapSet.new())

    defrag(files |> :queue.from_list(), :queue.new(), x)
  end

  def merge_empty(file, empty) do
    # the moved file plus whatever is left of the empty chunk it went into
    [file, empty |> Enum.take(Enum.count(empty) - Enum.count(file))]
  end

  def defrag(dq, fragged, x) do
    if :queue.is_empty(dq) do
      fragged |> :queue.to_list()
    else
      case :queue.get(dq) do
        # an exhausted chunk: just drop it
        [] ->
          defrag(:queue.drop(dq), fragged, x)

        # a file chunk: keep it if the file has not been moved yet,
        # otherwise leave the blocks it used to occupy empty
        [n | _] = file when n != nil ->
          n_available? = MapSet.member?(x, n)

          defrag(
            :queue.drop(dq),
            :queue.in(
              if n_available? do
                file
              else
                Stream.cycle([nil]) |> Enum.take(Enum.count(file))
              end,
              fragged
            ),
            x
          )

        # an empty chunk: move in the last not-yet-moved file that fits
        [nil | _] = empty ->
          fit_q =
            :queue.filter(
              fn [n | _] = l ->
                MapSet.member?(x, n) and Enum.count(l) <= Enum.count(empty) and
                  Enum.at(l, 0) != nil
              end,
              dq
            )

          cond do
            :queue.is_empty(fit_q) ->
              defrag(dq |> :queue.drop(), :queue.in(empty, fragged), x)

            true ->
              first_fit = fit_q |> :queue.get_r()
              [first_fit, empty] = merge_empty(first_fit, empty)

              defrag(
                dq |> :queue.drop() |> then(&:queue.in_r(empty, &1)),
                :queue.in(first_fit, fragged),
                x |> MapSet.delete(Enum.at(first_fit, 0))
              )
          end
      end
    end
  end

  def checksum(disk) do
    disk
    |> Stream.flat_map(&Function.identity/1)
    |> Stream.with_index()
    |> Stream.reject(&(elem(&1, 0) == nil))
    |> Enum.reduce(0, fn {c, i}, acc -> acc + c * i end)
  end
end
This is probably some of the ugliest Elixir I’ve ever written, but this one took me like 2.5hrs between part 1 and 2 to get right and I kept hitting stupid bugs. No clue if the use of :queue
was necessary or even helpful here and I have yet to go back and try to optimize it (pt 2 takes about 3.5s on my machine, pt 1 is about 35ms).
Edit: I think @sevenseacat says it best:
I am not proud of the code I wrote today.
Glad it helped! I’ll also recommend @bjorng’s solution (or anyone else who used a map instead of a list to track state). Map-based solutions often end up being more performant. I’m trying to write really terse solutions this year so I end up sacrificing speed for LOC.
I started with maps, but in the end it’s the list version that is faster for me. Around 40ms.
(but yeah, my map version was probably just bad)
I stand corrected! I think 40ms may be the thread leader.
I generally expect maps to perform better when the lists are sparse, but when the lists are dense it’s a toss-up. I’d just assumed that because mine was so slow (~4s) it was one of those cases but I didn’t check.