Day 9: Disk Fragmenter

Megathread guidelines

  • Keep top-level comments to solutions only; if you want to say something other than a solution, put it in a new post. (Replies to comments can be anything.)
  • You can share code in code blocks by wrapping it in three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sending it through a URL

FAQ

[–] Acters 2 points 4 weeks ago* (last edited 4 weeks ago) (9 children)

Thanks! Your Haskell solution is extremely fast, and I don't understand your solution either. 🙈 lol

My latest revision keeps a dict of known empty slots, including partially filled ones, with the slot length as the dict key and a list of start indices as the value. I iteratively find the slot with the lowest start index and keep each list ordered from lowest to highest index.
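A minimal sketch of that structure (hypothetical names like free_slots and take_leftmost, not my actual code):

```
import bisect

# length -> sorted list of start indices of free slots with that length
free_slots: dict[int, list[int]] = {}

def take_leftmost(need: int):
    """Claim the leftmost free slot with length >= need; return its start, or None."""
    best_len = best_start = None
    for length, starts in free_slots.items():
        if length >= need and starts and (best_start is None or starts[0] < best_start):
            best_len, best_start = length, starts[0]
    if best_len is None:
        return None
    start = free_slots[best_len].pop(0)  # lists stay sorted, so [0] is the leftmost
    leftover = best_len - need
    if leftover:  # reinsert the partially filled remainder, keeping the order
        bisect.insort(free_slots.setdefault(leftover, []), start + need)
    return start
```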

Looking at the challenge example/description, it shows a first-pass-only type of "fragmenting": we can be confident that if a file did not fit, it just stays in the same spot, even if another slot later frees up enough space for it. So checking whether the current file's index is lower than the lowest start index across all slot lengths is enough to stop early (see the sketch after the example below). That is why I got rid of last_consecutive_full_partition; it was slowing things down by up to 2 seconds.

In the example, even though 5555, 6666, or 8888 could fit in the new gap created by moving 44, they stay put. Thus it is a first-pass-only sort from back to front:

```
00...111...2...333.44.5555.6666.777.888899
0099.111...2...333.44.5555.6666.777.8888..
0099.1117772...333.44.5555.6666.....8888..
0099.111777244.333....5555.6666.....8888..
00992111777.44.333....5555.6666.....8888..
```
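And a sketch of that early stop (hypothetical names again; file_start is the current file's start index):

```
def leftmost_free(free_slots: dict[int, list[int]]):
    """Lowest start index among all remaining free slots, or None."""
    starts = [s[0] for s in free_slots.values() if s]
    return min(starts) if starts else None

# inside the back-to-front loop over files:
#     lo = leftmost_free(free_slots)
#     if lo is None or file_start < lo:
#         break  # every remaining file is even further left, so nothing else can move
```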
[–] VegOwOtenks 2 points 4 weeks ago (7 children)

Thank you for the detailed explanation! It made me realize that our solutions are very similar. Instead of keeping a Dict[Int, List[Int]] where the value list is ordered, I have a Dict[Int, Tree[Int]], which allows for easy (and fast!) lookup due to the nature of trees. (Also, lists in Haskell are horrible to mutate.)
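A rough Python analogue of that Dict[Int, Tree[Int]] idea (illustrative only, and using a min-heap rather than a tree; my actual code is Haskell):

```
import heapq

free_slots: dict[int, list[int]] = {}  # length -> min-heap of start indices

def add_slot(length: int, start: int) -> None:
    heapq.heappush(free_slots.setdefault(length, []), start)

def pop_leftmost(length: int):
    """Pop the smallest start index for this slot length, or None if empty."""
    heap = free_slots.get(length)
    return heapq.heappop(heap) if heap else None
```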

I also apply your technique of only processing each file once. Instead of calculating the checksum afterwards over the entire list of file blocks, I add to it whenever I process a file. Using some maths I managed to reduce each file's contribution to a constant expression.
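The maths is just the arithmetic-series sum; a sketch of the constant expression (assuming file id f, start position p, length l):

```
def checksum_contribution(f: int, p: int, l: int) -> int:
    # f * (p + (p+1) + ... + (p+l-1)) == f * (l*p + l*(l-1)/2)
    return f * (l * p + l * (l - 1) // 2)
```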

[–] Acters 2 points 4 weeks ago* (last edited 4 weeks ago) (6 children)

Yeah, I was a bit exhausted from thinking in a high-level abstract way. I do think that if I did the checksum at the same time I could shave off a few more milliseconds, though it is near the limits of speed, especially for Python with its limited data types (no trees lol). Decently fast enough for me :)

edit: I also just tested it, and splitting into two lists gave no decent speedup; it actually made it slower. Iterating backwards really is fast with that list slice. I can't think of another way to speed it up past what it can do right now.

[–] VegOwOtenks 2 points 4 weeks ago (1 children)

Trees are a poor man's Sets and vice versa .-.

[–] Acters 2 points 4 weeks ago* (last edited 4 weeks ago)

Ah well, I tried switching to Python's set(), but it was slow because it is unordered. I would need min() to find the lowest index number, which was slow af. Indexing might be fast, but pop(0) on a list was just as fast (switching to deque gave no speedup either). The list operations I am using are mostly O(1) time.
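A toy benchmark of why the set() attempt was slow (illustrative sizes, not my real data):

```
import timeit

starts = list(range(100_000))
starts_set = set(starts)        # unordered: min() has to scan everything
starts_sorted = sorted(starts)  # kept sorted: the front is already the minimum

print(timeit.timeit(lambda: min(starts_set), number=100))   # O(n) per call
print(timeit.timeit(lambda: starts_sorted[0], number=100))  # O(1) per call
```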

If I comment out this block, which does the checksum adding:

```
# adds checksums
part2_data = [y for x in part2_data for y in x]  # flatten the chunk lists into one list of blocks
part2 = 0
for i, x in enumerate(part2_data):
    if x != '.':        # skip free space
        part2 += i * x  # position * file id
```

so that the sorting part is isolated, it still takes only 80-100 ms. So the checksum part adds no noticeable slowdown; even if I did the checksum at the same time as the sorting, it would not lower the execution time.
