Interesting, how would one write such a finder? I can only think of a backtracking DFS, but the cost of that seems like it would outweigh the savings.
Took some time out of my day to implement a solution that beats just running your solution by about 90 ms. That is because the algorithm for filling in all dead ends takes about 9-10 milliseconds and cuts the time your algorithm needs to solve this by about 95-105 ms!

Decent improvement for so many lines of code, but it is what it is. Using `.index` and `.rindex` on strings is just way too fast. There might be a faster way to replace cells with `#`, or I could switch to complete binary bit manipulation for everything, but that is incredibly difficult to think through right now.

Anyway, here is the monster script that seemingly runs ~90 milliseconds faster than your current script version: it cuts wasted time in your Dijkstra's algorithm by filling in all dead ends, with minimal impact on performance. Could there be corner cases I didn't think of? Maybe, but saving time on your algo is better than being extra sure every dead end is eliminated, and I am skipping loops because your algorithm handles those better than a flood-fill-type approach would. (Remember the first run of a modified script will be a little slow.)

As of right now, the slowest part of the script is your Dijkstra's algorithm. I could try to implement my own solver that isn't piggy-backing off yours, but that is more than I care to do right now. I also did not bother reducing LOC for the giant match-case; it's fast and serves its purpose well enough.
[ BigFastScript Paste ]
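If you just want the gist without reading the whole paste: conceptually it boils down to the stripped-down, cell-by-cell sketch below (the names and the `input.txt` path are made up; the actual script batches whole corridors with `.index`/`.rindex` instead of looping one cell at a time).

```python
# Simplified sketch of the dead-end filling idea, not the pasted script itself.
def fill_dead_ends(grid: list[list[str]]) -> list[list[str]]:
    """Keep turning any '.' with three or more '#' neighbours into a '#'."""
    rows, cols = len(grid), len(grid[0])
    changed = True
    while changed:
        changed = False
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                if grid[r][c] != '.':        # leave walls, 'S' and 'E' alone
                    continue
                walls = sum(grid[r + dr][c + dc] == '#'
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)))
                if walls >= 3:               # a dead end: seal it off
                    grid[r][c] = '#'
                    changed = True
    return grid

grid = [list(line) for line in open('input.txt').read().splitlines()]
grid = fill_dead_ends(grid)                  # hand the pruned maze to the solver
```

Each pass only eats one cell off the tip of every dead-end corridor, which is exactly why batching the whole straight run with string operations wins so hard over this per-cell version.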
Those are some really great optimizations, thank you! I understand what you're doing generally, but I'll have to step through the code myself to get completely familiar with it.
It's interesting that string operations win out over graph algorithms here even though this is technically a graph problem. Honestly, your write-up and optimizations deserve their own post.
If you are wondering how my string operations manage to be fast, it comes down to the simple fact that Python's `index` and `rindex` are practically O(n) time (and for my use of them after slicing the string, it is closer to O(log n) in practice). Here are some more tricks in case you wish to think about that more: [link]

Also, the more verbose option is really just batch processing. Why bother checking each node individually when we already know that a dead end is simply a straight line? If an exceedingly large maze were just a simple two-spiral design, where one spiral is a dead end and the other holds the "end flag", then my batch processing would simply outpace the slower per-node iterator. In that scenario there is a 50/50 chance you pick the right spiral, whereas it is easier to look for which one is a dead end and backtrack to choose the other option. Technically that is slower than guessing correctly on the first try, but guessing feels awfully similar to how a bogosort works: you either randomly choose paths (removing previously checked ones) or deterministically enumerate all of them. A dead end, by contrast, is extremely easy to find and culls all of those paths down to extremely low priority, and in the spiral scenario it is the safer option compared to accidentally committing to the wrong path.
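To make the batching concrete, here is a toy example (made-up row and column, not the actual function): one `.rindex`/`.index` pair finds the walls on either side of a dead-end corridor, so the whole run gets sealed with a single slice instead of stepping through it node by node.

```python
row = "#....#.###"     # one maze row; pretend column 3 sits inside a dead-end corridor
col = 3

left_wall = row.rindex('#', 0, col)    # nearest '#' left of col   -> 0
right_wall = row.index('#', col)       # nearest '#' right of col  -> 5
row = row[:left_wall + 1] + '#' * (right_wall - left_wall - 1) + row[right_wall:]
print(row)                             # "######.###": the whole corridor sealed at once
```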
What would be fastest is to simply convert this to a bit-like representation: each wall could be a 1 and each empty spot a 0. You would have to track the locations of the start and end separately.
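Roughly what I have in mind, as a sketch (none of this is in the paste): each row becomes an integer with walls as 1 bits, and a whole row's worth of dead-end tests collapses into a few shifts and ANDs. This only shows one orientation (walls left and right, plus a wall above or below); the other orientations work the same way, and S/E still count as open here, which is why they would need separate handling.

```python
lines = open('input.txt').read().splitlines()   # assumes the maze has a solid wall border
width = len(lines[0])
full = (1 << width) - 1

# bit c of row_bits[r] is 1 iff (r, c) is a wall; 'S' and 'E' count as open
row_bits = [sum(1 << c for c, ch in enumerate(line) if ch == '#') for line in lines]

def horizontal_dead_ends(r: int) -> int:
    """Bitmask of open cells in interior row r with walls on both the left
    and the right, plus a wall directly above or below."""
    walls = row_bits[r]
    open_ = ~walls & full
    return open_ & (walls << 1) & (walls >> 1) & (row_bits[r - 1] | row_bits[r + 1])
```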
Ah yes, I was right: simply string-slicing away lines that were already checked does make the regex faster. The code is smaller, but it is still slower than the more verbose option, purely because it checks each node in the `while(True)` loop instead of building two lists of lines and manipulating them with `.index()` and `.rindex()`.

[ Paste ]

However, if you notice, even the regex is slower than my iterative approach with `index` by 3-5 milliseconds. While having the regex on one line is nice, I do think it is harder to read and could prove slightly more cumbersome, since it may be useless in other types of challenges, whereas the iterative approach is easy to adapt to most circumstances, quirks and all.
Also, it shows that the more verbose option is still faster by 7 ms, because checking each node in the `while(True)` loop is a rather slow approach. So really there is nothing more to it overall, and the main slowdown is in your solver, which I didn't touch at all because I only wanted to show the dead-end-filling part.

I tried to compartmentalize it: the search is its own function, and while the `fill_in_dead_ends` function is extremely large, most of it is replicated code. The `match k` case statement could just be removed. A lot of the code is extremely verbose and has an air of being "unrolled", because I was tweaking each part of the process individually to see what could be done. The entire large-af match-case ended up being very similar code; I could condense it down a lot, but doing so would hurt performance unless plenty of time were spent tweaking it. So unrolled copy-pasta was good enough.

The real shining star is the `find_next_dead_end` function, because the regex version before it took up 99% of the roughly 300 ms runtime. Even with this fast iterative function, `find_next_dead_end` still accounts for about 75% of the time it takes to finish filling in dead ends. That is because as the search ran deeper into the string it slowed down, at roughly O(n*m) time complexity, where n is the line width and m is the number of lines searched through until the next match. My approach was to store the relative position for each search, which conveniently was `curr_row, curr_col`. Another way to cut the cost of the logic that makes sure newly created dead ends also get filled was simply to check whether the current search for the next dead end started from the top after it finished checking the final line. Looking at the line-by-line profiler from IPython, the function spends most of its time at `while('.' in r[:first_loc]):` and `first_loc = r[:first_loc].rindex('.')`, which is funny because those lines are still fast: 11k+ hits on the same line with only a 5-5.5 microsecond cost each time they ran.

I could likely remove that strange logic by moving it into `find_next_dead_end` instead of having that odd if/elif/else statement in the `fill_in_dead_ends` logic. There is so much that could still be improved, but it was quick and dirty.
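To show just the "remember where you left off" part in isolation, here is a heavily simplified sketch. It is not the real `find_next_dead_end` (which works on strings and also has to wrap back around for dead ends created behind the cursor); it only shows the resumable scan.

```python
def find_next_dead_end(grid, start_row, start_col):
    """Scan for the next '.' with 3+ wall neighbours, resuming from
    (start_row, start_col) instead of re-scanning from the top-left."""
    rows, cols = len(grid), len(grid[0])
    for r in range(max(start_row, 1), rows - 1):
        first_col = start_col if r == start_row else 1
        for c in range(first_col, cols - 1):
            if grid[r][c] == '.' and sum(
                grid[r + dr][c + dc] == '#'
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            ) >= 3:
                return r, c          # caller seals it and resumes from around here
    return None                      # hit the final line: restart from the top or stop
```

The caller keeps feeding the returned position back in, and only once a restart from the top comes back empty can you be sure every dead end, including the newly created ones, has been filled.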
Now that I am thinking about it, there would be a way to make the regex faster: simply slice already-searched lines off the string, so that the regex doesn't spend time re-reading the same start of the string.
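Something along these lines (the pattern below is a stand-in, not the actual dead-end regex, and I'm using the compiled pattern's `pos` argument, which has the same effect as slicing the already-checked prefix off the string):

```python
import re

dead_end = re.compile(r"#\.#")              # placeholder pattern, not the real one
maze = open('input.txt').read()

pos = 0
while (m := dead_end.search(maze, pos)) is not None:
    i = m.start() + 1                       # the '.' inside the match
    maze = maze[:i] + '#' + maze[i + 1:]    # seal it so it can't match here again
    pos = m.start()                         # next search resumes here, not at index 0
```

A real version would still need the wrap-around logic from the other comment, since sealing a cell can create a new dead end earlier in the string.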
Ah well, my idea was at a high-level view. Here is a naive approach that should accomplish this; not sure how else I would do it without putting more thought into making it faster:
[ Paste ]
edit: whoops, sorry, I had broken the regex string and had to check that E and S don't get deleted lol
This is how the first example looks:
This is how the second example looks:
For this challenge it will only show a noticeable improvement on larger maps, and it is especially effective if there are no loops (i.e. one path), because it would simply remove every path that leads to a dead end.

For smaller maps there is no improvement, or even worse performance, since there are not enough dead ends for any search algorithm to waste meaningful time on. So for completeness' sake, you would benchmark various map sizes with various amounts of dead ends and find the map size above which it makes sense to fill in all dead ends with walls. Also, when you know a maze has only one path, this is more optimal than any pathfinding algorithm, provided the map is big enough; on a small map you can find the path quickly enough that filling in dead ends is not needed and you can just pathfind directly.
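In code, that check could be as simple as the gate below; the threshold is a made-up number you would have to find by benchmarking, and `fill_dead_ends` stands in for whatever dead-end filler you actually use.

```python
FILL_THRESHOLD = 100 * 100           # hypothetical break-even map area, found by benchmarking

def maybe_fill_dead_ends(grid):
    rows, cols = len(grid), len(grid[0])
    if rows * cols >= FILL_THRESHOLD:
        return fill_dead_ends(grid)  # big map: the preprocessing pays for itself
    return grid                      # small map: hand it straight to the solver
```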
For our input, I think this would not help, as the map should NOT be large enough. This naive approach is too costly; it would only pay off with something faster than the naive version.
Actually, testing this naive approach on the smaller examples, it does have a slight edge over not filling in dead ends. That means the regex is likely what slows down as the map gets larger, so something that can find dead ends faster would be a better choice than the one-line regex we have right now.
I guess the locations of both S and E in the input do matter, because the maze could end up with S and E close enough together that most, if not all, dead ends never waste the Dijkstra's algorithm's time. However, my input had S and E in opposite corners, so the regex is likely the culprit in why filling in dead ends gets slower on the larger map.
If you look at the profiler output, on the smaller examples the naive approach costs a negligible amount of time and improves your algorithm's combined part 1 and part 2 time by a few tenths of a millisecond. On the larger input, however, the naive approach takes a huge hit, spending about 350-400 ms filling in dead ends while only improving your algorithm's time by 90 ms. So while filling in dead ends does improve your algorithm's performance, the naive version just has too much overhead. That means a less naive approach would be a significant way to cut the solving algorithm's time.