These are my scripts for the puzzles at https://adventofcode.com/2024 . Same as last year, I will be uploading all days that I solve with a short commentary on thought process/description. All the puzzles (so far) have been solved without looking up any hint related to the event. Note that my naming convention for the input texts is "input" + the day number, so for the first day it was input1.txt, for the second day input2.txt... etc.
The first days are generally fairly easy, so I didn't expect much. Part 1 was mainly about sorting 2 lists and taking the absolute differences of their elements; initially I messed up because the puzzle said to find their "distances" and I thought it meant the difference between their positions. Part 2 was similarly easy; it just required a simple operation between the lists.
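A minimal sketch of both parts in Python, assuming the two columns of input1.txt have been read into lists of ints (and assuming part 2's "simple operation" is the puzzle's similarity score, i.e. each left value times how often it appears on the right):

```python
from collections import Counter

def part1(left, right):
    # sort both lists and sum the absolute differences of paired elements
    return sum(abs(a - b) for a, b in zip(sorted(left), sorted(right)))

def part2(left, right):
    # similarity score: each left value times how many times it appears on the right
    counts = Counter(right)
    return sum(a * counts[a] for a in left)
```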
Day 2 wasn't too difficult either. It was basically a bunch of if statements to find the allowed sequences. Part 2 was just the same thing but ignoring one of the columns each time.
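For reference, a sketch of the kind of checks involved, assuming (from the puzzle) that a sequence counts as safe when neighbouring values all increase or all decrease by 1 to 3:

```python
def safe(levels):
    diffs = [b - a for a, b in zip(levels, levels[1:])]
    return all(1 <= d <= 3 for d in diffs) or all(-3 <= d <= -1 for d in diffs)

def safe_with_one_removed(levels):
    # part 2: the same check, but we are allowed to drop any single value
    return any(safe(levels[:i] + levels[i + 1:]) for i in range(len(levels)))
```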
Day 3 was mostly about text parsing. It didn't have anything too special but part 1 took me a bit because I was reinitializing my result at every line and couldn't find the problem. Part 2 was easy as well, it just required a couple of extra parsing functions.
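A sketch of the kind of parsing involved, assuming the mul(a,b), do() and don't() instructions from the puzzle; the important part 1 lesson is to accumulate over the whole input instead of resetting the result per line:

```python
import re

def run(text):
    total, enabled = 0, True
    for m in re.finditer(r"mul\((\d{1,3}),(\d{1,3})\)|do\(\)|don't\(\)", text):
        if m.group(0) == "do()":
            enabled = True        # part 2: re-enable the multiplications
        elif m.group(0) == "don't()":
            enabled = False       # part 2: disable them
        elif enabled:
            total += int(m.group(1)) * int(m.group(2))
    return total
```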
Day 4 wasn't hard, the main idea for part 1 is essentially finding all the "X"s and then figuring out how many "XMAS" words they create in each of the 8 directions. The trap I fell for here was forgetting that my loops could reach negative indices, which makes lists wrap around, so I counted a few extra words that way. Interestingly, the way I figured that out was by forgetting to do another typical thing, which was to remove the "\n" from every line. After I removed it I found out I was counting more words, which shouldn't happen, so I realized that the column of "\n"s had in fact been protecting me from overcounting due to the horizontal wrap-arounds, and I added another row of dots to prevent the vertical wrap-arounds from happening. Part 2 was simple too, the idea here is to iterate over all the "A"s and then check if they fulfill the conditions.
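A minimal sketch of the part 1 idea, with explicit bounds checks so that negative indices can never wrap around (the grid is assumed to be a list of strings with the "\n" stripped):

```python
def count_xmas(grid):
    h, w = len(grid), len(grid[0])
    word = "XMAS"
    total = 0
    for y in range(h):
        for x in range(w):
            if grid[y][x] != 'X':
                continue
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dy == 0 and dx == 0:
                        continue
                    # check all four letters in this direction, staying in bounds
                    if all(0 <= y + i * dy < h and 0 <= x + i * dx < w
                           and grid[y + i * dy][x + i * dx] == word[i]
                           for i in range(4)):
                        total += 1
    return total
```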
Day 5 was the first puzzle that wasn't straightforward in its solution. Part 1 was fairly easy, for each list of numbers we just had to check a bunch of conditions on the order of its elements to decide whether to reject it or not. Part 2 was much more interesting. We had to reorder the list to make it fit the conditions, and the conditions worked like an ordering, "a" before "b". My first idea was to take each list and check it against the conditions, and each time the list failed one of the conditions (e.g. "b" was before "a") I would swap the two elements. There are small problems with this method, the first being that not every two elements in the list are directly linked to one another, so we would have to run it multiple times to make it fully work. Ideally, this would work like a bubble sort on each list of numbers. I'm sure that if run enough times on the list it would work, and I could always do something like 'while not ordered' 'run the list against the conditions'; however, I didn't like this solution that much. My second idea was to take the list and create something like a "total ordering", meaning a complete chain of conditions from start to finish, e.g. "a" before "b" before "c" before "d"... That was one possibility; another was having multiple "unrelated" chains like "a" before "b" before "c" and "1" before "2" before "3" (from now on I'll write "<" instead of "before"). Implicitly I assumed transitivity worked globally on the set of conditions, meaning that if we had "a"<"b" and "b"<"c" then "a"<"c", and so I proceeded to use Zorn's lemma to create some form of total order on the set. If you are not familiar with it, Zorn's lemma basically says that if the above assumptions hold then there exists a maximal element, let's call it "el", such that no element is greater than it (or, in our case, goes after it). Then we can separate that element and find the maximal element of the remaining list, and if we do this properly we should be able to build either a totally ordered list or a bunch of totally ordered smaller lists. Unfortunately, when trying this out I discovered that the conditions in fact had loops like "a"<"b", "b"<"c", "c"<"a". This meant that transitivity didn't hold, at least not on a global level. I was sad to abandon the idea since I liked Zorn's lemma, but then I realized I didn't have to trash it completely. The total order might not work on a global level but it HAS to work on a local level (i.e. for each individual list of numbers), otherwise the problem would not be solvable. So for each list of numbers, I isolated the relevant conditions (as a curated set of rules) and on those I used my previous idea of Zorn's lemma. Finding the maximal element for each list of numbers was then easy, and from there I could search for the next maximal element and thus completely order them. There were a couple of other technical details, for example how to identify the next maximal element without reprocessing and re-curating the set of conditions, but those are not too important. Overall a pretty interesting puzzle.
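A sketch of that "local maximal element" idea, assuming the rules have been parsed into (a, b) pairs meaning a must come before b:

```python
def reorder(update, rules):
    remaining = set(update)
    # keep only the rules whose two numbers both appear in this update
    local = [(a, b) for (a, b) in rules if a in remaining and b in remaining]
    ordered = []
    while remaining:
        # a maximal element is one with no rule forcing something after it
        lefts = {a for (a, b) in local if a in remaining and b in remaining}
        maximal = next(n for n in remaining if n not in lefts)
        ordered.append(maximal)
        remaining.remove(maximal)
    return ordered[::-1]          # built back to front, so reverse at the end
```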
Day 6 was another interesting one. Part 1 was basically an algorithmic implementation of the description; it wasn't too complicated, it just required some creativity to track all the parameters of the description (e.g. position, direction, position of obstacles). Part 2 made the problem much more computationally intensive: now we needed to run the same procedure for every candidate position of a new obstacle and check whether the guard ends up walking in a loop.
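A sketch of the loop check that part 2 boils down to (one call per candidate obstacle), assuming the usual representation with '#' for obstacles and the guard turning right when blocked; a loop is detected when a position/direction pair repeats:

```python
def walks_in_loop(grid, start, start_dir):
    h, w = len(grid), len(grid[0])
    (y, x), (dy, dx) = start, start_dir
    seen = set()
    while True:
        if (y, x, dy, dx) in seen:
            return True                      # same position and direction again: a loop
        seen.add((y, x, dy, dx))
        ny, nx = y + dy, x + dx
        if not (0 <= ny < h and 0 <= nx < w):
            return False                     # the guard walked off the map
        if grid[ny][nx] == '#':
            dy, dx = dx, -dy                 # turn right on the spot
        else:
            y, x = ny, nx                    # step forward
```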
Day 7 was fairly easy, it basically just required trying all possible combinations of addition and multiplication operations between the numbers in each list. I just did it the brute force way with some small optimizations. Part 2 was almost exactly the same but now we had three operations (addition, multiplication and concatenation). I made a very slight modification to the code and it worked immediately. All in all, an easy day.
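A brute-force sketch of part 2, assuming each line gives a target value and a list of numbers evaluated left to right:

```python
def solvable(target, numbers):
    results = {numbers[0]}
    for n in numbers[1:]:
        next_results = set()
        for r in results:
            if r > target:
                continue          # small optimisation: no operation can shrink r again
            next_results.update((r + n, r * n, int(str(r) + str(n))))
        results = next_results
    return target in results
```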
Day 8 was quite easy as well. Part 1 just required finding, for pairs of antennas on a grid, the points that are collinear with them and twice as far from the first antenna as from the second (and not in between, apparently). The solution essentially just required some vector additions and checks on whether the new points are in bounds. Part 2 was slightly more tricky, it required finding ALL the collinear grid points between each 2 antennas of the same type. To solve it I just took the vector difference between the two antennas, divided it by the greatest common divisor of its components (to get the smallest primitive translation vector) and then added or subtracted the vector as many times as needed while staying within the bounds. Overall, pretty easy as well.
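A sketch of the part 2 idea for one pair of antennas a and b (as (y, x) tuples): reduce the difference vector by the gcd of its components and step in both directions while staying in bounds:

```python
from math import gcd

def antinodes(a, b, height, width):
    dy, dx = b[0] - a[0], b[1] - a[1]
    g = gcd(abs(dy), abs(dx))
    dy, dx = dy // g, dx // g                # smallest primitive translation vector
    points = set()
    for step in (1, -1):
        y, x = a
        while 0 <= y < height and 0 <= x < width:
            points.add((y, x))               # every collinear grid point, antennas included
            y, x = y + step * dy, x + step * dx
    return points
```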
Day 9 was decent. It mainly involved moving elements of a string from right to left to "unoccupied" positions. Part 1 was about moving the characters of the block in the string one by one while part 2 was about moving the entire block to the first unoccupied space. It required some clever manipulations but the idea wasn't that complicated.
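A sketch of the part 1 style of compaction, assuming the input is the usual disk map of alternating file and free-space lengths; blocks are moved one at a time from the right into the leftmost free slot:

```python
def compact_checksum(disk_map):
    blocks = []
    for i, length in enumerate(map(int, disk_map)):
        # even positions are files (with id i // 2), odd positions are free space
        blocks += [i // 2 if i % 2 == 0 else None] * length
    left, right = 0, len(blocks) - 1
    while left < right:
        if blocks[left] is not None:
            left += 1
        elif blocks[right] is None:
            right -= 1
        else:
            blocks[left], blocks[right] = blocks[right], None   # move one block leftwards
    return sum(i * b for i, b in enumerate(blocks) if b is not None)
```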
Day 10 was a funny one. Given a height map, we had to find out how many top points we can reach from each bottom-most point using only steps of +1 increasing height. I accidentally read the description wrong initially and solved a different problem: I thought it asked how many different paths there are from each bottom-most point to all top points. The way I did it was by iterating over the map and assigning values to each height. Each height of "9" (the topmost) would get a value of 1, then each height of "8" would get a value equal to the number of "9"s it neighbored, and each subsequent height "k" would get a value equal to the sum of the values of the "k+1" heights around it. Then I would just need to sum over the values of the height "0"s. Unfortunately, that is not what the problem was asking us to do. So I modified my solution: instead of values I used a list of the possible "9"s that can be reached by each point, so for a height "k" I just had to combine the lists of all the neighboring "k+1" heights while removing the duplicate elements. Then for the height "0"s I just had to sum the lengths of their lists. That was part 1. For part 2... I had to find all the distinct paths... which is exactly what I had solved with my wrong solution, so I just inputted that answer and got it correct immediately. As I said, a pretty fun day, the first time it has happened to me that I "guessed" what part 2 was going to be (even if by accident).
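A sketch of the set-based version (part 1), assuming grid is a list of lists of ints: work from height 9 down, and for each cell keep the set of reachable summits so duplicates disappear on their own:

```python
def trailhead_scores(grid):
    h, w = len(grid), len(grid[0])
    reach = {}
    for height in range(9, -1, -1):
        for y in range(h):
            for x in range(w):
                if grid[y][x] != height:
                    continue
                if height == 9:
                    reach[(y, x)] = {(y, x)}         # a summit reaches itself
                    continue
                summits = set()
                for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
                    if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == height + 1:
                        summits |= reach[(ny, nx)]   # the union removes the duplicates
                reach[(y, x)] = summits
    # the answer sums over all trailheads (height 0)
    return sum(len(s) for (y, x), s in reach.items() if grid[y][x] == 0)
```

Swapping the sets for plain counts (summing values instead of taking unions) gives back the distinct-paths variant that turned out to be part 2.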
Day 11 was a very interesting one. We have to perform a certain set of operations a number of times on a list of numbers. For part 1, we can just perform the operations naively and it will still compute relatively quickly. For part 2 though we need optimizations, since the computation time grows exponentially. The first optimization we can do comes from noticing that 1-digit numbers behave very predictably under the operations. For example, the number 2 after one step becomes 4048, then after another step it becomes 40,48, and after a total of 3 steps it becomes the numbers 4,0,4,8. We can find such patterns for all 1-digit numbers; another example is that the number 5, after five steps, becomes the numbers 2,0,4,8,2,8,8,0. With this method, we can solve the problem recursively for all 1-digit numbers and essentially eliminate the computation time whenever we encounter a 1-digit number in our list. A little caution has to be exercised when doing this for the number 8, since it needs a very small modification compared to the other numbers. I coded this optimization by hand for each of the 1-digit numbers. Since all 1-digit numbers after a few steps become lists of 1-digit numbers, the optimization saves a lot of time. That was initially the program I ran to find the solution, which it did after a few minutes. I knew however that it could be optimized further. After a bit of testing I found out that the next biggest time loss came from the 3-digit numbers. This makes sense, since all 2-digit numbers split into 1-digit ones and 4-digit numbers also eventually split into 1-digit ones. The insight is that over a ton of steps we would probably compute the evolution of most 3-digit numbers many times over. So the idea is to compute a few steps of evolution for each 3-digit number and then save it to a dictionary. Then each time we have to compute the evolution of a 3-digit number again, we just pull it from the dictionary. By only computing the evolution of the 3-digit numbers to about five steps, we were able to reduce the total computation time to just a few seconds. Overall, a pretty interesting day, it might have been the hardest one so far, but last year's day 12 prepared me for this so it wasn't too bad.
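Not the hand-coded per-digit rules described above, but for comparison, the generic memoised counting that the same rules lend themselves to (if I've got the rules right: 0 becomes 1, an even digit count splits in two, everything else is multiplied by 2024):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count(stone, steps):
    if steps == 0:
        return 1
    if stone == 0:
        return count(1, steps - 1)
    s = str(stone)
    if len(s) % 2 == 0:
        half = len(s) // 2                    # split the digits into two new stones
        return count(int(s[:half]), steps - 1) + count(int(s[half:]), steps - 1)
    return count(stone * 2024, steps - 1)

# e.g. part 2 would then be sum(count(s, 75) for s in starting_numbers)
```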
Day 12 was a quite interesting geometry puzzle. Part 1 involved finding the area and perimeter of various regions in a grid. The area is just the number of points that are inside, and the puzzle already gives a hint for how to do the perimeter: you just need to compute the number of neighboring connections in each region. Each point then adds 4 units to the perimeter and each neighboring connection removes 2 units. To find the points in each region and the neighboring connections we just use a nested function checking the neighbors of each point and then sort of "eliminating" them so as not to double count. Part 2 was a bit more interesting. Instead of the perimeter we now have to find the number of sides of each region. The idea is that the number of sides (edges) is equal to the number of vertices. So now we have to count the number of vertices. To do that we essentially just have to account for all possible configurations of neighbors around a point. For example, if it has 0 neighbors, it contributes 4 vertices. If it has 1 neighbor it contributes 2 vertices. For the other cases it's a bit more complicated; for example, if it has 2 neighbors and those neighbors are on the same line as the point then it contributes 0 vertices. If the neighbors form an "L" shape then it depends on the point inside the "L" angle: if there is a point of the same region there then we have 1 vertex, if there isn't then we have 2 vertices. We then have to perform similar checks for 3 and 4 neighbors and then we can calculate the number of sides. All in all, a nice geometric exercise.
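A sketch of the part 1 flood fill, using exactly the "each point adds 4, each neighboring connection removes 2" rule described above (grid assumed to be a list of strings):

```python
def region(grid, start, seen):
    h, w = len(grid), len(grid[0])
    label = grid[start[0]][start[1]]
    stack, cells = [start], set()
    while stack:
        y, x = stack.pop()
        if (y, x) in cells:
            continue
        cells.add((y, x))
        seen.add((y, x))                     # mark globally so regions aren't revisited
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < h and 0 <= nx < w and grid[ny][nx] == label:
                stack.append((ny, nx))
    area = len(cells)
    shared = (sum((y + 1, x) in cells for y, x in cells)
              + sum((y, x + 1) in cells for y, x in cells))
    perimeter = 4 * area - 2 * shared        # 4 per cell, minus 2 per neighboring connection
    return area, perimeter
```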
I found day 13 to be fairly easy, although I'm not sure if it is the same for everyone. Part 1 essentially required us to solve a linear system within a range of inputs between 0 and 100. I am assuming the expected naive method was to iterate, which would work for part 1 but would be suboptimal for part 2. Realizing it was a linear system, I just used the determinant method (Cramer's rule) that I also used in the previous year. Part 2 was supposedly where the difficulty would come in and where the iterative method from part 1 wouldn't work, since it was just the same problem but with much bigger numbers. The determinant method worked just fine on both parts though, with minimal changes. A fairly easy day I'd say, although if you are not familiar with linear equations it could be puzzling.
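A sketch of the determinant (Cramer's rule) method for one machine, treating the two button vectors and the prize as the linear system a*(ax, ay) + b*(bx, by) = (px, py):

```python
def presses(ax, ay, bx, by, px, py):
    det = ax * by - ay * bx
    if det == 0:
        return None                          # parallel buttons; not expected in the input
    a, ra = divmod(px * by - py * bx, det)
    b, rb = divmod(ax * py - ay * px, det)
    if ra or rb or a < 0 or b < 0:
        return None                          # no non-negative integer solution
    return a, b                              # the token cost is then 3 * a + b
```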
Day 14 part 1 was quite straightforward, we just have to figure out the evolution of a system of positions (of robots) and velocities after 100 steps. There is also the added detail of the positions wrapping around when they go over the limits (thus adding something like the topology of a torus) and so requiring modular arithmetic. Part 1 can be done by simulating all steps one by one; however, it is much more elegant to jump immediately to step 100 and then take the mod of the coordinates. Either method solves the problem instantly. Now for part 2, I must say I was quite disappointed with the second part of this exercise. The premise was that after a number of steps, the positions align to form a "Christmas tree" and we were required to find after how many steps this happens. Given the inherent periodicity of the problem because of the wrap-arounds at the edges, there are only a certain number of steps up to which we need to check. That number is the least common multiple of the dimensions of the grid (in our case 101x103, so around ~10000). The problem with this puzzle was that there was no mention of what the Christmas tree should look like. For example, is it hollow, is it filled, is it centered? Secondly, the example was quite useless for this part as well, since we don't know what we are looking for. Potentially we could brute force the example by "eye", make the Christmas tree appear there and then use it to find the same pattern on the main input; however, I don't like that method, it feels too crude. If they had given us the number of steps it takes for the tree to appear in the example, it would have been much better. Anyway, the way I tackled it was by assuming that the tree would be connected and that, out of all configurations, it should have the maximum "connectedness". In other words, for every configuration I measured how many robots have neighbors next to them and figured that the configuration that maximizes this number should be the Christmas tree configuration. Indeed, the algorithm worked and I found the Christmas tree. The algorithm isn't that fast, it takes a couple of minutes to run, but given the information that I had, it's good enough for me. Now that I know what the tree looks like, I could potentially make it much faster but I don't think I will bother. Overall, it could have been a fun exercise but part 2 felt too ambiguous.
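Part 1 in essence, a minimal sketch assuming the 101x103 grid and robots given as (position, velocity) pairs of (x, y) tuples:

```python
W, H = 101, 103

def position_after(p, v, steps=100):
    # the edges wrap around, so coordinates simply live modulo the grid size
    return ((p[0] + steps * v[0]) % W, (p[1] + steps * v[1]) % H)
```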
Day 15 was pretty fun. We had a room full of boxes and walls and a robot moving around and pushing the boxes. The goal was to find the final position of the boxes after the robot finishes its set of moves. To do it we just have to figure out whether each move actually moves the robot (as opposed to it pushing against a wall or against boxes resting on a wall) and then actually perform that move. For that I used lists where I popped and inserted elements. Part 2 was quite a bit harder. The problem was almost the same but almost everything doubled in width: empty space, walls and boxes became 2 units wide, while the robot remained 1 unit wide. This meant that now a box could push 2 other boxes if those boxes are adjacent and the first box sits exactly between them in the horizontal direction. Furthermore, these 2 boxes could push 3 boxes and those 3 could push 4... Essentially the pushing range now becomes like a cone. To solve it I used recursion to check, for each box, which other boxes it pushes. Then there was the small matter of figuring out how to mark the boxes being pushed and how to make sure we don't double count them. Finally, we needed to figure out how to perform the actual move, which was done row by row, moving the boxes one by one. As a bonus, for both parts, I added a print output that shows the map and the robot at each step. It is currently commented out, but if we uncomment it and resize the terminal properly, we can see the robot moving boxes around like an animation. All in all, quite a nice exercise. Not easy to write in my opinion, since there were many ways to mess up the code.
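A sketch of the part 2 recursion for a vertical push, assuming the doubled grid uses '[' and ']' for the two halves of a box and '#' for walls; pushed collects every box that has to move, without double counting:

```python
def collect_boxes(grid, cells, dy, pushed):
    """Return False if the push is blocked by a wall; otherwise fill `pushed`
    with the left-half coordinates of every box that gets pushed."""
    for (y, x) in cells:
        ahead = grid[y + dy][x]
        if ahead == '#':
            return False                     # something in the stack hits a wall
        if ahead == '[':
            box = (y + dy, x)
        elif ahead == ']':
            box = (y + dy, x - 1)
        else:
            continue                         # empty space, nothing to push here
        if box in pushed:
            continue                         # already scheduled, avoid double counting
        pushed.add(box)
        # both halves of this box push whatever lies beyond them
        if not collect_boxes(grid, [(box[0], box[1]), (box[0], box[1] + 1)], dy, pushed):
            return False
    return True
```

Starting from the robot's own cell, if this returns True the boxes in pushed can then be moved, farthest row first.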
For day 16 we had to find the path of least steps through a maze; furthermore, each time we turned there was a penalty score, so in effect we had to find the paths with the fewest turns and, out of those, the one with the fewest steps. There might be a way to classify paths with this condition, however, the moment I saw the problem I realized I could solve it with a method from the previous year. Day 17 of 2023 taught me a method of solving such "maze" problems by simultaneously taking all the paths. The way we do that is by assigning a score to each tile equal to the lowest possible score needed to get there. The way to do THAT is to start from the starting square with the initial values and then, for each adjacent square, add the minimal possible value to get there. We have to iterate over the whole grid, but one pass still won't be enough: we have to repeat the process several times and stop only when the whole system has "stabilized". The penalty for the turns is handled by each point on the grid having two values, one for arriving via the horizontal (left or right) and one for arriving via the vertical (up/down). Once we have done this process enough times to stabilize the grid, we can look at the values at the end tile and take the lesser of the two scores there. For part 2, we had to figure out all the points that belonged to paths with the lowest score; apparently, there are multiple paths that achieve the same minimal score. For that, I used a separate value for each point marking whether it belonged to such a path or not. If a point belongs to such a path then it must also be adjacent to another point on the path, so starting from the end, we can figure out the directions those paths come from. The program works recursively and calls the function again on each new such point. To avoid counting the same point multiple times, if the function steps on a point that has already been counted, it stops (thus ending that branch of the recursion). We need to be careful though, because each point has two values, so for the point to be skipped it has to have been counted before from the same direction (horizontal or vertical); the same point could be part of two completely different paths while having different values on each path. At the end we just count the points that have been marked at least once. Very interesting day altogether. A nice little throwback to last year, but not nearly as difficult.
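A sketch of that "relax until stable" idea, assuming the puzzle's costs (1 per step, 1000 per turn), walls marked '#', the start facing east, and two scores per tile (index 0 for arriving horizontally, 1 for arriving vertically):

```python
import math

def stabilise(grid, start):
    h, w = len(grid), len(grid[0])
    cost = [[[math.inf, math.inf] for _ in range(w)] for _ in range(h)]
    cost[start[0]][start[1]][0] = 0                     # starting tile, facing east
    changed = True
    while changed:                                      # sweep until nothing improves
        changed = False
        for y in range(h):
            for x in range(w):
                if grid[y][x] == '#':
                    continue
                for dy, dx, axis in ((0, 1, 0), (0, -1, 0), (1, 0, 1), (-1, 0, 1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or grid[ny][nx] == '#':
                        continue
                    # arrive at (ny, nx) along `axis`: either we were already moving
                    # on that axis (cost 1) or we had to turn first (cost 1001)
                    new = min(cost[y][x][axis] + 1, cost[y][x][1 - axis] + 1001)
                    if new < cost[ny][nx][axis]:
                        cost[ny][nx][axis] = new
                        changed = True
    return cost          # the answer is the lesser of the two values at the end tile
```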
Day 17 was quite interesting. Part 1 was insanely convoluted to read, but once you understand it, it's fairly straightforward to implement the instructions. The idea is that you have a program with 3 registers and a set of instructions, and the goal is to find the output of the program. Again, most of the difficulty in part 1 is understanding the description; in fact, it should even be solvable by hand in a fairly short amount of time. The meat of today's problem was part 2. Part 2 asked what the input in "A" (one of the 3 registers) should be such that the output matches the set of instructions. To solve it we first have to understand the way the instructions work. For my input (and I assume for most people), the program looped once for every base-8 digit of "A". The way it worked is that after each loop, A would be divided by 8 (taking the floor) and the program would stop when A reached 0. There was no other operation modifying A each loop, and the values of "B" and "C" (the other 2 registers) would essentially reset each loop, so there was no "memory" between loops save for the value of A. Furthermore, there was only one output per loop, which was a digit in base 8. To solve this in a reasonable amount of time we had to be clever; of course, trying to find the value of A that outputs the instructions directly would take ages, since it should be 16 digits long in base 8. Instead, we could try to find it digit by digit, starting from the end. Since the program has no memory apart from A, we could look initially at which values of A produce the last digit of the instructions. These values should be numbers from 0 to 7, since anything higher than that would produce 2 outputs. Then for each value that we found, we could look at the 2-digit numbers in base 8 whose first digit is that value, and check which of those output the last 2 digits of the instructions. Doing it like this, digit by digit, from the end to the beginning, allows us to find essentially all the values that, when put into register A, reproduce the instructions. The puzzle asks us to find the smallest such value. While there is an argument to be made for always picking the smallest n-digit input that outputs the last n digits of the instructions, I believe that this method doesn't always work, since some inputs that work for "n" digits might not give any results for "n+1" digits. So I think that until we have all the inputs that produce the whole set of instructions, we shouldn't discard any inputs prematurely. That being said, for my input it would have worked, and maybe the puzzle is designed so that it always works. This "debate" can be seen in my code with the commented-out "continue" near the end. If we remove the "#" the program runs faster, but there is the risk of running with an input that works for the last "k" digits of the output but fails later on, in which case we would get no answer. Ideally, we could use some hybrid method where we run with the smallest input but, if it fails, fall back to the next smallest input and so on. This would probably be the fastest while still always being correct, but it would be extra effort to code and it's not worth it for a program that already runs almost instantly. Overall, this was a great puzzle. After overcoming the initial confusion of the description and solving part 1, part 2 was quite a brain teaser.
A small possible shortcoming is that you have to look at the input file and start deciphering the instructions to realize the optimal way to solve it (mainly by realizing that the "program" runs a finite number of loops, that the values of the other 2 registers reset each loop, the number of outputs per loop, etc.). But that's not to say that it was bad; it was in fact very fun to think about.
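A sketch of that digit-by-digit search. run_program(a) is a hypothetical stand-in for simulating the input program with register A set to a and returning its list of outputs; program is the instruction list itself:

```python
def find_quine_inputs(program, run_program):
    candidates = [0]
    for i in range(1, len(program) + 1):
        target = program[-i:]                 # the last i values we need to reproduce
        new_candidates = []
        for prefix in candidates:
            for digit in range(8):            # try every possible next base-8 digit
                a = prefix * 8 + digit
                if run_program(a) == target:
                    new_candidates.append(a)
        candidates = new_candidates           # keep every survivor, nothing is discarded early
    return min(candidates)                    # the puzzle asks for the smallest one
```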
Day 18 started as another pathfinding exercise. Like day 16, for part 1 we had to find the path with the fewest number of steps through a "maze". This time there was no penalty for turning, so we just had to minimize the number of steps. Again I used the same method, although now each point had only 1 value. For part 2, the maze would change: as time passed, more walls would appear, and we had to find the exact moment at which the maze turned from solvable to unsolvable. To do this we used a similar method. First, to determine whether the maze was solvable, we used the method from last year (and from day 16 of this year): we iterated over the entire maze to find the least number of steps to get to each point, and once these values stabilized we could read off the answer: if the value at the last tile is some finite number then the maze is solvable, if not then it isn't. Now we had to determine the moment this change of state occurs. We had around ~3000 total walls coming down, so naturally we check whether the maze is solvable after ~1500 walls have been placed. Depending on whether the answer is yes or no, we next check after ~2250 or ~750 walls. Doing this over and over (halving the distance between the current number of walls and the closest count known to be in the other state), we arrive pretty quickly at the exact value at which the state of the maze changes. All in all, a pretty nice exercise. My solution is not the fastest, it takes a few seconds to run, but it works nicely and without needing to check too many mazes.
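A sketch of the halving search. solvable(n) stands for a hypothetical helper that places the first n walls and reports whether the exit is still reachable (e.g. via the same grid-stabilising method):

```python
def first_blocking_wall(walls, solvable):
    lo, hi = 0, len(walls)        # invariant: solvable(lo) is True, solvable(hi) is False
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if solvable(mid):
            lo = mid
        else:
            hi = mid
    return walls[hi - 1]          # the wall whose placement first makes the maze unsolvable
```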
Day 19 might be the hardest one so far. Even from part 1 the exercise was tricky. We had a list of small strings and a list of big strings, and the question was how many of the big strings can be written entirely from the small ones. My first thought was to use recursion: check for each small string whether it is at the beginning of the big string, then remove it from there and recheck the reduced big string against all small strings. Unfortunately, given that there was a large number of small strings, this took too long. Then I had a look at the list of small strings. I noticed that some of its elements were redundant; they could be written by using other elements of the list and so were not needed for checking the big strings. So I tried to create a refined, smaller list such that only the fundamental small strings were there (those that can't be written by concatenating other elements of the list). To do that I simply had to use the recursion algorithm on the small string list itself, being careful to skip checking each string against itself. Those that were fundamental I then put in another list (the refined one), and then I ran my original recursion program on the big list but now only testing it against the refined list. This proved to be a success, as the program ran quite quickly and gave me the correct answer. Unfortunately, this idea would fall apart in part 2 (at least a priori). For part 2 we had to find all the combinations of strings from the small string list that would create each string in the big string list. After some thinking, I kept the idea of using the refined list. First, we had to find all the combinations of strings from the refined list that would make each big string. Then, for each of those combinations, we had to find out how they could be written using the terms from the non-refined list. To do that, we started joining the elements of the combination one by one and checking whether the concatenation was part of the small string list; if so, we fed the remaining elements of the combination as input back into the same function. To reduce runtime, we created a memory dictionary that would remember the number of possible ways to write each combination and instantly provide the answer if we ever needed to compute the same number again. This sped up the computation by a lot. Sadly, this was not the solution yet, because we were actually overcounting. It is possible to get the exact same combination of small string list elements from 2 different combinations of refined list strings (that constitute the same big string). That can happen if both fundamental combinations are similar but differ a little somewhere down the line, and their difference, when joined together, actually makes an element of the small string list. An example of that would be if the small string list had the elements [r,w,b,g,bu,uw,buw] and we had a big string like "rrrbuwggg". In this case, the refined list would be [r,w,b,g,bu,uw] (since buw can be written as bu+w or b+uw) and the big string would be fundamentally written by two combinations: r+r+r+b+uw+g+g+g or r+r+r+bu+w+g+g+g. When doing the method naively, for both combinations we'll get to a point where we are trying to compute the combinations of r+r+r+buw+g+g+g, but since we are doing it twice, it's obviously double counting, which we don't want.
To avoid this situation, we created another memory dictionary that would remember whether this exact combination of elements from the small string list had been tried before, and if so it would discard the computation. After all this, the algorithm was able to finish within a few minutes. Clearly not optimal, but I don't think I want to spend much more time on this problem. Overall, a pretty difficult day, it reminded me a lot of day 12 from last year, which is why I tried a lot of the same methods such as recursion on a smaller input and the memory functions. I assume there is some better combinatorics way to solve this problem quickly but I couldn't think of it. If there is, it's probably some way to separate the refined list combinations into "independent" lists and then use the multiplication rule of independent scenarios. But, as I said, I probably won't spend more time on this problem. It was pretty fun though, and I'm glad to see that up to this day there hasn't been a problem that has taken me more than a day to solve.
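For comparison (not the refined-list bookkeeping described above), the much shorter memoised count that, in hindsight, solves part 2 directly: the number of ways to build a string is the sum over every small string that starts it.

```python
from functools import lru_cache

def ways_to_build(design, patterns):
    patterns = tuple(patterns)

    @lru_cache(maxsize=None)
    def count(s):
        if not s:
            return 1                          # one way to build the empty remainder
        return sum(count(s[len(p):]) for p in patterns if s.startswith(p))

    return count(design)
```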
Day 20 was quite easy. We again had a path, counting the number of steps from a starting point to an ending point. The path was a single line, so there was no need to use last year's day 17 method. Now, on that path we had the ability to go through a wall once, and the question was how many such skips exist that save at least 100 steps. To solve this we simply iterated over all points on the path and looked around in all directions; if we encountered a different part of the path we calculated the steps saved, and if that was at least 100 we counted it. Part 2 was slightly harder but not by much. Instead of being able to avoid collision with walls for 2 steps, we were now able to avoid wall collisions for up to 20 steps. Again we had to find the number of skips that save at least 100 steps. Again we iterated over the path; now, to implement the 20-step skip, we looked at a "circular" region (of radius 20 in steps, i.e. Manhattan distance) around every point, found the points there that belonged to the path, and of those took the ones that saved 100 steps or more. There might be a faster way to compute it but it was fast enough, so I was happy. All in all, a pretty easy day.
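A sketch of the part 2 counting, assuming steps maps every cell on the track to its index along the path; a skip from index i to index j of Manhattan length d saves (j - i) - d steps:

```python
def count_cheats(steps, max_cheat=20, min_saving=100):
    count = 0
    for (y, x), i in steps.items():
        # scan the "circular" (diamond) region of radius max_cheat around this point
        for dy in range(-max_cheat, max_cheat + 1):
            for dx in range(-max_cheat + abs(dy), max_cheat - abs(dy) + 1):
                j = steps.get((y + dy, x + dx))
                if j is not None and (j - i) - (abs(dy) + abs(dx)) >= min_saving:
                    count += 1
    return count
```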
Day 21 was by far the hardest day so far. We had a keypad on which we needed to press a code, but to press the keypad we had to input directions to a robot's arm: left ("<"), right (">"), up ("^"), down ("v"), and enter ("A"). To input instructions to this robot we had to use a second robot with the same idea (up, down, left, right, enter), and finally we had to input the same thing manually on our own "instruction keypad" to control the last robot (this is what I will call depth here, and for part 1 we had depth 4). The question was basically: what is the smallest number of inputs needed to enter the code? A couple more constraints: there were a couple of forbidden positions that the robot arms must not move over. Initially, I made some working assumptions, all of which turned out to be wrong. Those assumptions were that maybe the order of operations does not matter (as long as we didn't zigzag), but it turns out that it does matter at greater depths, and that maybe the forbidden regions don't matter (it turns out that, at least for depth 4, only the forbidden region on the initial keypad matters). For part 1 I tried a lot of methods; they involved recursion and trying to make some kind of rule for choosing the directions (whether to go vertical first and then horizontal, or the other way around; this implicitly carries another assumption, that zigzags are never efficient, which turned out to be correct: if we want to go 2 units diagonally top-left, then we can either go 2 units up and then 2 units left or the other way around, and everything else would be inefficient). These solutions always came up short. Since I figured solving it rigorously would be too complicated, for part 1 I basically picked paths at random and minimized the length of the input, mostly ignoring the condition about the forbidden regions except for the region on the first pad. After finding the answer for part 1, I read part 2 and, as expected, it took the complexity to the extreme. Instead of a depth 4 "recursion" (first keypad, 2 robots and finally actual inputs), we now had a depth 27 recursion. I am not going to lie, this took me way too long. I realized early that the solution would probably be something like finding the answer for depth 9 and taking some kind of "cube" of that solution. I ended up going for a "recursive" list of dictionaries such that every new element of the list was created from the previous one using some basic rule. At first, because I ignored the forbidden regions, I made the assumption that the only thing that matters is the vector connecting the current position of the robot arm with its target position, so for going from number "0" to number "7" we would have the vector (-3,-1) (as a list). Unfortunately, since the forbidden regions did matter, we needed to hardcode every starting-to-ending position. So we would need a different rule for the "0" to "7" movement compared to the "A" to "8" movement (even though they have the same vector). Similarly for the other keypads, the rule for "<" to "^" would have to be hardcoded and might be different from the rule for "v" to "A". Learning all these facts took a lot of trial and error, especially since the advantage of some rules over others does not become apparent until greater depths. I ended up inputting almost all the rules by hand because extracting them from code turned out to be very confusing (it should be doable, but it's easy to mess up and lose time).
In the end, after the solution worked, I checked for patterns in the hardcoded rules. Indeed, there seemed to be a pattern: if the target is to the right, the arm should go vertical first, but if it's to the left it should go horizontal first (excluding the forbidden regions, which should still be hardcoded). After adding these general rules, the code should work for every input. I ended up finding the solution in 2 steps instead of 3, so I basically calculated depth 13 and doubled it (I made sure that what I calculated was at the correct depth via an exponential rule that I'll explain shortly). I probably could have written a more efficient solution, since the current one basically calculates all patterns up to depth 13, but it worked fairly quickly (less than a minute) and I didn't want to spend more time on this problem. Another interesting detail I found was that the ratio of input lengths from depth to depth was almost constant (it was basically converging towards a constant, about 2.4709793796), so every depth was scaled by approximately that factor compared to the previous one. This allowed me to calculate which depth I was on at all times, in case it wasn't easy to figure out. I'm not sure what that constant is exactly, but I imagine it is some kind of average distance on the directional keypad between inputs (maybe weighted by the frequency of their appearance). Overall, this was an extremely hard day. It took me 2 days to complete, so what I said on day 19 is no longer true. I did enjoy the rush of coding late at night trying to find the solution though.
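For comparison rather than a reproduction of the hardcoded-rules solution above: a common alternative is a memoised recursion where the cost of pressing one key is the cheapest of its candidate move sequences, each costed one keypad further up. Here paths(frm, to, pad) is a hypothetical helper returning the gap-avoiding, non-zigzag sequences (each ending in 'A') between two keys of keypad pad, and the depth counts only the directional keypads, so (if I read the depth counting above right) depths 4 and 27 correspond to 2 and 25 here.

```python
from functools import lru_cache

def make_cost(paths):
    @lru_cache(maxsize=None)
    def cost(frm, to, depth, pad='dir'):
        if depth == 0:
            return 1                          # at depth 0 a human simply presses the key
        # each candidate sequence is typed on the keypad one level up, starting from 'A'
        return min(
            sum(cost(a, b, depth - 1) for a, b in zip('A' + seq, seq))
            for seq in paths(frm, to, pad)
        )
    return cost

# hypothetically: sum(cost(a, b, 25, 'num') for a, b in zip('A' + code, code))
# would then give the press count for one code in part 2.
```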
Day 22 was pretty easy. I didn't check it until day 23 (thanks to day 21 taking me 2 days) but both parts were solved fairly quickly. For part 1 we just had to code some kind of hashing algorithm from the description and run it 2000 times on each input. For part 2 we had to use only the last digit of each iteration of the algorithm and figure out the sequence of 4 such digits that maximized the number of inputs containing the sequence times its last digit (I read that slightly differently on my first pass so I did it slightly less efficiently, but it still gave the correct result). My solution was just a dictionary keyed by sequence, from which I took the maximum after the algorithm was done. All in all, a very easy puzzle, solved very quickly after the horrors of the previous day.
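A sketch of part 1, assuming I remember the description's steps correctly (mix is XOR, prune is modulo 16777216), applied 2000 times to each input:

```python
def next_secret(secret):
    secret = (secret ^ (secret * 64)) % 16777216
    secret = (secret ^ (secret // 32)) % 16777216
    secret = (secret ^ (secret * 2048)) % 16777216
    return secret

def part1_secret(secret, steps=2000):
    for _ in range(steps):
        secret = next_secret(secret)
    return secret
```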
Day 23 was also fairly easy. We had a bunch of connections between nodes and we first had to find all sets of 3 interconnected nodes (keeping only the sets that contain a node starting with "t"). I used an idea from last year and made a dictionary for every node containing all the other nodes it connects to. Then for each connection of two nodes I just had to find out whether both of them connect to a third, and add the trio to a list (being careful to avoid duplicates). For part 2 we had to find the largest set of nodes that all connect to each other. To do this I worked as follows: I took a connection between two nodes, then took the intersection of the sets of nodes each of them connects to and made a candidate set out of it. If these two nodes are part of the largest interconnected set, then that set would be a subset of this intersection (plus the two nodes themselves). Then I checked whether the candidate set was interconnected (via a function I defined) and, if not, which node breaks the interconnectedness; I removed that node and checked again, until eventually arriving at an interconnected set of nodes. If that was the largest set found so far, I kept it. When this process ends, we have the largest set of interconnected nodes. Overall, an easy day although part 2 required some thinking. I'm glad it didn't take long, since it allowed me to catch up after day 21 pushed me back 2 days.
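A sketch of the part 1 trio counting, assuming neighbours maps every node to the set of nodes it connects to:

```python
def count_t_trios(neighbours):
    trios = set()
    for a, adj in neighbours.items():
        for b in adj:
            for c in neighbours[b]:
                if c in adj:                           # a-b, b-c and c-a are all connections
                    trio = frozenset((a, b, c))
                    if len(trio) == 3 and any(n.startswith('t') for n in trio):
                        trios.add(trio)                # the frozenset removes duplicate orderings
    return len(trios)
```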
Day 24 was another difficult one. Part 1 was a bunch of gates (AND, OR and XOR) between inputs and between other gates, and we had to read the "output" gates after performing all the operations; it was fairly quick. Part 2 said that the gates should in fact implement a very big addition (with two inputs of 45 bits, x00 through x44 and y00 through y44); however, there were 4 pairs of "wires" between gates that had been swapped (so the addition wasn't quite correct). The puzzle was to find which 4 pairs had been switched. This proved to be harder than expected; I tried many ideas which failed. Some key ideas to solve the problem were the following: we have to find a way to identify which outputs were correct and, related to that, which gate outputs had the potential to be in the wrong place, and how to make this set as small as possible. The first idea to identify the correct outputs was checking their initial inputs: we had a function that could show, for each output, which initial inputs it depended on. For a multi-bit addition, it's fairly easy to find what the correct pattern should be (in this puzzle the output z00 would need to connect to x00 and y00, while z02 would need to connect to x00, y00, x02, y02 and twice to each of x01 and y01). Knowing this we could make a first estimate as to which gates are connected correctly (or rather which are wrong). Unfortunately, it's not entirely sufficient, since it only checks which inputs each output connects to but not how it connects (what combination of AND, OR, XOR gates). It probably would have been possible to write out exactly the correct pattern for 45-bit addition but I thought it was too complicated. So what I did was check, using actual inputs, whether each output behaved as expected. Unfortunately, for this method, sampling which inputs to use to test each output isn't very easy, so we had to "gamble" a little; particularly for output "08" I would test additions of the form
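A sketch of the helper mentioned above that shows, for each output wire, which initial inputs it ultimately depends on; gates is assumed to map a wire name to an (input1, op, input2) triple parsed from the input, and duplicates are kept on purpose, which is how x01/y01 end up counted twice for z02:

```python
def initial_inputs(wire, gates):
    if wire.startswith(('x', 'y')):
        return [wire]                         # reached an initial input bit
    a, _op, b = gates[wire]
    return initial_inputs(a, gates) + initial_inputs(b, gates)
```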
Day 25... the final day... I did not know what to expect from the final day, but it turned out much easier than expected. Like last year, the last day had only one part, so I expected it to be fairly tricky; however, it turned out to be quite simple. We had a bunch of patterns that we had to identify as either keys or locks, and then we had to identify all the key/lock combinations that fit. This was very easy, and even the description gave a hint: we just had to convert each key/lock into a height map and, for every key/lock combination, check whether the sum of their height maps does not surpass the total available height. There was nothing more to it; in fact, it was the only day where my answer worked immediately and I hadn't forgotten some minor detail.
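The whole check in essence, assuming each key and lock has been converted into a list of column heights with 5 units of space available between the top and bottom rows:

```python
def fits(lock_heights, key_heights, space=5):
    # a lock/key pair fits when no column's combined height exceeds the available space
    return all(l + k <= space for l, k in zip(lock_heights, key_heights))
```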
I wasn't sure whether I would do the puzzles this year too, but I'm glad I did; like last year, they were all pretty fun, clever, tricky, brainteasing... I also feel like I improved my skills since last year, as I finished all the puzzles by Christmas and reused some of last year's tricks. The most difficult day was easily day 21, which took me 3 days to solve, followed by day 24 which took me a bit more than a day. Then probably day 19, which took a few hours but was solved without needing to go to bed in the meantime. Overall, I had a lot of fun this year too and I might try again next year if I have time. I'm not sure if I'll revisit this repo, maybe to make some improvements or correct a typo in the README, but it's unlikely.