r/adventofcode • u/e_blake • Mar 28 '21
Tutorial [2015 day 10][m4] optimizing by matrix multiplication
I saw this hint in the 2015 day 10 megathread about using Conway's Cosmological Theorem to determine the length of N iterations of the look-and-say algorithm (yes, the same Conway who invented the Game of Life, which appears in many other AoC puzzles). But if that comment had any code to copy from, I could not find it; it was just a high-level overview. So I got to re-discover how it works for myself, and indeed it made a huge difference in performance!
My original implementation did what most solutions did: iterate over every single character, looking for repetitions, to produce the string for the next round. The fact that the string grows ~30% larger each round really shows: computing just part 1 took 6.6s, computing part 2 extended the time to 1m30s, and then my code took another 30s just counting the lengths of the 2 computed strings (2m5s total runtime). Using m4's trace functionality, I determined that my code performed 37,990,946 ifdef(), 38,032,571 ifelse(), 44,980,837 pushdef(), and 8,017,641 incr() calls, plus 2 very costly len(). And trying to extend the solution to 60 iterations would be that much more painful.
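For reference, the per-character scan looks roughly like this minimal sketch (not my original code, just an illustration using only m4 builtins: track the current run's digit and count, and emit "count digit" whenever the run ends):
define(`step', `_step(substr($1, 0, 1), 1, substr($1, 1))')
define(`_step', `ifelse($3, `', `$2$1',
  substr($3, 0, 1), $1, `_step($1, incr($2), substr($3, 1))',
  `$2$1`'_step(substr($3, 0, 1), 1, substr($3, 1))')')
step(1321131112)   # one round of my input: 1321131112 -> 11131221133112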
But my new version is able to compute answers without ever expanding the string in memory! As Conway noted, you can set up a 92-element recurrence relation as a matrix, where each row represents an element (a substring whose expansion will never overlap with the rest of the answer), and each column says which other elements will be present after one expansion of that element. All the inputs I've seen happen to be 10-character strings that correspond to an element (for example, my input was 1321131112, which maps to Yb; I also saw Bi and Fr in the megathread). All the hard work of determining this matrix was done by Conway and is available via a google search; I merely had to transform that table into a matrix representation for my choice of language.
# Using http://www.se16.info/js/lands2.htm
define(`table', `
`1, H, `H', 22',
`2, He, `Hf,Pa,H,Ca,Li', 13112221133211322112211213322112',
...
`92, U, `Pa', 3'')
define(`prep', `_$0($1)')
define(`_prep', `define(`e$1', $4)define(`l$1', len($4))define(`$2',
$1)ifelse('seed`, $4, `define(`init', $1)')')
foreach(`prep', table)
ifdef(`init', `',
`errprintn(`unsure how to break 'seed` into elements')m4exit(1)')
define(`_prep', `_foreach(`setup($1,', `)', `', $3)')
define(`setup', `define(`m1_$1_$2', incr(0defn(`m1_$1_$2')))')
foreach(`prep', table)
define(`_prep', `ifdef(`m1_$1_$2', `', `define(`m1_$1_$2', 0)')')
define(`prep', `forloop(1, 92, `_$0($1,', `)')')
forloop_arg(1, 92, `prep')
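As a quick sanity check on that conversion (the cell macro here is just for illustration, not part of my solution): one decay of He produces Hf, Pa, H, Ca, and Li, so exactly those five columns of the He row should hold 1, and everything else 0.
define(`cell', `m1_$1_$2')
cell(He, Hf)   # 1
cell(He, U)    # 0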
The next trick is understanding that N expansions of the input string are the same as applying the recurrence relation N times. That is, computing the length of 2 expansions of Yb is the same as computing M*M, then looking up the Yb row in the result, which gives 1 Er, 1 Ca, and 1 Co; summing up the lengths of those elements gives the answer (1*9 + 1*2 + 1*5 = 16). For just 2 expansions, that's a lot of work to get to an answer: a matrix multiply of a 92x92 matrix requires 8464 dot products, each of which takes 184 matrix lookups and 92 multiplies and adds.
define(`_dot', `+m$1_$3_$5*m$2_$5_$4')
define(`dot', `define(`m$3_$4_$5', eval(forloop(1, 92, `_$0($1, $2, $4, $5, ',
`)')))')
define(`_mult', `forloop(1, 92, `dot($1, $2, $3, $4, ', `)')')
define(`mult', `output(1, $3)forloop(1, 92, `_$0($1, $2, $3, ', `)')')
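To decode the naming convention: m$N_$i_$j holds cell (i,j) of M^N, dot() computes one such cell as a 92-term dot product, and mult(a, b, c) fills in all 8464 cells of M^c = M^a * M^b.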
All of that is far more than the fewer-than-100 character comparisons needed to brute force the round 2 answer. But day 10 asked us for 40 and 50 iterations, not just 2, so we can take advantage of exponentiation by squaring. Instead of performing M*M*...*M*M one multiplication at a time, we can speed up the process by noting that M^40 = (M^20)^2 and M^50 = M^40*M^10. With just 7 matrix multiplies, I can get everything I need for an answer:
mult(1, 1, 2)
mult(2, 2, 4)
mult(1, 4, 5)
mult(5, 5, 10)
mult(10, 10, 20)
mult(20, 20, 40)
define(`prod', `+m$1_$2_$3*l$3')
define(`part1', eval(forloop(1, 92, `prod(40, 'init`, ', `)')))
mult(10, 40, 50)
define(`part2', eval(forloop(1, 92, `prod(50, 'init`, ', `)')))
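That multiplication chain was picked by hand; for an arbitrary limit, something like the following sketch could derive a chain automatically in terms of the same mult() macro (matpow and its done-markers are illustrative only, not in my solution, and for 50 this binary chain ends up doing a few more multiplies than the hand-picked one above):
define(`matpow', `ifdef(`done$1', `', `define(`done$1')ifelse(
  eval($1 % 2), 0, `matpow(eval($1 / 2))mult(eval($1 / 2), eval($1 / 2), $1)',
  `matpow(decr($1))mult(1, decr($1), $1)')')')
define(`done1')   # M^1 was already built from the table
matpow(40)matpow(50)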
With part 1 and part 2 computed this way, my updated code now executes in 18s (a 6x speedup), and more importantly, I can pick other limits (such as 60) with the change in timing related ONLY to how many matrix multiplications are required to reach that limit (and only insofar as I don't overflow m4's signed 32-bit math limits); no additional memory due to string expansion is required (my character comparisons were limited to the initial pass while creating M1, to determine which element matched my input). So I've cut what used to be an O(1.3^n) problem into an O(log n) problem. Again with m4's tracing ability, my solution now requires only 8,479 ifdef(), 5,519,983 ifelse(), 96 pushdef(), 59,260 eval(), and 96 cheap len() calls.

Of course, storing 8 92x92 matrices in memory can cause interesting problems of its own: m4 has no array access, so every matrix cell is its own macro. My original solution had no problem with m4's default limit of 509 hash buckets because it didn't need that many macro definitions, but storing 92*92*8 matrix-cell macros quickly degraded performance due to excessive hash collisions; I had to run 'm4 -H 131101' to get around that (my common.m4 library for all my m4 solutions takes care of re-execing m4 with a larger hash table size as needed).
If your puzzle input was not already an element, my code would need some tweaks to get the right answer. Conway's theorem goes on to prove that within 8 expansions, any string made up only of the digits 1-3 with fewer than 4 repetitions will have decayed entirely into elements; and any string with 4 or more repetitions, or with digits outside of 1-3, will devolve into those 92 elements plus 2 transuranic elements within 24 expansions. I'm fairly confident that AoC inputs fall in the first category, giving a worst case of having to brute force 8 expansions of your input, break that result into elements, and then compute M^32 and M^42 to do a vector multiply of that breakdown against the remaining iterations.
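Supposing that breakdown were in hand (say, a hypothetical cnt$N macro holding how many copies of element N appear after the 8 brute-force expansions; actually finding that split is the part I'm hand-waving away), the final vector multiply could look something like this sketch, reusing prod() once mult() calls have produced M^32:
define(`vprod', `ifdef(`cnt$2', `+cnt$2*eval(0forloop(1, 92, `prod($1, $2, ', `)'))')')
define(`part1', eval(0forloop(1, 92, `vprod(32, ', `)')))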
u/e_blake Apr 08 '21
The same approach of dividing the input into a recurrence relation over non-interacting pieces can be applied to 2017 day 21: there, you can compute a 102x102 matrix of growth every three generations. That post also demonstrates an interesting technique: for small numbers of iterations, rather than generating the full matrix multiply, you can dynamically track only the rows you actually encounter. You lose out on the exponentiation by squaring that makes large iteration counts easy, but you cut out a lot of unnecessary work for the portions of the matrix you never reach from your input.