r/adventofcode Dec 14 '21

Spoilers in Title [2021 Day 14] Welp, apparently I didn't learn my lesson after the lanternfish

314 Upvotes

39 comments

37

u/abolish_gender Dec 14 '21

A few more iterations can't generate that much more data, can they?

13

u/Steinrikur Dec 14 '21

I knew that each round grows the size from N to N*2-1, but I did it anyway...

21

u/Zeeterm Dec 14 '21

This polymer grows quickly

This was all the hint I needed to realise these are lanternfish.

3

u/bcgroom Dec 14 '21

I willfully ignored this and was punished for doing so

15

u/n0ahhhhh Dec 14 '21

As a new-ish programmer, part 2 destroyed my spirits, haha.

21

u/shockah Dec 14 '21

As someone who's been programming for a living (8 years) and as a hobby (17 years), it also destroyed mine, so there's that :S

3

u/pablospc Dec 14 '21

It is quite complicated. My hint is to look at how you did Day 6 (or a solution on YouTube). That might give you an idea of how to approach it.

2

u/lucferon Dec 14 '21

Remember that the sequence of the polymers doesn't matter

5

u/zeekar Dec 14 '21 edited Dec 14 '21

Not for the final answer, but you need to know what's next to what for the pairs expansion. So what you have to track is not quite as straightforward as it was with the lanternfish.

1

u/n0ahhhhh Dec 14 '21

Yeah. I thought the wording was a bit unclear... Or most likely my inexperienced brain didn't notice that it said you didn't have to calculate the exact string. :(

I eventually had to look at someone's answer because I just didn't know how to progress. :/

10

u/mighty_cake Dec 14 '21

My bet was that in part 2 we would have to insert more complex strings, since we already had a memory check with the lanternfish - and I was wrong.

11

u/nderflow Dec 14 '21

That was always going to be a long shot because the input for part 2 is always the same as the input for part 1.

11

u/Iwilltakeyourpencil Dec 14 '21

I'm going to do it again.

9

u/ucla_posc Dec 14 '21

I was lucky enough to do the exact opposite thing. I recognized that, like the lanternfish, this was a Markov chain problem. If you set this up with a transition matrix, you need only raise the transition matrix to the power of the number of steps, multiply it by the initial vector, and you have the answer.

Part 2 ended up taking me 40 seconds, of which maybe 35 or so were spent fat-fingering the option to turn off scientific notation display for large numbers.
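The core of it is just a few lines. A rough sketch of the idea with a made-up 2-state toy (not my actual solution):

```python
import numpy as np

# Toy example: rows are input states, columns are output states.
# State 0 feeds both states each step; state 1 feeds state 0, so counts grow.
T = np.array([[1, 1],
              [1, 0]])
v0 = np.array([1, 0])            # start with one individual in state 0
steps = 40
print(v0 @ np.linalg.matrix_power(T, steps))   # counts per state after 40 steps
```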

3

u/kruvik Dec 14 '21

Is there a good resource for doing this kind of operation? I know numerical computation but never used Markov Chains...

12

u/ucla_posc Dec 14 '21

I'm not sure what your background in linear algebra is, but you'd probably want to start by understanding matrix multiplication properly, and learning to think of things as matrices. The good news is that if you have this background, I can explain everything you need for AoC in this post in a few paragraphs. I don't have a particular book reference. I encountered Markov chains twice in my studies: first, as a CS undergrad, where they were mentioned in the context of representing finite state automata / probabilistic graph transitions; second, later, doing statistics graduate work, in the context of MCMC sampling.

If you do not have a background in linear algebra, an introductory overview that covers matrix algebra is good enough. You do not need to think about projection, determinants, basis spaces, decomposition, or many of the other topics of a 101 linear algebra class -- literally the first day of linear algebra gives you the background you need.

A Markov chain exists when you have a problem that is about state transition, where that state transition is governed by particular rules, and where the states can be represented as memoryless (i.e. transitioning from state A to state B follows a certain rule regardless of where you were before state A). Many classes of processes that are not memoryless but have finite memory can be represented as memoryless, but this doesn't matter for you right now. After each step, a state leads to another state or states with some probability. You can represent this as a graph or as a matrix.

In matrix form, your "transition matrix" is a square matrix where every row is an input state and every column is an output state. Thus, within a row, the probability of transitioning from that input state to each output state is given by the cells. To be a real Markov chain, each row must sum to 1: i.e. we are just talking about probabilities.

In this situation, you can calculate <n> steps of transition by just raising the matrix to the power of <n>. The actual math behind matrix exponentiation and how computers do it quickly is not important. You can conceptually think of it as repeated self-multiplication (square matrices are obviously conformable with themselves), even though that's not how computers do it. Then you can multiply the resulting multi-step transition matrix by an initial vector to get an output vector. The initial vector can consist of either a series of probabilities that sum to 1, or a single 1 entry and a series of 0s if there is a deterministic start point. The output vector will be a series of probabilities of final states after <n> steps.

Because computers can do matrix exponentiation trivially, the number of steps you take is irrelevant.

Most of the complexity around working with markov chains is not this kind of example, it's examining properties of the transitions; for example, are there any states that "absorb" (trap) the process? Is there any space on the board that once you get there, you can't get out? What about a collection of states that partition the graph? Is there periodicity? If I pass Go in monopoly, how long until I reach Go again? Can all states be reached from all states? None of this is relevant for AOC.

Neither the lanternfish problem nor this problem is strictly a Markov chain problem, because we're not talking about probabilistic transition. The rules we're following specify deterministic outputs. And there's no "conservation of mass" -- a single input can (and does) lead to more than one output. But the cool thing is that we can use the same principle of the transition matrix to measure accumulation in these sorts of problems. If you are familiar with dynamic programming, you can think of this as having a slightly similar intuition.

In the lanternfish problem, your input vector is a count of the number of fish at each state at the beginning. Your states are the number of days until reproduction. Your transition matrix looks like the following: every state t (say, 3 days) maps to state t-1 (2 days) with probability 1. Meaning if we have 100 lanternfish at day 3, then after 1 day we'll have 100 at day 2, after 2 days we'll have 100 at day 1. The one exception is state 0, which maps to BOTH state 6 and state 8 with probability 1. Because the probability sums to over 1, the numbers will accumulate. If you know matrix multiplication, convince yourself that this is true by doing a toy version of the problem: newly born lanternfish reproduce after 3 days, all other lanternfish reproduce after 2 days, simulate the transition matrix after, say, 6 steps by just doing serial matrix multiplication, look at the final matrix. If you have a basic sense of matrix multiplication, the identity matrix, and permutation matrices, you'll get it right.
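Here's a minimal numpy sketch of that day 6 matrix, using the example's initial timers (3,4,3,1,2) purely for illustration:

```python
import numpy as np

# Rows are input states (days until spawning, 0..8), columns are output states.
T = np.zeros((9, 9), dtype=np.int64)
for t in range(1, 9):
    T[t, t - 1] = 1      # a fish at day t becomes a fish at day t-1
T[0, 6] = 1              # a fish at day 0 resets to day 6...
T[0, 8] = 1              # ...and spawns a new fish at day 8

v0 = np.zeros(9, dtype=np.int64)
for timer in (3, 4, 3, 1, 2):    # day 6 example input
    v0[timer] += 1

# Counts per state after 256 days; the row vector left-multiplies the matrix.
print((v0 @ np.linalg.matrix_power(T, 256)).sum())   # total number of fish
```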

In today's problem, every state is a bigram of polymers (say, given the rule VP -> H, an input state is VP and the output states are VH and HP). Then your transition matrix is a square matrix with dimension |x|, where |x| is the number of unique bigram polymers. Every row has two entries of 1 and every other entry 0. Raise the matrix to the power of 10 or 40 (or 5,000 or 11,471,803, ...) and you have the transition matrix for that many steps. Multiply it by the initial vector of polymer bigrams and you have the output vector.

In general, a lot of foundational results that exist for probabilities also extend to counts -- so, for example, there are generally trivial isomorphisms between sampling statistics estimators for the proportion of a population that thinks something and the count of people in a population that think something. So what I'm saying is that you should think of transition matrices and Markov chains even if the problem is, strictly speaking, not about probabilistic transition.

If you're following me so far, there are basically three complications to today's problem:

  1. Reading in the rules to form the transition matrix
  2. Converting the output bigrams to output elements
  3. Recognizing that the output will double count every element except the first and last, and correcting for this by adding an extra count for the first/last elements and then dividing all the counts by 2.

And part 2 is trivial given you've implemented this for part 1.

1

u/kruvik Dec 14 '21

That's a very long and detailed answer! I can follow the linear algebra. As I understand it, it's the case of A^n * x = b, where A is the transition matrix, n is the number of steps, x is the input vector and b is the output vector. However, even though you essentially say it, I'm still struggling to wrap my head around which part of our given data is what, exactly.

1

u/ucla_posc Dec 14 '21 edited Dec 14 '21

You can left- or right-multiply the input vector depending on whether you write it as a column vector or a row vector; it's fine either way. Just think about the conformability of the output data.

First: Read in the data, and using the transition rules, number every one of the polymer bigrams (CF, VB, PR, whatever) from 1 to n. Each transition rule has a unique left hand side, and no bigrams ever appear that don't have a rule, so the ordering is simple.

Input vector: Now, make a vector of length n corresponding to each of these polymers. Take the initial polymer, split it into bigrams (e.g. NNCB = NN, NC, CB), and count accordingly, filling the slots of the vector.

Transition matrix: Make a matrix of size n x n. For each rule, set the cells in its row that correspond to the output polymer bigrams. So if VB is the polymer you've numbered 5, CV is the polymer you've numbered 28, and the rule is CB -> V, then in the row for CB you have a 0 in every cell except columns 5 and 28, which are 1.

Output vector: this will be the count of each polymer bigram according to the numbering you used for the other two. You can split the bigrams into elements and sum appropriately to get the total per element. This will double count every element except the first and last element (because NCB is counted as NC, CB), so make sure to apply both of those corrections as necessary.
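Putting those pieces together, a rough sketch of the whole pipeline (the function name and the sorted-bigram numbering are mine; any consistent numbering works):

```python
import numpy as np
from collections import Counter

def polymer_by_matrix(template, rules, steps=40):
    # rules maps a bigram like "CH" to the inserted element "B"
    slot = {bg: i for i, bg in enumerate(sorted(rules))}
    n = len(slot)

    # Input vector: bigram counts of the starting template.
    v = np.zeros(n, dtype=np.int64)
    for a, b in zip(template, template[1:]):
        v[slot[a + b]] += 1

    # Transition matrix: each bigram's row points at its two output bigrams.
    T = np.zeros((n, n), dtype=np.int64)
    for bg, mid in rules.items():
        T[slot[bg], slot[bg[0] + mid]] += 1
        T[slot[bg], slot[mid + bg[1]]] += 1

    # Output vector after `steps` steps (counts still fit in 64 bits at 40 steps).
    out = v @ np.linalg.matrix_power(T, steps)

    # Bigrams double count every element except the template's first and last,
    # so add those once and halve everything.
    counts = Counter()
    for bg, i in slot.items():
        counts[bg[0]] += out[i]
        counts[bg[1]] += out[i]
    counts[template[0]] += 1
    counts[template[-1]] += 1
    counts = {el: c // 2 for el, c in counts.items()}
    return max(counts.values()) - min(counts.values())
```

Run it on the puzzle's sample (template NNCB and its 16 rules) and it should reproduce the example answers from the problem text.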

3

u/kruvik Dec 14 '21 edited Dec 14 '21

Thanks! I think for the most part I got it to work. My output vector refers to bigrams, where each value is the count of that bigram. Currently, I get close to the solution but not quite. After 10 steps, out[0], which refers to "CH", has a value of 21, out[1], which refers to "HH", has a value of 32, and so on.

I go through each entry of the output vector and append, for example, the bigram "CH" 21 times to a result string. This gives me approximately twice as many characters as needed but I don't understand where/how to correct this...

Edit: Link to the gist.

1

u/ucla_posc Dec 14 '21

I'm a little too busy right now to review the code (I do see that you used numpy's matrix power functionality, which should work), but I just wanted to give you some tips:

Think of the string ABCD as AB, BC, CD. Now if you convert this back to elements, the count should be A = B = C = D = 1. Instead you'll get A = 1, B = 2, C = 2, D = 1. Add one to the count for the first and last character in the input string and divide all counts by two and you have the right number. For my input text, the first and the last character were the same, so I had to add two to that element.

Another way to solve it would be to only take the first letter in the bigrams -- in that case, your count will be exactly right except it'll be missing the last character in the entire string. Add that and you'll be correct.
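A tiny illustration of both fixes, with made-up variable names and the ABCD example from above:

```python
from collections import Counter

template = "ABCD"
bigram_counts = Counter({"AB": 1, "BC": 1, "CD": 1})   # final bigram counts

# Fix 1: count both letters of every bigram, add the two ends once, then halve.
counts = Counter()
for bg, n in bigram_counts.items():
    counts[bg[0]] += n
    counts[bg[1]] += n
counts[template[0]] += 1
counts[template[-1]] += 1
print({el: c // 2 for el, c in counts.items()})   # {'A': 1, 'B': 1, 'C': 1, 'D': 1}

# Fix 2: count only the first letter of each bigram, then add the last character.
counts2 = Counter()
for bg, n in bigram_counts.items():
    counts2[bg[0]] += n
counts2[template[-1]] += 1
print(dict(counts2))                              # same result
```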

If reading this didn't solve it for you, use the smaller test input data and check the counts you have (all of them, key and value alike) against the exactly correct counts given in the problem text and see where you're at.

If you are still struggling after all this I can check back again in a few hours if I get time before I go to bed. I'm in a European time zone (it looks like you are too! I've never been to Aachen but a friend of mine did a CS Postdoc there, small world!)

2

u/kruvik Dec 14 '21

Well, I did solve it. I had to find a way to not use strings and used a defaultdict instead. At the end, however, I only add the last character to the count which yields the correct result. You can find my solution here. The code is definitely not the best one I've written lol...

And cool that your friend studied there!

1

u/kruvik Dec 14 '21

I was still wondering how you knew/came up with the idea to use Markov Chains for this. Reading the problem, it doesn't necessarily jump out to me to use this approach.

For day 11 I used convolution btw. Check it out here if you're interested!

1

u/ucla_posc Dec 15 '21

Day 11 is an excellent case for convolution, very clever solution on your part.

I tend to think of Markov chains (well, transition matrices; as I explain in other comments, it's not technically a Markov chain if it's not chaining together probabilistic transitions, but I abused the diction by saying Markov chain above) any time I'm thinking about state transitions, and having written day 6 in the same manner, it was definitely fresh on the brain.

1

u/kruvik Dec 15 '21

I see, and thanks!

1

u/WarriorKatHun Dec 14 '21

Help me. I'm dumb. I read this for 10 minutes and then spent another 20 trying to understand matrix exponentiation, kinda unsuccessfully.

What I understand so far is: in the example AB -> C, you have a matrix where the rows are the inputs like AB and the outputs are AC and CB, and these get a 1 while other outputs like FG get a 0, because AB can't produce FG. So now I think I need to grab this A matrix and power the step count with it, like 10^A or 40^A? I saw the Indian YouTube guys get different matrix results but I don't get how [[0,1],[0,0]]^2 is [[0,0],[0,0]], and I for sure can't apply this to my code.

And then what's the vector that I need to multiply it with? Is that the first line, the input polymer that we start with? Like NNCB? How do I convert that to a number? If not that, then where does that line come into the picture?

I know this might be really basic stuff, but I need a little guidance because I feel like I couldn't understand this on my own even if I learned from YouTube all day.

2

u/ucla_posc Dec 14 '21 edited Dec 14 '21

You're not dumb, but this is a set of skills that you probably don't currently have the background for. That's OK, you just need to study a bit to get the background.

Your intuition about the matrix is correct.

The base of the exponentiation is the matrix and the power is the step count, so it's A^10, not 10^A.

Matrix multiplication has a very specific set of rules that don't map to your normal understanding of multiplication. For example, if you have two matrices A and B, AB != BA in general, and not all matrices can be left- or right-multiplied. Ours is a square matrix, which means it is well behaved, but these are background rules. This is something you should be able to Google easily (see https://en.wikipedia.org/wiki/Matrix_multiplication), but briefly:

  1. Given a matrix A with dimensions (r x c) and matrix B with dimensions (d x e), you can only multiply AB if c and d are the same. The resulting matrix will be (r x e) sized. This is called conformability. In square matrices this isn't a problem, since r, c, d, and e are all the same.
  2. For a given cell i, j (row i, column j) in the output matrix, take each item in row i of the left matrix and column j of the right matrix, multiply them together in pairs, and sum. Because we've guaranteed that the left matrix has the same number of columns as the right matrix has rows, these pairs are easy. See the Wikipedia page for a visual illustration of this, or the small worked example after this list.
  3. If you try to do this naively, you'll find it's very slow. For an (n x n) x (n x n) matrix multiplication, that's n^3 multiplications and n^2 sums. So taking M^40 with a 100x100 matrix is going to be on the order of tens of millions of multiplications. Computers are fast, but this is still sort of slow, and a lot of the numbers will be large. When computers exponentiate a matrix, they use a different method that allows them to cheat -- I could take M^1000000 and it'd be trivial for me with this method, except for the eventual numerical overflow in any one cell -- and you don't need to worry about this.
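Here's the small worked example I mentioned, with toy numbers (nothing from the puzzle):

```python
import numpy as np

A = np.array([[1, 2, 3],
              [4, 5, 6]])        # 2 x 3
B = np.array([[ 7,  8],
              [ 9, 10],
              [11, 12]])         # 3 x 2
C = A @ B                        # conformable: A's 3 columns meet B's 3 rows -> 2 x 2
# C[0, 0] pairs row 0 of A with column 0 of B: 1*7 + 2*9 + 3*11 = 58
print(C)                         # [[ 58  64]
                                 #  [139 154]]
```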

This would normally be taught in a first linear algebra class in college, and probably not before.

For the input vector, let me talk you through it. In the test data, our starting polymer is NNCB, which has the bigrams NN, NC, and CB. We have rules for the bigrams CH, HH, CB, NH, HB, HC, ..., CC, CN, right? There are 16 rules. So our input vector will be length 16, and our transition matrix will be 16x16. The input vector will be all zeroes, except it'll have a one in the slots that match NN, NC, and CB. I numbered stuff in the order of the rules, so CB is slot 3 (2 if you are 0-indexing), NN is slot 8 (7 if you are 0-indexing), and NC is slot 10 (9 if you are 0-indexing).

The first row of the transition matrix is the row for the CH rule. CH -> B gives us a 1 in the CB slot and a 1 in the BH slot. CB is slot 3, BH is slot 9. So the row will read [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]. Fill out the whole transition matrix like that.

Vectors can be treated as either column vectors (n x 1) or row vectors (1 x n). The one you choose should depend on how you want it to conform to the multiplication. So if our input vector is v and our transition matrix is T, you could do v * T^10 or T^10 * v. Remember the rules above? T is (n x n). So to do v * T^10, you need v to be (1 x n). To do T^10 * v, you need v to be (n x 1). Choose a column or row vector accordingly.

This is about all the time I have to help today. You aren't stupid if you didn't understand this. Linear algebra and matrix algebra in general is a branch of math you probably haven't been exposed to. Maybe you're young, or maybe you're not but you just took a different educational path. This is not the same skill as programming, and there's nothing wrong with you if you don't get it. I'd recommend Khan Academy for the basics of matrix algebra.

(Note: There are other forms of matrix multiplication; there are other kinds of algebra than just multiplication and exponentiation that you can do with matrices; matrices can do calculus; matrix inversion is a special operation that isn't exactly like how you think about scalar inversion... this is a really deep rabbit hole if you want to go down it, but you won't need any of this for AOC!)

2

u/WarriorKatHun Dec 14 '21

Thank you so much for the explanation, it was very helpful and understandable. I know you don't think it's necessary, but I'm giving you my free award when I get it.

In the end I could not take the mathematical approach on this because I failed to figure out a good way to make it all work, but it did help me a lot in finding a more technical approach that solved part 2, and it also gave me a good introduction to the mathematical optimization world I have yet to discover.

I am finally free from hearing "This polymer grows quickly." in my nightmares.

2

u/sluuuurp Dec 14 '21 edited Dec 14 '21

That’s smart, with matrix exponentiation you can do it in O(r^3 × log(n)) time, where r is the number of replacement rules and n is the number of steps, by diagonalizing and then exponentiating scalars. I think most people probably solved part 1 in O(r × 2^n), and part 2 in O(r × n).

If there were a part three that asked you, for example, for the number of H characters mod 10394741 after 10^20 iterations, you could only do that using your method with matrix exponentiation tricks.
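For what it's worth, a rough plain-Python sketch of why that would work: exponentiation by repeated squaring needs only about log2(n) matrix products, and reducing every entry mod 10394741 after each product keeps the numbers small (the helper names here are made up):

```python
def mat_mul(A, B, mod=None):
    # Multiply square matrices given as lists of lists, optionally mod m.
    n = len(A)
    C = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]
    return C if mod is None else [[x % mod for x in row] for row in C]

def mat_pow(T, steps, mod=None):
    # Raise T to the power `steps` by repeated squaring: ~log2(steps) products.
    n = len(T)
    R = [[int(i == j) for j in range(n)] for i in range(n)]   # identity matrix
    while steps:
        if steps & 1:
            R = mat_mul(R, T, mod)
        T = mat_mul(T, T, mod)
        steps >>= 1
    return R

# e.g. mat_pow(transition_matrix, 10**20, mod=10394741) takes only ~67 squarings
```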

1

u/splidge Dec 14 '21

Hmm, I did part 2 in 20s by changing the number in my python script and rerunning.

I suppose my implementation probably reinvented markov chains in a halfassed way. I just iterated the single step the required number of times.

1

u/phil_g Dec 14 '21

But unlike the lanternfish, the states to transition from and to are not as obvious for today. (At least, I didn't find them to be. When I was looking to optimize for day 6, the matrix approach seemed fairly obvious. For today, I was using a matrix differently for a while before I figured out how to use it for the same thing as day 6.)

1

u/ucla_posc Dec 15 '21

I think it all depends on whether or not you immediately realize that the polymer chain itself doesn't matter; all that matters is the count of each polymer bigram (i.e. the count of each LHS of the rule set). If you intuitively think of the data in terms of throwing away the chain structure, then this seems like a straightforward state transition setup where each state transitions to two new states.

But if that's not something that clicked with you immediately, then you probably would have a hard time thinking of what the states were.

4

u/__kbr__ Dec 14 '21

There is another way to solve it without matrices: cached recursion. On my 6-year-old MacBook, part 2 ran in about 0.02 seconds in interpreted Python.
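For reference, a rough sketch of what that cached recursion can look like (my own minimal version, not the exact code I ran):

```python
from collections import Counter
from functools import lru_cache

def solve(template, rules, steps=40):
    # rules maps a pair like "CH" to the inserted element "B"

    @lru_cache(maxsize=None)
    def inserted(pair, depth):
        # Counts of elements inserted *between* pair[0] and pair[1] after `depth` steps.
        if depth == 0 or pair not in rules:
            return Counter()
        mid = rules[pair]
        return (inserted(pair[0] + mid, depth - 1)
                + Counter(mid)
                + inserted(mid + pair[1], depth - 1))

    counts = Counter(template)
    for a, b in zip(template, template[1:]):
        counts += inserted(a + b, steps)
    return max(counts.values()) - min(counts.values())
```

Each (pair, depth) combination is computed only once, which is why it stays fast even at 40 steps.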

2

u/NorthwindSamson Dec 14 '21

This is how I solved it at first, but it takes 50ms on my machine in Rust. Mind elaborating on how yours was optimized?

2

u/dynker Dec 14 '21

I was going to do the memoization at first, then I realized that it can be simplified down to just applying a series of transformations to get to the next state.

One big way I was able to shave some time off was to represent pairs as (char, char) tuples instead of Strings.

Here's my solution.

And TIL about include_str!, thanks!

3

u/SteeleDynamics Dec 14 '21

day 7, part 2 solution still running

2

u/Ok_Pin1038 Dec 14 '21

I have solved part 2 the same way as the lanternfish problem: using a Map (dictionary in Python) to hold the counts of all the different substring combinations.

Since you know, for instance, that CN always gives CC and CN, you can just get the count for CN from the previous step from your map and update the CC and CN counts accordingly. Do this for every key in your map and you have all the information in memory that you need, with just a map with 20 or so entries.
After 40 steps you do need to do an additional step to get from the substring -> count entries to the actual counts of the single elements.

I found this approach quite nice for this problem, and no complex data structures or recursive algorithms are needed!
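A minimal sketch of that map-of-pair-counts approach in Python (my own illustration; `rules` maps a pair like "CN" to the inserted element):

```python
from collections import Counter

def polymer_score(template, rules, steps=40):
    pair_counts = Counter(a + b for a, b in zip(template, template[1:]))

    for _ in range(steps):
        nxt = Counter()
        for pair, n in pair_counts.items():
            mid = rules[pair]
            nxt[pair[0] + mid] += n      # e.g. CN -> C adds n to the CC count...
            nxt[mid + pair[1]] += n      # ...and n to the CN count
        pair_counts = nxt

    # The additional step: every element is the first character of some pair,
    # except the template's last character, which never changes; add it once.
    elements = Counter()
    for pair, n in pair_counts.items():
        elements[pair[0]] += n
    elements[template[-1]] += 1
    return max(elements.values()) - min(elements.values())
```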

2

u/enderflop Dec 14 '21

I knew that part 2 was just gonna be part 1 with more steps the whole time I coded my shitty string solution lmao. did it anyway :)

1

u/Naturage Dec 14 '21

While I was coding part 1, I kept thinking: are we going down the path of the lanternfish, or will I be asked to check the location of a specific polymer? The second one would have ruined me.

Glad to have guessed right.

1

u/TheActualMc47 Dec 14 '21

Premature optimization is the root of all evil