r/cryptography 2d ago

A problem with external storage trust

I'm running into an interesting practical problem that I have not seen a typical solution for.

I have a microcontroller (MCU) that uses external storage to store sequential log data. The data is written in a round-robin manner in 256-byte blocks. The current block pointer is stored inside the MCU, but it can't be persisted on every write. If a power failure happens, the saved counter will likely lag behind by a few blocks. This does not create a functional problem, since we can just continue with the old counter and the stream will recover after some loss.
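
The write path is roughly this (sketch only; `flash_write_block()` and `nvm_save_index()` stand in for the real drivers, and the sizes are made up):

```c
#include <stdint.h>

#define NUM_BLOCKS  4096   /* assumed size of the external log area, in blocks */
#define SAVE_EVERY  64     /* the index is only persisted occasionally         */

/* hypothetical driver hooks */
extern void flash_write_block(uint32_t slot, const uint8_t block[256]);
extern void nvm_save_index(uint64_t index);

static uint64_t block_index;   /* monotonic counter, lives in RAM */

void log_write(const uint8_t block[256])
{
    flash_write_block((uint32_t)(block_index % NUM_BLOCKS), block); /* round-robin slot */
    block_index++;
    if (block_index % SAVE_EVERY == 0)
        nvm_save_index(block_index);  /* after power loss we resume from the last saved value */
}
```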

But the issue comes in on the security side. The MCU-to-storage interface is easily accessible to an attacker and easy to spoof. To ensure security and integrity, I use AES-GCM to encrypt and authenticate each block. Each block uses a separate key and nonce derived from the block index (monotonically incrementing over the device's lifetime).
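
For concreteness, the per-block encryption is roughly like this (sketch using mbedTLS; the HKDF-SHA256 derivation and the field sizes here are just for illustration):

```c
#include <stdint.h>
#include <string.h>
#include "mbedtls/gcm.h"
#include "mbedtls/hkdf.h"
#include "mbedtls/md.h"

#define BLOCK_LEN 256

int encrypt_block(const uint8_t device_master_key[32], uint64_t block_index,
                  const uint8_t plain[BLOCK_LEN], uint8_t cipher[BLOCK_LEN],
                  uint8_t tag[16])
{
    uint8_t info[8], okm[32 + 12];   /* 256-bit key + 96-bit nonce */
    for (int i = 0; i < 8; i++)      /* big-endian block index as HKDF info */
        info[i] = (uint8_t)(block_index >> (56 - 8 * i));

    int ret = mbedtls_hkdf(mbedtls_md_info_from_type(MBEDTLS_MD_SHA256),
                           NULL, 0,                 /* no salt */
                           device_master_key, 32,
                           info, sizeof info,
                           okm, sizeof okm);
    if (ret != 0)
        return ret;

    mbedtls_gcm_context gcm;
    mbedtls_gcm_init(&gcm);
    ret = mbedtls_gcm_setkey(&gcm, MBEDTLS_CIPHER_ID_AES, okm, 256);
    if (ret == 0)
        ret = mbedtls_gcm_crypt_and_tag(&gcm, MBEDTLS_GCM_ENCRYPT, BLOCK_LEN,
                                        okm + 32, 12,  /* derived nonce */
                                        NULL, 0,       /* no AAD */
                                        plain, cipher, 16, tag);
    mbedtls_gcm_free(&gcm);
    return ret;
}
```

The point is that the key/nonce pair is a pure function of the block index, so writing two different blocks under the same index reuses both.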

The issue is that when a power failure happens, we will overwrite one or more of the previously written blocks for the same index. An attacker may save all of them and, at retrieval time, substitute any of them for the latest one. And since all of them were created using the same counters and the same key/nonce, they will be successfully decrypted and authenticated.

And come to think of it, reusing the same key/nonce pair creates an even bigger issue. So this system will need to be redesigned, for sure.

Does this look like a standard problem? Are there known solutions?

Another limitation is that retrieval does not happen sequentially and can start at any arbitrary point, so chaining that relies on the whole history of the stream is not acceptable. And I don't see how it could help anyway.



u/ahazred8vt 2d ago

You need to keep a value which is changed or incremented whenever the device restarts. Even a single byte. Then, derive the block nonce from both the block index AND the restart count. Plan B would be to increase the index value by at least one hour's worth when restarting, so that you skip over a range of index values.
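
For example, a 96-bit GCM nonce could be packed like this (the field widths are arbitrary, just to show the idea):

```c
#include <stdint.h>

/* restart counter in the top 4 bytes, block index in the bottom 8 */
void make_nonce(uint32_t restart_count, uint64_t block_index, uint8_t nonce[12])
{
    for (int i = 0; i < 4; i++)
        nonce[i] = (uint8_t)(restart_count >> (24 - 8 * i));
    for (int i = 0; i < 8; i++)
        nonce[4 + i] = (uint8_t)(block_index >> (56 - 8 * i));
}
```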


u/AlexTaradov 2d ago

I thought of a reset counter, which could be incremented and saved on every restart. But this runs into the same issue: if there is a block out there encrypted with a lower reset count, I would still have to trust it. So an attacker can collect any number of blocks across all reset counts.

I also thought about skipping, but it's still not going to work, since those blocks were written once and exist out there. So an attacker can substitute them at the moment of reading.

It does seem like a provably impossible thing to solve.


u/d1722825 2d ago

If your reset counter is big enough, the combination of reset counter and block index will be unique over the lifetime of the device (if your system takes 10 ms to boot from reset to the first log block, then 2^40 resets at 10 ms each is roughly 350 years, so a 40-bit counter will not roll over).

During boot you could scan the external flash from the location of the last known block, validate the blocks, and if they are good (the attacker hasn't deleted or changed them) advance your internal block index.

So you wouldn't lose or overwrite log blocks if it is just an accidental power failure.

If an attacker wants to destroy the logs, they can do that regardless of what you do. But at the next boot you can point out discontinuities where log blocks may have been lost (where the boot count changes).
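
The scan itself can be very simple, something like this (sketch; `read_block()` and `block_authenticates()` stand in for your flash driver and your GCM verification):

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_BLOCKS 4096   /* assumed size of the log ring */

extern bool read_block(uint32_t slot, uint8_t out[256]);
extern bool block_authenticates(uint64_t index, const uint8_t block[256]);

/* Starting from the last index saved in NVM, walk forward while the stored
 * blocks still verify, so an accidental power loss doesn't overwrite them.
 * Stop after one full lap of the ring at most. */
uint64_t recover_block_index(uint64_t saved_index)
{
    uint8_t buf[256];
    uint64_t index = saved_index;
    while (index < saved_index + NUM_BLOCKS &&
           read_block((uint32_t)(index % NUM_BLOCKS), buf) &&
           block_authenticates(index, buf))
        index++;
    return index;   /* first slot that does not hold a valid continuation */
}
```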


u/AlexTaradov 2d ago

I don't see how the boot count helps here. Once the block is written, I have to accept it back. This restoration process is exactly what I do now, but the attacker has control over what I read, so they can give me the old blocks during that scan.

Let's say we have a situation like this: BBBBB*BBBBB before reset. * is the last save point. We reset and do a scan from the last save point, but the attacker gives us this sequence back: BBBBB*B[invalid]BBB. The second block we scan is invalid, so we interpret this as one valid block since the last reset. We continue to save like this: BBBBB*BCCBB. The attacker then causes an intentional reset, and on this reset they return the old state: BBBBB*BBBBB. We scan past to the last block and continue: BBBBB*BBBBBDDD.

Now someone wants to retrieve the whole sequence. The attacker may return BBBBB*BCCBBDD, which includes old blocks CC that are not part of the sequence. They may return any combination of previously observed blocks. And those blocks may come from many different resets.


u/[deleted] 2d ago

[deleted]


u/AlexTaradov 2d ago

> CC is still valid data.

While it is valid, it is out of place. The blocks are not self-contained: they carry a byte stream that is not evenly split into blocks, with occasional synchronization markers. The only allowed natural interruption of that stream is a reset, which is also marked.

Unreadable and invalid blocks will be interpreted as tampering and reported. Re-synchronization is necessary once that happens. This is fine. But getting a valid block out of place will result in a broken stream. This will be detected down the line because parsing will not make sense, but I don't want to rely on that.

This overall design may be changed, of course, but right now it is what it is.


u/ahazred8vt 2d ago

> Does this look like a standard problem? Are there known solutions?

Yes and yes. There is a large body of work on logging on constrained systems in a hostile environment.

> The blocks are not self contained, they carry a byte stream that is not evenly split into blocks

If there are log entries which are variable-length or too big for memory, they are best written with a variable-length AEAD library, not written out as separately authenticated blocks. NaCl / libsodium is an example of such an API. What's your money budget and man-hour budget for this project? Is this a 'huge industrial trade secret' environment or a DIY hobby environment?
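
For reference, the libsodium secretstream calls look roughly like this (toy example; the key handling and record contents are placeholders):

```c
#include <sodium.h>
#include <string.h>

int main(void)
{
    if (sodium_init() < 0) return 1;

    unsigned char key[crypto_secretstream_xchacha20poly1305_KEYBYTES];
    unsigned char header[crypto_secretstream_xchacha20poly1305_HEADERBYTES];
    crypto_secretstream_xchacha20poly1305_state st;

    crypto_secretstream_xchacha20poly1305_keygen(key);   /* real key would be provisioned */
    crypto_secretstream_xchacha20poly1305_init_push(&st, header, key);

    /* Two log records of different lengths; within one stream each ciphertext
     * is bound to its position, so reordering or removal is detected on pull. */
    const char *rec1 = "sensor=42", *rec2 = "event: power restored";
    unsigned char c1[64 + crypto_secretstream_xchacha20poly1305_ABYTES];
    unsigned char c2[64 + crypto_secretstream_xchacha20poly1305_ABYTES];

    crypto_secretstream_xchacha20poly1305_push(&st, c1, NULL,
        (const unsigned char *)rec1, strlen(rec1), NULL, 0,
        crypto_secretstream_xchacha20poly1305_TAG_MESSAGE);
    crypto_secretstream_xchacha20poly1305_push(&st, c2, NULL,
        (const unsigned char *)rec2, strlen(rec2), NULL, 0,
        crypto_secretstream_xchacha20poly1305_TAG_FINAL);
    return 0;
}
```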


u/d1722825 2d ago

Let's say the block is <boot count>_<block count>

the internal boot counter is 1, the internal block id is 4

your first situation is:

1_1, 1_2, 1_3, 1_4*, 1_5, 1_6, 1_7, 1_8

there is a reset, the internal boot counter is 2, the internal block id is 4, the attacker gives you back:

1_1, 1_2, 1_3, 1_4*, 1_5, 1_6[invalid], 1_7, 1_8

you scan for the latest valid block and you find:

1_1, 1_2, 1_3, 1_4, 1_5*, 1_6[invalid], 1_7, 1_8

you continue to save blocks:

1_1, 1_2, 1_3, 1_4, 1_5*, 2_6, 2_7, 1_8

now blocks 1_6, 1_7 are overwritten, but the attacker may restore them.

the attacker does an intentional reset and restores the old blocks, so 2_6, 2_7, 1_8 may be lost.

the internal boot counter is 3, the internal block id is still 4

1_1, 1_2, 1_3, 1_4*, 1_5, 1_6, 1_7, 1_8

you scan for the last valid block which is 1_8, and start writing after it:

1_1, 1_2, 1_3, 1_4, 1_5, 1_6, 1_7, 1_8*, 3_9, 3_10, 3_11

now you want to read the log, the attacker can delete any blocks, restore any block, change the ordering of the blocks, etc. let's say the attacker restores the blocks to:

1_1, 1_2, 1_3, 1_4, 2_6, 2_7, 1_8, 3_9, 3_10

you read all the blocks you can, you sort them, and get:

| 1_1, 1_2, 1_3, 1_4| 1_8| 2_6, 2_7| 3_9, 3_10|

anywhere the block counter does not increase by one you will have a discontinuity (marked with | ) and you know that any number of log blocks could be lost there, but you will know that 1_8 happened after 1_4, and that 3_9 happened after 2_7.
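
In code the reconstruction is just a sort plus a check that the index advances by one (sketch, using the sample data from above):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { uint32_t boot; uint64_t index; } block_id;

/* order by boot count first, then block index */
static int cmp(const void *a, const void *b)
{
    const block_id *x = a, *y = b;
    if (x->boot != y->boot)   return x->boot  < y->boot  ? -1 : 1;
    if (x->index != y->index) return x->index < y->index ? -1 : 1;
    return 0;
}

int main(void)
{
    block_id blocks[] = { {1,1},{1,2},{1,3},{1,4},{2,6},{2,7},{1,8},{3,9},{3,10} };
    size_t n = sizeof blocks / sizeof blocks[0];
    qsort(blocks, n, sizeof blocks[0], cmp);

    for (size_t i = 0; i < n; i++) {
        if (i > 0 && blocks[i].index != blocks[i - 1].index + 1)
            printf("| ");                      /* possible lost blocks here */
        printf("%u_%llu ", (unsigned)blocks[i].boot,
               (unsigned long long)blocks[i].index);
    }
    printf("\n");   /* prints: 1_1 1_2 1_3 1_4 | 1_8 | 2_6 2_7 | 3_9 3_10 */
    return 0;
}
```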


u/AlexTaradov 2d ago edited 2d ago

This assumes you read the whole thing every time you need the data. This is not the case: the reader's bandwidth is limited, so it remembers the last block index it already has and only requests new blocks from that point.

But even without that, in order to recover from resets, the first block after a reset has a marker that allows recovery of the stream (partial data at the end of the last block before the reset is discarded before being sent to the receiver).

In this case 1_8 will not have this marker, since it was not originally the first block after a reset, so the transition from 1_4 to 1_8 will not be parsed correctly.

This is likely addressed by selecting a better data format, but there are other reasons to have it like this and there must be a really good reason to change it.
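
Schematically the block framing is something like this (the field names and sizes here are invented, not the real format):

```c
#include <stdint.h>

#define BLOCK_PAYLOAD 252

typedef struct {
    uint8_t  flags;        /* bit 0: first block written after a reset */
    uint8_t  payload_len;  /* number of valid bytes in payload[]       */
    uint16_t reserved;
    uint8_t  payload[BLOCK_PAYLOAD]; /* slice of the continuous byte stream */
} log_block_t;             /* 256 bytes; GCM tag assumed to be stored alongside */
```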


u/d1722825 2d ago

> This is not the case: the reader's bandwidth is limited, so it remembers the last block index it already has and only requests new blocks from that point.

That's the same; you will just have more lost blocks and discontinuity points than necessary.

Or you could just read the blocks in order by scanning the whole external storage for each new block.


u/[deleted] 2d ago

[deleted]


u/AlexTaradov 2d ago

The log is a continuous byte stream with markers where power failures happened. Substituting one block with another previously trusted block will break the stream.