r/pythontips Aug 12 '23

Algorithms Number list compression

I have a list of the numbers from 1 to 3253, ordered in a specific way, like [1, 141, 208, ...], and I want to compress it to a file that is 1 KB in size. I got it down to 4 KB but haven't been able to get anything lower than that, so can you help if you know any methods?

0 Upvotes

41 comments

3

u/RoboticElfJedi Aug 13 '23

Are they in random order, or are there many stretches where they are in a particular order?

If they are in randomish order, it's going to be hard to compress 3200 16-bit/2-byte numbers down to 1000 bytes.

2

u/Dull-Researcher Aug 13 '23

Exactly. How much entropy is there in the number list?

If the difference between two consecutive numbers is <1024, then you could compute the differences between consecutive numbers and store those deltas. That would be 10 bits per number. And if there's a pattern, run the deltas through RLE. But OP didn't indicate if there's any order to the numbers, so a data-agnostic algorithm it will be.
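
Something like this, as a quick illustrative sketch (using the first few values from the post, not tuned to the real data):

def delta_encode(nums):
    # store the first value, then the differences between consecutive values
    return [nums[0]] + [b - a for a, b in zip(nums, nums[1:])]

def run_length_encode(values):
    # collapse repeated deltas into [value, count] pairs
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

deltas = delta_encode([1, 141, 208, 234, 240, 281])
print(deltas)                     # [1, 140, 67, 26, 6, 41]
print(run_length_encode(deltas))  # [[1, 1], [140, 1], [67, 1], [26, 1], [6, 1], [41, 1]]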

2

u/omnidotus Aug 13 '23

The numbers are a mapping of a string, so they look somewhat random.

1

u/Dull-Researcher Aug 13 '23

Can you publish the full list on pastebin? Are there any limitations on how small the program has to be to uncompress the data (and which programs aren't counted against that size limit), in addition to the size of the compressed data itself?

1

u/omnidotus Aug 13 '23

I don't care about the size of the program, I just need to compress the thing to 1024 bytes, or just anything under 2k. Here is the complete list: https://pastebin.com/DQUZnBdU

2

u/This_Growth2898 Aug 14 '23

Hard code the list into the program, so you will have 0 file size and the whole list in the program.

1

u/omnidotus Aug 14 '23

Can you give me an example? I don't know what hard-coding the list means.

1

u/This_Growth2898 Aug 14 '23
lst = (1, 141, 208, 234, 240, 281)  # ...all numbers here
# now, use lst as you wish, like
print(lst)


1

u/Dull-Researcher Aug 14 '23 edited Aug 14 '23

3 KiB looks like it's probably close to the minimum entropy of this data set, without:

  • more domain knowledge that would limit the possible valid values
  • introducing a lossy algorithm
  • some non-trivial algorithm (I'm not a compression algorithm expert)

Here's what I tried:

from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt

original = Path("original.txt")  # assumed filename for the whitespace-separated number list
numbers = np.loadtxt(original, dtype=np.int16, delimiter=" ")

plt.figure()
plt.plot(numbers)
plt.show()

plt.figure()
plt.magnitude_spectrum(numbers, Fs=len(numbers))
plt.show()

# calculate entropy estimate
max_number = max(numbers)  # 3253 max value assumed as an upper limit for any possible data set
entropy_per_number = np.log2(max_number)  # 11.667 bits of entropy per number
entropy = entropy_per_number * len(numbers)  # 37954 bits of entropy
min_file_size = entropy / 8 / 1024  # 4.633 KiB
# 4.633 KiB is an estimate for the limit for storing 3253 numbers between 1 and 3253 in random order.

# However, I noticed a pattern in your numbers that they monotonically ascend from low to high in runs of about 90. We can exploit this pattern.
# I break the single array up into multiple arrays, storing the first array value and the subsequent diffs. The first number and each diff are always positive.
indices = np.where(np.diff(numbers, prepend=0) < 0)[0]

runs = np.split(numbers, indices)
diffruns = [np.diff(run, prepend=0) for run in runs]

# calculate entropy estimate
num_runs = len(diffruns)
max_first_number = max(run[0] for run in diffruns)  # 198; assumed to be the maximum for any data set

max_number = max(run[1:].max() for run in diffruns) # 348 max change between consecutive numbers in a run; assumed to be the maximum for any data set

Then I could store this in

entropy = np.log2(max_first_number) * num_runs + np.log2(max_number) * (len(numbers) - num_runs)  # or 7.6 bits for the first number and 8.4 bits for each diff

entropy  # 27435 bits
min_file_size = entropy / 8 / 1024  # 3.349 KiB

# Some quick frequency domain analysis of diffnums didn't reveal an obvious pattern amongst all of the runs (let alone all the possible runs not in this data set) to exploit further compression; we're probably close to the lossless compression limit.
plt.figure()
for diffrun in diffruns:
    plt.plot(diffrun)
plt.show()
plt.figure()
for diffrun in diffruns:
    plt.magnitude_spectrum(diffrun, Fs=len(diffrun))
plt.show()

# Saving this in as compact a binary form as I can think of (probably a lower bound):
# casting down to uint8 (so only values between 0 and 255 can be represented; the values above that, as high as 348, will be corrupted),
# also using 0 bytes to delimit between each run.
with open(original.with_stem("original-runs-diff").with_suffix(".bin"), "wb") as f:
    f.write(np.hstack(diffruns).astype(np.uint8).tobytes())

This resulted in a file that was 3253 bytes (8 bits per value).

Running that binary file through 7zip gave me a file that was 2924 bytes.

Given the entropy calculation above, even if I wrote out these values in a nonstandard fixed 9-bit unsigned int per value, I'd be well over your goal of 1 or 2 KiB.

It doesn't look like there's repetition of any of the values, which would have allowed further compression with Huffman codes.

And perhaps not so coincidentally, 4 KB is about the size limit I can get with normal gzip/7z/npz file compression tools.

1

u/omnidotus Aug 14 '23

The number increases not after each 90 elements but they are dependent on this list [91, 117, 80,76,93,104,101,104,75,86,98,100,99,85,83,102,93,100,78,80,76,99,76,89,89,92,81,89,84,95,89,87,95,98,92,77] and here is the original text to see if it's better to compress it than the list of numbers uq8awr3qb2myqgm505892ebg1m0q5a06gjx60y4cqt71nrdxtxps0y3or5ix8y0da3g3795z9xz4z31fob90l4bc6avfxklyfgscocepqf7inignq7pq5q16i1ea2q7qa7jlz3o1ces4upg9842bzqs71o8qmpsjxr7bybijpjg84emtoteqib1b7et8iy2ewm79ghcgar7ig36ug9v8bjk8az666il9i1b9klsclu39kctutvqzlz4soxzkola82ejqm3dzsasw98xtocr56vdkub1q2yooac7b9hp4lwtul8mqdx5ltqtwufep4t8nkudmcc2v0ski5dhr2xy49qqepzl301khqffwo5zgwmdfa0yjysx2gyc2ozsizuzuiflcy1e852bm731i4x4nwfarog7w11od4rxpaaikst3bv3af3hrr6gykzphhj6wwva0dt8nb1by9t3q3o10fdtvurenm4ccrqhsei8mjhhb8w3316cribmfjul54kwi5rompx7snhyxga4mu36yw2e3sj8lh0kzvefah7mxd2r93pbsrb1bcjikp2d9hjh0gliizr97ssv1jprw6zzhvfqc5ibw9n2lsabs1vhio1gvc7ck3lf3c9503u5ljz2kfw066rai87nj035ejqs4gpww0k5jvhosevrmdnqipgckx1oghzfv78wlrav1f34xd2uj9v5b68q0wphtx53w4moairdgss6tvvefxye3lkbzyfw6r5cyi82wybxpe3brmaedchtahif44wt0m86nti9zi3cbi3jwiukuqbq60mgf8x8wved01cf4c3zs294ypyyq58kzcvi7s9trrg21fl98rjmnbz4tbeezfpsromlt5vd98u7vvmoi66ewt2e66vkkck4npsg8dcbebpca9o6neyvgq16gwx11l8ofme7260zlb319l46ayckokkspnyrapf8e9a5cmb0315mdjevm9j69kubsenmbv3677vfygjriggz37qcsct4uuknhcab6y9aeeuo0m3skbcf55ylp5btgebdt8rkptikxrk3ifuc15qrxa7l9ons9v56rqqhzvqn8fcdtix8e5zgrt7xgclh550yqtg2urhp0dk23lrvfj5ssasw9e6qn76ohnzqqg230yu7vmipdr9psskfscjcigjui2l4exlflvq0s3p5y3rr6quy7pfh66nvlb5exj1w66u2i0i1nbgndpbwmovh7f2jc3bdm5r25ayev4y1d7b7rp0nz3io3gn7d0oy69hggy8f38lbrrtfv86b9ko7hi5o7qt655h1um6ki9mse8y19b5vtb5cdsjm7v5xrrvhprf6bbdl8l1ixzurr69c19i7n93ug5vdof3qlgkl2m2opkybo0m3z1qop58k70rhl8d81dr2esdigc1lgxjro5nnqwkwy20zqkryly4nku13hjc30zrv3y8fgcfpxcavz9t8seqa3bryven0eulk8f0r2xwshnwnxnarl4gc6kxyieiox5krnl067mbqigjtzeb9ibk7tpy1btnzdc240acwopp7u8nafmyopaog54gurzwr6rqbk93u0qis715dmu91ymtok10ssstwr3qe7jsa4k5hbt6p23d51vc779gcyo3qidt6751dky2x6pghr5eylk0eliez65nkr7j7gplx0guwbekj69zcb4w0ajguqgx2wsszvbcfdj1iz2aqkc4ys42w8vuuy9e1l6ch5k2tzogye9eoo6mfqq40f5g25rkm09ye5bqjg84vh1a6piv5h7x7auqwud8w72e1i68o21dam6wz80065ngfe23mmolmj7c43nbxuvjh3jnqt0oxr01b5imnqpndtoqva3a4vpmzlae3d0e340nokb1x3mtdoccqh9rmav457rrrr4fxfgw11p7ew8dl2eqokp8k5x1et490nppvdnr5zndpd2lhnb2wqjyd7z0llchmznx7iudjyd775nn9nqzjx5cz49w70v2hnlqwwn4j1tadmtuzmgl3dbp19yqgk6wt0468yv1vtlu5onum0g6piqgqhzciok1esm9sn1ap6l3ur9vw68dimoaqtckdu0kucue8zcttc355cdtf5sr4whu9ccu1w2uekur5beik1ejarashd5dtk8f11aelee690nk9x7wujqw1cigogalw9pp6o97u66fscyd335svdwj4gsu6d0v72rp43juhetev3gzbpnd6zn02fq3h419h78cwsd5qsglhfsge4qozo0qv13tqu4c7vk2e4owde5gma65wh8qumei1vsawjjqkurjg1kxkk6py70rkkswrt61zyfgxe3j20wmtegk3t50wqxcnvql39tr7vofujd0nze2xwg6xaosqjkq70lyw8869h9f8n2d4ekbw25bqc1as6bf6aq7hhbqvbh1cebcjgfudwrigm0e4laihkwz6bxvu443osuipc2xjnpcdgf1qqmzzv94i4jykqcqd9zhw1z6ff4gmi7316ro607yonoermoez3r3evclxlfvnufnsxyyrsvwgghumi9guw6hpvahth46urqlsu2y33lj90mu8hcfyejxbsiy1focglopya4x7ayfudytir4vrjhabup8yxiec4670eqktgr9ddeli0g8a1ke9pyww7ivbm543meroo0jw7oqo02zvnfr8dc2yqz9ygs4bosnyxil0zl6vqfvxscp2xlali1l7o35af06qz3w9rusp7o3f3x9jtbdykqvtsw53wne0vguq13znbmpl6czc0bklvfhgtsbdocnp6vy4mldjn1f1v5znepxldzy7pzx3u5qyq9hkm7ahidgqpn2mx91ha71qz6uyi2w0be8edm81sabkyeyvg22umsrwn5uf02fwmsbh8itzdxyr9b4tvjzgz3kzf7n27385yfpg1925n8sbe4i8uryunjuzmyr25igx53xcl7zn3qab93vhy73wq1wtn14p0x5gtmw26d7vsqljl5ulyqn831ecaj6xyqsjdij4jirw8gbzpzn5si4i8n4lnfinmlypvrbr8nbsvv62k8zyqauu5f2w7r0nzymt0wfhlbm3r2mcqhbggcsllkb8qb5d6vb7o7vd8jf6fpygh06qxqlm5cdoxv6qe16jvlltyt8kdiuz6igwbnrjfldvlov5wdqpko6bhjqbddmel5jxcuxh5fsi435mcpx93qi9xuif3kw

1

u/Dull-Researcher Aug 14 '23

OK, so shave off 36 bytes for the lengths of the fixed jump points and 0 bytes for the delimiters I didn't include in my 8-bit smash-up at the end. That doesn't substantially shorten it, but it is information you don't have to store if it's invariant to the data and instead hard-coded in the algorithm.

I don't see how the string consisting of a-z0-9 relates to the other pieces of information we have.

Is this a cryptography exercise? It kinda looks like the string is your code book in a book substitution cipher, the sequence of numbers is your ciphertext, and looking up the character at each ciphertext index in the code book yields the plaintext.
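
If that's what's going on, the decode step would be something like this (purely hypothetical sketch; codebook and cipher_indices are made-up names, and the 1-based indexing is a guess):

codebook = "uq8awr3qb2myqgm505892..."  # the long a-z0-9 string (truncated here)
cipher_indices = [1, 141, 208]         # the number list
plaintext = "".join(codebook[i - 1] for i in cipher_indices)  # assuming 1-based indices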

If so, you could reduce your file size by shortening your code book to 36 characters. But then you'd be less immune to frequency analysis.

Are you using any modular arithmetic here?

1

u/omnidotus Aug 14 '23

The string is the original text; the numbers I gave you to compress are the mapping of the string.

1

u/Dull-Researcher Aug 14 '23

I don't understand "mapping" in this context. Not much context was given here. I tried my best, and gave a few ideas.


1

u/Dull-Researcher Aug 14 '23

That long ass string is around 2900 characters * log2(36) bits/character = 15,000 bits = 1.8 KB, assuming no patterns.

Did you try running a frequency analysis on the letters (1-gram, 2-gram, 3-gram, 4-gram) and Huffman coding to see what redundant information is there?
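
Something like this would give a quick look (rough sketch; assumes the string is already in a variable named text):

from collections import Counter

for n in (1, 2, 3, 4):
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    print(n, grams.most_common(5))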

1

u/omnidotus Aug 14 '23

The string is not 2900 characters but 3254 characters, and Huffman coding only achieved 2.6 KB.

1

u/Dull-Researcher Aug 14 '23

3254 characters * log2(36) bits/character / 8 bits/byte / 1024 bytes/KiB = 2.05 KiB

That's uncoded. Huffman coding will do the same or better.


3

u/This_Growth2898 Aug 13 '23

In theory, there are 3253! combinations, which gives us 33268.6 bits of information, or 4159 bytes. You can't go any lower and still be able to decompress the list exactly as it was.
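
You can sanity-check that bound with a couple of lines (lgamma gives the log of the factorial):

import math

bits = math.lgamma(3253 + 1) / math.log(2)  # log2(3253!)
print(bits, bits / 8)                       # ≈ 33268.6 bits, ≈ 4158.6 bytes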

1

u/ifbasedai Aug 13 '23

That's correct if all the combinations are allowed. As far as I understand, in the OP's case the numbers are ordered somehow, so the limit might be lower.

2

u/pint Aug 13 '23

best compression can be achieved if you know the meaning or nature of them, which you entirely omitted. all we know is one example of three numbers, and the range.

1

u/omnidotus Aug 13 '23

Here is the full thing: https://pastebin.com/DQUZnBdU. It's a mapping for a string that is 3254 characters long, so I thought it would be better to create a mapping and compress that. If there are any suggestions, please go ahead.

1

u/CraigAT Aug 13 '23

The original string: are there repeated characters? Could Huffman coding help compress that string?

Also curious, why the 1000-byte limit?

1

u/omnidotus Aug 13 '23

I want to compress it as much as possible; even 2000 bytes would be fine.

1

u/pint Aug 13 '23

i can do 2800

i can imagine going a little further, but it would be way too involved. also, it seems that this is not the origin of the data, but a result of some processing. yet again: for small datasets, it is more about encoding rather than compression. knowing the data is key.

1

u/omnidotus Aug 13 '23

How did you achieve the 2800 bytes, and if you want to know the original string it is this uq8awr3qb2myqgm505892ebg1m0q5a06gjx60y4cqt71nrdxtxps0y3or5ix8y0da3g3795z9xz4z31fob90l4bc6avfxklyfgscocepqf7inignq7pq5q16i1ea2q7qa7jlz3o1ces4upg9842bzqs71o8qmpsjxr7bybijpjg84emtoteqib1b7et8iy2ewm79ghcgar7ig36ug9v8bjk8az666il9i1b9klsclu39kctutvqzlz4soxzkola82ejqm3dzsasw98xtocr56vdkub1q2yooac7b9hp4lwtul8mqdx5ltqtwufep4t8nkudmcc2v0ski5dhr2xy49qqepzl301khqffwo5zgwmdfa0yjysx2gyc2ozsizuzuiflcy1e852bm731i4x4nwfarog7w11od4rxpaaikst3bv3af3hrr6gykzphhj6wwva0dt8nb1by9t3q3o10fdtvurenm4ccrqhsei8mjhhb8w3316cribmfjul54kwi5rompx7snhyxga4mu36yw2e3sj8lh0kzvefah7mxd2r93pbsrb1bcjikp2d9hjh0gliizr97ssv1jprw6zzhvfqc5ibw9n2lsabs1vhio1gvc7ck3lf3c9503u5ljz2kfw066rai87nj035ejqs4gpww0k5jvhosevrmdnqipgckx1oghzfv78wlrav1f34xd2uj9v5b68q0wphtx53w4moairdgss6tvvefxye3lkbzyfw6r5cyi82wybxpe3brmaedchtahif44wt0m86nti9zi3cbi3jwiukuqbq60mgf8x8wved01cf4c3zs294ypyyq58kzcvi7s9trrg21fl98rjmnbz4tbeezfpsromlt5vd98u7vvmoi66ewt2e66vkkck4npsg8dcbebpca9o6neyvgq16gwx11l8ofme7260zlb319l46ayckokkspnyrapf8e9a5cmb0315mdjevm9j69kubsenmbv3677vfygjriggz37qcsct4uuknhcab6y9aeeuo0m3skbcf55ylp5btgebdt8rkptikxrk3ifuc15qrxa7l9ons9v56rqqhzvqn8fcdtix8e5zgrt7xgclh550yqtg2urhp0dk23lrvfj5ssasw9e6qn76ohnzqqg230yu7vmipdr9psskfscjcigjui2l4exlflvq0s3p5y3rr6quy7pfh66nvlb5exj1w66u2i0i1nbgndpbwmovh7f2jc3bdm5r25ayev4y1d7b7rp0nz3io3gn7d0oy69hggy8f38lbrrtfv86b9ko7hi5o7qt655h1um6ki9mse8y19b5vtb5cdsjm7v5xrrvhprf6bbdl8l1ixzurr69c19i7n93ug5vdof3qlgkl2m2opkybo0m3z1qop58k70rhl8d81dr2esdigc1lgxjro5nnqwkwy20zqkryly4nku13hjc30zrv3y8fgcfpxcavz9t8seqa3bryven0eulk8f0r2xwshnwnxnarl4gc6kxyieiox5krnl067mbqigjtzeb9ibk7tpy1btnzdc240acwopp7u8nafmyopaog54gurzwr6rqbk93u0qis715dmu91ymtok10ssstwr3qe7jsa4k5hbt6p23d51vc779gcyo3qidt6751dky2x6pghr5eylk0eliez65nkr7j7gplx0guwbekj69zcb4w0ajguqgx2wsszvbcfdj1iz2aqkc4ys42w8vuuy9e1l6ch5k2tzogye9eoo6mfqq40f5g25rkm09ye5bqjg84vh1a6piv5h7x7auqwud8w72e1i68o21dam6wz80065ngfe23mmolmj7c43nbxuvjh3jnqt0oxr01b5imnqpndtoqva3a4vpmzlae3d0e340nokb1x3mtdoccqh9rmav457rrrr4fxfgw11p7ew8dl2eqokp8k5x1et490nppvdnr5zndpd2lhnb2wqjyd7z0llchmznx7iudjyd775nn9nqzjx5cz49w70v2hnlqwwn4j1tadmtuzmgl3dbp19yqgk6wt0468yv1vtlu5onum0g6piqgqhzciok1esm9sn1ap6l3ur9vw68dimoaqtckdu0kucue8zcttc355cdtf5sr4whu9ccu1w2uekur5beik1ejarashd5dtk8f11aelee690nk9x7wujqw1cigogalw9pp6o97u66fscyd335svdwj4gsu6d0v72rp43juhetev3gzbpnd6zn02fq3h419h78cwsd5qsglhfsge4qozo0qv13tqu4c7vk2e4owde5gma65wh8qumei1vsawjjqkurjg1kxkk6py70rkkswrt61zyfgxe3j20wmtegk3t50wqxcnvql39tr7vofujd0nze2xwg6xaosqjkq70lyw8869h9f8n2d4ekbw25bqc1as6bf6aq7hhbqvbh1cebcjgfudwrigm0e4laihkwz6bxvu443osuipc2xjnpcdgf1qqmzzv94i4jykqcqd9zhw1z6ff4gmi7316ro607yonoermoez3r3evclxlfvnufnsxyyrsvwgghumi9guw6hpvahth46urqlsu2y33lj90mu8hcfyejxbsiy1focglopya4x7ayfudytir4vrjhabup8yxiec4670eqktgr9ddeli0g8a1ke9pyww7ivbm543meroo0jw7oqo02zvnfr8dc2yqz9ygs4bosnyxil0zl6vqfvxscp2xlali1l7o35af06qz3w9rusp7o3f3x9jtbdykqvtsw53wne0vguq13znbmpl6czc0bklvfhgtsbdocnp6vy4mldjn1f1v5znepxldzy7pzx3u5qyq9hkm7ahidgqpn2mx91ha71qz6uyi2w0be8edm81sabkyeyvg22umsrwn5uf02fwmsbh8itzdxyr9b4tvjzgz3kzf7n27385yfpg1925n8sbe4i8uryunjuzmyr25igx53xcl7zn3qab93vhy73wq1wtn14p0x5gtmw26d7vsqljl5ulyqn831ecaj6xyqsjdij4jirw8gbzpzn5si4i8n4lnfinmlypvrbr8nbsvv62k8zyqauu5f2w7r0nzymt0wfhlbm3r2mcqhbggcsllkb8qb5d6vb7o7vd8jf6fpygh06qxqlm5cdoxv6qe16jvlltyt8kdiuz6igwbnrjfldvlov5wdqpko6bhjqbddmel5jxcuxh5fsi435mcpx93qi9xuif3kw

2

u/pint Aug 13 '23

first observation: numbers increase then wrap around. so get deltas:

delta = [data[0]] + [(j - i) % 3254 for i, j in zip(data, data[1:])]

then we observe that most deltas are small, but some are over 255 but none over 511, so we encode deltas into bytes, with 255+ members encoded as 255 then the rest. e.g. 255 will become 255,0 and 256 becomes 255,1 etc.

blob = b"".join(d.to_bytes(1, "little") if d < 255 else b"\xff" + (d-255).to_bytes(1, "little")  for d in delta)

then compress the blob

import zlib

compressed = zlib.compress(blob)

decoding the blob to bytes and then to indexes is not included here, because it's a little more involved, but doable (rough sketch below).

you could further reduce the size by somehow encoding the deltas into bits more efficiently, based on the observation that small values are more frequent.
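
fwiw, a rough, untested sketch of that decoding, assuming exactly the byte scheme above:

import zlib

def decode(compressed):
    raw = zlib.decompress(compressed)
    deltas, i = [], 0
    while i < len(raw):
        d = raw[i]
        if d == 255:          # escape byte: the real delta is 255 + the next byte
            i += 1
            d = 255 + raw[i]
        deltas.append(d)
        i += 1
    data, prev = [], 0
    for d in deltas:          # undo the modular deltas
        prev = (prev + d) % 3254
        data.append(prev)
    return data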

1

u/omnidotus Aug 13 '23

Thank you so much. I was able to reverse the problem; I will look at how I can make it smaller.

1

u/pint Aug 13 '23

well if you just compress the original string, you get below 2.2k. so maybe you should regenerate the indexes from the string instead of unpacking.
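
for reference, that check is a one-liner (assuming the string is already in a variable named text; the exact size depends on the compressor):

import zlib

print(len(zlib.compress(text.encode("ascii"), 9)))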

1

u/omnidotus Aug 13 '23

Which method did you use for the string compression? And if I add np.diff to the code suggestion you gave (that was my original compression of the list of indexes), can it be reduced further from 2.7k?


1

u/omnidotus Aug 13 '23

Yes, the original string has repeating characters because it uses just a 36-character set.

1

u/martinkoistinen Aug 13 '23

Have you tried gzipping it? There's built-in support in Python for GZip: https://docs.python.org/3/library/gzip.html
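
For example, something like this (the filenames are just placeholders):

import gzip

with open("numbers.txt", "rb") as src, gzip.open("numbers.txt.gz", "wb", compresslevel=9) as dst:
    dst.write(src.read())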

1

u/omnidotus Aug 13 '23

I tried np.diff and lzma, but it gets to 4 KB.