The amount of information in a 64x64x4 array would depend on the precision of the numbers in it, right? For example, a 512x512 image in 24-bit colour could be completely encoded in a 64x64x4 array if each of the 64 x 64 x 4 = 16,384 values had 384 bits of precision.
So, I wonder — what's the minimum number of bits of precision in the 64x64x4 array that would be sufficient for this to work?
According to an anecdotal test on one of the images found elsewhere in this thread, JPEG compression at 80% quality can cut the size of a 24bpp .bmp file by roughly a factor of 16.
384 / 16 = 24 bits per 64x64x4 array value. A 32-bit float has a 24-bit significand, so it represents every integer up to 2^24 exactly. So "literally just a jpeg packed into 16K floats" is an option.
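A quick sanity check of that claim (a sketch with made-up helper names, assuming NumPy): pack raw bytes three at a time into 24-bit integers stored as float32, and verify the round trip is lossless because float32 represents every integer below 2^24 exactly.

```python
import numpy as np

def pack_bytes_to_floats(data: bytes) -> np.ndarray:
    """Pack raw bytes 3 at a time into 24-bit ints stored as float32.

    float32 has a 24-bit significand, so each integer in [0, 2^24)
    survives the cast exactly -- no precision is lost.
    """
    padded = data + b"\x00" * (-len(data) % 3)  # pad to a multiple of 3
    ints = [int.from_bytes(padded[i:i + 3], "big")
            for i in range(0, len(padded), 3)]
    return np.array(ints, dtype=np.float32)

def unpack_floats_to_bytes(arr: np.ndarray, length: int) -> bytes:
    """Inverse of pack_bytes_to_floats; truncate the padding."""
    out = b"".join(int(v).to_bytes(3, "big") for v in arr)
    return out[:length]

# Stand-in for JPEG bytes: anything up to 16,384 * 3 = 49,152 bytes
# fits in a 64x64x4 float32 array without loss.
payload = bytes(range(256)) * 10
floats = pack_bytes_to_floats(payload)
assert unpack_floats_to_bytes(floats, len(payload)) == payload
```

So as long as the 16 KB-equivalent JPEG stays under 48 KiB, the packing really is lossless at 24 bits per float.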