I would be concerned that there’s some electrical reason why it’s detected intermittently, and that the extra 8GB could easily disappear while the system is running.
Low-level software is responsible for most memory issues, but there are definitely electrical reasons, too.
For example, I have a devboard that only supports 1 GiB of DDR2 RAM, even though it's a 64-bit system and the memory controller on the CPU is supposed to support at least 2 GiB. Meanwhile, another board that uses the identical chip runs 2 GiB of RAM without problems.
The engineers of the devboard briefly explained that the problem was electrical: the memory controller itself has inadequate drive strength, so adding more RAM chips would increase the load on the DDR bus and destabilize the system. The other board, on the other hand, had a better PCB layout, so the problem did not occur.
I have a Mini-ITX board, and I noticed that if I enable XMP, the board stops working, but it runs fine with 4 GiB of RAM at the standard frequency.
So it seems memory is a general problem among Mini-ITX boards? Perhaps the reason is that these boards have less space available for routing, fewer layers, and target a lower price, so they tend to have worse electrical characteristics?
The quality of the PCB and the number of layers definitely play a factor, though I'm sure it isn't limited to ITX. I have noticed compatibility differences between super robust Intel ITX boards and thin, wobbly ECS ITX boards, with the ECS boards having more issues, quirks, and whatnot.
I had something similar with my 2011 MBP. I had it running just fine with 16GB of RAM even though it officially only supported 8GB. After having the main board replaced because of a defective graphics card, I could not run it with more than 8GB without it crashing continuously.
I once had a defective 512MB RAM module. Usually it would be detected as a 256MB module and it would work fine. Sometimes it would be detected as a 512MB module, but corrupted data would crash the system within a few minutes.
You used to be able to scan memory for bad regions and then pass a kernel parameter to skip them. I ran a system for years with known bad memory that way.
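If I recall correctly, on Linux the mechanism is the memmap= boot parameter (or GRUB_BADRAM in GRUB 2), which marks a physical range as reserved so the kernel never allocates from it. The size and address below are made up, just to show the shape:

    memmap=64M$0x100000000

(Note that the $ usually needs escaping when you add this through a GRUB config file.)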
...and it probably already has, multiple times, but you haven't noticed because you've not used that much RAM.
The BIOS does a (relatively) quick memory check in the POST to detect how much memory is actually available, basically by writing a series of patterns to all addresses and then reading them back to confirm; some desktops have a "fast boot" option which mostly skips it (I believe it's something like testing one byte per 4KB instead of every byte), and servers usually have a much more thorough test that can take many minutes.
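For anyone curious what "write patterns and read them back" looks like, here's a rough user-space sketch in C of the same idea. The sizes and patterns are just placeholders; a real POST runs against physical addresses before any OS is loaded, while a user-space run like this only covers whatever pages the allocator hands you, with caches hiding a lot of faults:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Write a pattern to every word of the buffer, then read it back
     * and count mismatches. volatile keeps the compiler from skipping
     * the read-back. */
    static size_t test_pattern(volatile uint64_t *buf, size_t words, uint64_t pattern)
    {
        size_t errors = 0;
        for (size_t i = 0; i < words; i++)
            buf[i] = pattern;
        for (size_t i = 0; i < words; i++)
            if (buf[i] != pattern)
                errors++;
        return errors;
    }

    int main(void)
    {
        const size_t bytes = 64 * 1024 * 1024;  /* test 64 MiB; adjust to taste */
        const size_t words = bytes / sizeof(uint64_t);
        const uint64_t patterns[] = {
            0x0000000000000000ULL, 0xFFFFFFFFFFFFFFFFULL,
            0xAAAAAAAAAAAAAAAAULL, 0x5555555555555555ULL,
        };

        uint64_t *buf = malloc(bytes);
        if (!buf) { perror("malloc"); return 1; }

        for (size_t p = 0; p < sizeof(patterns) / sizeof(patterns[0]); p++)
            printf("pattern 0x%016llx: %zu mismatches\n",
                   (unsigned long long)patterns[p],
                   test_pattern(buf, words, patterns[p]));

        free(buf);
        return 0;
    }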
The best way to check whether the memory is functional when 16GB is detected is to run a memory tester like MemTest86.
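If you'd rather check from inside a running Linux system first, the userspace memtester tool can lock and exercise a chunk of RAM (it can't touch memory the kernel and other processes already hold, so a bootable tester like MemTest86 is still the more thorough option). Something like:

    sudo memtester 8G 3

asks it to test 8GB for three passes.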