Fortunately that’s not how it works. You would have to have a whole collection of known, identified CSAM images before the system Apple has announced classifies you as anything at all.
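To make the threshold point concrete, here is a minimal sketch of the idea, not Apple’s actual implementation: a single match reveals nothing, and nothing is even eligible for review until a whole collection of matches accumulates. The threshold value and the hash set are made-up placeholders, and the real system enforces the threshold cryptographically rather than with a simple counter.

```python
THRESHOLD = 30  # hypothetical number of matches required before any review is possible
KNOWN_HASHES = {"a1b2c3", "d4e5f6"}  # stand-in for the database of known, identified CSAM hashes

def matches(photo_hash: str) -> bool:
    """Lookup against the known-hash database (simplified here to an exact match)."""
    return photo_hash in KNOWN_HASHES

def review_possible(photo_hashes: list[str]) -> bool:
    """A single matching photo does nothing; only a collection of matches
    crosses the bar where human review can even begin."""
    return sum(1 for h in photo_hashes if matches(h)) >= THRESHOLD
```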
No, their AI accepts some deviation. And we don’t know how many false positives it takes to start a human review. And for privacy reasons, different reviewers will of course evaluate different photos. And a human without context will flag a picture of a kid in the bath as child pornography. It is a very real possibility that OP becomes a suspect because of this.
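For what “accepts some deviation” usually means with perceptual hashes, here is a generic near-match comparison; this is not Apple’s NeuralHash, just the common scheme where two hashes count as a match if they differ in only a few bits, with a made-up cutoff value.

```python
def hamming_distance(a: int, b: int) -> int:
    """Number of bits in which two hash values differ."""
    return bin(a ^ b).count("1")

def is_near_match(photo_hash: int, known_hash: int, max_distance: int = 4) -> bool:
    """Resized, recompressed, or lightly edited copies can still land within the cutoff."""
    return hamming_distance(photo_hash, known_hash) <= max_distance
```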
Even if we accept that your image of your kids in the bath will match the hash of a known, identified CSAM picture (a real stretch), under this system the voucher payload does not contain the private key to decrypt the picture, so nobody can look at it or flag it as anything. Reviewers only get a “visual derivative” based on the perceptual mechanism used to create the hash, not the actual photo.
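A rough sketch of what the voucher could carry, under the same caveat as above (field names are hypothetical, and the threshold secret sharing and private set intersection that actually enforce this are collapsed into a simple count): the payload holds an encrypted derivative plus a key share, and no single voucher is readable on its own.

```python
from dataclasses import dataclass

@dataclass
class SafetyVoucher:
    encrypted_visual_derivative: bytes  # low-resolution derivative, never the original photo
    key_share: bytes                    # one share of the decryption key; useless by itself

def derivatives_readable(matching_vouchers: list[SafetyVoucher], threshold: int) -> bool:
    """Only once enough matching vouchers (key shares) accumulate can the server
    reconstruct the key and view the derivatives; below the threshold it sees nothing,
    and the full-resolution photos are never part of the payload at all."""
    return len(matching_vouchers) >= threshold
```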
Other companies have done CSAM scanning and reporting with far fewer safeguards, and the “my kid in the bathtub” scenario doesn’t seem to have actually been a problem.