
The nasal cycle is a great reminder that many biological processes are rhythmic even when we don't notice them.

I once read that the cycle typically alternates every 1–4 hours and is controlled by the autonomic nervous system.

It makes you wonder how many other subtle cycles are happening in the body without us being aware of them.


Magic Eye images are a fascinating example of how the brain reconstructs depth from pattern repetition.

What always surprised me is how small changes in horizontal offsets can completely change the perceived 3D structure.

Did you generate the depth map first and then derive the repeating pattern, or does the generator work directly from an image?


Agreed, it almost feels like we have a visual processing unit with special “opcodes” for operations like depth matching and pattern repetition.

The generator first needs a depth map, and then derives the repeating pattern from that. A normal RGB image would be far too noisy; the fine texture variations would break the repetition needed for the brain to fuse the patterns correctly.
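The depth-map-first approach described above follows the classic random-dot autostereogram construction: seed a strip of random pixels, then copy each pixel from a point one repeat-interval to its left, where the interval is shortened by the depth value. This is a minimal sketch of that idea (the function name, parameters, and 0–1 depth convention are my assumptions, not the generator's actual API):

```python
import random

def autostereogram(depth, pattern_width=60, max_shift=12):
    """Generate a binary random-dot autostereogram.

    depth: 2D list of floats in [0, 1], where 1 means "nearest".
    Returns a 2D list of 0/1 pixel values the same size as depth.
    """
    height, width = len(depth), len(depth[0])
    out = []
    for y in range(height):
        row = [0] * width
        for x in range(width):
            # Nearer points repeat at a shorter horizontal interval;
            # the eyes fuse that disparity into perceived height.
            shift = int(depth[y][x] * max_shift)
            left = x - (pattern_width - shift)
            if left < 0:
                row[x] = random.randint(0, 1)  # seed the base pattern
            else:
                row[x] = row[left]  # copy -> creates the repetition
        out.append(row)
    return out
```

With a flat (all-zero) depth map this degenerates to a strict repetition every `pattern_width` pixels, which is exactly why noisy RGB input breaks it: any texture variation disturbs the copy chain the brain needs to lock onto.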


That makes sense. Using a depth map first sounds almost inevitable for keeping the repetition stable enough for the visual system to lock onto it.

What I always find interesting with these images is how sensitive the brain is to those horizontal disparities. Even tiny shifts create a surprisingly strong sense of structure once the eyes fuse the patterns. It really highlights how much of “seeing” depth is reconstruction rather than direct perception.

Do you generate the depth maps manually, or are they derived procedurally from some model or scene description?


No offense, but are you a bot?


Haha, fair question. No, just a human who tends to write in complete paragraphs. I've been experimenting with the generator as a side project and got curious about how these stereograms actually work under the hood.


One thing I appreciate in great documentation is when it includes a short “mental model” before diving into the API.

For example, some projects start with:
– what problem the tool solves
– how the system is structured
– the typical workflow

Once you understand that model, the actual reference docs become much easier to follow.

Without that context, documentation often turns into a list of functions rather than something that teaches how to use the product.


I'm the person behind this experiment.

The idea started from a simple observation: people often feel that their luck changes depending on time, context, or even certain types of events.

But most of the time we only remember a few outcomes, which makes it hard to see whether any real pattern exists.

So I started collecting large numbers of small “luck events” using virtual lottery simulations and comparing them with real lottery results from different countries. Over time this creates a dataset where you can look at distributions, streaks, clustering and other patterns.
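For the streak and clustering analysis described above, even a toy simulation makes the point: with enough fair "luck events", long streaks appear purely by chance. A minimal sketch (the event model and parameter names are my own assumptions, not the actual tool):

```python
import random

def longest_streak(outcomes):
    """Length of the longest run of identical consecutive outcomes."""
    best = cur = 1
    for a, b in zip(outcomes, outcomes[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

def simulate_luck_events(n_events=10_000, p_win=0.5, seed=42):
    """Simulate independent win/lose events and summarize the patterns
    a person might misread as 'lucky' or 'unlucky' phases."""
    rng = random.Random(seed)
    outcomes = [rng.random() < p_win for _ in range(n_events)]
    return {
        "win_rate": sum(outcomes) / n_events,
        "longest_streak": longest_streak(outcomes),
    }
```

In 10,000 fair coin flips the longest run of identical outcomes is typically around 13–14, so streaks that feel meaningful are exactly what a memoryless process produces, which is the baseline any real lottery data would need to deviate from.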

I built a small tool to run these experiments here: https://laetus.app

Curious whether people here think this kind of dataset could reveal anything interesting about how we perceive randomness.

