It's not only the lanes: pushing more than ~6 Gbps of IO through multiple PCIe devices on the N150 bogs things down. The chip is only a little faster than something like a Raspberry Pi, and there are a lot of little IO bottlenecks at high speeds (it's great at 2.5 Gbps, though) whenever the workload hits the CPU.
The CPU bottleneck would be resolved by the Pentium Gold 8505, but it still has the same 9 lanes of PCIe 3.0.
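As a rough sanity check on what those 9 lanes are worth (back-of-the-envelope numbers, not measured figures; the per-lane throughput is the approximate usable rate after encoding overhead):

```python
# Back-of-the-envelope aggregate bandwidth for 9 lanes of PCIe 3.0.
# PCIe 3.0 delivers roughly 985 MB/s of usable throughput per lane
# (8 GT/s with 128b/130b encoding), ignoring protocol overhead.
PCIE3_MB_S_PER_LANE = 985  # approximate usable MB/s per lane

lanes = 9
total_mb_s = lanes * PCIE3_MB_S_PER_LANE
total_gbit_s = total_mb_s * 8 / 1000

print(f"{lanes} lanes of PCIe 3.0 ≈ {total_mb_s} MB/s ≈ {total_gbit_s:.1f} Gbit/s aggregate")
```

So the lanes themselves carry far more than 6 Gbps in aggregate; the point above is that the CPU, not the fabric, is what gives out first on the N150.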
I only came across this CPU a few months ago. It's in nearly the same price class as an N100, but adds a full Alder Lake P-core. It's a shame it only seems to show up in six-port routers; then again, that's probably a pretty optimal application for it.
A single SSD can (or at least a single NVMe drive can). You have to ask whether you actually need it -- what are you doing that runs at line speed often enough for the time savings to be worth it? If it's just a toy, that's totally cool too.
Four 7200 RPM HDDs in RAID 5 (WD Red Pro, for example) can saturate a 1 Gbps link at ~110 MB/s over SMB 3. But that comes with the heat and potential reliability issues of spinning disks.
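The arithmetic behind that works out: a rough sketch, assuming a typical 7200 RPM sequential read rate and an SMB efficiency factor (both assumed values, not benchmarks of any specific drive):

```python
# Rough estimate: 4 HDDs in RAID 5 feeding a 1 Gbps link over SMB 3.
hdd_seq_mb_s = 180        # assumed sequential read of one 7200 RPM drive, MB/s
n_drives = 4

# Large sequential RAID 5 reads stripe across the n-1 data portions per stripe.
raid5_read_mb_s = (n_drives - 1) * hdd_seq_mb_s

link_mb_s = 1000 / 8      # 1 Gbps is 125 MB/s raw
smb_efficiency = 0.9      # assumed SMB 3 protocol overhead factor
usable_mb_s = link_mb_s * smb_efficiency

print(f"array sequential read ≈ {raid5_read_mb_s} MB/s")
print(f"usable link throughput ≈ {usable_mb_s:.0f} MB/s")
```

The array can read several times faster than the network can deliver, so the 1 Gbps link, not the spinning disks, is the bottleneck; that's why ~110 MB/s shows up as the ceiling.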
I have seen consumer SSDs, namely Samsung 8xx EVO drives, exhibit significant latency issues in a RAID config, where saturating the drives caused 1+ second latencies. This was on Windows Server 2019 using either a SAS controller or JBOD + Storage Spaces. Replacing the drives with used Intel drives resolved the issue.
My use is a bit into the cool-toy category: the NAS stores the VM images and their backups, and the server connects to the NAS to access the VMs.
Even if the throughput isn't high, it sure is nice having the instant response time and amazing random access performance of an SSD.
2 TB SSDs are super cheap, but most systems don't have the expandability to add a bunch of them. So I fully get the appeal here: being able to add multiple drives, even if you're not reaping additional speed.
Until there is something in this class with PCIe 4.0, I think we're close to maxing out the IO of these devices.