For one, RPis are still severely I/O limited. It may be fine with a single SSD.
For two, if you like power adapters going into boxes with USB cables running out to yet more external hard drives, a Pi may be fine. If you want one neat box to tuck somewhere and forget about, it isn't.
But then people buy Intel "NUCs" where the power adapter is larger than the computer box...
And three, the latest Pis have started to require active cooling. Might as well go low power x86 then.
Exactly the route I took. I had an aging tower machine full of spinning disks on an old LSI adapter doing hardware RAID. The disks were out of space, and I began to get nervous that the LSI adapter could die and I would have trouble replacing it. I decided on JBOD for the future.
External drives were on sale, so I bought several and set them up with an RPi. Lots of headaches. It took effort to iron out all the USB and external disk issues. I had to work out an alternative boot setup, had RPi power adapters fail, had to improve cooling, etc. I kept running into popular Docker images that still don't have aarch64 variants.
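(For anyone hitting the same aarch64 gap: you can check whether an image publishes an arm64 build before pulling it. The image name below is just an example, and you need a reasonably recent Docker CLI; buildx gives a tidier summary if you have it.)

    # list the architectures a multi-arch image actually ships
    docker manifest inspect nginx | grep -A1 architecture

    # more readable, if buildx is installed
    docker buildx imagetools inspect nginx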
I finally replaced the RPi with a used Dell SFF. I kept the USB drives, and it's been solid, with similar power draw and just easier to deal with all around.
Though I am considering moving back to a tower, shucking the drives (they're out of warranty now), and going back to SATA.
I think most LSI adapters you can get a battery backup for. I've got one on mine, plus a spare battery sitting on a shelf somewhere. I admit when I put the system together for the first time I was a little hesitant to go with hardware RAID but it's worked out fine so far.
I reckon the issue is more about replacement than transient data loss: what are you going to do when you can't find a replacement controller card, or it's only available at ludicrous prices?
With a proprietary on-disk format you can't exactly hook them up to any random controller and expect it to work: either you find a new one from the same controller family, or your data is gone.
Replacing your RAID controller is already major maintenance, so there's going to be downtime. I wouldn't be opposed to just wiping the drives and restoring from the latest backup. I routinely do this anyway, just to have assurance that my backups are working.
And a risk! I had this happen on a premium machine put together specifically for that purpose: when the RAID controller died, something got upset to the point that even with a new RAID controller we could not recover the array. No big deal, it was one of several backups, but still, I did not expect that to happen.
I've had mixed experiences with my NUC. It has what I think is a firmware bug that causes display output to fail if you connect a monitor after boot. Very annoying if it ever drops off the network for some reason.
There seems to be a Windows-only update tool available that might fix it, but that's rather inconvenient when it's used as a server running Linux! No update is available as a standalone boot disk or via LVFS. So I haven't gotten it fixed yet, because doing so involves getting a second SSD and taking my server offline to install Windows on it, just to run a firmware update.
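(If your NUC is a different model, it may be worth checking LVFS directly before assuming the worst. A quick check, assuming fwupd is installed:)

    # does fwupd see the system firmware at all?
    fwupdmgr get-devices

    # refresh metadata and look for published updates
    fwupdmgr refresh
    fwupdmgr get-updates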
If you use a couple of magnetic disks, the Pi is fast enough; the disks will be the bottleneck. There are SATA cards that take up to four magnetic disks, where you power the card and it in turn powers the Pi. It's very doable.
It's of course more work to set up than a Synology, and if you want a neat box, you have to figure that out yourself.
You'd be surprised. A single spinning-rust drive can hit 200 MB/s for sequential access (roughly 1.6 Gbps), which is plenty to saturate the Pi's 1 Gbps NIC.
However, in my experience with a Pi 4, the issue is encryption. The CPU simply isn't fast enough for 1Gbps of AES! Want to use HTTPS or SSH? You're capped at ~50Mbps by default, and can get it up to a few hundred Mbps by forcing the use of chacha20-poly1305. Want to add full-disk encryption to that? Forget it.
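If anyone wants to try the chacha20 route, it's just a cipher override on the client side; the hostname and path below are placeholders:

    # list the ciphers your OpenSSH build supports
    ssh -Q cipher

    # force chacha20-poly1305 for a one-off copy
    scp -c chacha20-poly1305@openssh.com bigfile.tar pi.local:/mnt/nas/

    # or make it permanent for that host in ~/.ssh/config:
    # Host pi.local
    #     Ciphers chacha20-poly1305@openssh.com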
The Pi 5 is supposed to have hardware AES acceleration so it should be better, but I'm still finding forum posts of people seeing absolutely horrible performance. Probably fine to store the occasional holiday photo, but falls apart when you intend to regularly copy tens of gigabytes to/from it at once.
It apparently hit 387 MB/s for a few hours while running the monthly RAID scrub. I run LUKS on top of mdraid, though, so the scrub doesn't have to decrypt anything.
scp writes to the encrypted disk seem to land somewhere in the 60-100 MB/s range.
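(For anyone curious where their own hardware lands: cryptsetup ships a quick in-memory cipher benchmark, and the md sysfs shows what a scrub is actually doing. The benchmark numbers are per-core with no disk involved, so real throughput will be lower; md0 is just a placeholder.)

    # rough per-core AES/chacha throughput, no disk involved
    cryptsetup benchmark

    # current state and speed of the array during a scrub/resync
    cat /proc/mdstat
    cat /sys/block/md0/md/sync_speed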
So long as the storage system is capable of serving a video stream without stuttering, that covers the 99% performance case for me. Anything beyond that is bulk transfers which are not time sensitive.
The alternative to a Synology NAS isn't an RPi. There are plenty of alternatives - QNAP, UGreen, a tower running TrueNAS - but a messy pile of overpriced, unreliable SoCs attached to SATA HATs isn't an alternative to a single device with multiple hard drive bays, consistent power and cooling, and easy management.
The alternative is anything not Synology that can do NAS with SATA SSD or NVMe storage. That's it. Anything more than that is in a class of enterprise servers that deserves its own discussion, separate from a simple DS1522+.
This is nonsense. Both a horse and a pickup truck can be used to pull a wagon, but no one seriously considers one an alternative to the other.
What you are describing is a hobby item for enthusiasts who want to enjoy tinkering, setting something crazy up and constantly debugging it. That is very much NOT the market of people who buy Synology products.
Synology is not a JBOD RAID. It is an appliance that does many things (storage, services, web access, etc.) and can automatically keep itself up to date with no additional contact necessary. It can be hooked up to a UPS for resilience. If you have a problem there are support forums and online articles. It can be synced to other Synology devices at other sites. Etc., etc.