The article is a blatant advertisement. Let me counter-advertise with my experiences here.
We did evaluations of a Fusion-io ioDrive card and a Texas Memory Systems RamSan card [1] recently. The ioDrive had strange performance characteristics (but nicely designed packaging). We ended up going with a few RamSan cards (though, they have much uglier packaging).
We have a few high throughput MySQL instances that needed to perform better. The database in question is only a few hundred gigabytes in size, so it's a perfect fit for the current generation of server flash cards.
Our path for improving performance went:
- Started off with 32GB RAM and RAID-10 on SAS disks.
- iowait sat between 10% and 15% constantly.
- Moved up to 64GB RAM.
- iowait cut in half.
- Installed RamSan card and moved mysql with all databases to it.
- iowait became negligible.
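For what it's worth, the iowait figures above come from the kernel's standard CPU-time accounting. A minimal sketch of deriving that percentage from two samples of the `cpu` line in /proc/stat (assuming Linux's usual field order; the helper name is mine):

```python
def iowait_pct(before, after):
    """Percent of elapsed CPU time spent waiting on I/O.

    Each argument is the tuple of jiffy counters from the 'cpu' line
    of /proc/stat: (user, nice, system, idle, iowait, irq, softirq).
    """
    deltas = [b - a for a, b in zip(before, after)]
    return 100.0 * deltas[4] / sum(deltas)  # field 5 is iowait

# Two hypothetical samples taken a few seconds apart:
t0 = (100, 0, 50, 700, 150, 0, 0)
t1 = (200, 0, 100, 1400, 300, 0, 0)
print(iowait_pct(t0, t1))  # 15.0
```

Tools like iostat and sar report the same ratio; the point is that it measures time the CPU sat idle *while* I/O was outstanding, which is why it collapses once the working set fits in RAM or flash.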
Now the server sits there with a few hundred gigabytes of flash, 64GB RAM, and it looks completely idle on usage graphs, but it's serving data faster than ever.
They are nice devices if you can afford them (and tolerate their quirks like needing to be completely formatted during firmware upgrades).
Have you used them long enough to gauge longevity? How long do they last you? Do you have to use any special software to ensure there are not too many rewrites?
It's MySQL with the Percona patches using InnoDB (not XtraDB yet).
I've looked into rethinkdb, but it's not quite ready yet for a bet-the-company-on-it install. We didn't order any extra $10k+ flash cards just to play around with.
The RamSan flash cards took a while to get ahold of too (the government (NSA if I recall correctly) buys them by the thousands).
Is this worth the cost premium? It seems like you could get similar performance from 4 standard SATA SSDs in a striped RAID, and the cost would be a bit less than $2k.
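One crude way to frame that comparison is dollars per thousand IOPS. Every price and IOPS figure below is hypothetical, purely to illustrate the arithmetic:

```python
def dollars_per_kiops(price, iops):
    """Cost per thousand IOPS, a crude way to compare storage options."""
    return price / (iops / 1000.0)

# Hypothetical figures, not measured: four SATA SSDs in RAID-0
# (random-read IOPS are roughly additive across a stripe)
# versus one PCIe flash card.
stripe = dollars_per_kiops(4 * 450, 4 * 10_000)   # ~$1800 for ~40k IOPS
pcie = dollars_per_kiops(10_000, 120_000)         # ~$10k for ~120k IOPS
print(round(stripe, 1), round(pcie, 1))  # 45.0 83.3
```

Note that striping doesn't change the $/IOPS of the SSDs themselves, it only raises the aggregate ceiling; so the real question is whether the stripe's ceiling covers your peak load, and how much you value the PCIe card's lower latency per operation.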
MySpace's new servers also replaced its high-performance hosts that held data in large RAM cache modules, a costly method MySpace had been using in order to achieve the necessary throughput to serve its relational databases. MySpace said its new servers using the NAND flash memory modules give it the same performance as its older RAM servers.
Given Facebook's dependence on memcached (look at some of the work they've done at optimizing the Linux network stack) I wonder if this is something they're considering. This is a pretty big leap in terms of performance. I just wish the cost wasn't so insane.
And the longevity of these drives is a concern. What happens when you run out of good bits in the drive?
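A back-of-envelope answer (all figures hypothetical, not from any vendor's datasheet): with decent wear leveling, even a heavy sustained write load takes a long time to exhaust SLC-class endurance.

```python
def endurance_years(capacity_gb, pe_cycles, write_gb_per_day, write_amp=2.0):
    """Rough wear-leveled lifetime estimate for a flash device.

    Assumes writes are spread evenly across the whole device (ideal
    wear leveling) and a fixed write-amplification factor.
    """
    total_writes_gb = capacity_gb * pe_cycles   # raw program/erase budget
    effective_gb = total_writes_gb / write_amp  # after write amplification
    return effective_gb / write_gb_per_day / 365.0

# Hypothetical: 450 GB of SLC rated for 100k P/E cycles,
# sustaining 500 GB of host writes per day:
print(round(endurance_years(450, 100_000, 500), 1))  # 123.3
```

The controller's wear leveling handles spreading rewrites for you; the practical failure mode is running out of spare blocks, at which point the drive drops to read-only (or simply fails), which is why RAID or replication still matters.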
FWIW, per the Google and CMU retrospectives, vendor-published MTBF rates appear very optimistic: about 36% of failed drives in large HDD populations had no SMART errors logged, and only about half of impending HDD failures were reasonably predicted by SMART. I'd expect the predictive value of the various SMART data points to be (very) different with SSDs, too.
Once we get a population of these SSDs into the field and better studied, we'll have a better idea of the failure rates, and of whether SMART needs to be reconsidered for them.
James Hamilton referenced this in a blog post today. He was pretty skeptical. He's been a proponent of SSDs in some applications, but he can't see how their cost can be justified yet on the basis of power efficiency alone.
The power efficiency plus higher overall IOPS justify the up-front cost in applications with a high ratio of IOPS to GB stored, but most data is not accessed frequently, and I'm sure social networks are no exception. It makes sense, though, to use flash to reduce the need for RAM caches, since it is both cheaper per GB than RAM and draws less power.
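That tradeoff can be sketched as a tiny provisioning model (all numbers hypothetical): flash pays off when IOPS demand, not capacity, dictates how many devices you must buy.

```python
import math

def drives_needed(data_gb, iops_needed, gb_per_drive, iops_per_drive):
    """Buy enough devices to satisfy both the capacity and the IOPS target."""
    by_capacity = math.ceil(data_gb / gb_per_drive)
    by_iops = math.ceil(iops_needed / iops_per_drive)
    return max(by_capacity, by_iops)

# Hypothetical: a 300 GB working set that needs 20k random IOPS.
# 15k SAS disk: 300 GB, ~180 IOPS. Flash card: 320 GB, ~100k IOPS.
print(drives_needed(300, 20_000, 300, 180))      # 112 -- bought for spindles
print(drives_needed(300, 20_000, 320, 100_000))  # 1  -- bought for capacity
```

At a low IOPS/GB ratio the inequality flips: capacity dominates, and cheap spinning disks win again.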
At the point when you'd need to vertically scale up from shared spinning platters to an SSD, you probably want the performance boost of running on the actual machine. ServerBeach has some Power Servers on sale that you can upgrade to an SSD for like $160/mo, roughly the cost of a reserved Medium instance.
[1]: http://www.ramsan.com/products/ramsan-20.htm