MySpace replaces all server hard disks with flash drives (computerworld.com)
44 points by Flemlord on Oct 14, 2009 | hide | past | favorite | 18 comments


The article is a blatant advertisement. Let me counter-advertise with my experiences here.

We did evaluations of a Fusion-io ioDrive card and a Texas Memory Systems RamSan card [1] recently. The ioDrive had strange performance characteristics (but nicely designed packaging). We ended up going with a few RamSan cards (though, they have much uglier packaging).

We have a few high throughput MySQL instances that needed to perform better. The database in question is only a few hundred gigabytes in size, so it's a perfect fit for the current generation of server flash cards.

Our path to improving performance went:

  - Started off with 32GB RAM and RAID-10 on SAS disks.
      - iowait sat between 10% and 15% constantly.
  - Moved up to 64GB RAM.
      - iowait cut in half.
  - Installed RamSan card and moved mysql with all databases to it.
      - iowait became negligible.
Now the server sits there with a few hundred gigabytes of flash, 64GB RAM, and it looks completely idle on usage graphs, but it's serving data faster than ever.
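
For anyone wanting to reproduce those iowait numbers without iostat: they come from the aggregate "cpu" line in /proc/stat, where the 5th counter is iowait ticks. A minimal sketch (the sample lines below are made up) of turning two samples into a percentage:

```python
# iowait = fraction of CPU time spent waiting on I/O, computed from two
# samples of the aggregate "cpu" line in /proc/stat (5th counter is iowait).
def iowait_pct(sample1, sample2):
    t1 = [int(x) for x in sample1.split()[1:]]
    t2 = [int(x) for x in sample2.split()[1:]]
    delta_iowait = t2[4] - t1[4]       # iowait ticks between samples
    delta_total = sum(t2) - sum(t1)    # all ticks between samples
    return 100.0 * delta_iowait / delta_total

# Two hypothetical samples taken a moment apart:
s1 = "cpu 1000 0 500 8000 1500 0 0 0"
s2 = "cpu 1800 0 900 14200 2100 0 0 0"
print(round(iowait_pct(s1, s2), 1))  # → 7.5
```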

They are nice devices if you can afford them (and tolerate their quirks like needing to be completely formatted during firmware upgrades).

[1]: http://www.ramsan.com/products/ramsan-20.htm


Have you used them long enough to gauge longevity? How long do they last you? Do you have to use any special software to ensure there are not too many rewrites?
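
Back-of-the-envelope, the lifetime question comes down to (capacity × P/E cycles) ÷ (daily writes × write amplification). A sketch, with every number below purely hypothetical:

```python
# Rough flash endurance estimate; all inputs here are made-up illustrations.
def endurance_years(capacity_gb, pe_cycles, writes_gb_per_day, write_amp=2.0):
    # Total data the cells can absorb, discounted by write amplification.
    total_writes_gb = capacity_gb * pe_cycles / write_amp
    return total_writes_gb / writes_gb_per_day / 365.0

# e.g. a 300GB card rated at 100k P/E cycles, absorbing 500GB of writes/day:
print(round(endurance_years(300, 100_000, 500), 1))  # → 82.2 (years)
```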


Curious, did you get these performance gains with off-the-shelf MySQL+MyISAM, or did you consider YC company RethinkDB (http://www.rethinkdb.com/)?


It's MySQL with the Percona patches using InnoDB (not XtraDB yet).

I've looked into rethinkdb, but it's not quite yet ready for a bet-the-company-on-it install. We didn't order any extra $10k+ flash cards just to play around with.

The RamSan flash cards took a while to get ahold of too (the government (NSA if I recall correctly) buys them by the thousands).


Is this worth the cost premium? It seems like you could get similar performance using 4 standard SATA SSDs in a striped RAID, and the cost would be a bit under $2k.


In many cases a heavy RAID setup is overkill; just a hardware mirror plus replication to slower backup servers can be a reasonable tradeoff.


This is the most interesting bit:

  MySpace's new servers also replaced its high-performance 
  hosts that held data in large RAM cache modules, a costly 
  method MySpace had been using in order to achieve the 
  necessary throughput to serve its relational databases. 
  MySpace said its new servers using the NAND flash memory 
  modules give it the same performance as its older RAM 
  servers.
Given Facebook's dependence on memcached (look at some of the work they've done at optimizing the Linux network stack) I wonder if this is something they're considering. This is a pretty big leap in terms of performance. I just wish the cost wasn't so insane.

And the longevity of these drives is a concern. What happens when you run out of good bits in the drive?


IIRC, MySpace rolled their own "memcache" server... so that statement could be misleading.


Is there some kind of S.M.A.R.T. check with those SSDs, like there is with hard drives, to give you a clue when the thing will fail?


FWIW, per the Google and CMU retrospectives, the vendor-published MTBF rates appear very optimistic: about 36% of failed drives in large HDD populations had no SMART errors logged, and only about half of impending HDD failures were reasonably predicted by SMART. I'd expect the predictive value of the various SMART data points to be (very) different with SSDs, too.

Once we get a population of these SSDs into the field and better studied, we'll have a better idea of the failure rates, and of whether SMART needs to be considered or reconsidered.
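
Some SSDs do expose wear data through SMART already: smartmontools' `smartctl -A` reports vendor attributes, and Intel SSDs report attribute 233 (Media_Wearout_Indicator), a normalized value counting down from 100 as the flash wears. A sketch of pulling it out of that output; the sample text below is fabricated, not from a real drive:

```python
# Hypothetical parser for `smartctl -A` output: pull the normalized value of
# the wear attribute so monitoring can alert before it reaches the threshold.
SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
233 Media_Wearout_Indicator 0x0032   097   097   000    Old_age   Always       -       0
"""

def wearout_value(smartctl_output, attr="Media_Wearout_Indicator"):
    for line in smartctl_output.splitlines():
        fields = line.split()
        if len(fields) >= 4 and fields[1] == attr:
            return int(fields[3])  # normalized VALUE column; 100 means new
    return None

print(wearout_value(SAMPLE))  # → 97
```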


(Please pardon my spelling.)


James Hamilton referenced this in a blog post today. He was pretty skeptical. He's been a proponent of SSDs in some applications, but he can't see how their cost can be justified yet on the basis of power efficiency alone.

The power efficiency plus higher overall IOPS justify the up-front cost in applications with a high ratio of IOPS to GB stored; but most data is not accessed frequently, and I'm sure social networks are no exception. It totally makes sense, though, as a way to reduce the need for RAM caches, since flash is both cheaper per GB than RAM and draws less power.
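
The IOPS-per-dollar side of that argument is easy to put numbers on. A toy comparison, with made-up 2009-ish prices and IOPS figures:

```python
# Cost-per-IOPS comparison; prices and IOPS numbers here are illustrative
# guesses, not quotes for any real product.
def cost_per_iops(price_usd, iops):
    return price_usd / iops

hdd = cost_per_iops(200, 180)         # 15k RPM SAS: ~180 random IOPS
ssd = cost_per_iops(10_000, 100_000)  # PCIe flash card: ~100k IOPS
print(f"HDD ${hdd:.2f}/IOPS vs flash ${ssd:.2f}/IOPS")
# → HDD $1.11/IOPS vs flash $0.10/IOPS
```

On a per-GB basis the spinning disk still wins easily; it's only when the workload is IOPS-bound that the flash card comes out ahead.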


I'm surprised no virtual hosters are yet offering SSDs (as far as I know).

I suppose there's some risk one customer could burn out the SSD write-cycles then discard the node. Solution: charge for writes.
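
Metering that wouldn't be hard; a toy sketch of per-customer write billing (class name and rate invented for illustration):

```python
# Hypothetical write metering for a virtual host: accumulate bytes written
# per customer and bill per GB at a made-up rate.
class WriteMeter:
    def __init__(self, usd_per_gb=0.05):
        self.usd_per_gb = usd_per_gb
        self.bytes_written = {}

    def record(self, customer, nbytes):
        # Called from the I/O accounting path for each write.
        self.bytes_written[customer] = self.bytes_written.get(customer, 0) + nbytes

    def bill(self, customer):
        gb = self.bytes_written.get(customer, 0) / 1e9
        return round(gb * self.usd_per_gb, 2)

m = WriteMeter()
m.record("cust-a", 40_000_000_000)  # 40 GB of writes this period
print(m.bill("cust-a"))  # → 2.0
```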


At the point when you'd need to vertically scale up from shared spinning platters to a SSD, you probably want the performance boost of running on the actual machine. ServerBeach has some Power Servers on sale that you can upgrade to a SSD for like $160/mo, roughly the cost of a reserved Medium instance.


SoftLayer have Intel SSDs.


I only see it as an option on their dedicated servers -- not the CloudLayer computing instances. Am I missing something?


Conventional Drives or Solid State Drives, it's still the Internet's ghetto.


Conventional Drives or Solid State Drives, it's still the Internet's ghetto.

Here is the racial implication of that statement :-)

http://www.cnn.com/2009/TECH/science/10/13/social.networking...



