Honestly, what stemmed this conversation was primarily my own curiosity. The "SAM-SD" is nothing more than a bunch of drives in a server chassis with redundant power supplies. Especially since you like to point people to essentially the same thing, I'm just curious if anyone has used them, how well they performed, and what they like/dislike about them. If I wanted a production, clustered, etc. system, then I would look into a true NAS or SAN product. That way, if it breaks, I know who to yell at.

(The SAM-SD approach actually arose because the biggest NAS vendor could not produce a reliable product at the needed scale for a big investment bank. We found that using commodity enterprise servers and Linux, we could build something faster and more reliable for $20K that blew the doors off of the $500K "supported" NAS device. We knew who to yell at for the NAS, but that didn't get us our money back or a product that worked; it was just cathartic to yell at someone.) Buy a NAS when politics matter over production; build a NAS when profits are the goal.

Even at this size, you can build a decently reliable storage device, but you have to do a lot of planning and you need serious hardware to do it. The cost cutting of the BackBlaze Pod node approach doesn't work for a standalone system; you need better hardware, designed for this type of use. There are caveats to going too big in a single chassis, and they have to be weighed, but when it makes sense you can make it work. Tons of CPU horsepower and lots of RAM are generally needed. RAID 60 or RAID 10 is likely needed, if not RAID 70. Drive speeds and types have to be considered carefully. This is where using Solaris or Illumos might make sense.

I think the largest part of the separation between building your own and a commercial NAS/SAN product is the redundant controllers. If I ever built my own NAS, I would want two; I prefer to have two of everything if possible, since I'm never one to put all my eggs in a single basket.
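To make the RAID 60 vs. RAID 10 trade-off above concrete, here is a rough back-of-the-envelope sketch. The drive count, drive size, and span width are my own assumed numbers for a hypothetical pod-style chassis, not figures from this thread; adjust them for your actual hardware.

```python
# Rough usable-capacity and fault-tolerance comparison for a large chassis.
# Assumed numbers: hypothetical 45-bay pod-style chassis with 4 TB drives,
# one bay reserved for a hot spare.

DRIVES = 44
DRIVE_TB = 4

def raid10_usable(drives, size_tb):
    """RAID 10: mirrored pairs striped together; 50% usable capacity."""
    return (drives // 2) * size_tb

def raid60_usable(drives, size_tb, span_size=11):
    """RAID 60: RAID 6 spans striped together; each span gives 2 drives to parity."""
    spans = drives // span_size
    return spans * (span_size - 2) * size_tb

print(f"RAID 10: {raid10_usable(DRIVES, DRIVE_TB)} TB usable, "
      f"survives 1 failure per mirror pair")   # -> 88 TB
print(f"RAID 60: {raid60_usable(DRIVES, DRIVE_TB)} TB usable, "
      f"survives 2 failures per span")         # -> 144 TB
```

RAID 60 yields more usable space here, but RAID 10 rebuilds only copy one mirror rather than recomputing parity across a whole span, which matters a lot at this scale.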
I think one of the main challenges I see is when you are doing a rebuild, or syncing a replica, on such an array; that is usually when performance suffers. Does anyone have a good idea of how to get decent performance while such an array is being replicated or rebuilt?

It depends on some factors, like what RAID you choose, what drives you choose, etc. Something like a Red or Green drive array on something like RAID 6 or 7 could easily face weeks or even months of being effectively offline during a rebuild if those are big disks. And the potential for additional failure, with so many parts being stressed for such a long period of time, means that the potential for total failure is very high. The ability for even RAID 6 to reliably survive a resilver on this is very low. That is if you can even wait that long without access to your data.

In a cluster these work, because access and updates go to other cluster nodes. But as a NAS, this is effectively useless. BackBlaze and everyone associated with it have pointed out over and over that this is NOT a NAS chassis and was never meant to be used that way. This is not a design meant to be reliable but meant to be cheap. It is part of a cluster and was never intended to stand on its own like a NAS would. Why anyone ever thought this was a good idea for something so different from what it was designed for is beyond me. If you are building a massive storage cluster with big-time RAIN protection, then it makes sense.
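A naive lower bound shows why rebuilds on big consumer-class disks take so long: the array must rewrite the entire replacement disk, so the rebuild can never finish faster than disk size divided by sustained write speed, and under production load only a fraction of that speed is available. The drive size and throughput figures below are illustrative assumptions, not measurements from this thread.

```python
# Naive lower bound on RAID rebuild time: the controller has to rewrite the
# whole replacement disk, so time >= disk size / sustained write throughput.
# Real rebuilds under production I/O load are often many times slower still.

def rebuild_hours(disk_tb, mb_per_s):
    """Hours to rewrite one disk at a given sustained rate (best case)."""
    total_mb = disk_tb * 1_000_000  # decimal TB -> MB
    return total_mb / mb_per_s / 3600

# Assumed figures for an 8 TB consumer-class (e.g. WD Red/Green) drive:
best_case = rebuild_hours(8, 150)  # ~150 MB/s sustained on an idle array
loaded = rebuild_hours(8, 15)      # ~15 MB/s left over under heavy load

print(f"Idle array: {best_case:.0f} h; "
      f"busy array: {loaded:.0f} h (~{loaded / 24:.1f} days)")
```

Even this optimistic model ignores parity computation and seek overhead; with RAID 6 across dozens of large, slow disks, the "weeks of degraded operation" scenario described above is easy to reach.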