
RAID 10 would only fail under extraordinary circumstances, right?


Thread replies: 30
Thread images: 1

File: storage_raid_10_desktop[1].png (90KB, 600x440px)
RAID 10 would only fail under extraordinary circumstances, right?
>>
>>62007842
It would take 2 disks failing at the same time, and they would have to be a mirrored pair: both 1 and 2, or both 3 and 4. You can tolerate losing one of 1/2 plus one of 3/4. So if 2 disks fail at the same time, 1/3 of the time you are fucked. But 2 disks failing at the same time is rare.
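(A quick sanity check of that 1/3 figure in Python; the mirror pairing 1+2 / 3+4 is taken from the layout above:)

from itertools import combinations

# 4-disk RAID 10: disks 1+2 form one mirror, disks 3+4 the other.
mirrors = [{1, 2}, {3, 4}]

pairs = list(combinations([1, 2, 3, 4], 2))
# The array only dies when both failed disks belong to the same mirror.
fatal = sum(1 for pair in pairs if set(pair) in mirrors)

print(f"{fatal}/{len(pairs)} simultaneous two-disk failures are fatal")  # 2/6 = 1/3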
>>
>>62007842
It's not that hard to fall into 'extraordinary circumstances'.
For instance: say you buy 4 HDDs which all end up being from the same manufacturing batch. One of them fails around 4 years in, so you replace it and rebuild the array. But during the full-array read the rebuild requires, the strain finishes off its mirror partner, and RIP all your data.
tl;dr don't buy from the same batch, and keep an offsite backup.
>>
i just did a little more reading, and apparently your entire RAID is fucked if you have to rebuild and there is a single read error

it seems that you can't have peace of mind with a RAID setup unless you have full backups, but i don't know how practical that is when you have over 30TB of data
>>
ZFS should be more reliable.
>>
>>62008052
Exactly, you can't trust RAID. Just go with a cloud solution and you'll find your peace of mind.
>>
>>62007842
One overvoltage event from your power supply and all disks are gone. The likelihood depends on the hardware.
It happened to me once.
>>
>>62008052
You are confusing it with RAID5. And even there it does not have to be an issue, depending on your setup.
>>
>>62007842
Uh, you're only really safe against a single drive failure. A second drive goes and the array may be fucked, even if there's a chance it isn't.

Honestly, use RAID6 instead. Or just 2 redundant copies if you can use Ceph or some such nice distributed storage thing.
>>
>>62008052
>>62008342
>apparently your entire RAID is fucked if you have to rebuild and there is a single read error
>You are confusing it with RAID5.
Unless you're using some shit Windows proprietary RAID thing, you're probably fine with a bit error. It shouldn't even fuck a block; at worst, a very unlikely kind of error will fuck one block.

And depending on the data, one lost block is either a tiny annoyance or terrible, but for the latter kind of data you really should have 1 or more backups anyhow.
>>
>>62008313
is this really specific to RAID? seems like too much voltage would ruin any system
>>
>>62007842
>RAID 10
It's insanely wasteful for providing just 1-disk guaranteed redundancy.

Example: 20TB of raw capacity spread over 10 drives of 2TB each; with RAID 10 you get 10TB usable capacity and 1-disk guaranteed redundancy.

Same setup but with RAID5 instead, and you get 18TB usable capacity with the same 1-disk redundancy.

Of course the best solution is ZFS with RAID-Z1 (or -Z2 if you want 2-disk redundancy, -Z3 for 3, etc.), because you get the advantages of RAID5 with better performance, plus the great benefit of self-healing, i.e. repairing silent data corruption the moment it's detected.

And then you also get the general benefits of ZFS, like snapshots that can be rolled back live, self-healing even without RAID (via checksums stored in parent blocks), automatic data compression and deduplication, etc.

>what's the catch?
It's memory hungry. That's all. No special devices, hardware RAID cards, or anything. Add some cheap RAM that you (mentally) allocate for ZFS and you're golden.
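(The capacity math above is easy to sanity-check; a minimal sketch in Python, pure arithmetic, ignoring the filesystem overhead ZFS adds in practice:)

def usable_tb(n_drives, drive_tb, level):
    # Usable capacity for a few common layouts.
    if level == "raid10":              # half the drives hold mirror copies
        return n_drives * drive_tb / 2
    if level in ("raid5", "raidz1"):   # one drive's worth of parity
        return (n_drives - 1) * drive_tb
    if level in ("raid6", "raidz2"):   # two drives' worth of parity
        return (n_drives - 2) * drive_tb
    raise ValueError(level)

for level in ("raid10", "raid5", "raid6"):
    print(level, usable_tb(10, 2, level), "TB usable")
# raid10 10.0 TB, raid5 18 TB, raid6 16 TB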
>>
>>62007842
You're better off with RAID 6.
If you've got a lot of disks, RAID 6 with a hot spare or two.
>>
>>62007842

>using hardware raid in 2017

use zfs
>>
>>62008489
Adding to what this anon says: use ECC memory if your system supports it. It's not that expensive, and it protects against silent bit errors.
>>
okay, you guys have me convinced that zfs is the way to go, but i might be too stupid to build the nas myself

how does this look?
https://www.ixsystems.com/freenas-mini/
>>
>>62008489
>Same setup but with RAID5 instead, you get 18TB usable capacity and the same 1-disk redundancy.

OP is worried about the URE issue with large arrays and the possibility of running into one during a rebuild.

OP could get quality drives rated at 1 URE per 125TB read.
The chance won't come down to 0%, but to something more manageable, maybe something OP could live with.

I think you forgot one con of ZFS: you can't just 'add' disks to it like you can with RAID.
You gotta buy all the disks up front.
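(That URE rating turns into a concrete rebuild-failure estimate; a rough sketch, assuming errors are independent per bit — 1 URE per 10^15 bits read is the quoted 1-in-125TB class, while 10^14 is a typical consumer-drive spec:)

def rebuild_ure_probability(read_tb, bits_per_ure=1e15):
    # Chance of hitting at least one unrecoverable read error
    # while re-reading `read_tb` terabytes during a rebuild.
    bits = read_tb * 1e12 * 8
    p_bit = 1.0 / bits_per_ure
    return 1.0 - (1.0 - p_bit) ** bits

# Rebuilding the 18TB RAID5 example means re-reading ~18TB:
print(f"{rebuild_ure_probability(18):.1%}")        # ~13.4% on 10^15-class drives
print(f"{rebuild_ure_probability(18, 1e14):.1%}")  # ~76% on 10^14 consumer drives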
>>
>>62008472
Not if you have two physically separated machines.
>>
>>62008645
>I think you forgot one con of ZFS, you can't just 'add' disks to it like you can with RAID.
>You gotta buy all the disks up front.
What??

Not at all. RAIDs are generally inflexible (especially hardware ones).

Also, ZFS has the most flexible solution with vdevs and pools.

A vdev (virtual device, usually consisting of multiple drives in a RAID configuration, like RAID-Z1) can be expanded by adding new drives *live*.

Multiple vdevs make up a pool. New vdevs can be added to expand a pool *live*.

Also, ZFS is engineered to stay usable even while rebuilding a vdev, unless you give the rebuild process higher priority than the system's usability during the rebuild.

How can anything be more flexible than that?
Do you have a specific configuration or filesystem in mind?
>>
>>62008852
Or, you know, instead of a duplicate machine, a cheap-ass UPS.
>>
>>62008645
>I think you forgot one con of ZFS, you can't just 'add' disks to it like you can with RAID.
how the fuck can anyone be that dumb, and why do they always feel the need to spout bullshit about stuff they don't know shit about?
in zfs you CAN just 'add disks' or whole raids
in raid you CAN'T just 'add disks', and you gotta buy all the disks upfront.

0/2, try again
>>
>>62008472
we are getting into doomsday scenarios, man...

Do you want to make sure the NAS can survive a flood and a lightning strike too?

I am sure you could configure some offsite backup of critical files to the cloud if you wanted.

>>62008865
Sorry I wasn't clear enough about what I was talking about.
http://louwrentius.com/the-hidden-cost-of-using-zfs-for-your-home-nas.html


>>62008925
ok ok, you got me, i was talking about raid in the context of pre-built NAS boxes, like Synology or QNAP.
Maybe not with some rigid raid system, but I know QNAP lets you just add drives over time and change the raid level. You could go from 2 drives in RAID 1 to 3 drives in RAID 5 as you add drives over time.
>>
Raid 5+0.... Use 10 drives. Gg. That parity yo.
>>
>>62008290
>not everyone can upload 30+TB of data
i for one could only do such a thing if i bought a business connection; otherwise i'd be uploading at 2MB/s
with my current backup i'd be uploading for half a year
And this, children, is why you get a proper backup solution (tapes) and not some cloud meymey
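(The half-a-year figure checks out, for what it's worth:)

data_tb = 30       # backup size from the post above
speed_mb_s = 2     # upload speed in MB/s

seconds = data_tb * 1e12 / (speed_mb_s * 1e6)
print(f"{seconds / 86400:.0f} days")  # ~174 days, about half a year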
>>
>>62008429
>but for the latter kind you really should have 1 or more backups anyhow.
how do you know that though?
if you have several TB of important data that doesn't consist of a handful of binary blobs but of millions of text files, then a backup won't be of much help, since who knows how many years it will take you to find the error, if you find it at all
>>
>>62009200
Sure, there's also that kind of data.

But for the vast majority of data, if some bit corruption isn't found in half a year or five years or whatever your backup scheme extends to, it probably just doesn't really matter.

But sure, if you do have immensely valuable data, I guess you go full cloud on mixed hardware in geographically distinct locations and re-verify and re-replicate data chunks at relatively short intervals against a set of two or three checksums.
>>
Threadly reminder that RAID is for redundancy and is not a backup solution.
>>
>>62008865
>ZFS has the most flexible solution with vdevs and pools.
lol get the fuck out of here you fucking retarded shill
if you want flexibility btrfs is king
>>
RAID 0 is good for increasing performance in certain workloads when your data can easily be reproduced and going offline isn't a big deal.
RAID 1 is good for read-heavy workloads and increasing availability.
RAID 5 is good for giving you a false sense of security and losing all your data.
RAID 10 is good for increasing performance and availability, but only if you can afford to mirror three ways.

Points to know:
RAID is not a backup, at all.
You must be able to tolerate any two drive failures for a RAID setup to be truly fault tolerant for high availability; a second failure during a rebuild is very common (rough numbers below).
Always keep a spare drive available, and swap it in asap when a disk drops.
Mix disks of different ages or manufacturers within your redundancy layer to reduce the risk of multiple simultaneous failures. They happen.
Don't ever for a second think RAID counts as a backup.
Tape is cheap, mirrors are expensive. For the love of god, take backups.
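(To put rough numbers on the second-failure risk: a minimal sketch assuming independent failures, with an illustrative 5% annualized failure rate and a 48-hour rebuild window; real same-batch drives fail in correlated ways and UREs add on top, so treat this as a floor, not an estimate:)

def p_second_failure(n_remaining, afr=0.05, rebuild_hours=48):
    # Chance that at least one surviving disk also fails during
    # the rebuild window, assuming a constant annualized failure
    # rate (afr) per disk and independence between disks.
    p_one = afr * rebuild_hours / (365 * 24)
    return 1.0 - (1.0 - p_one) ** n_remaining

# 10-disk RAID5 with one disk already dead, 9 survivors:
print(f"{p_second_failure(9):.2%}")  # ~0.25% per rebuild, before UREs
# RAID6 survives that second failure; it takes a third to kill it.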
>>
>>62008865
>A vdev (virtual device, usually consisting of multiple drives in a RAID configuration, like RAID-Z1) can be expanded by adding new drives *live*.
That's just not true.