It does not matter how many you order, it will always be DoA.
>>
thats a fucking nice looking drive
>>
why is it DoA anon
>>
File: 1366540-1.jpg (39KB, 488x700px)
>>56555382
A challenger appears
>>
>>56555180
>10TB Seagate
I dread the day we get one of those fuckers in broken at work.
>>
>>56555475
man why do 10TB drives get such amazing designs
>>
File: 6E040L0.jpg (27KB, 300x300px)
>>56555542
At some point in history manufacturers stopped making drives look cool.
>>
>>56555475
This. HGST is Seagate's biggest competitor, both are similarly priced and both have great lineups. Hitachi doesn't make high capacity drives and WD is too expensive.
>>
>>56555523
This is why RAID exists. Who the fuck would store important data without backup and/or redundancy?
>>
>>56555592
HGST stands for Hitachi Global Storage Technologies, or at least did.
>>
>>56555626
Isn't HGST a division of WD while HITACHI is a division of Toshiba? I forgot, maybe it changed.
>>
>>56555592
>>56555626
Hitachi's hard drive subdivision rebranded itself HGST when WD bought it from Hitachi
>>
>>56555650
No idea, but likely considering DT01ACAs use the same firmware as HT CLAs.
>>
>>56555650
>>56555654
Hitachi itself is a company-state

>In March 2011, Hitachi agreed to sell its hard disk drive subsidiary, Hitachi Global Storage Technologies, to Western Digital for a combination of cash and shares worth US$4.3 billion.[15] Due to concerns of a duopoly of WD and Seagate by the EU Commission and the Federal Trade Commission, Hitachi's 3.5" HDD division was sold to Toshiba. The transaction was completed in March 2012.[16]
>>
File: backblaze-survivability.png (21KB, 1600x1000px)
>>56555592
HGST offsets its higher price with the money you save in the long run from having to replace drives significantly less often
>>
>>56555584
not posting RAPTOR X
>>
>>56555555
>>
>>56555727
HGST is owned by WD.
>>
>>56556341
How is that relevant whatsoever to my post?
>>
File: top.jpg (39KB, 512x371px)
>>56556112
This.
Shit was so cash, I remember seeing it in PC magazines and wishing I had the cash for one.

If only I'd known I'd one day have SSDs so much faster than that shit, it would have saved me many hours gazing at hardware mags.

They are still the most aesthetic HDDs.
>>
>>56556420
>If only I'd known I'd one day have SSDs so much faster than that shit, it would have saved me many hours gazing at hardware mags.
I'm sure in 10-20 years' time we'll be saying the same about those 60 TB SSDs
>>
>>56555542
These two are Helium drives
>>56555475
>>56555180
They need completely sealed casings to keep any helium from leaking out and any moisture from getting in. Your current hard drives have tiny vent holes covered by white stickers to stop dust from getting in there. That's why they look awkward and ungainly from the top view: the manufacturer's label needs to fit around those strategically placed vents.
>>
>>56556420
I remember when they made everything windowed, even PSUs. The problem with those hard drives was that the plastic window would warp and crack over time, leading to platter death from plastic particulates that chipped off.
>>
>>56555584
/thread
Holy fuck was Maxtor fucking shit.
Ten years ago I used to work at Fry's and every fucking hard drive return was a Maxtor; we'd get like 3 returned a day because of DOAs.

I also remember when Seagate was amazing back in '05. Now it's pretty much you either go Hitachi, WD, or Samsung, or don't go at all.
>>
>>56556420
these were practically the "gaming" hard drives of the mid-00s
>>
>>56556538
Maxtor is back. They're selling rebranded Samsung external hard drives with Seagate 1-6TB drives.
>>
>>56555475
My dick!
>>
>>56555475
>$720

What's the point? It's cheaper to build a full-blown NAS with RAID out of 1tb drives.
>>
File: 1450188821534.gif (1MB, 300x226px)
>>56557618
And pretty much guaranteeing raid failure and losing everything
>>
>>56557866
RAID is made to not lose anything, you idiot.
>>
>>56557872
Do you even know how raids work dummy? Stop while you're ahead
>>
>>56557895
>doesn't even know what RAID stands for

>>>/b/
>>
>>56555180
>Seagate Barracuda

The early 2000s were nice!
>>
>>56557916
No no, please, tell me how you would build a raid with ten 1tb drives cheaper than $720 without making it a time bomb or having to spend $1200+. Stupid gamer ass, stop trying to act like you know anything.
>>
>>56558006
12 of the cheapest 1TB WD Greens in RAID 5 in a cheap NAS box.

Keep replacing drives when they fail after a while, no information lost, the speed will exceed any 10tb shitbox.

You don't even know what raid is for, do you?
>>
Pretty sure 1TB drives aren't the lowest $/GB
>>
>>56558076
bhahaha I fucking knew you were going to say raid 5 with cheap home drives. Thank you for proving me right good sir. Congratulations you just built a fucking time bomb.
>10TB raid 5 of non enterprise 7200rpm drives suicide
>fake/shit tier raid card
>1 drive fails
>rebuild taking 2 fucking weeks in this setup
>99% chance of a URE during that time
>poof bye data
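For anyone curious, a rough sketch of the math behind that (assuming the usual 1-in-10^14-bits URE spec quoted for consumer SATA drives and treating errors as independent; real drives vary a lot):

# rebuilding a 12x1TB RAID 5 means reading all 11 surviving TB = 11 * 8e12 bits
awk 'BEGIN { bits = 11 * 8e12; p = 1 - exp(-bits * 1e-14);
             printf "P(at least one URE during the rebuild) ~ %.0f%%\n", p * 100 }'

So with 1TB disks it's roughly a coin flip per rebuild rather than a certainty, but the number climbs fast with bigger drives or a worse URE spec.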
>>
>>56558006
>>56558076

Guys... to match a 10TB hard disk in a RAID setup you'd need 20 or 30TB of storage.....
>>
>>56558159
Yup.
5x 2TB drives, a SATA port multiplier, and the cables would run you like $250
>>
>>56558162
There is no need to be upset.
Maybe when you grow up they will teach you about raid in school, son.
>>
>>56558177
You don't have to mirror the drives. You actually just need about 13.33TB of raw capacity to store 10TB (e.g. RAID 5 across four disks: three data plus one parity).
>>
>>56558178
Pretty much. I ended up getting a bunch of 4tb drives, but that's because I wanted to maintain a little density.
>>
>>56558179
It's ok man, everyone starts somewhere. No one uses fucking RAID 5 anymore, especially with 10 fucking TB, holy shit lol. But keep teaching yourself, it sounds like you have a good start. RAID 10 is the only acceptable answer. Thanks for playing though
>>
>>56558212
Someone was talking about redundancy tho
>>
>>56558212
Might as well do a raid 0
>>
>>56558232
Distributed parity, m8. It only tolerates 1 HDD failing at a time, but that's more than enough.
>>
>>56557618
>>56557866
It's not for you.
>>
>>56558264
Sure it does, and then you have a degraded array for 2 weeks; during that time everything runs at a crawl and the smallest disk error makes you lose everything. You don't even have to lose a second drive. For a RAID that big you do RAID 10, MAYBE 6 if you feel ballsy, and that's with enterprise-grade drives. A 10TB RAID 5 rebuild is Russian roulette; it kills the point of a RAID.
>>
>>56556538
That's funny. I have like 5 old IDE Maxtor HDDs and they all still work.
>>
>>56558212
>>56558264
If you know absolutely nothing about what you're saying, can you just keep your mouth shut?
>>
>>56558292
You're right, a 10TB RAID 5 is not for me. I actually know what redundancy means.
>>
>>56558383
seagate shills in full force
>>
>>56558430
How does calling someone out on b.s. make him a Seagate shill?
>>
>>56555180
Of course! Because Seagate is a piece of crap and shit. Lowest lifespan, unreliable.
>>
>>56558494
Nice try changing the subject after realizing you made a fool of yourself
>>
Kind of on topic: my 1TB WD Black is finally failing (shows a high number of reallocated sectors). I'm not sure what brand to go with, Hitachi or another WD drive (maybe a different color)?
>>
>>56558544
HGST or WD black,
Or if you're poor, WD blue is good
>>
I'm barely accepting of 4TB HDDs. It's clear the more platters you add, the higher the risk of failure, but also the higher the temperature of the drive. SSD capacity has already surpassed HDD, but it's too expensive at the moment. I would love to load up my case with 16 16TB SSDs and swim in a bed of 16TB SSDs.
>>
>>56558563
k
>>
File: Raid.png (77KB, 1426x752px)
Data is important
>>
>>56558179
>There is no need to be upset.
>Maybe when you grow up they will teach you about raid in school, son.
He's right though. With today's drive capacities a rebuild takes so long and is so demanding that drive failures during rebuild are pretty common. In such a case you are fucked with RAID5 and lose everything.
>>
>>56558521
Seagate is crap tier. I miss the old Western Digital and Maxtor drives, those were reliable. Seagate is just an NSA backdoor.
>>
File: 1451269263307.png (22KB, 360x361px)
>>56559046
>maxtor
>reliable
are you even trying?
>>
>>56559046
literally the only drives that have ever failed me in fucking 20 years of an IT career were FUCKING Maxtor.

There's a reason they don't exist anymore.
>>
>>56559231
They were bought out by Seagate.
>>
File: file.png (371KB, 387x500px)
>>56555475
>Ultrastar HE
>>
>>56555727
>backblaze
FUCK OFF.
>>
>>56559231
I have bought like 40 seagate drives in the last 4 years, only 3 remain alive
>>
>>56559231
>>56559231
I've only had 2 hard drive failures in my ~18 years of computer use. First was 80GB Maxtor in 2002, second was a 300GB Maxtor in 2006. Would not buy again.
>>
>>56556524
My windowed PSU is like a decade old and still powers my server :)
>>
>>56559046

LOL clearly you've never actually had to rely on a Maxtor drive before

>>56559067

srsly
>>
>>56559420
I have 2x 500GB, 2x 1TB and 1x 2TB Seagate Barracudas. Out of these only one 1TB drive is still working. I don't buy Seagate drives anymore. I'm not fully trusting of WD either, so I've bought some HGST as well. So far none of them have died, including 2x 2TB WD Greens which are supposed to be high-failure.
>>
>>56558345
>Sure it does, and then you have a degraded array for 2 weeks
RAIDs don't take that long to rebuild. It took me less than 2 days to change the stripe size on an 8x 4TB RAID 6, which is essentially the same thing as a rebuild since it has to rewrite the entire array and recompute parity for it.

>That big of a raid you do raid 10, MAYBE 6 if you feel ballsy
RAID 10 can't always withstand a two-disk failure, RAID 6 can. Think before you speak next time, noraid
>>
>>56555395
Dead on arrival. As in you get it in the mail and it doesn't work.
>>
>>56556420

DESIGNING A HARD DRIVE WITH A WINDOW IS NOT AN EASY TASK
>>
File: 1473640759171.jpg (12KB, 258x245px)
>>56558076
>RAID 5
Only one parity disk for 11 fucking drives, my god what's going on .......
>Keep replacing drives when they fail after a while
>>
>>56559615
How is doing a restripe essentially the same as a rebuild on an online RAID? Big RAID 6 rebuilds can easily take a week, plus I'm sure your two-day rebuild was not on some cheap RAID card. Big RAID 6 write speeds are freaking floppy-disk-esque. Saying 6 is safer than 10 is moot since your RAID is basically USELESS during that nice long rebuild, while a 10 is a simple copy, no parity to recalc. So what's really safer?
>>
>>56559828
He didn't ask what it meant, he asked why it would be DOA.
>>
>>56556504
No, there is only one vent hole and it is clearly labeled. They are ugly because they are hyperoptimized for cost.
>>
>>56559422
Rip my 300GB external Maxtor from the same time period.
>>
File: 1469847314618.png (749KB, 668x878px)
>>56558006
Slap a RAIDz2 across that shite with ZFS for an 8TB array and give it a 512G NAND cache.
Alternatively make it a RAID 10 using ZFS for maximum safety netting at the loss of 3TB.
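Roughly like this, as a sketch only - the pool name "tank" and the device names are placeholders, and on a real box you'd use /dev/disk/by-id paths instead of sdX:

zpool create tank raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj   # ten 1TB disks -> ~8TB usable
zpool add tank cache nvme0n1                                       # the 512G SSD becomes L2ARC (read cache)

The RAID 10 variant is the same idea, just `mirror sda sdb mirror sdc sdd ...` instead of the single raidz2 vdev.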
>>
>>56555475
>comparing an Enterprise HDD to a Desktop drive
>>
>>56555727
>backblaze data

Also, your warranty is meant to cover this. WD, HGST, and Seagate all offer models with 5 year warranties.
>>
>>56558669
Then why do you have it setup in RAID0?
>>
File: no more reasons to live.jpg (102KB, 800x803px)
>>56555180
>10TB drive
>Still uses SATA 6Gb/s
>>
>>56560856
>implying an HDD for high density storage is capable of saturating a SATA III port
I don't think any HDD can
>>
>>56559407
>>56560750
How is backblaze data not valid? They list the model number and amount of drives they're using. Do you think they're out to get certain companies?
>>
>>56556559
Protip: Maxtor IS Seagate, they were aquired by Seagate back in the early 2000's, shortly thereafter, for whatever mysterious reason, Seagate's drive quality tanked to shit.
>>
File: 1469395906701.png (115KB, 444x425px)
>>56555475
>helium
>>
>>56560943
iirc there was something about them using used drives and keeping the drives in poor conditions, no clue if it's true or not.
>>
>>56560732
>BarraCuda Pro
That's not a regular consumer drive
>>
>>56561145
What?
>>
>>56561118
>acquired
>>
>>56561196
It's helium-sealed. In case of a leakage you lose ALL your data.
Unreliable piece of shit. Abort it
>>
>>56556492
> 10- 20

> only 50 TB and not GeoBytes
>>
>>56555612
poor people
>>
>>56561287
I know it's helium sealed, I was assuming from your post that you did not.
>In case of a leakage you lose ALL your data.
Have you really gone 18+ years without ever finding out what a backup is?
>>
>>56558393
>10tb raid 5
This is just hilarious. But sad.
>>
>>56558076
>12 disks
>12 fucking disks
>in RAID5
my sides

At least you're only using 1TB drives, which makes this slightly better than if you were actually RAID5ing 4TB drives or something.

Anyway, somebody clearly hasn't done much research on RAID before deploying his destitute nigger setup. I sincerely hope you get rekt by bit flips, write holes and multi-drive failures and suffer catastrophic data loss. At least you will learn something.
>>
>>56560065
Don't listen to him. Out in the real world there is a simple answer to the question of raid levels:

Go with mirrored copies. Always. Whether you're using mdraid RAID10, a zfs mirror, or even something custom like Ceph, there's one thing that stays constant: every successful deployment uses mirrors.

The reasons are simple:

1. It's significantly safer. RAID5 is basically asking for data loss even with relatively small drives.

2. The risk goes up as your drive capacity increases (because rebuilds take longer); a RAID6 with today's drives is essentially equivalent to RAID5 in the past (and “avoid RAID5 like the plague” has pretty much always been true), so you *really* want to be going for at least 3 parities (think raidz3) if you're using modern HDDs (e.g. 3-4 TB).

3. IOPS. A mirror doubles your IOPS, a stripe doesn't. IOPS is more important than bandwidth for 99% of use cases, so even RAID1 JBOD (e.g. zfs pool or LVM) can be better than RAID10.

4. Large pools are bad. You don't want to stripe together 20 fucking disks, you want to split them up into smaller device groups for IOPS and better rebuild performance, unless you enjoy your pool being cripplingly slow.

5. Parity rebuilds are a pain in the ass. They will be slow, costly and severely degrade your pool's performance during the rebuild.

6. Flexibility. Pooling mirrors lets you make decisions, upgrades, and investments on a 2-disk basis (see the sketch after this list). With a raidz3, at least if you want to gain any amount of efficiency over a RAID10 of equivalent size, you'd need to stripe together many disks at the same time. This makes future upgrades harder and raises the number of disks (i.e. points of failure) you have to be using.

If you put together all of the benefits, there is absolutely no way they don't outweigh the minor storage benefit you would get from a large raidz3 parity raid.

These reasons have not changed over the years, and even the greediest of companies *all* use RAID10.
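For the curious, the 2-disk upgrade path from point 6 looks roughly like this in ZFS terms (a sketch; "tank" and the disk ids are placeholders):

# grow a pool of mirrors two disks at a time - the new mirror vdev is usable immediately, no resilver of existing data
zpool add tank mirror /dev/disk/by-id/ata-NEWDISK0 /dev/disk/by-id/ata-NEWDISK1

A raidz vdev can't be widened after creation, so growing a raidz-based pool means adding a whole second raidz vdev's worth of disks at once.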
>>
>>56560732
>Enterprise
>Consumer
These classifications have no meaning except for marketing. It's a hard drive; either it lasts long or it doesn't, and if it doesn't, you have higher operational costs and risk of failure than with one that does.

Multiple studies by Google, Carnegie-Mellon, Backblaze etc. have confirmed no significant difference between consumer and enterprise drives (in 24/7 server use).

At the end of the day, disregard the label and consider the drive for its technical merit alone (i.e. mean time to failure, unrecoverable read error rate, IOPS, bandwidth) and not its marketing.
>>
>>56561146
You're probably referring to that completely braindead tweaktown article which “dispels” the backblaze “myth”.

That article is so full of shit, and e.g. this one debunks it instead: http://www.zdnet.com/article/trust-backblazes-drive-reliability-data/

Also, I agree about the backblaze-produced bar charts and graphs being completely useless - simply because they obscure the actually interesting data (mean time before failure) via bad presentation (annual failure rates, which are irrelevant because they depend on drive age), and they also made a mistake counting the seagate drive failures (overcounted them).

The chart I posted was an (edited) version of a Kaplan-Meier (KM) survival graph made by a computational biologist over here: http://bioinformare.blogspot.de/2016/02/survival-analysis-of-hard-disk-drive.html

You can find more about the statistical methods used and the way the data was presented on that page, but the gist of it is that all drives are normalized to the same starting point w.r.t age on the x axis, and the y axis tracks which percentage of the total number of drives were still alive after that point in time.

You can clearly see that there *are* some parts in the curves where some drives were subjected to greater-than-normal strain (that big dip downwards), but this affects WD and Hitachi as well, and they don't show nearly as strong a drop in survivability.

Also, the drive age was all over the place, so there's no data to support the idea that their Seagate drives only looked worse because they were older. Finally, the same statistical analysis concludes that HGST is significantly better than average (and Seagate significantly worse than average) even when compared against the rest, and that this result is statistically significant (i.e. extremely unlikely to have arisen by chance alone).
>>
>>56561287
>>56561404
This is FUD. In the case of a helium leak it just turns into a regular drive instead, i.e. the failure rate goes up.

As a rule of thumb, you can't stop helium from leaking - it's expected for them to leak throughout their lifetime. It's still better on average than not filling them with helium, the only significant downside being cost. (Especially since helium is growing very scarce)

That said, I wonder if you get a premium for sending back used drives so they can recover the helium that's inside them? Everywhere helium is used, people tend to make a big deal about recovering it since it's so valuable and scarce.
>>
>>56562623
>Also, I agree about the backblaze-produced bar charts and graphs being completely useless

Not really. The conclusion that Hitachi is best, followed by WD, with Seagate a relatively poor performer, still holds.
>>
>>56562686
That may or may not be true, but it doesn't mean I can trust their bar charts without knowing the whole story. The fact that the sources agree on the outcome is only apparent after the fact; and I also think the KM chart as presented is much more informative and clear to users than some stupid “WEW BIGGER IS BETTER” marketing-style bar chart.

Anyway, I just think they should use a presentation like this, or at least rate drives by “average age at death” instead of “how many we lose per quarter”, although that number still obscures information. (Drives that have not been in deployment long enough to see a 100% failure rate will have a severely underreported average age at death.)

Either way, both are shit, and the KM graph is the clear winner when it comes to presentation.
>>
>>56562643
this is why we need to go to the moon to farm it for helium all day with clones and stuff.
>>
>>56555475
>Companies building underwater server farms
>Decide to upgrade to new 12tb helium drives
>Entire building lifts up from floor and is now floating
Thanks hitachi
>>
>>56559407
>>56560750
>These retards thinking running all the drives harder than they should be somehow invalidates some drives failing more than others
Fuck off, retards
>>
File: 1465561467326.jpg (80KB, 766x960px)
>>56558076
>12 WD greens
>RAID
>>
>>56562743
>put heliun drive RAID in your case
>case floats up to the ceiling
thanks hitachi
>>
>>56562643
>(Especially since helium is growing very scarce)
You've been reading too many buzzword alarmist articles. We are nowhere near getting to the point where helium is so scarce on earth that it would in any way affect the cost of a fucking hard drive.
>>
>>56556553
>Gay men
>>>/V/
>>
>>56556341
WD bought ALL of Hitachi's drive division, including manufacturing and IP. They don't just sell rebranded WDs to take advantage of the Hitachi name.
>>
>>56562643
>helium balloons
The fuck nigger, we can't be that low on helium if every political party uses a whole bottle to fill fucking balloons!
>>
>>56556312
How autistic do you even have to be.
>>
>>56562643
>Everywhere helium is used, people tend to make a big deal about recovering it since it's so valuable and scarce.

Must be a different kind of helium than the one that's used in giant, cheap minion balloons for kids on county fairs everywhere.
>>
File: hightower-legend.png (196KB, 522x294px)
>>56558076
This fucking idiot
>>
>>56564200
>>56564438
>>56564501
Just because consumers are idiots and don't realize they're shooting themselves in the foot doesn't mean it won't suddenly spike up in price once we have to start fighting wars over it (seem familiar?).

Helium reclamation is a big deal in scientific circles (e.g. LHC or other labs that need superconductors)
>>
File: Areca 1883ix-24.png (702KB, 1170x576px)
>>56560065
>Big Raid 6 rebuilds can easily take a week
Read the thread, retard, anon was talking about doing an 11- or 12-disk RAID. No one here has a SAN with half a rack of disk shelves.

> I'm sure your two day rebuild was not some cheap raid card.
no it wasn't, pic related.

>Saying 6 Is safer than 10 is moot since your raid is basically USELESS during that nice long rebuild, while a 10 Is a simple copy, no parity to recalc. So what's really safer?
6 of course, because it is guaranteed to be able to withstand a two-disk failure.

>>56562524
>I'm going to LARP that my softraid is the same as real RAID
RAID cards do more than stripe data.

>1. It's significantly safer. RAID5 is basically asking for data loss even with relatively small drives.
We were talking about RAID 6 v. 0+1.

>2. The risk goes up as your drive capacity increases (because rebuilds take longer); a RAID6 with today's drives is essentially equivalent to RAID5 in the past
You do realize that as capacity goes up so does performance?

>3. IOPS. A mirror doubles your IOPS, a stripe doesn't.
Anyone who cares about IOPS is using SSDs, not HDDs.

>4. Large pools are bad. You don't want to stripe together 20 fucking disks, you want to split them up into smaller device groups for IOPS and better rebuild performance
lol wut? you're trying to argue that you get more IOPS from a smaller set of disks? you truly are retarded anon.

>6. Flexibility, Pooling mirrors lets you make decisions, upgrades, and investments on a 2-disk basis.
Right, having to make decisions on a 2-disk basis is certainly more flexible than a 1-disk basis.

>raidz3
there you go again talking about a poorfags softraid solution.

>>56562559
>These classifications have no meaning except for marketing
You've clearly never read a disk's datasheet. Enterprise disks have things like multiple RV sensors and orders-of-magnitude better non-recoverable read error specs. Stay poor, anon, thinking that your desktop drive is exactly the same as an enterprise one
>>
>>56558076
>the speed will exceed any 10tb shitbox.

... What kind of RAID hardware do you use? I can run write jobs at 120-170 MB/s on all 7 drives of my machine simultaneously; on a NAS you'd be limited to 90-100 MB/s COMBINED total.
>>
>>56560943
>How is backblaze data not valid?

They are not running all their drives connected to a laptop in an external enclosure, therefore they do not reflect real-world usage scenarios.
>>
>>56565715
>on a NAS you'd be limited to 90-100 mbyte/sec COMBINED total.
not that anon but lol no
>>
>>56555612
raid is a meme. enjoy your raid card failure and losing all your data

backups!! don't rely on raid
>>
>>56565596
>RAID cards do more than stripe data.
Yes, if you're extremely lucky or get an absurdly expensive one they might even detect and recover bit flips!

>We were talking about RAID 6 v. 0+1.
And RAID 6 on modern high-capacity drives is equivalent to RAID 5 in the past. Also, 0+1 != 1+0, and I sincerely hope you're mirroring before striping rather than the other way around.

>You do realize that as capacity goes up so does performance?
So does failure rate.
http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/

>Anyone who cares about IOPS is using SSDs, not HDDs.
This is possibly the most ignorant argument you could have made. “Designing a system well is irrelevant because if you cared about performance you could just spend 10x as much for the same performance”

>there you go again talking about a poorfags softraid solution.
ZFS has more built-in safety than any expensive hardware RAID controller will ever provide you with. Only a clueless idiot wouldn't realize that ZFS is the industry-leading single-system storage solution, widely deployed by companies. We use only ZFS at [insert workplace], except for one system which is on mdraid10 due to stupid reasons.

(cont)
>>
>>56566040
> enjoy your raid card failure
That's why you always keep another.
>losing all your data
BBU.
>>
>>56566056
>>56565596
>lol wut? you're trying to argue that you get more IOPS from a smaller set of disks? you truly are retarded anon.
Okay, at this point I'm pretty sure you're either trolling or just helplessly retarded, but just for the sake of setting the record straight in case other people read this:

When you issue an I/O command to a stripe, all disks in the stripe have to respond at the same time. If you pool together two 6-disk vdevs, each vdev can issue reads and writes simultaneously and independently, compared to a single 12-disk vdev. Striping trades latency for single-threaded bandwidth and nothing else; it's actively harmful for most use cases (which are bottlenecked by random seeks and latency, not throughput).

Also, when rebuilding a stripe, every single device in that stripe will essentially be useless. If you have two 6-disk vdevs in a pool, one vdev can still issue reads and writes while the other vdev is busy recomputing parity. With a single stripe across your entire pool, your performance will be useless. There's a reason every single resource on the internet recommends keeping your stripes small, which I'm sure you'd know if you put even a minimal amount of time into researching the topics you bullshit about.
>>
>>56565596
>Right, having to make decisions on a 2 disk basis is certainly more flexible than a 1 disk basis.
You're again being willfully ignorant of the fact that growing a single gigantic stripe is never a good idea.
>>
>>56560140
>>56556504
They're ugly because the aesthetics of your hard drive don't matter one single iota. Not even a little. 0% importance.
>>
>>56565838
>not that anon but lol no

I was under the impression that most NAS boxes are connected via Ethernet, and 90-100 MB/s is the maximum 1Gb Ethernet can handle. So I can't write at 100 MB/s to each of 6 drives inside the NAS simultaneously, which is something I can do when I have 6 drives connected internally in my computer.

Unless of course you use 10Gb Ethernet, but then I need an extra switch and network card.
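For reference, the rough arithmetic (assuming a standard 1500-byte MTU and ignoring SMB/NFS overhead, which is what drags the practical number down toward 100):

# 1 Gbit/s raw, scaled by the TCP payload share of a full-size Ethernet frame (1460 of 1538 bytes on the wire)
awk 'BEGIN { raw = 1e9 / 8 / 1e6; eff = raw * 1460 / 1538;
             printf "raw: %.0f MB/s, usable TCP payload: ~%.0f MB/s\n", raw, eff }'

So ~110-118 MB/s is the realistic ceiling on gigabit without jumbo frames.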
>>
>>56566056
>I sincerely hope you're mirroring before striping rather than the other way around.
pic related a striped set of mirrors

>http://www.zdnet.com/article/why-raid-6-stops-working-in-2019/
>The problem with RAID 5 is that disk drives have read errors. SATA drives are commonly specified with an unrecoverable read error rate (URE) of 10^14.
Why are you and this retard at ZDNet using desktop disk error rates? My HGST UltraStar 7K4000s have 10^15 error rates. He's also LARPing that a single read error loses you all of your data rather than a single bit

(Cont)
>>
>>56566185
This right here is exactly the reason why I will never understand the point of building a home NAS other than e-peen.

You're spending more money to add more points of failure and an additional bottleneck (network) that is expensive to upgrade.

It just seems so stupid to me. You are gaining nothing from having your data in a different machine compared to having it on your machine..
>>
>>56558393
>a 10tb raid 5
Who is doing this? RAID 5 is dead.
>>
>>56566056
>This is possibly the most ignorant argument you could have made.
Right, I have an SSD RAID but I'm ignorant?

>“Designing a system well is irrelevant because if you cared about performance you could just spend 10x as much for the same performance”
We're not all poor here anon.

>. Only a clueless idiot wouldn't realize that ZFS is the industry-leading single-system storage solution
lol no, it's the poorfag's solution. see object storage.
>>
>>56565596
>You've clearly never read a disks datasheet. Enterprise disks have things like multiple RV sensors and orders of magnitude better non-recoverable read error specs. Stay poor anon thinking that your desktop is exactly the same as a enterprise

>>56566226
>Why are you and this retard at ZDNet using desktop disk error rates? My HGST UltraStar 7K4000s have 10^15 error rates. He also is LARPing that you'll lose all of your data if you have a single error rather than a bit
Forgot to link

http://storagemojo.com/2007/02/19/googles-disk-failure-experience/
http://storagemojo.com/2007/02/20/everything-you-know-about-disks-is-wrong/

tl;dr the spec-quoted figures are completely irrelevant marketing numbers that you will only hit under the most ideal of circumstances, just like every other manufacturer-quoted figure for any other product in existence.

Google finds no significant difference in actual observed AFR between consumer and enterprise drives regardless of the number quoted on the spec sheet.

>You've clearly never read a disks datasheet. Enterprise disks have things like multiple RV sensors and orders of magnitude better non-recoverable read error specs. Stay poor anon thinking that your desktop is exactly the same as a enterprise
I'm using enterprise HGST and WD SAS drives in my desktop system, connected via LSI 9211-8i SAS adapters. The overall cost of my system is 10,000€. Let's exclude jealousy from this argument, shall we?
>>
>>56566282
>We're not all poor here anon.
Holy shit, the irony of hearing this coming from somebody trying to defend RAID6 (you know, a RAID level that literally only exists because of people too poor to afford RAID10 like the big boys)
>>
File: IMG_20160824_025436.jpg (808KB, 2448x2448px)
>>56566132
>When you issue an I/O command to a stripe, all disks in the stripe have to respond at the same time.
god this is a retarded comment. go look at disk activity lights, they don't blink in unison.

>Also, when rebuilding a stripe, every single device in that stripe will essentially be useless.
again no, you can set rebuild priorities to specify how much of the disks' time will be spent on rebuilding vs fulfilling IO requests.

>two 6-disk vdevs
here you go with your softraid bullshit again.

>>56566142
did you even read what you quoted? try again

>>56566185
>he doesnt have 10GbE at home
>he doesnt have virtual switches
>he thinks 1gbit == 100 mbytes
pic related
>>
>>56566308
>citing 10 year old articles from a literally who website
include relevant quotes anon, i'm not going to make your argument for you

>>56566359
again retard, RAID 10 isn't guaranteed to withstand a two-disk failure. if both disks in a mirror die, then your RAID is fucked.
>>
>>56566363
>did you even read what you quoted? try again
Okay, you can't put 2 and 2 together in your head. Let me help you:

>You claim RAID6 is more flexible than RAID10 because you can add disks individually
This relies on the assumption that using a single RAID6 covering your entire pool is a good idea, because only under those circumstances is expanding it as simple as just adding new drives as you see fit.

>>56566282
Nice setup, such a shame you'll lose it all sooner or later. Ah well, some people just gotta learn from experience I guess.
>>
>>56566227
The upside is that whenever a drive fails, you can swap it out for a new one, and you don't have to worry about losing one drive's worth of data. Also, all devices in the household can access the NAS for things like work or streaming media or whatever.

Alternatively you can use the NAS only for backups; then speeds won't matter, since you only need to launch a backup job once a month. It's a bit expensive that way though; it's cheaper to build a 2nd PC with lots of drives that you wake up over LAN once a month to run a backup job.

And don't quote me on this, but IIRC there's a copyback function where, if you copy between two drives in the NAS, it will process that copy internally, giving you higher speeds than you'd get if you just wrote a file to the NAS over 1Gb Ethernet. Still not as fast as having 6 drives in your computer and writing at max speed to every one of them simultaneously, but that's not a common usage anyway (unless you do crap like rendering videos, in which case you need PCIe x4 SSD cards, not a NAS).
>>
>>56566363
>go look at disk activity lights, they dont blink in unison.
If you use ZFS they do; my 6 disks blink once every 5 seconds, when the cache is flushed to disk. Maybe use a proper RAID?
>>
>>56566400
>again retard, RAID 10 isnt guaranteed with withstand a two disk failure. if both disks in a mirror die, then your RAID is fucked.
So not only are you willfully ignorant of every single industry practice, you're also willfully ignorant of well-understood and well-quoted RAID failure risks and basic mathematics?

RAID10 is *more* reliable when it comes to disk failure than RAID6. Not the other way around.

The reason is simple: RAID10 can lose up to half your disks. Combine that fact with simple probability. If you have a 12-disk RAID10, your chance of a second disk failure wrecking your data is 1 in 11. The chance of a third disk failure wrecking your data is 2 in 10, and so on. On average, you can suffer more disk failures without data loss than you can with RAID6.

On top of this, RAID10 recovery is much faster and cheaper to complete, compared to the slow and expensive RAID6 restriping you're going to be doing across your 12-disk pool.
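The combinatorics behind the 1-in-11 figure, for anyone who wants to check it (a sketch that assumes failures land on disks uniformly at random and ignores the rebuild window):

# P(a 12-disk RAID10 - six mirror pairs - survives k random dead disks):
# each additional failure is only fatal if it lands on the partner of an already-dead disk
awk 'BEGIN { n = 12;
  for (k = 2; k <= 6; k++) {
    surv = 1
    for (j = 1; j < k; j++) surv *= (n - 2*j) / (n - j)
    printf "%d dead disks: pool survives with probability %.2f\n", k, surv
  }
}'

RAID6 by comparison survives any two failures but never a third.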
>>
>>56566464
That's not the same thing, you're just talking about standard ZFS asynchronous queue flushing behavior. Go ramp up your pool usage and see if it's still the same.
>>
>>56566308
>http://storagemojo.com/2007/02/19/googles-disk-failure-experience/
And jesus fucking christ, you're so retarded you can't even follow an argument. Your article is talking about MTBF, whereas what you quoted me saying is about unrecoverable read errors. You really have no idea how the two are different, do you retard?

>>56566420
>This relies on the assumption that using a single RAID6 covering your entire pool is a good idea, because only under those circumstances is expanding it as simple as just adding new drives as you see fit.

No it doesn't. We've been talking about expanding RAID disk sets. You made the retarded argument that doing it in multiples of two is somehow better than doing it one disk at a time.

>>56566464
You don't understand what the word unison means, do you?

>>56566505
>So not only are you willfully ignorant of every single industry practice, you're also willfully ignorant of well-understood and well-quoted RAID failure risks and basic mathematics?
that projection, here let me hold your hand like the retard you are and explain the basics to you

>The reason is simple: RAID10 can lose up to half your disks.
>up to
You aren't guaranteed to be able to lose 2 disks. See the pic at >>56566226, which is from LSI's manual for RAID 10. If you lose the two leftmost disks (or really any other mirrored pair) the entire RAID is fucked.
>>
>>56566363
>>he thinks 1gbit == 100 mbytes

If you factor in network protocol overhead and so on, 100 MB/s is a good estimate. Technically you may get 110 MB/s, or if you fine-tuned packet sizes and such, maybe even ~120 MB/s. But that's not the most likely situation, especially since you'd need to configure it on all devices.

And 100 is a nice round number.

The real cap is how fast the NAS can handle the transfers anyway, which is often less.
>>
>>56566563
>You really have no idea how the two are different do you retard?
An unrecoverable read error can and will corrupt your metadata.

>No it doesnt. We've been talking about expanding RAID disk sets. You made the retarded argument that doing it in multiples of two is somehow better than 1.
Re-read my post until you understand it

>You aren't guaranteed to be able to lose 2 disks. See the pic at >>56566226 which is from LSI's manual for RAID 10. If you lose the left two most disks (or really any other mirrored pair) the entire RAID is fucked.
Re-read my post until you understand it

I'm not sure why this is so hard for you. Maybe you're being blinded by your own denial and desperately clinging on to arguments that will help you sleep at night knowing your poorfag RAID6 is “safe”. Anyway, I hope you remember this thread when you lose all your data. See you then.
>>
>>56566578
>The real cap is how fast the NAS can handle the transfers anyway, which is often less.
Even a single 5.4k RPM consumer disk can saturate 100 MB/s _easily_.

If you think there's any bottleneck other than the network in a NAS system, you're being willfully ignorant of the numbers involved.
>>
>>56566620
>An unrecoverable read error can and will corrupt your metadata.
lol wut? first off i think you mean parity data. and secondly pic related.

>Re-read my post until you understand it
stay btfo anon

>poorfag RAID6
lel, a $7k-$8k storage subsystem at home is poorfag? let's see a pic of your workstation, anon.
>>
I'm not video editing; can I just buy 3 Samsung 950 PCIe NVMe SSDs and stripe them?

I live in FL and we are prone to lightning/power outages/weird shit, and fewer moving parts means less chance for an error to happen, right?
>>
>>56566768
>can I just buy 3 Samsung 950 PCIe NVMe SSDs and stripe them?
yes

>I live in Fl and we are prone to lightning/power outages/and weird shit and less moving parts mean less chance for an error to happen right?
no, read the data sheet and look at the unrecoverable read error rate, which apparently is so bad that they don't even provide it.
>>
>>56566768
>can I just buy 3 Samsung 950 PCIe NVMe SSDs and stripe them?
Yes, that's kind of what I'm doing right now
>less moving parts mean less chance for an error to happen right?
If only that was true. You won't get mechanical malfunctions, but you will get corruption or even outright destruction if your SSD is suddenly powered off in the midst of a write sequence. No joke, you can lose everything in the drive because the 950 PRO does not have any built-in caching to safeguard against sudden power failures during writes.
>>56566817
>unrecoverable read error rate. which apparently is so bad that they dont even provide it.
It's only slightly better than the 850 PRO, according to Samsung's PR. But there is a gap in performance difference between the two, leading to a higher rate of potential failures since you can write more data to it faster than the 850 PRO.
>>
>>56566817
I've also heard good things about Intel's (inb4 Intel Jew memes) 750 series...

However they're a bit... Expensive.

What about Hybrid drives? My laptop has a 1TB and it seems pretty nice.
>>
>>56566864
YOINKS SCOOB!

Is there any SSD that's as reliable as a 7200RPM HDD
>>
>>56565596

>Big Raid 6 rebuilds can easily take a week
>>Read the thread retard, anon was talking about doing a 11 or 12 disk RAID. No one here has a SAN with half a rack of disk shelves.
I think you should read it again, dipshit. Wtf are you bringing up SANs and disk shelves for? An in-use array could easily take that long with 12 disks using whatever shit card (if any) or the software raid anon's using to keep it under $720

> I'm sure your two day rebuild was not some cheap raid card.
>>no it wasnt, pic related.
exactly my point, you're comparing a 12 disk raid with 5400 RPM WD GREENS and software raid to you doing a non-failure rebuild with a $1000+ Areca

>Saying 6 Is safer than 10 is moot since your raid is basically USELESS during that nice long rebuild, while a 10 Is a simple copy, no parity to recalc. So what's really safer?
>>6 of course is because it is guaranteed to be able to withstand a two disk failure.
You seem to be hung up on multiple disk failures, so technically a 20-disk RAID 10 could withstand up to 10 disk failures, and rebuild time might be 4-8 hours depending on your card & drives.
So you're saying you shouldn't have to worry about a RAID 6 losing 3 out of 12 drives during a much longer rebuild, but beware of a 20-disk RAID 10 losing one drive and then, during its much shorter rebuild, losing the other drive of the EXACT same pair (basically a 1 in 19 chance)? You can't compare the two simply on the number of disk failures.

I didn't realize parity fanboys existed, kinda interesting.
>>
>>56560102

Normally you don't know unless you're the UPS guy. It just won't work.
>>
>>56562643
You're full of shit you stupid faggot.
Helium is required for the heads to work properly with such minimal spacing. A leak means permanent mechanical failure.
>>
>>56566886
Even the 750 are dud SSDs that Intel is selling onto consumers because muh shekels. If you want true data protection during blackouts and the least chance of read errors over time, you'll need to fork more shekels over to Intel for their enterprise-grade PCIe SSDs. Those usually have SLC+MLC NANDs with more resilient controllers and added features like ECC and emergency caching.
>>
>>56566956
How much are those...
>>
File: 29849278.jpg (639KB, 541x727px)
So, if I want ultimate data protection, would it be best to use RAID-Z3 with ECC RAM, mirrored L2ARC, and mirrored ZIL? I'm a noob to all this.
>>
>>56566864
>But there is a gap in performance difference between the two, leading to a higher rate of potential failures since you can write more data to it faster than the 850 PRO.
That has nothing to do with unrecoverable read error rates.

>>56566886
>I've also heard good things about Intel's (inb4 Intel Jew memes) 750 series...
1 in 10^16 is the unrecoverable read error rate which is the same as my enterprise class Seagate 600 Pros

>>56566918
>an in-use array could easily take that long with 12 disks using whatever shit card
Again no, pic related, on any decent raid card you can set rebuild priorities.

>exactly my point, you're comparing a 12 disk raid with 5400 RPM WD GREENS and software raid to you doing a non-failure rebuild with a $1000+ Areca
softraid vs hardraid rebuild times won't differ by any significant amount

> so technically a 20 disk RAID 10 could withstand up to 10 disk failures
How many times do I have to explain this to you retards. RAID 10 is only guaranteed to withstand a single disk failure. Look at the pic in >>56566226: if both disks in any mirror fail, the entire array is fucked

>(basically 1 in 19 chance?)
It's not a 1 in 19 chance. Each disk has its own chance to fail. If you flip a coin 3 times in a row and you get heads every time, does that mean on the fourth flip you have a 25% chance of getting heads? Of course not, it's 50%.
>>
>>56567007
Between $1000 and $12000, depending on the storage size.
>>
>>56558669

>"data is important"
>puts 2 drives in RAID 0
>>
>>56567051
>That has nothing to do with unrecoverable read error rates.
You can't be serious, holy shit.
>>
>>56567054
Then should I even bother with an SSD? I'm building a PC this Christmas that I want to be a nuclear reactor that lasts me 6 years. I'm in engineering, if that helps.

Optimally here's my build:

>I7 6700K; Kaby Lake looks disappointing AF and is only valid for laptops IMO
>MSI Motherboard, good price for an overclocking motherboard
>16GB DDR4, may do 24GB if the CAS isn't too high
>R9 FuryX, EVGA 980TI Hybrid, RX480 Nitro
>Don't know which SSD(s) I want
>EVGA 1000 PSU
>Must be Mini ITX or Mid ATX
>>
>>56567152
What programs do you use or need to use in the foreseeable future? You need to specify what your needs are before planning ahead.
>>
>>56558544
Firecuda
>>
>>56567177
For Vidya I play: RTS, Warthunder, Bugthesda, Gmod (lol), ARK Survival Evolved

ENGR programs: Solidworks, Matlab/Simulink, Maple, Fluidworks, Labview possibly, AutoCAD Student Edition, LoggerPro, CCR (?). There's more since I will be in Formula SAE, but I forget their names.
>>
>>56567235
>ARK
good luck, no amount of hardware is going to make that run well
>>
>>56567277
I heard Scorched Earth is better optimized
>>
>>56567289
Sure, just $12.99 extra on top of a game that isn't even released yet! :^)
>>
Is there a way to make my computer automatically scan for and fix data errors in my RAID6 array?
>>
>>56567310
Switch to ZFS and set up a `zpool scrub` cron job
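Something along these lines, as a minimal sketch - the pool name "tank" and the zpool path are placeholders, adjust for your system:

# /etc/crontab: scrub the pool every Sunday at 03:00
0 3 * * 0  root  /sbin/zpool scrub tank

Then check the result afterwards with `zpool status tank`.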
>>
>>56567307
I know. It makes me want to uninstall the main game (won't buy SE) until they fix their shit
>>
>>56567326
>ZFS
I'd prefer not to use beta software for all my data, thanks.
>>
>>56567342
Oh, so you want alpha software instead? I guess you could use btrfs
>>
>>56567235
I'm going to ignore your games and focus solely on the engineering programs you'll use because that's more important.
You need more than 4 cores if you want the fastest render times, so bump up to at least an i7-5820K/6800K on an X99 platform. Most of the software you listed will almost certainly make use of at least 8 threads and scale well past 12.
You'll also need a lot of RAM for larger renderings or loading multiple rendered objects at the same time. I'd say go no less than 32GB and only increase that if you actually end up using more than 28GB of RAM. All of these programs will eat up as much RAM as you can offer before unloading caching duties to whatever drive you selected as a scratch disk. You'll want to minimize the chances of this happening by giving those programs as much RAM as you can offer.
I'd avoid using most of the consumer video cards because almost none of your software will support anything other than a select group of approved workstation GPUs. Have fun trying to sort out compatibility and rendering issues with no support line from either the software distributor or the GPU makers. I think the Radeon Pro Duo and the Titan series are the only "consumer" cards that have some support for the software you listed.
I'd go with any SSD that has the longest warranty, so that would be the Sandisk Extreme Pro or whatever. Enterprise-grade SSDs have more firmware that enable power-off protection, built-in ECC capability, and sometimes different controllers that prioritize longevity over raw speed, but they tend to cost much more than their consumer counterpart. If you want lulz, buy a SAS RAID card and some SAS SSDs.
>>
>>56558715
This. I ran a 3x2TB drive RAID-5 array for about a week, then read about how long a rebuild would take (along with the math for UREs). The numbers aren't comforting, especially with consumer SATA drives.

I went out the next day and bought another drive to change to 4x2TB RAID-10.

Parity RAID just isn't worth it anymore, especially when you are using large drives. The rebuilds take ages and have a high chance of failure.
>>
>>56566640
>If you think there's any bottleneck other than the network in a NAS system, you're being willfully ignorant of the numbers involved.

Yeah and computing parity/mirroring/striping for RAID requires no resources whatsoever.
>>
>>56555180
it's like a 2% return rate dude

failure rate can't be higher than 3 or 4% although I'm not an expert on the subject

this is high compared to competition but that doesn't mean you shouldn't jump on a deal just because it's a seagate
>>
>>56567607
If a *single* hard drive already easily exceeds 100 MB/s, striping or mirroring them isn't going to make that worse. (And if you think there's a CPU bottleneck, you're completely ignorant.)

Have you ever *built* a RAID? Even with 2-4 disks you will easily get 200+ MB/s out of them, let alone with a beefy NAS. There is absolutely no way your network is *not* going to be the primary bottleneck.
>>
>>56567051

Gotta love when the retard in the room realizes his points have no basis and no one agrees with him.

>>an in-use array could easily take that long with 12 disks using whatever shit card
>Again no, pic related, on any decent raid card you can set rebuild priorities.
Setting rebuild to 80% on a parity rebuild, my fucking sides, enjoy your crippled beyond use array

>>exactly my point, you're comparing a 12 disk raid with 5400 RPM WD GREENS and software raid to you doing a non-failure rebuild with a $1000+ Areca
>softraid vs hardraid rebuild times wont differ by any significant amount
Yes actually, they do, try it on a 12 disk RAID 6 with anything over 1tb

>> so technically a 20 disk RAID 10 could withstand up to 10 disk failures
>How many times do I have to explain this to you retards. RAID 10 is only gaurenteed to withstand a single disk failure. Look at the pic in >>56566226, if both disks in any mirror fail, the entire array is fucked
Nothing's guaranteed in any raid, shitstain, it's all about the real world. You think a 12-disk RAID 6 of shit-tier consumer drives is GUARANTEED to survive 2 drive failures?? Come on, Forrest

>>(basically 1 in 19 chance?)
>Its not a 1 in 19 chance. Each disk has their own chance to fail. If you flip a coin 3 times in a row and and you get heads every time, does that mean on the fourth flip you have a 25% chance of getting a heads? Of course not, its 50%.
Thank you for the analogy that helps in no way, you're still thinking shit is cut and dry. You aren't factoring in all the dynamics.

Let it go man, raid 6 is dodo raid. Don't worry, you can always fall for the triple parity meme
>>
>>56558076
More disks = higher probability of a punctured volume during rebuild.

Assuming a failure probability of 0.03, the probability of a second drive failure (hence complete array failure) with X disks is...

4 disks = 1/193
6 disks = 1/80
24 disks = 1/6
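Those figures look like a straight binomial calculation, i.e. P(at least 2 of n drives fail) with p = 0.03 per drive over the window in question. A quick sketch that reproduces them:

awk 'BEGIN { p = 0.03; split("4 6 24", ns)
  for (i = 1; i <= 3; i++) {
    n = ns[i]
    prob = 1 - (1-p)^n - n*p*(1-p)^(n-1)   # 1 minus P(0 failures) minus P(exactly 1 failure)
    printf "%2d disks: 1 in %.0f\n", n, 1/prob
  }
}'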
>>
>>56567831
>Let it go man, raid 6 is dodo raid. Don't worry, you can always fall the for triple parity meme
But then he'd have to use ZFS or another checksumming filesystem, right? I don't think any hardware implements field-based triple parity.
>>
File: 1473037498918.jpg (172KB, 1124x1024px)
>>56555180
>BarraCuda
>>
>>56567326
>>56567362
RAID-Z3 would use up 3 drives out of the ones I stick into it?
>>
>>56568586
Exactly. Meaning you need at least 7-8 drives per vdev.
>>
>>56568635
>vdev
Virtual Device?
I have 8 SATA ports, so 6 TB drives would be 30 TB with a RAID-Z3. I'm willing to go with that since it will probably last me until I buy a new server with larger drives. How easy is it to recover the RAID if my operating system drives end up dying?
>>
>>56559869
FOUR RUBBER FEET

IT DOES NOT HAVE A REMOVABLE MOTHERBOARD TRAY
>>
>>56568674
>Virtual Device?
Precisely. Multiple disks form a vdev, and multiple vdevs form a pool.

vdevs are independent (you could have a RAID5 vdev and a RAID6 vdev in the same pool - but it would be stupid because loss of a single vdev implies loss of the entire pool)

If you're going to deploy ZFS then I highly, highly, highly encourage you to spend an evening reading through every “zfs best practices”, “zfs tips”, “things to know before deploying zfs” etc. guide there is. Before you even buy your hardware. And yes, they will all tell you to use RAID10 instead of RAID-Z3. (i.e. take your 6 TB drives and form a bunch of 2-disk mirror vdevs)
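With eight 6TB drives that would look roughly like this (a sketch; "tank" and the ata-* ids are placeholders for your actual /dev/disk/by-id entries):

zpool create tank \
  mirror /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1 \
  mirror /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3 \
  mirror /dev/disk/by-id/ata-DISK4 /dev/disk/by-id/ata-DISK5 \
  mirror /dev/disk/by-id/ata-DISK6 /dev/disk/by-id/ata-DISK7

That's roughly 24TB usable versus roughly 30TB for an 8-disk RAID-Z3, i.e. the one-disk difference being argued about elsewhere in the thread.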
>>
>>56566768
>I live in Fl and we are prone to lightning/power outages/and weird shit and less moving parts mean less chance for an error to happen right?
Just get a UPS
>>
>>56568748
>RAID10 instead of RAID-Z3
Would that actually be better though? I don't care about performance since it's a media server (mostly; also an rsync server for portage), so the only thing I care about is the integrity of the data. Most of the forum posts about 10 > Z3 are based on small random reads.
>>
>>56568748
Sorry, forgot to reply to the rest

>>56568674
>How easy is it to recover the RAID if my operating system drives end up dying?
Well, that depends entirely on your design. For example, is your OS itself installed on the pool (I wouldn't recommend this for a NAS or server, but I do it on my workstation) or not?

But in general, ZFS is designed to be very easy to recover. In general, it will be as easy as taking the disks, plugging them into a different system (or a new OS on the same system, or whatever) and running ‘zpool import’. ZFS is pretty automagic when it comes to stuff like disk detection, and it's also designed for stuff like hotplugging. (You can detach and reattach disks, and you can “export” and “import” entire pools to move them to different systems)

Also, the way ZFS is designed on-disk makes it very robust against stuff like sudden power failure (or equivalently, sudden failure of the OS) - basically, ZFS data structures are designed to be immutable on the disk, so nothing you've already written will be “lost” during a crash (unless the hard drives themselves are failing, of course). At worst, you'll have to roll back or re-apply a few incomplete transactions.

ZFS comes with a shitton of administrative tools though, more than any other filesystem or storage system I've ever used in my life - even if you suffer data loss, it wouldn't completely shit itself (like e.g. btrfs) but rather present you with a list of files that got corrupted, allow you to decide whether you want to import the pool regardless, and so on. The administrative tools are extremely clear in their presentation and the manpages are well written.

Want more marketing? :^)
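Concretely, moving a pool between boxes is roughly this (sketch, pool name "tank" assumed):

zpool export tank     # on the old system, if it still boots
zpool import          # on the new system: scan attached disks and list importable pools
zpool import tank     # import by name; add -f if the pool wasn't exported cleanly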
>>
>>56568834
>sudden power failure
If I can figure out how to make my shitty Cyberpower UPS run a script then I won't need to worry about that unless it's during a rebuild or something retarded.
My OS is installed on a 16 GB SSD, and I have several 16 GB flash drives with copies on it if it ever shits itself.
Is it a good idea to use a mirrored pool for the root filesystem?
>>
>>56568834
For example, if I unplug a disk while in operation and then replug it 10 seconds later, ZFS will automatically repair any data blocks it's missing without manual intervention, and thanks to the use of a Merkle tree it's smart enough to only repair what's actually missing.

Do that on an mdraid and you get to spend the next 20 hours resilvering the entire disk.

>>56568823
It's not just about performance while in operation. RAID10 is also easier to recover, usable while recovering, presents less risk of failure while recovering, quicker to expand later on (Just pop in new disks and you're done. No resilver needed), and of course significantly faster even during normal operation.

All in all it's simpler to manage, simpler to deploy, less prone to failure and you're not even giving up much storage efficiency because 8x6 TB disks in RAIDZ3 is only worth a single disk of data more than the same in RAID10.
>>
>>56568894
Your UPS can fail. Your PSU can fail. Your motherboard can fail. Your drive controller can fail. Your CPU can fail. Your RAM can fail. Your OS/drivers can fail. Your brain can fail and do something stupid.

A UPS won't protect you from catastrophic power loss. Luckily, ZFS is designed to protect against catastrophic power loss.
>>
>>56568894
>My OS is installed on a 16 GB SSD, and I have several 16 GB flash drives with copies on it if it ever shits itself.
What OS is this, btw? ZFSonLinux? FreeBSD/FreeNAS? OpenSolaris?

>Is it a good idea to use a mirrored pool for the root filesystem?
Well, it's always a good idea to have redundancy where you care about uptime. If your question was “Is it a bad idea?”, the answer is no.
>>
>>56568975
Gentoo Linux. I was just wondering if it's easy to get it booting off a ZFS array. I'd set it up to boot off a mirror if it is since that way I don't need to manually sync them up whenever I make any large changes.
>>
>>56569007
Gentoo Linux is perfectly bootable off ZFS; that's what I'm currently using. Some caveats apply:

1. If you plan on using large_dnodes (which is recommended if you're planning on setting up SELinux, PaX or using ACLs/xattrs heavily), you can't boot off the pool using grub2 yet.

(But you can have a small /boot partition somewhere else if you want - that's what I do. See http://savannah.gnu.org/bugs/?48885 for the upstream bug I reported for this issue)

I haven't tried booting off it using UEFI. (I'm avoiding UEFI like the plague for now)

2. You'll need some sort of initramfs. genkernel should work fine, and is actively maintained by ryao (one of the members of the ZFSonLinux project); see the rough sketch at the end of this post. I personally use genkernel-next because I already have it set up and because I also use systemd (and don't feel like experimenting with mainline genkernel's systemd support yet), but it required a custom patch to work (and also won't load zfs module parameters on boot, if you care about that).

3. You risk an unbootable system if you screw up your initramfs (e.g. forget to re-emerge zfs after a kernel upgrade). I have a backup system on a spare drive for the sole purpose of fixing simple mistakes like these.

That said, as long as the modules in the initramfs are fine, you can correct most mistakes from within the initramfs environment, because you'll have access to zfs utilities there.
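
To put the moving parts together, here's a rough sketch of what the setup boils down to (pool/dataset/disk names are made up, and double-check the exact kernel cmdline your genkernel version's initramfs expects - it has changed over time):

zpool create -o ashift=12 -O compression=lz4 rpool mirror ata-DISK1 ata-DISK2
zfs create rpool/ROOT
zfs create rpool/ROOT/gentoo                  # the root filesystem dataset
zpool set bootfs=rpool/ROOT/gentoo rpool      # tell the initramfs which dataset is /
genkernel --zfs all                           # build kernel + initramfs with zfs support
# kernel cmdline ends up something like: dozfs real_root=ZFS=rpool/ROOT/gentoo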
>>
>>56568823
>>56569087
Btw, a point I want to drive home:

Because ZFS is designed to be immutable, many decisions are permanent. For example, you can only ever add vdevs - never remove them. You can't change a vdev's configuration (e.g. raidz2 -> raidz3) later on.

You can only enable pool features, never disable them. That is why you see so many people being so vocal about getting you to pick the right options from the start: if you disregard the warnings and run into problems later on, there's no reverting the decision - the only way to fix it is to buy a separate set of hard drives and recreate the pool from scratch (copying over all the data).

If you pick raidz3 now and in a year decide you're not happy with the performance, there's no easy way to go back. If you enable deduplication now, you can't turn it off again (although you can make it “idle” again by destroying and recreating all datasets that have deduplicated data stored on them). There are many ways to make bad decisions now that will hurt you down the line, which is why going through every resource and planning well is so important.

The upside of this massive drawback is that ZFS is ultra reliable. Hip and ad-hoc filesystems like btrfs ignore this principle, which gives them a shitton of flexibility (dynamic raid levels, rebalancing, adding/removing drives on a whim, snowflake configurations, etc.) but it bites them in the ass (see: all the silent data corruption bugs btrfs has to grapple with).

Anyway, if you want to go with ZFS, you'll need to plan accordingly.
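
A couple of commands that help with the planning stage (pool name is a placeholder):

zpool create -n tank raidz3 disk1 disk2 disk3 disk4 disk5 disk6 disk7 disk8   # -n = dry run, prints the layout without creating anything
zpool get all tank | grep feature@                                            # features marked "active" are the permanent ones
zfs get -r dedup,compression tank                                             # per-dataset properties you're committing to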
>>
>>56569087
>I haven't tried booting off it using UEFI
Mine boots off this, using only the bzImage. It boots to the initramfs.

>>56569184
I see. I don't think I need things like deduplication since I've been managing all that on my own. Are there any benchmarks showing performance differences between 10 and Z3? If it's not significant for large reads (videos, ISOs) then I'm not too worried. I'm not going to be installing shit on it or anything like that.
>>
>>56569447
>Mine boots off this, using only the bzImage. It boots to the initramfs.
How does your UEFI know how to understand the bzImage? How does it know how to parse ext4? Would it be able to read from ZFS?

>Are there any benchmarks showing performance differences between 10 and Z3?
Not sure, haven't tried looking for any. I'm sure you could find some, or perhaps do your own when you get the disks. (For me it was never a question)
>>
>>56570270
The first 128 MB partition is a FAT32 partition (the EFI system partition, mounted at /boot). The firmware looks for EFI/Boot/Bootx64.efi on it and boots the bzImage (renamed to Bootx64.efi) automagically. Fuck bootloaders, I'm done with that shit forever.
Once it boots to the initramfs I can do anything I need to to get /mnt/root set up for the change_root.
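Setting it up was basically just this (device and mountpoint are whatever your layout is; the kernel has to be built with CONFIG_EFI_STUB=y for the firmware to boot it directly):
mkfs.vfat -F 32 /dev/sda1                                            # the ESP, partition type "EFI System"
mount /dev/sda1 /boot
mkdir -p /boot/EFI/Boot
cp /usr/src/linux/arch/x86/boot/bzImage /boot/EFI/Boot/Bootx64.efi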
>>
>>56570379
So what do you do if you want to boot to an older kernel instead? Or pass extra options on boot?
>>
>>56570461
If I 'need' extra options I specify them in the kernel config. If I need an older kernel then I have a rescue stick lying around. I was thinking about doing some hokey kexec thing for testing new kernel builds, but it's a headless server so I never bothered with it. The last time I needed to bring out a GPU and display for it was when I was fucking around with the initramfs, trying to add an SSH daemon to it for early logins (for putting the root on an encrypted partition or whatever retarded shit I was trying to do back then).
Basically it's going to be a pain in the ass hooking a GPU and monitor to it, so I just boot from a stick and roll the kernel+config back if anything breaks.
>>
>>56570555
Meh, sounds like a pretty big functional downgrade to me. I heavily use the ability to add kernel options, try out older kernels easily, or use GRUB interactively.

Doesn't sound like UEFI is for me.
>>
>>56570630
You can use UEFI to boot different kernels (depending on the vendor's implementation); it's just that mine is a headless server, so the ability to change options before SSH/Telnet is up is worthless for my purposes. My HP laptop, for example, lets me dig through filesystems on connected drives to find UEFI-bootable files.
Unfortunately, vendors that implement it in a retarded way (see the systemd/Arch Linux MSI writable-EFI-vars brick debacle) give it a bad reputation. It's probably pretty bloated too, but fuck if I care about that; I just don't want to fuck around with installing boot loaders on my drives.
>>
File: life.png (65KB, 944x622px)
>>56567069
>>56560772
Oh come on.
Surely I'm fine for another 7 years.
>>
>>56570747
That actually makes me think: Can you choose which kernel to boot via the service processor's system management interface (SP or IPMI)?

I know on most servers you can issue IPMI commands to get them to boot via PXE or disk; it would be awesome if I could issue IPMI commands to get them to boot any given kernel version - or even access the UEFI shell interactively.

That's something that might be useful for you as well - if you attach to the console via the SP during the early boot, you can recover from initramfs errors without needing networking.

Not sure about others, but on Sun ILOM you can type
start /SP/console
to attach to the serial port and get dmesg or a console or whatever.
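
The generic ipmitool equivalents, for reference (BMC host and credentials are placeholders) - as far as I know there's no standard IPMI command to pick a specific kernel, so that part still happens in the bootloader over the SOL console:
ipmitool -I lanplus -H bmc.example -U admin -P secret chassis bootdev pxe    # PXE on next boot
ipmitool -I lanplus -H bmc.example -U admin -P secret chassis bootdev bios   # firmware setup on next boot
ipmitool -I lanplus -H bmc.example -U admin -P secret sol activate           # serial-over-LAN console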
>>
>>56570747
>I just don't want to fuck around with installing boot loaders on my drives.
Why is it so much better to have to fuck around with creating a shitty FAT32 partition for your UEFI stuff than it is to install a bootloader and maybe make a small /boot? I have to burn a disk on it one way or the other, so I don't see much of a difference.
>>
>>56570847
>create /EFI/Boot/ and stick bzImage in it with .efi at the end
>worse than a grub or lilo config
I don't need to run any fucked up commands and hope that my disks didn't move around, I just stick the bzImage in the Boot directory.
Last time I tried to make LILO use /dev/disk/by-path or UUIDs instead of /dev/sda (or whatever it feels like being), all I got was a bunch of Stack Overflow answers saying you couldn't.
As an added bonus I can even access the boot partition on a Windows machine if I really need to drop a new bzImage on it.
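(And if I ever want named boot entries or baked-in cmdline options without a bootloader, efibootmgr can register them straight with the firmware - untested on my box, disk/partition/args are placeholders:
efibootmgr -c -d /dev/sda -p 1 -L "Gentoo" -l '\EFI\Boot\Bootx64.efi' -u 'root=/dev/sda2 ro'
)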
>>
>>56571060
>I don't need to run any fucked up commands and hope that my disks didn't move around, I just stick the bzImage in the Boot directory.
Yes, and I just run
grub2-mkconfig -o /boot/grub/grub.cfg
after building a new kernel. It's part of my new-kernel upgrade script.
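
The whole script is roughly this (a sketch - paths and the initramfs step depend on your setup):
#!/bin/sh
set -e
cd /usr/src/linux
make -j"$(nproc)" && make modules_install && make install
grub2-mkconfig -o /boot/grub/grub.cfg
emerge @module-rebuild                   # rebuilds zfs/zfs-kmod against the new kernel
genkernel --zfs initramfs                # or genkernel-next, whatever you use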

It sounds like you are having immense difficulties configuring GRUB though, which makes me appreciate why you want to get rid of it. Personally, I have no issues and it's a breeze for me to configure and use. Maybe you're doing something wrong?

GRUB uses UUID automatically for me, I didn't even have to change anything to make it do so.

Never touched LILO, always seemed like a massive piece of shit to me. I'll also agree that GRUB1 was a royal pain in the arse to maintain. GRUB2 is worlds apart.