
First time server builder here Is RAID 5 the only smart way


Thread replies: 70
Thread images: 5

File: 1466422141742.webm (1MB, 720x404px)
First time server builder here

Is RAID 5 the only smart way when configuring a RAID server? What's the purpose of RAID 0-1 if it can't adapt to a new HDD?

I ask because I wanted to configure a comfy little 14TB server with a 64GB SSD read/write cache to use for my doujin and hentai creations and general backup. I might even sell off some of the data for other people to use for a little extra cash if I fall on hard times or something.

posted lewd to bait you faggots into listening
>>
>>55843115
RAID 0 is for hardware ricers that don't actually care about redundancy.
RAID 1 is basically deprecated. It was the most obvious form of RAID.
>>
>>55843115
where's the sauce?
>>
>>55843115

>2016
>using raid
>>
here is a tip: no one is going to pay you for that

call geeksquad
>>
>>55843115
Read/write cache is pointless unless you're actually hosting performance-sensitive services. Doujins, hentai and backups are not it.

Also you'll destroy that SSD very fast so you better be ready to replace it often.
>>
>>55843184
There's an alternative?
>>55843175
>>>/gif/
>>55843197
So ditch it and get another HDD, right?
>>
>take bra off
>Tits sag to the equator

This is why I'm glad mine never got over a C.
>>
amanda love
>>
>>55843210
B cup detected
>>
>>55843209
>There's an alternative?
ZFS
>>
>>55843210
Keep telling yourself that, flattie.
>>
>>55843115
>Is RAID 5 the only smart way when configuring a RAID server?
It's actually the dumbest way. Do not use RAID 5 or 6. MAYBE 50 and 60, but RAID 10 is better.
>>
File: 1469849031185.jpg (102KB, 387x468px)
>>55843210
Bet ur a man retard
>>
>>55843675
>ZFS
>for a NAS storing nothing.

Really I don't know why OP is even using a RAID array. Just have a bunch of disks.
>>
>>55843687
Raid10 is fucking garbage.
>>
>>55843724
You're retarded. RAID 10 is pretty much the only acceptable RAID configuration in enterprise in 2016, for very good reason.
>>
>>55843115
>do raid 5
>drive dies
>have to rebuild array
>another drive dies while rebuilding, which can take a while.

Rip in pepperonis data.
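That failure window can be roughed out with back-of-the-envelope probability. A minimal sketch; the 5% annual failure rate and 48-hour rebuild are illustrative placeholders, not drive specs:

```python
# Odds that a second drive dies while a degraded RAID 5 array is
# rebuilding, assuming independent failures spread evenly over a year.

def second_failure_prob(surviving_drives, annual_failure_rate, rebuild_hours):
    hours_per_year = 365 * 24
    p_one = annual_failure_rate * rebuild_hours / hours_per_year
    return 1 - (1 - p_one) ** surviving_drives

# 5-drive RAID 5, one drive dead: 4 survivors, 5% AFR, 48-hour rebuild
print(f"{second_failure_prob(4, 0.05, 48):.2%}")  # roughly 0.11%
```

Small per rebuild, but it compounds over the array's life and grows with drive count and rebuild time.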
>>
>>55843742
Shit configuration. You should feel bad for promoting it.
>>
>>55843210
This. Those tits only look good in something that is holding them up and squishy.
>>
>>55843742
nearly every array i deal with on a daily basis would disagree with you
>>
>>55843801
This is the irritating thing about the big-ass drives. You can easily spend an entire weekend rebuilding an array of 6TiB disks
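As a rough sanity check on that, a sketch (the 100 MB/s sustained rate is a placeholder; real rebuilds on a busy array run far slower):

```python
# Rebuild time scales with disk capacity over sustained throughput,
# since the whole disk has to be read/written end to end.

def rebuild_hours(disk_tib, mib_per_sec=100):
    mib = disk_tib * 1024 * 1024       # TiB -> MiB
    return mib / mib_per_sec / 3600    # seconds -> hours

print(f"{rebuild_hours(6):.1f} hours")  # ~17.5 hours at full speed
```

And that's the best case; throttled so the array stays usable, a whole weekend is plausible.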
>>
>>55843711
But I want all those disks into one disc, and RAID seems to be the only way to do it.

And hey, I might want to stream other things through there, like movies or something, don't know until I have it.
My main goal is to one day setup a system where I can gather all of my data remotely, without the use of someone else's proprietary system
>>
>>55843210
flat chest flat chest
>>
>>55843863
>But I want all those disks into one disc, and RAID seems to be the only way to do it.
Well, no, there's JBOD but that's pants-on-head retarded as you increase the number of volumes.
>>
>>55843861
That's why I'm sticking to 2-3tb drives
>>
>>55843877
Is it retarded for me to do, or retarded in general to do?
>>
File: Not Telling Her Name .webm (3MB, 960x540px)
>>55843115
She's qt
>>
>>55843863
>But I want all those disks into one disc
So use volume spanning, done.
>>
>>55843888
both, really. you proportionally increase the chance to destroy the logical volume each time you add another disk to the frankenstein's monster.

i mean, if you knew exactly which one died and had the group configured via LVM i suppose you could screw with that to pick up the pieces, but why when an array does the work for you? to save the $100 for the parity drive?
>>
>>55843115
>raid 5
>not raid 160
Are you even trying?
>>
>>55843115
RAID 10 is reasonable. ZFS or BTRFS are the best ways to go.
>>
>>55843954
Hey, I'm going to see if I can do this in debian, thanks for the heads up, although from the looks of it it sounds like it's more of an OS trick, meaning I won't be able to utilize the self-healing qualities I would have hoped for with a RAID 5 system.

Doesn't matter anyway, I'm getting seagate drives, the most trustworthy in the business :^)
>>
>>55843917
she would look amazing if she dropped the 25 or so pounds she's overweight
>>
>>55843960
What if I don't ever plan on going beyond 14tb, and simply plan on replacing dead HDD when they occur?
>>55844015
But >>55843142, why combine them?
>>55844026
>>55843550 btw
>>
>>55843917
She reminds me a bit of that weeaboo bitch with huge tits that pretends people aren't just watching her videos for her tits while actively using her tits to promote her videos.
>>
>>55843210
They really don't.
http://www.pornhub.com/view_video.php?viewkey=ph5681aed010738
>>
>>55843917
To all the wondering /g/ents, she's Amanda Love.
>>
>>55844026
You have issues pal.
>>
>>55844042
>What if I don't ever plan on going beyond 14tb, and simply plan on replacing dead HDD when they occur?
The volume is still butt-fucked if one dies. And the failure rate is going to be approximately 5 times higher than one drive when you use 5 in a span.
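The "approximately 5 times" figure follows from compounding the per-disk odds. A sketch, using a made-up 5% annual failure rate:

```python
# A span (JBOD) dies if ANY member disk dies, so volume failure
# probability compounds with each disk added.

def span_failure_prob(n_disks, per_disk_afr):
    return 1 - (1 - per_disk_afr) ** n_disks

print(f"1 disk: {span_failure_prob(1, 0.05):.1%}")       # 5.0%
print(f"5-disk span: {span_failure_prob(5, 0.05):.1%}")  # 22.6%
```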
>>
>>55843175
Amanda Love
>>
File: 1458356904808.png (489KB, 502x637px)
>>55843175
Google search, you retard
>>
>>55844057
0/10
>>55844051
LOL
>>
>>55844079
Sounds terrible, then what's the use for such a system?

Also, if anyone could answer, what affects the speed when trying to access a file from the server onto my phone or computer? Is there a way to bypass my shitty 3mbps Internet speed?
>>
>>55844042
>why combine them
Well, if you put six 1TB drives in RAID 1, you have 1TB of usable space with 6 copies of everything.
Six 1TB drives in RAID 10, you get 3TB of usable space, and if you're lucky you can have 3 drives fail with no data loss. Even if you're unlucky, you still have as much protection as RAID 5 without the write speed penalty.
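The capacity math above as a sketch (assumes equal-sized disks; "raid1" here means one n-way mirror, matching the six-copies example):

```python
def usable_tb(level, n_disks, disk_tb):
    # usable capacity for common RAID levels with equal-sized disks
    if level == "raid0":
        return n_disks * disk_tb
    if level == "raid1":            # one n-way mirror
        return disk_tb
    if level == "raid5":            # one disk of parity
        return (n_disks - 1) * disk_tb
    if level == "raid6":            # two disks of parity
        return (n_disks - 2) * disk_tb
    if level == "raid10":           # stripe of 2-way mirrors
        return (n_disks // 2) * disk_tb
    raise ValueError(level)

for lvl in ("raid1", "raid10", "raid5"):
    print(lvl, usable_tb(lvl, 6, 1))   # 1, 3, 5
```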
>>
>>55844119
>what's the use for such a system?
convenience. i'm oversimplifying it as you can cut out the bad part of a spanned volume but unless you have some technical software limitation there's not much benefit of it over separate file systems.

>>55844119
>Is there a way to bypass my shitty 3mbps Internet speed?
go with another provider/bond another link. you can't magically cram 15Mb of data in a pipe one tenth that size, and data compression isn't something you're going to monkey with on your home NAS
>>
>>55844168
Yeah, but at the cost of losing half of my storage to backups.

Write speed isn't something I'm really concerned about anymore because as you've all pointed out >>55843197
>>
>>55844119
>What's the use
Cheap pool of storage. Not for production use.
>what affects the speed
Speed of your drives, speed of your server's connection to the switch/router, speed of your pc/phone's connection to the router/internet, your ISP-imposed speed cap, and the speed of your phone/pc's storage if saving files.
If the 3mbps is ISP imposed, buy better service or change providers.
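To put a 3 Mbps cap in perspective, a sketch of raw transfer time (protocol overhead ignored; the 700 MiB file size is just an example):

```python
def transfer_minutes(file_mib, link_mbps):
    bits = file_mib * 1024 * 1024 * 8           # MiB -> bits
    return bits / (link_mbps * 1_000_000) / 60  # bits/s -> minutes

print(f"{transfer_minutes(700, 3):.0f} min")   # ~33 minutes for one video
```

No server-side tuning changes that; the pipe, not the array, is the bottleneck.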
>>
>>55844224
>losing half of my storage to backups.
if you want parity or mirrors, you're going to have to lose some. and they're not backups, RAID is not a backup. it's disk failure mitigation.
>>
Personally, I just have two 2TB drives and I manually sync them from time to time.
>>
>>55844119
>>55844236
Also regarding spanning: VM's can end up "floating" across several metal machines, so spanning a virtual disk across the physical ones makes sense, if each machine has its own redundancy/resiliency.
>>
>>55844269
I seriously doubt OP's porn stash is going to be a datastore for an ESXi host.
>>
>>55844287
Are you saying that's not normal (for a /g/tard)?
>>
>>55844249
You do have a point there. I figured with the RAID 5 setup I would lose a little less though.

I guess this whole server thing is way more complex than I originally imagined. I mean I haven't even considered ZFS or whatever before, I just assumed RAID was always the way to go.
>>55844287
It's not
>>55844236
>>55844219

I would if I could, but I have EarthLink, so it's like signing your soul over to the devil.
>>
>>55844327
>I guess this whole server thing is way more complex than I originally imagined. I mean I haven't even considered ZFS or whatever before, I just assumed RAID was always the way to go.

It isn't necessarily. RAID is definitely the easier/more conventional option.
ZFS is alright, though kind of overkill for home use. If you open that can of worms I would recommend using ECC RAM and even if it looks tempting stay the fuck away from dedupe.
>>
dude, back problems lmao
>>
>>55843917
>tattoos
No thanks
>>
>>55844327
>RAID 5 setup I would lose a little less though.
You do, you do take a speed penalty for the parity though. The concern with level 5, as intimated earlier, is that identical hard drives bought together tend to have similar MTTFs, so the likelihood of a cascade failure becomes higher when you're rebuilding, especially since the disks are going to be working overtime for hours on end until the rebuild is finished. Level 6 gives more padding at the cost of space.
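The other rebuild hazard worth numbers: hitting an unrecoverable read error (URE) while re-reading every surviving disk. A sketch; the 1-per-1e14-bits rate is a common consumer-drive spec-sheet figure, not measured data:

```python
import math

def rebuild_ure_prob(surviving_tb, ure_per_bit=1e-14):
    # P(at least one URE while reading all surviving data in a rebuild)
    bits = surviving_tb * 1e12 * 8              # TB -> bits
    return -math.expm1(bits * math.log1p(-ure_per_bit))

# 5 x 4TB RAID 5: a rebuild must read the 4 surviving disks (16 TB)
print(f"{rebuild_ure_prob(16):.0%}")  # ~72%
```

This is the usual argument for level 6 on big disks: the second parity lets the rebuild survive a URE.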
>>
>>55843115
You could honestly just use ZFS with striped mirrors. If you don't care about the SSD's lifespan you can just use it for a log or write cache. Right now I have four 1TB drives in a striped mirror fashion with ZFS and use an old SSD as a log cache for it. I get good performance. The SSD might not be necessary, but it would otherwise go unused, might as well put it to use.

Either way, consider your options and do what best suits you. For me I don't host anything too particularly special other than seafile and some other minor services so the setup I have is good enough.
>>
>>55844287
He asked what the use for such a system was.
>>55844327
RAID is to cover your ass in case of disk failure. If you have access to/ can build an ECC system, consider ZFS. BTRFS is another option, doesn't require ECC or (necessarily) devour storage.
Also, not using the myriad of technologies available to keep your porn stash safe? It's like you want to lose it all. How can you expect to keep your data safe if you don't have a Raid 160 array, per 24TB-1bit, to support a spanned ZFS/BTRRFS virtual disk, with 2 GB of RAM per TB of data. Don't forget to back up your snapshots to a separate machine, and ensure the cases are cosmic radiation - resistant, and you get CPU's capable enough to handle compressing your data regularly. Oh, and don't forget your quality UPS, to ensure you have a well regulated current, and you have enough time to finish your writes.
>>
>>55844422
Oh look, it's autism on 4chan!
>>
>>55844404
You do not need ECC RAM to run ZFS, even one of the cofounders himself said there's nothing particularly special about ZFS that would require it.
>>
>>55844472
I see why people just get Dropbox and call it a day
Keeping it on the pretty cheap side, trying not to go above $500

>>55844464
>>55844264

I meant to ask, how roomy are you'll right now in terms of space? Are you close to maxing out?

2 and 3 tb HDD are getting pretty cheap, so I figured I go all out if I planned on going at all
>>
>>55844575
>how roomy are you'll

wat
>>
>>55844603
Meant you all
Fleksy is being alcoholic
>>
>>55844575
I have maybe 90GB free now, but I can always delete some shit.
>>
>>55844575
Buy enough ultrastar drives to fit your needs, +3: 2 for parity in a RAID 6, and 1 in case of drive failure. Format the array using ZFS (BSD) or BTRFS (Linux). Look into ECC; it doesn't hurt.
Oh, and for the litany of things I listed above, I forgot about registered RAM. For when you want to play with the big boys.
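That shopping list turned into arithmetic, as a sketch (the 14 TB target and 3 TB drive size are just the numbers floated earlier in the thread):

```python
import math

def drives_to_buy(target_tb, drive_tb, cold_spares=1):
    data_disks = math.ceil(target_tb / drive_tb)
    return data_disks + 2 + cold_spares    # +2 for RAID 6 parity

print(drives_to_buy(14, 3))   # 8 drives: 5 data + 2 parity + 1 spare
```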
>>
>>55844477
No, I just have normal standards
>>
File: meme_server.png (70KB, 670x622px)
>>55844642
Literally what I was thinking
>>
>>55844642
Well, that actually opens a new question

Should I use Linux or BSD for my server

