
Thread replies: 84
Thread images: 14

>He hoards data
>Doesn't use ZFS
You have 30 seconds to explain yourself.
>>
>>60524631

>his filesystem needs 6GB of RAM
>>
>>60524631
kys feg
>>
>>60524692
>He has idle RAM
>>
>>60524631
Meme filesystem, ext4 can handle a lot more nested directories than zfs
>>
>>60524631
>Not using NTFS
What are you, gay?
>>
>>60524631
how do I add a drive to an array?
>>
Because of its absurd requirements for disk redundancy (aka RAID).

Using any sort of RAID is regarded as 'bad practice', and they recommend simply having double the disks you need and storing two copies of your data.
If you try to use any sort of RAID, prepare for absurdly low performance, and when a single one of your disks fails it takes literally WEEKS to rebuild (and good luck to you if you expect to use the array in the meantime).
And then there is the worst one: inability to expand RAID arrays. I want to set fault tolerance to X drives and, when I plug a new one in, just use it as normal, but ZFS says fuck you.
You NEED to have all the drives you will ever want in the array on hand at the time of creation, and if you ever need more space you just build another array and waste away storage space lol.

It may make sense if you have tons of money and can afford to throw away half of your storage space, but it has insane requirements for home usage.
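Roughly what that looks like on the command line, if you want to see why people get annoyed (pool and disk names are made up, adjust for your own setup):

# create a pool with a single 4-disk raidz1 vdev; the parity layout is fixed at creation
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# a lone extra disk can't be folded into the existing vdev; ZFS refuses
# (or, with -f, bolts it on as a separate non-redundant vdev, which is worse)
zpool add tank /dev/sdf
# the only clean way to grow is to add a whole new raidz1 vdev in one go
zpool add tank raidz1 /dev/sdf /dev/sdg /dev/sdh /dev/sdi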
>>
>>60526694
bcachefs and tfs are the future
>>
File: 1490981463315.jpg (17KB, 552x291px)
>>60526561
>NTFS
>>
>>60526694

Christ, what a crybaby. Protip: redundancy and reliability use more capacity.
>>
I only need one copy of anything I can just grab again. Other filesystems are good enough, and good enough is all I need.
>>
>>60526694
This is what people who have never experienced a failed HDD think.
>>
>>60526716
I don't know anything about them. Will I be able to just set fault tolerance to X number of drives, add in HDDs as needed, and have the filesystem expand as expected?
And preferably not need an absurd amount of RAM and CPU power for a simple filesystem?
If so, I'm eager for the future.
>>
>>60526850
with bcachefs you'll be able to do what you'd like, and have tiering up to something like 16 tiers. You can have higher tiers cache the lower tiers, and replicate the metadata on them as well. You can set up striping/parity however you like on any tier, and remove and add devices while online. You can throw in an SSD and set it up to cache the lower tiers, as either a writeback or read cache, or both. Pretty much anything goes.

And nah, ZFS is only that way because they rushed it out the door for servers with a ton of ECC RAM.

I'm looking forward to tfs's machine learning cache system as well
>>
btrfs RAID56 is getting fixed in kernel 4.12
>>
>>60526694
RAID5/RAIDZ1 failure is overhyped, it's perfectly reasonable to use it with 4 disk vdevs. Expanding your storage 4 disks at a time isn't that unreasonable for the benefits of ZFS, and you only lose 25% to parity. Considering that it's a home setup, the downtime from having to restore from backup in the extremely unlikely case of a drive failure during rebuild is a non-issue to begin with. Also, a ZFS resilver takes considerably less time than a traditional RAID array rebuild.

However, BTRFS will clearly be superior once RAID56 gets implemented properly, since it offers easily expandable arrays.
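For anyone wondering what "easily expandable" means in practice, this is roughly how btrfs handles it (device names are hypothetical, and raid1 is shown because raid5/6 is exactly the part that's still broken):

# make a two-disk raid1 filesystem and mount it
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
mount /dev/sdb /mnt/pool
# later: add a single disk of any size, then rebalance data across all three
btrfs device add /dev/sdd /mnt/pool
btrfs balance start /mnt/pool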
>>
>>60528087
Assuming 4TB as the smallest reasonable HDD size to buy now, that means spending ~$500 to expand by 12TB at a time.
That is several years' worth of data to me, and converted to local prices it comes to about two months' full salary.
Also, over time you get progressively worse efficiency and failure odds, since it only takes two disks in the same vdev going bad for you to lose all your data, and you have a fixed 25% storage loss.
If ZFS handled expansion the way BTRFS wants to, you would get a fixed drive-failure tolerance that doesn't depend on the luck of drives in the same vdev not dying at the same time, and much better efficiency, since you would only lose a few drives' worth of storage space.

I'm gonna wait a few more years until BTRFS gets its shit together or a better FS comes along. Until then I'm just gonna keep good old regular offline backups and deal with bit rot.
>>
>>60528613
pretty much the only way to do piecemeal expansion with ZFS on home-user budgets is to settle for mirrors, and drop in another two-drive vdev when you need to expand. The space efficiency sucks but at least scrubs and resilvers are fast.
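Something like this, if anyone wants the rough commands (pool and disk names made up):

# start with one two-disk mirror
zpool create tank mirror /dev/sdb /dev/sdc
# when you run low on space, buy two more drives and add another mirror vdev
zpool add tank mirror /dev/sdd /dev/sde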
>>
>>60528613
>I'm gonna wait a few more years until BRTFS gets its shit together
I've been thinking this literally since before btrfs was merged into the kernel, let alone declared stable. It was supposed to be our savior but ended up being rushed and designed poorly, with code piled on top of piles of code. It was developed with less care about correctness than about new features.

that's why I was saying bcachefs. It's what btrfs should have been, and that's the reason the developer is working on it as well. I'm running RAID 10 on btrfs but I plan on moving to something like an SSD-cached RAID 5+0.
>>
APFS doesn't have these problems.
>>
>>60528779
I'm kind of excited about bcachefs now, been reading about it for about an hour now.
While I don't really care much about the cache part of it, it seems like a well-designed system with modern features, developed with code quality in mind.
I just hope it turns out like btrfs should have, but it seems like it is still a couple of years away; let's see how it turns out...
>>
>>60528779
the name kinda suggests to me that they care a lot more about the caching/tiering functionality than the fault-tolerance stuff. We'll see how it shapes up over time I guess.
>>
>>60528983
>>60528986
I don't like the name either, he just took his old bcache name and slapped fs on it. I guess it works for riding on his previous work, but maybe he'll rebrand before it gets merged.
>>
>>60528613
I get my 4TB drives at 80 bucks a pop.

Ebay ya dingus.
>>
File: 1415059072951.jpg (63KB, 654x539px)
>>60528613
>deal with bit rot.
It's fucking hard, man.

Even my backup rots. I only know because I use it as a zpool.
At least I can tell what's taken damage and whether to pull from live or backup, instead of flying dark and waiting for a bitflip to slice something up.
>>
>>60529882
Which drives and where? I got mine for $130, but it is a nice 5200 RPM (lower noise and power usage) HGST enterprise drive which Backblaze tested to be their single most reliable drive, with over 3k samples.
>>
>>60529919
I never really used ZFS, so how does it work on a single drive? It has no fault tolerance, but it still checksums data and warns you at read time if there has been corruption?
>>
>>60528613
Yeah, I won't disagree there, it's also why I haven't moved to ZFS yet. And the fact that I live in a shithole country where electronics cost more than in the US. Though expanding with 4 disks at a time is still pretty reasonable to me - I could afford that once every 1-2 years and keep up with my storage needs. It's a bit riskier to keep adding vdevs, but two complete disk failures in the same one are just too rare. An URE might still happen, but that'll only fuck up a block instead of the whole pool, so you'll only need to restore a few files.

I'm just very hesitant to commit to it, especially now that other solutions are starting to surface. Other than BTRFS, StableBit DrivePool promises integration with ReFS integrity streams, and they also plan to develop their own bitrot protection software.
>>
>>60529882
>4TB drives at 80 bucks
ok m8
>>
>>60530135
Yep, on a per-file basis.

If you have a single vdev zpool and a backup, you're much safer than using even traditional RAID.
You just have to put in a little elbow grease when something's checksum is not right.

ZFS makes backups easy with differential snapshots though.
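For reference, the snapshot/send workflow goes roughly like this (pool and dataset names are just examples):

# take periodic snapshots of the dataset
zfs snapshot tank/data@2017-05-01
zfs snapshot tank/data@2017-05-08
# first time: send the full stream to the backup pool
zfs send tank/data@2017-05-01 | zfs receive backup/data
# afterwards: only send the differences between the last two snapshots
zfs send -i tank/data@2017-05-01 tank/data@2017-05-08 | zfs receive backup/data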

>>60530122
I run a RAIDZ1. WD Blues & Greens @ 5400 RPM work for me. If one dies it will get replaced, but slow-spinning drives rarely fuck up unless you physically abuse them.

Not worth the extra cost for slightly better manufacturing standards in my opinion, when you can just replace the buggers.
>>
>>60524631
What for? I have ext4 and will upgrade to a RAID configuration in the future; it's too costly right now.
>>
>>60530135
If you just want to detect bitrot, ReFS is a far simpler solution (assuming you haven't moved your data to a Linux server already).

It can also repair bitrot if you use it alongside Storage Spaces, but apparently performance is still horrible.
>>
>>60530380
Not dealing with proprietary software, especially for something like that.
Might use ZFS on my backup drive though, never gave it much thought really. Just assumed it wouldn't fit that use case.
Maybe a script to checksum all files locally and on the backup, but ZFS might just be simpler, thanks for the idea.
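Something dumb like this would already catch silent differences, assuming the live copy and the backup have the same directory layout (paths are just examples):

# hash everything on the live copy
cd /data && find . -type f -print0 | sort -z | xargs -0 sha256sum > /tmp/live.sums
# hash everything on the backup
cd /mnt/backup && find . -type f -print0 | sort -z | xargs -0 sha256sum > /tmp/backup.sums
# any line that differs is a changed, missing, or rotten file
diff /tmp/live.sums /tmp/backup.sums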
>>
>>60530847
Might want to look into SnapRAID too, which is open source. It offers almost the same protection as ZFS, just not in real time.
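Rough idea of the setup, in case you haven't seen it (paths and disk names are made up; check the SnapRAID manual for the real thing). The config file, e.g. /etc/snapraid.conf, lists the parity and data disks:

parity /mnt/parity/snapraid.parity
content /var/snapraid/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2

Then you run it periodically, e.g. from cron, since it isn't real-time:

snapraid sync    # update parity to match the current data
snapraid scrub   # verify checksums, catches bit rot
snapraid fix     # restore damaged files from parity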
>>
mdadm RAID is the production-ready solution right now, and cloud-style distributed filesystems look like the next usable, stable best choice (because eventually they'll manage both local and remote storage with all concerns covered, including data integrity, redundancy, deduplication, encryption, and efficient use of hardware that can be added and removed at any time with no effort, ...)
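The mdadm part at least is dead simple (device names and array size are just an example):

# build a 4-disk RAID6 array and put a normal filesystem on top
mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mkfs.ext4 /dev/md0
# check rebuild/sync progress and array health
cat /proc/mdstat
mdadm --detail /dev/md0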
>>
>>60530959
It is great as far as RAID goes, but it does not protect against bit rot.
>>
>>60530285
>You just have to put in a little elbow grease when something's checksum if not right.
if I remember correctly, if ZFS finds a checksum error and can't fix it due to lack of redundancy, it will tell you which files are affected. Restore them from backups and you can clear the error; it won't kill the whole pool like a failed vdev will.
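That matches the usual workflow, roughly (pool name hypothetical):

zpool scrub tank        # read and verify everything
zpool status -v tank    # with no redundancy, permanent errors are listed along with the affected file paths
# restore those files from backup, then reset the error counters
zpool clear tank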

>>60531695
I think you can "scrub" it, but because it adheres to the "no layering violations" philosophy, all it can tell you is that disk block X is inconsistent. It can't rebuild from redundancy (no checksums) or tell you what file you need to restore (because it doesn't know anything about the filesystem above it)
>>
>>60524631
B-but anon, I use tmpfs, the only filesystem that respects my privacy.
>>
File: nas.png (79KB, 1019x548px)
bro you don't even know

600 bucks buys you a lot of ZFS

there's no excuse for anyone
>>
>>60532124
If you can tell there has been data corruption but can't correct it or tell what it affects, it's of no real use.
Gonna look further into ZFS on an external drive; it helps me get some experience in case I ever use it as my main FS and helps against bit rot (which ATM I have no protection against whatsoever).
>>
File: IMG_20150926_145732.jpg (735KB, 1520x2688px)
My home server has this horrible setup of 6x500GB doing RAID through mdadm, LUKS, and ext4.

It works, but it's disgusting.

But I cannot be bothered to fix it.
>>
>>60533644
this is very nice cabling

also what case?
>>
File: rack_03.jpg (428KB, 2000x1130px)
>>60533681

Thank you.

It's called: "X-Case RM 400/10 Short V5 500mm 4u ATX Rackmount

I found it at https://www.xcase.co.uk/ but it doesn't seem to be there anymore.
>>
File: IMGP1875.jpg (527KB, 2000x1275px)
>>60533759
I ask because you did a whole hell of a lot better than I was able to. I want to neaten it up but several attempts at that have gotten nowhere. Your rack overall looks nicer too. One boot drive, one 1TB scratch disk, and 12x3TB drives in six mirror vdevs. It's in a cheap Rosewill case from Newegg.

I have to open it up to replace a disk later this week, maybe this time I can make it less awful.
>>
File: rack_04.jpg (406KB, 2000x1130px)
>>60533920

The key is to do the data cable management before you do all the other ones, and then install the power cables at the very end.

Also, use zip ties to bundle SATA cables together. They are flat, so they combine very well together.

Once you get that done, you add the power cables and ruin the whole look with them.
>>
> 2017
> Not using JBOD & multiple google drive unlimited accounts as a backup
>>
>>60533920

Why didn't you align the cooling fins of the CPU cooler with the direction of the case airflow?
>>
>>60524692
Isn't it 1GB of RAM per TB?

>>60526220
ZFS supports dedup and compression.
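Both are per-dataset properties you just toggle; dedup is the part the RAM rule of thumb is really about (pool/dataset names are examples):

zfs set compression=lz4 tank        # cheap, pretty much always worth turning on
zfs set dedup=on tank/vm-images     # this is what eats RAM, since the dedup table wants to live in memory
zfs get compressratio tank          # see what compression is actually saving you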
>>
File: drive.png (37KB, 927x874px)
>>60535208

If you're gonna troll, make it halfway believable.
>>
File: 1478961369977.png (43KB, 1917x1017px)
>>60535250
Don't be a fucking faggot pls, example: 1 of my accounts, using rclone to encrypt & decrypt the data, and then mounting gdrive as a volume with FUSE on my server
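For anyone curious, the moving parts are roughly these (remote names are whatever you picked in rclone config):

rclone config                               # set up a "gdrive" remote, then a "gcrypt" crypt remote layered on top of it
rclone sync /data gcrypt:backup             # encrypted upload
rclone mount gcrypt: /mnt/gdrive --daemon   # FUSE-mount the encrypted remote as a normal directory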
>>
File: 1490652693178.jpg (52KB, 541x309px)
>>60535250
You can buy unlimited accounts on eBay for $15, fag

2 Accounts mounted as vaults
>>
>>60535231
Well it was a long time ago I put that heatsink on (it's a Nehalem and used to be in my desktop until it got bumped to the server when I upgraded last year), but if I remember correctly, it's because I couldn't due to the peculiarities of the mounting bracket and/or the heatpipes on the right side fouling either the PSU or the chipset heatsink.

Yeah it'd be better to have some 92mm tower cooler that works with rack airflow instead of that old blowdown sink, but various combinations of laziness and stinginess keep it there. This is helped by the fact that it works okay as-is, CPU temps under load only get to 60C or so.
>>
I use CEPH instead.

Get on my level,

F A G G O T
A G G O T F
G G O T F A
G O T F A G
O T F A G G
T F A G G O
>>
>>60529919
>no trigger discipline
>>
>hoarding data
0 purpose
>>
>>60529919
I seriously hope you have ECC memory.
>>
>>60535502

Where do these accounts come from? Some university research center?
>>
>>60524631
I don't have ECC RAM.
Nor do I have the money to purchase Software RAID cards and Enclosures enough for me to justify the initial investment.


That and I have had Loads of problems with fucking BITROT on ZFS.
>>
>>60526561
you made me laugh

I'll tell you a joke:

NTFS
>>
>>60538432

>ZFS
>bitrot
>>
here I am thinking about how I need to replace my dinky NAS which is basically EOL and we have a storage thread on /g/.

desu I'm leaning towards just using LVM

>>60536686
>ceph
>>
>>60538626
Me too, I'm thinking about centralizing my storage (moving most hard drives out of my desktops except the boot drives and plopping them in my home server), and honestly I think I'll just go with XFS on LVM, especially since I'm not gonna run a *BSD-based server.
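XFS on LVM is about as boring as it gets, which is the appeal (volume group, device names, and mount point are made up):

# pool two disks into a volume group and carve out one big logical volume
pvcreate /dev/sdb /dev/sdc
vgcreate storage /dev/sdb /dev/sdc
lvcreate -l 100%FREE -n data storage
mkfs.xfs /dev/storage/data
mount /dev/storage/data /mnt/data
# growing later: vgextend storage /dev/sdd; lvextend -l +100%FREE storage/data; xfs_growfs /mnt/data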
>>
is bitrot a real fucking thing or am i being baited

yes ok entropy and quantum tunneling slowly makes electric charge and magnetic polarity go away but seriously? why have i never heard of this before in my life
>>
>>60538684
yea my desktop / linux server just have small-ish SSDs in them as main boot disks, all my media and other shit is dumped onto my NAS.

I've used ZFS before and I don't know if I want to deal with the extra config / management overhead of that vs just LVM
I gotta build a new linux box either way I guess. I didn't plan well when I built my current linux server - it's pretty small resource-wise and there's no room to upgrade, basically got to build a whole new box
>>
>>60538432
You don't need ECC, you will likely never experience bitrot. If you do, your hardware is likely defective in some way; the only other explanation is that you somehow live in the cosmic ray party room of the Earth.
>>
XFS is faster
>>
File: 1290479667535.jpg (12KB, 243x349px)
All memeing aside, does ZFS have any noticeable improvement over NTFS on SSDs that people should care about?
>>
>>60538748
ECC isn't really for fixing random bitflips from cosmic rays, it's for detecting and compensating for weak capacitors or bitline sense amps that fuck up repeatedly, but only every couple hours or days.
>>
Meh, the whole bitrot / RAID write hole / data corruption thing has gotten really out of hand. Yes, if your server loses power while writing data, some data will get lost, damaged, etc. Just the nature of the beast. The fact it's in a RAID config has nothing to do with it. The only way to prevent this is to use a UPS that will, in the event of a power failure, shut down the server properly before the UPS battery dies. It's good practice to use a UPS anyway, no matter if you use RAID or not. Drives die; that's what backups are for. Files get deleted by mistake; that's what shadow copies (previous versions) are designed for, so you can go back to yesterday or a month ago and retrieve those files without having to touch your main backup. If your whole server just dies, well, that's what a system image + full data backup is for. Will you lose data at some point? Oh yes. But with a good plan, maybe at most a day's worth, depending on how often you run a backup job.
>>
>>60539704
To ensure maximum data safety, keep your backup device(s) (NAS, external USB, whatever) shut down when not in use. That way, when you do lose power, they won't be affected, plus the drives will last longer.
>>
>not using murderFS
>>
>>60539777

> waste of trips on shit advice

unless it's permanently offsite (and preferably continuous/live) and thus not arbitrarily powered down, it's not really backup.
>>
>>60524631
RAID6 with an XFS filesystem works just fine.
>>
>>60524631
I have a nonredundant array of expensive disks. Haven't lost anything yet.
>tfw non-production and can take your time making sure a hard drive isn't a lemon
>>
>>60535248
You only need that much RAM if you're going to be using deduplication in FreeNAS; a lot less is fine. At worst it'll perform worse.

You can always just go NAS4Free instead, for a lot less bullshit and a simpler setup.
>>
>>60538626
>ceph

Yes, CEPH. It's master race.
>>
>>60524692
Unused RAM is wasted RAM
>>
>>60528613
>I'm gonna wait a few more years until BRTFS gets its shit together
You sure about that m8? I've been thinking that for like 10 fucking years now.
>>
>>60541406
Unspent money is wasted money.
>>
>>60540046
Why not EXT4 instead of XFS? I heard XFS performs worse in most cases and is really only suitable for extremely large operations (talking petabytes here). Both should be plenty stable, but I heard EXT4 has better support for recovery when things go bad.
Also, good luck with bit rot. Unless you just want to save money (or are worried about uptime), I'd personally buy a few more HDDs and use them as a regular offline backup. More reliable by any measure, especially since you aren't protecting yourself against bit rot anyway.
>>
>>60538742
You don't have 20 terabytes of weeabo horseshit sitting around.
>>
>>60541516
Or bcachefs then. God, we should have a better solution by now.
If in about 3 years things don't improve I'm probably going to give up and use ZFS. Or maybe give up on my obsessive hoarding and just keep using regular backups like everyone else...
>>
>>60541525
This is very true.