Can we talk about home network storage for a minute?
Currently I'm using a standalone Windows box with two 2TB drives attached, one drive shared on the network, and SyncBack running nightly to replicate any changes from that drive to the other.
The drives are not in a mirrored RAID array. My thinking is that I have up to 24 hours to restore data (e.g. after an accidental deletion) from the other drive before SyncBack replicates the changes at 2:00 AM.
My fear, however, is data degradation/corruption. As in, SyncBack sees the corruption on the primary drive as a "changed file" and then replicates it to the second drive. Thus fucking me.
Is there some sort of data health monitoring software that warns of data corruption or impending drive failure?
Maybe this is a dumb way of dealing with data storage? Is there a better way to handle this? I'd rather keep it Windows-based if I can.
I would also rather it not be part of a multi-disk RAID. That way, in the event of a controller failure or any other disaster, any single drive would contain all the data and be readable by any other PC.
You need simple RAID 1 with an off-site backup.
What you want is flat-out wasteful of time, money, and energy. A simple two-drive RAID 1 is about as stable as they come, too. You're not trying to run dual RAID 10s with six drives apiece.
Then throwing a backup into a secure account off-site means you will have your data no matter what. Hell, for simplicity's sake, you could just get Carbonite for the cost of a Netflix subscription and back up everything on your network.
Use versioning in SyncBack if you have enough free space. Monitor drives with Hard Disk Sentinel. Make .par files for verification and recovery with MultiPAR. All problems solved.
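For anyone wondering how .par recovery files can rebuild damaged data at all: the core idea is parity. The toy version below uses a single XOR parity block, which can rebuild any one lost data block from the survivors; real PAR2 files use Reed-Solomon coding so they can survive multiple damaged blocks, but the principle is the same. This is an illustration only, not MultiPAR's actual format:

```python
from functools import reduce

def xor_blocks(a, b):
    """Byte-wise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(blocks):
    """Parity block = XOR of all data blocks."""
    return reduce(xor_blocks, blocks)

def recover(blocks_with_gap, parity):
    """Rebuild the single missing block (marked None) from the rest."""
    missing = blocks_with_gap.index(None)
    survivors = [b for b in blocks_with_gap if b is not None]
    # XOR of all survivors plus parity cancels everything except the
    # missing block, because x ^ x == 0.
    rebuilt = reduce(xor_blocks, survivors + [parity])
    out = list(blocks_with_gap)
    out[missing] = rebuilt
    return out
```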
The easy way (requires some maintenance every month):
Enable RAID 1 and enable the copy-on-write snapshot features of your filesystem (called VSS on Windows), scheduled daily, discarding old snapshots.
The real way (may be pretty expensive):
Enable RAID 1 and set up some form of off-site backup.
>>61154229
Use Microsoft OneDrive or Google Drive.
Enjoy your botnet.
>>61156146
>What you want is flat out wasteful of resources in time, money, and energy.
I may be missing something, but wouldn't your solution still require a dedicated PC to host the RAID array as well as run the off-site replication software?
Though I didn't realize off-site storage was so cheap.
Do we have any news on the HPE Gen10 servers? They seemed to be pretty good for the price.
>>61156436
The REALLY real way
Keep your JBOD configuration, install FreeBSD, and use ZFS to take advantage of its copy-on-write features. Your data is safe from controller failure, and you get instantaneous snapshots every 24 hours, which can be destroyed/expired whenever you want.
Pair that with an off-site backup if you want that added security.
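For reference, the ZFS side of this is trivial: `zfs snapshot tank/data@2017-07-04` creates a snapshot instantly, and `zfs destroy` expires one. The only logic worth scripting is retention, sketched here in Python (the `tank/data` dataset name is made up; daily ISO-dated snapshot names are assumed):

```python
def snapshots_to_destroy(snapshot_names, keep=7):
    """Given daily snapshot names like 'tank/data@2017-07-01',
    return the ones older than the newest `keep`.
    ISO dates sort correctly as plain strings, so no date parsing
    is needed."""
    ordered = sorted(snapshot_names, key=lambda s: s.split("@", 1)[1])
    return ordered[:-keep] if len(ordered) > keep else []
```

Feed the returned names to `zfs destroy`, one at a time, from a daily cron job.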
>all these home raid retards
As far as monitoring for corrupted files goes, a SnapRAID parity volume will do the trick.
For monitoring drive health, check the SMART data (how did you not know this); you can use CrystalDiskInfo or SpeedFan or a shitload of other software to display it on Windows (or smartctl on Linux).
Also do yourself a favour and stop using Windows as a server.
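To make the SMART advice concrete: a few raw attribute values are strong failure predictors (reallocated, pending, and offline-uncorrectable sector counts), and any nonzero raw value there means the drive should be replaced. A sketch that scans for them, using a made-up table in the column layout `smartctl -A` prints; the code is illustrative, not part of any tool named above:

```python
# Attributes whose nonzero raw value predicts imminent drive failure.
CRITICAL = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
            "Offline_Uncorrectable"}

def failing_attributes(smart_table):
    """Parse 'ID NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED
    RAW_VALUE' lines and return the critical attributes whose raw
    value is nonzero."""
    bad = {}
    for line in smart_table.splitlines():
        fields = line.split()
        if len(fields) >= 10 and fields[1] in CRITICAL:
            raw = int(fields[9])
            if raw > 0:
                bad[fields[1]] = raw
    return bad

# Illustrative sample in smartctl's layout, not real drive output:
SAMPLE = """\
  5 Reallocated_Sector_Ct   0x0033 092 092 005 Pre-fail Always - 24
  9 Power_On_Hours          0x0032 077 077 000 Old_age  Always - 17203
197 Current_Pending_Sector  0x0012 100 100 000 Old_age  Always - 0
198 Offline_Uncorrectable   0x0010 100 100 000 Old_age  Always - 0
"""
```

Run something like this against `smartctl -A` output on a schedule and alert when the returned dict is non-empty.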
>>61156518
No reason to buy when you can still get the rebate on the Gen8.
>>61156522
What makes this the "REALLY real way"?
I don't think migrating all the data to ZFS is the best option.
>>61156522
ZFS is terrible on anything but Solaris, as it has to reimplement roughly half the Solaris kernel interfaces just to make it work.
>>61154229
Buy plenty of drives and replace them regularly: your data is worth more than the disks.
Why not just get a NAS at this point?
QNAP has a lot of applications for its NASes; I know one of them lets you back up your NAS files to the cloud for an extra layer of protection.
ZFS does sound like it will handle your issues well, but it's not as if RAID 5/6 lacks parity and can't correct errors anyway.
There are even DIY solutions available as well, like FreeNAS or openmediavault.
>>61159259
Currently running a ZFS root on my laptop.
It's faster than anything else I could have put on here.