>>45512962 Why would you take Backblaze seriously after they reported that enterprise drives had a higher failure rate than consumer drives with such a low sample size? If two fewer drives had failed, enterprise drives would have come out as more reliable in their original analysis. And for their current drive reliability report, they mentioned that they were ripping drives out of portable drive enclosures.
Personal experience with drives isn't necessarily a good indicator unless you've had hundreds of drives as a good sample size.
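To see why the sample-size complaint has teeth, here's a rough sketch (my own illustration, not Backblaze's methodology, and the drive counts are made up) of how wide a 95% confidence interval on a failure rate gets with only a few dozen drives:

```python
import math

def wilson_interval(failures, n, z=1.96):
    """95% Wilson score confidence interval for a binomial failure rate."""
    p = failures / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical pool: 5 failures out of 50 enterprise drives.
lo, hi = wilson_interval(5, 50)
print(f"observed rate 10.0%, 95% CI: {lo:.1%} .. {hi:.1%}")
# The interval spans roughly 4% to 21%, so a couple of failures either
# way easily flips which drive category "looks" more reliable.
```

With hundreds or thousands of drives the interval tightens enough to actually compare categories, which is exactly the point about personal anecdotes above.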
Here's the only real advice for buying HDDs:
Avoid Seagate at all costs.
Generally buy WD or Hitachi. Hitachi is owned by WD, and WD has stated that Hitachi still operates as an independent company with its own product line.
Avoid WD Greens; they are cheap, slow, and prone to failure. Blues are okay for normal computing, but Blacks are better and have a fantastic warranty that makes them worth the price. Reds are good for mass storage but don't operate particularly fast unless you have them in a RAID configuration.
SSDs have proven to be more reliable than mechanical HDDs. All that bullshit you heard about limited write capacity and random issues has been fixed at this point. Hell, even when limited writes were being spouted as the reason not to buy SSDs, they were still theoretically able to write more data over their lifetime than an HDD before failure.
SSDs use a level of parity to avoid issues like this. The higher the capacity of the SSD you buy, the more reliable it generally is, because of increased parity among the cells.
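The "parity among the cells" idea is essentially RAID-style XOR redundancy across flash dies (vendors use controller-specific schemes with names like RAISE; this is only a conceptual sketch, not any real controller's layout):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, RAID-parity style."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

dies = [b"AAAA", b"BBBB", b"CCCC"]   # data striped across three flash dies
parity = xor_blocks(dies)            # parity stored on a fourth die

# Die 1 develops a bad block; rebuild its data from the survivors + parity.
rebuilt = xor_blocks([dies[0], dies[2], parity])
assert rebuilt == b"BBBB"
```

A bigger drive has more dies to spread parity groups across, which is where the "higher capacity, generally more reliable" intuition comes from.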
With HDDs you generally get what you pay for. Seagates are cheap enough to attract people to buy them but they fail at a significantly higher rate.
You'll hear people say they've run Seagates for years with no problems and that could be true but it's all really a gamble when buying HDDs. You could buy a great and reliable Seagate but the odds are low. You pay a little bit more for a WD or Hitachi and you increase the odds you get a good and reliable HDD.
Okay, show us. You're spewing out absolute bullshit and can't back any of it up. Anyone can literally google SSD reliability and come up with actual results that prove they are more reliable than HDDs.
I'm sure it sucked, but that's part of being an early adopter of technology; there are bugs that come along with it. At this point it's pretty much fixed and no longer an issue. Firmware updates, along with newer hardware, have ironed out most of those bugs.
HDDs do handle power failure better I will give you that, but by a margin that is sort of insignificant unless you live in some 3rd world country whose power stations are being bombed.
My friend is a defense contractor and has done some testing of SSDs for a certain large company, and he did mention that he was disappointed by their failure rates under power surges compared to HDDs. He mentioned that after about 20 surges the SSDs started acting wonky, and after 40 most were dead. I don't know what the rate was with HDDs, but he said they were a lot more reliable in that regard. This was about a year and a half ago, though, so this information may not be relevant for newer SSDs.
As for other tests he ran on the SSDs, one was a rapid change in temperature, from around 300 degrees for 20 minutes down to about -20 degrees for 20 minutes, and he said the SSDs performed better than HDDs. I don't know what sort of relevance this test has in the real world, but it's sort of a testament to their reliability, I guess.
Reads and writes were much more reliable than enterprise HDDs.
SSDs are the most reliable, especially for write-once backups. The problem is that the price is much higher than an HDD's and the capacity much lower.
I can't believe it's 2014 and we aren't using 1TB optical media. Seriously, what the fuck. We should be buying blank TB discs for a dollar at this point, but they've not even been released yet. Fucking slackers.
Also OP, no matter how reliable they are, they still have to deal with stupid as fuck transport, stupid as fuck resellers and the handling of crates and shipments these idiots and the transport services do.
>>45514659 The amount is irrelevant unless you hook it up every so often so it relocates the information, as SSDs do. The information will wear out or get corrupted if you leave it in a safe for 10 years, due to how it stores information. That's why the NSA, governments, and libraries use magnetic tapes, or in this case HDDs: because it's physical and won't wear out or go away unless you shake it like an idiot and ruin the platters, or piss moisture at it.
Bullshit, son; HDDs have the same issue. If you don't periodically rewrite them, the magnetic data gets weaker over time until you start getting read errors. It's common practice to take your HDD every year or two and flip the bits back and forth.
>>45514292
>I don't know what sort of relevance
Well, one has moving parts and the other doesn't, so the latter is more reliable under temperature change. That's basic physics; you don't even need a dickhead soldier to figure that out.
I'll just leave this here. The drives are all in good SMART health; they've had light to moderate use over their 9-year lives. This computer was built during the transition from IDE to SATA, so it's using a modified SCSI controller to handle SATA, which is why the last one comes up weird in Speccy. They're all Seagate, btw; the last one is 160GB, the first two are 80GB and in a Windows "RAID 0".
>>45514839 Helium drives don't need to "breathe" like normal air-filled ones. Air-filled drives breathe to accommodate changes in air pressure. The helium also reduces friction in the drive, making it quieter and much more reliable, and makes it possible to put the platters closer together for higher capacity.
>>45514753 What should I say? I have a 3TB Seagate. According to this thread, I am kill. But for the last two years or so it has handled my data quite well. Is there a way to detect failure early like with S.M.A.R.T. just over USB?
>>45513171 >there are some models which are the same size and have a different number of platters though. you can't recognize them by serial# or anything however. "get fucked" - the jew hdd cartel
This. this is why you don't buy high capacity drives. It's a lottery- some are good, some are garbage. they of course make sure only the good ones from their best factories go to reviewers etc, and everybody else gets to roll the dice.
So who the fuck is HGST? It says "A Western Digital Company" so I assume they are reputable. Is this how they rebranded Hitachi? I'm considering getting their 4TB as a backup drive for all my other drives. Starting to get worried about my 1TB black, since I'm approaching 55,000 power on hours; I'm surprised the thing is still reading as "Good" in SMART.
>>45514923 >HGST, Inc. (formerly Hitachi Global Storage Technologies) is a wholly owned subsidiary of Western Digital that sells hard disk drives, solid-state drives, and external storage products and services. From Wikipedia, the free encyclopedia
>>45514914 It doesn't decay; what you are thinking of is demagnetization due to too densely packed magnetic domains. That was taken care of when they moved from iron oxide to the cobalt-based alloys used today. There is no "decay" in HDDs unless you shake them or piss on them.
>>45515097 The Hitachis they're going with there also cost around $300 apiece. And the age of the 4TB Seagates is very similar to what I have at home, a 3TB WD Red EFRX. The table on the right is from the January data, btw; the left picture is the updated graph with September data.
>>45515115 It is physical storage, an arrangement of physical grains with magnetic properties. The only problem HDDs had was when the domains were packed too densely, which induced the grains to influence each other and change orientation over time, "demagnetizing" so to speak. It doesn't have anything to do with losing magnetic properties, just like gold won't lose its color. Cobalt alloys fixed this, as did finding ways to avoid too densely packed domains. So yes, the platters last a lifetime unless the head hits them and you fuck up the HDD via shaking, or shit gets fucked due to moisture.
The SSD is a different, more complex story; it needs to refresh its charges from time to time to make the information last, and that's why it often rearranges data.
>>45516677 Hard drives are really hit and miss. For example, I have a 120GB Hitachi Deathstar that still works; I use it as an external for my Wii games. They are notorious for being incredibly unreliable, yet mine is solid as a rock.
>>45516791 I've noticed that too. I don't trust these HDD tests. http://www.pcworld.com/article/2089464/three-year-27-000-drive-study-reveals-the-most-reliable-hard-drive-makers.html I've had Seagates working for years while several WD storage drives have failed.
I need to get several SATA drives for storage but am beyond lost on which brand or capacity to pick. Doubt I can afford RAID
>>45516891 >I don't trust These aren't sponsored results. They use drives differently than you might, benchmarking other portions of a drive to determine its life in _their_ situation.
Some consumer drives will fare much better with constant writes than others will. Some drives will fail more often if they're spun up erratically. There might be voltage droop from a bad power supply that certain drives handle better than others, or you might be rougher with your drives than they're being.
I don't think their numbers are faked, they just don't apply to everyone.
I know that, I just don't see what kind of measure of reliability it is to the average consumer who probably isn't keeping their drives inside of an industrial freezer/hottest fucking desert on the planet. Nice to know that they will last though I suppose.
>>45517130 Also, PC world is analyzing these statistics incorrectly.
>The worst of the bunch, meanwhile was the 1.5 TB Seagate Barracuda Green (ST1500DL003), with an average lifespan of 0.8 years. Ouch!
I have no idea why they shared the average lifespan rather than the annual failure rate.
On a note that proves these are really just statistics and not a specific research study:
>Backblaze said this particular model is pretty bad, but it cautions not to read too much into it. The company received these specific drives as warranty replacements, so they were probably refurbished with wear and tear on them by the time they met Backblaze’s HDD taskmasters.
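For context, Backblaze's reports quote an annualized failure rate computed from drive-days of service, which is why "average lifespan" is an odd statistic to pull out. A sketch of that calculation with invented numbers:

```python
def annualized_failure_rate(failures, drive_days):
    """AFR as a percentage: failures per drive-year of service."""
    drive_years = drive_days / 365.0
    return 100.0 * failures / drive_years

# Hypothetical: 120 failures across a pool that accumulated
# 1,000 drive-years (365,000 drive-days) of service.
afr = annualized_failure_rate(120, 365_000)
print(f"AFR: {afr:.1f}%")  # AFR: 12.0%
```

Because it normalizes by service time, AFR stays comparable across pools of drives that were deployed at different dates, which "average lifespan" is not.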
>>45517130 I don't cheap out on components. Server-side, I make sure to have a good UPS and quality power supplies to minimize the risk of problems. What would you go to for high-capacity storage drives? The data is not critical, so no RAID.
I'm not doing any intensive things with it. It isn't my boot drive, I mostly use it for storing games and large media, like my video rendering workspace, large clusters of PSD files I'm working on, and the like.
It's being constantly used, so that's why I went with a Red. Plus it was actually cheaper than even a Green in this sale.
>>45517245 I will do my research on 3TB Hitachis and WD Reds.
>how often do I write
Not very often, as they would only serve as storage drives in an always-on machine. I prefer SSDs for the OS and as a landing zone for content so I don't have to deal with bottlenecks.
>>45517450 Keep rolling the dice and gambling on their reliability. HDD reviews and reliability tests are almost never consistent. I've had a lot more WD Green storage drives arrive DOA and fail quicker than Seagates. Of course, this contradicts the graphs everyone posted in this thread.
>>45514309 Not really. There was some test of various SSDs, and Intel is confirmed to use proper capacitors for proper shutdown. I use a Seagate 600 Pro which, though not included in that test, does have proper power-failure handling advertised and in hardware.
>>45514292 It didn't suck if you went with non-SandForce drives, which is why Intel and Samsung have such great reputations: they did proper testing and validation. Eventually Intel did adopt SandForce in some drives, and those had a decent reputation, though I'm glad they are back to in-house controllers, relatively speaking.
AFAIK SSDs don't need heaters for cold weather operation and I agree with the reliability as long as you use drives that are known to be good.
>>45516677 Seagates had a reputation for being the most reliable but that was pre-2000. As the study I cited earlier suggested, maybe the Enterprise reliability stems from the fact that people are conflating the use of enterprise drives with their use in fault tolerant arrays.
>>45516903 The bathtub curve didn't show up in either Backblaze's data or the 100,000-drive study. Failures just gradually increase over time.
>>45516650 I do have a UPS. ECC, ZFS, backup to Amazon Glacier too which I'm sure is more fault tolerance than whatever you are running.
>was about to buy a 4TB WD black
>apparently wangblows can't even see more than 2TB without doing some stupid partition shit
>crazy unreliable
>reviews say that WD sends you used/refurbished drives if yours breaks while under warranty
>case doesn't have enough room to get a nice RAID setup going without completely blocking what little airflow there is
>would cost 2much anyways
>NAS not viable either and also adds a couple hundred dollars to the cost
I just want to be set on storage for another 5-6 years ;-;
>>45517721 What do you think is better for large data libraries of content that are not top priority in terms of the data's value or importance: cheaper RAIDed disks, or more expensive "enterprise-grade" drives without RAID?
I don't think anyone in this thread works with HDDs enough to actually make a judgment regarding platters. I work for a fucking NAS company. Everyone uses 2TB or larger HDDs. You know what fails? Shitty HDDs like Greens. It has nothing to do with capacity. It theoretically adds more ways to fail. In reality it doesn't happen.
>>45518916 Why bother using MBR anymore anyway? Windows 7 has absolutely no problem booting from GPT on a UEFI system. I've been doing it for years. Unless you're running on a really old board, MBR is pretty worthless these days.
>>45524042 I'm still in time for cancelling the order, do you have any actual argument? http://www.tweaktown.com/articles/6028/dispelling-backblaze-s-hdd-reliability-myth-the-real-story-covered/index.html#UX5OD0IAgPphmv71.99
>>45524042
>Everyone here is worried about a 2% failure rate of a drive model that Backblaze intentionally stocks up on simply because it's the cheapest option
>Nobody notices the downward trend of Seagate drives, and ignores the upward trend of WD
If any of my Seagates fail, I'll just replace them and load my backups.
>>45524492 That identifies a piece of hardware. I want to actually know how I can exploit it if I get my hands on it, not just assume some hacker guru will find out that a shop in XY sold this and that HDD on that date or something.
>tfw my parents are using the same 256 GB HDD for 9 years now for the family PC and it hasn't failed once
We used to download and erase shit tons of stuff (movies, music, files), format, defrag at least 20 times. Still working perfectly.
>feels good man
Is a drive failure different every time? How do I notice when a drive starts failing, and what's the best way to try and transfer all the data off of it? Are there cases where a drive just fucks off one moment and all the stuff on it is gone for good?
>>45526170 Yes, it can be different. When you notice that it's taking super long to access a file, it's possibly dying. Sometimes it will also just suddenly die and stop reporting to the BIOS. This is why you make backups.
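One concrete early-warning routine is to watch a few SMART attributes climb above zero, e.g. Reallocated_Sector_Ct and Current_Pending_Sector. A sketch that scans `smartctl -A`-style text (the attribute names and ten-column table layout below follow what smartmontools commonly prints; check your version's output):

```python
WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
           "Offline_Uncorrectable"}

def failing_attributes(smartctl_output):
    """Return watched SMART attributes whose RAW_VALUE is nonzero."""
    bad = {}
    for line in smartctl_output.splitlines():
        fields = line.split()
        # Attribute rows have >= 10 columns; name is column 2, raw value last.
        if len(fields) >= 10 and fields[1] in WATCHED:
            raw = int(fields[9])
            if raw > 0:
                bad[fields[1]] = raw
    return bad

sample = """\
  5 Reallocated_Sector_Ct   0x0033   195   195   140    Pre-fail  Always       -       12
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
"""
print(failing_attributes(sample))  # {'Reallocated_Sector_Ct': 12}
```

As for the SMART-over-USB question earlier in the thread: smartctl usually needs a passthrough flag such as `-d sat` before it can read these attributes through a USB enclosure's bridge chip.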
>>45512720 S.M.A.R.T. from my 4TB WD Red drive.
smartctl 5.41 2011-06-09 r3365 [armv7l-linux-3.2.26] (local build)
=== START OF INFORMATION SECTION ===
Device Model:     WDC WD40EFRX-68WT0N0
User Capacity:    4,000,787,030,016 bytes [4.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   9
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Sun Dec 7 15:07:17 2014 PST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
>>45524071 >>45527646 That rebuttal is full of shit. Backblaze's earliest drives were WD Greens, which would've gone into crappy rev1 racks just like the early Seagates. Despite that, the 4-year WD Greens have survived much better than the 4-year Seagates. Similarly, if you compare their Hitachi and Seagate batches of similar age and capacity, the Seagates fail at higher rates.
>>45536339 Something about Reds not moving data out of sectors if they get fucked, because the assumed purpose of Reds is use in a RAID array (for data redundancy). So if one sector gets fucked on one drive, the data is on the other drive anyway.
Thread replies: 239 Thread images: 32
Thread DB ID: 25502