Gentoo Forums
Need recommendations for RAID disks

Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1519
Location: KUUSANKOSKI, Finland

Posted: Sun Oct 30, 2016 11:28 am

Goverp wrote:
Which drives are good enough for RAID. Err, "Redundant Array of Inexpensive Disks."
... or independent. Which one is "officially" right is another topic...
_________________
..: Zucca :..

Code:
ERROR: '--failure' is not an option. Aborting...
Fitzcarraldo
Veteran


Joined: 30 Aug 2008
Posts: 1655
Location: United Kingdom

Posted: Sun Oct 30, 2016 11:52 am

Zucca wrote:
Goverp wrote:
Which drives are good enough for RAID. Err, "Redundant Array of Inexpensive Disks."
... or independent. Which one is "officially" right is another topic...

Well, the original term was 'inexpensive', although the per-MB price of 'inexpensive' HDDs in the 1980s was a lot higher than it is today. Anyway, the intent of the originators of the term was certainly to look at how to replace an expensive HDD ('Single Large Expensive Disk') with an array of relatively inexpensive HDDs.

A scanned copy of the original 1988 paper by Patterson, Gibson & Katz, 'A Case for Redundant Arrays of Inexpensive Disks (RAID)', is available on Carnegie Mellon University's School of Computer Science website: http://www.cs.cmu.edu/~garth/RAIDpaper/Patterson88.pdf
_________________
Clevo W230SS: amd64 OpenRC elogind nvidia-drivers & xf86-video-intel.
Compal NBLB2: ~amd64 OpenRC elogind xf86-video-ati. Dual boot Win 7 Pro 64-bit.
KDE on both.

Fitzcarraldo's blog
1clue
Advocate


Joined: 05 Feb 2006
Posts: 2549

Posted: Sun Oct 30, 2016 2:20 pm

Goverp wrote:
<asnide>
I love this. Which drives are good enough for RAID. Err, "Redundant Array of Inexpensive Disks." The answer ought to be the cheapest you can get, with a few spares, so you can swap the broken ones as-and-when.
</asnide>


Clearly you didn't read the thread, or even the original post.
Zucca
Veteran


Joined: 14 Jun 2007
Posts: 1519
Location: KUUSANKOSKI, Finland

Posted: Mon Oct 31, 2016 1:29 pm

1clue wrote:
Clearly you didn't read the thread, or even the original post.

Yeah. Originally it was 'inexpensive', but now it's 'independent' or something other than inexpensive.
No-one wants to store heaps of data in the cheapest possible RAID 5 array, only to find, when something goes wrong, that while rebuilding the array you've lost another drive.
_________________
..: Zucca :..

Code:
ERROR: '--failure' is not an option. Aborting...
.user
n00b


Joined: 10 Mar 2014
Posts: 6

Posted: Mon Oct 31, 2016 3:14 pm

Oh, so that's why some drives fail: because they were of this kind, the cheap kind. An eye opener; search no further, eureka.

I strongly believe in the conclusion discussed earlier in the thread, the smile ('bathtub') curve: drives fail shortly after setup or 2-4+ years after that. My experience with failing drives is mostly encounters with the click of death. It happens at cold boot. Most of my drives aren't clicking to death though; some are eventually seen by the controller/mobo, either correctly or just as ROM. Many times the drives are fine after the next restart / warm reinitialization. I consider them failed anyway and swap them out, or just temporarily unplug their power cable, telling myself I'll get into it at a later time, since it's annoying to have delayed cold boots and since the general saying is that such a drive will fail shortly afterwards anyway / has its days numbered.

Because this only happens at cold initialization, I also strongly believe that hard drives which keep running fail less. The drives that failed on me were all Western Digital. I am a Maxtor (brand) jedi of storage! My oldest drives are still running, and all are Seagate and Toshiba ABA and ACA. I think the ACA is the cheapest drive on the market. Stay away from it.
1clue
Advocate


Joined: 05 Feb 2006
Posts: 2549

Posted: Mon Oct 31, 2016 3:56 pm

I started the thread because the last time I bought a bunch of drives I didn't pay much attention: I bought a dozen drives, 4 of which had been returned within the first year and 8 of which had failed by the second. By the end of the second year I was no longer interested in replacements, only in finding something else.

I agree in the abstract that I should be able to use anything, but in the real world we must face the fact that not all drives are created equal, and drives that are fantastic for one purpose are a terrible choice for others.

There are two main factors at play, as I see it:

  1. Manufacturing defects
  2. Drives manufactured for a specific purpose are often a very bad choice for another purpose.


I believe that my situation was a combination of both. I certainly suffered from infant mortality with my WD green drives, because not all of the early-failed drives were on RAID, or even on Linux. But also putting a WD green into a RAID array is a stupendously bad choice, because of the default behavior of the drive. Even using the factory tool as mentioned earlier in this thread did not save the remaining drives.

Based on the Backblaze data, WD has certainly had reliability issues lately. Go back a few years and they were fine.


I don't particularly care for the way this thread turned into an abstract discussion of what RAID means, or what modes I should use, or how it shouldn't matter what drive model I use. Or how WD greens worked fine for somebody else.

I am after specific data on drive reliability by model number for RAID use, and I have it on a limited set of models. That's probably adequate but I'd be interested in another study with different models in it if someone has found one. I haven't, although I've spent some time with Google since the thread started.

Sorry for being cranky.
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 7130
Location: almost Mile High in the USA

Posted: Mon Oct 31, 2016 5:30 pm

So you actually bought 12 disks and returned or disposed of every one (or had to toss the replacements)?

The only thing I've had major problems with is power. As I am running MDRAID on consumer-quality devices (assuming people who are using WD Greens are also using consumer-quality devices) I'm also using consumer-quality power supplies. I've found that connectors and PSUs may or may not be up to snuff, killing disks or data. Before each of my RAIDs goes into service, I run the disks on a different machine for a while, testing whether any will drop early.
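
(If it helps, the kind of pre-deployment check I mean is roughly this, using smartmontools; /dev/sdb is just a placeholder for whichever disk is under test:)

Code:
# start an extended offline self-test (takes hours; run as root)
smartctl -t long /dev/sdb
# once it finishes, review the self-test log and the attribute table
smartctl -l selftest /dev/sdb
smartctl -A /dev/sdb

A clean self-test obviously doesn't guarantee the disk won't drop later; it just weeds out the obvious infant-mortality cases.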

I'm working on a fourth RAID, again using consumer-level drives. It's in dependability testing now (3 or 4x 2TB disks in RAID5, mostly Toshiba/Seagate "Desktop" drives). The current "production" RAID is using a mixture of WD and Hitachi Desktop drives (4x 500GB). I have another low-use 73GB x 2 RAID1 (refurbished SCSI Enterprise disks) that I don't use much, so no real data there. The first array was 120GB x 4, Seagate and Maxtor, again "Desktop" drives.

On the 120GB x 4 array, after working out the power issues, I had no failures, and I ended up dismantling/upgrading the array as it was stuck at "steady state". I repurposed the disks as individual scratch disks. On the 500GB x 4 array I had disk drops like mad due to power, but eventually got that squared away. I did end up having two disk failures - one was infant mortality (Hitachi) and the other was probably wear-out failure (the other side of the "bathtub"/"smile" curve). That one was a WD.
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed watching?
1clue
Advocate


Joined: 05 Feb 2006
Posts: 2549

Posted: Mon Oct 31, 2016 7:57 pm

OK, so here are revised details, backed by actually looking at the hardware:

The original batch of WD Green 750GB drives (and the computer the main RAID array was in) was bought as parts and built in 2011. I said 8 or 10 years earlier in the thread, but checking the system in question I realize it's only been 5 years. This is a home office setup: it's for my work, but my money. An enterprise system was bought at the same time by my employer using higher-quality components; I bought this box to test software I was writing for the enterprise hardware. Not the same core count, not the same anything really, but similar enough.

Box 1: 2011 Asus P6T with an i7 920 (4 cores, hyperthreading; 6GB at first, upgraded twice to 24GB), had RAID 6 (4x WD Green) plus 1x WD Green for the system drive. Still in service as a dev box. No longer has RAID.
Box 2: 2011 Mac/OS X. 2x WD Greens, non-RAID.
Box 3: 2013 Intel box, can't remember exactly what but dual core, RAID 1 WD Greens; the system drive was something else. The box has since made a trip to the dumpster.

My RAID array for this box used 4 drives. It's software RAID 10, where the enterprise setup is hardware RAID. I bought 12 drives at the same time. They're numbered with a permanent marker, not sequentially as they came out of the box: when I had my first batch of failures, I numbered the ones that had not yet failed and were installed, then numbered the rest, still new, as they were installed. These drives and their replacements constitute the entire stack of WD Greens I've ever owned. There is exactly one WD Green that is still running error-free, and that's as one of many backup devices which are physically removed after backup. I do weekly backups and this is one of several devices. It was new when I put it into the backup rotation. I expect it to fail at any time, and am at this point keeping it around to get personal statistics for this batch of Greens.

The enterprise system at work is still running, no hardware replacements of any kind. We bought 2 spares for the array and they're still in the box. It's no longer performing the same task and no longer under high load, but everything originally installed on the box is still in service and still error free. This is what I would expect after 5 years.

The chronology of this is that the first drive died on my testing box (the first one mentioned) and I immediately returned it, thinking nothing more than that this was infant mortality. The second drive died shortly after the first was replaced, and then I started doing research. At that point I reset the idle3 timer using the Linux tools on all Linux drives in service. Even so I got another failure before a year was up. I got the WD Windows tool, disconnected my drives, plugged them into a Windows box (I don't own one, had to borrow one from work) and then used that setup to reset idle3. Shortly after that one of the replacements went out, and I no longer cared about getting WD anything, an attitude that continues now. I still used the drives because I couldn't afford to just throw them out and buy a new batch, but I did increase the frequency of offline backups to deal with the crazy failure rate.
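
(For anyone else fighting the same thing, the Linux-side idle3 reset I mean looks roughly like this with idle3-tools; /dev/sdb is a placeholder, and the drive needs a full power cycle afterwards before the new value takes effect:)

Code:
# read the current idle3 (head-parking) timer on a WD Green
idle3ctl -g /dev/sdb
# disable it entirely (idle3ctl -s <value> lengthens it instead)
idle3ctl -d /dev/sdb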

I stopped buying used hardware for personal use right around 2002. Since then I've had pretty good luck with everything I spent time researching. I believed at the time that hard drives were pretty much all reliable, except for the infant mortality thing we've been talking about. The 1 or 2 devices I've had to return as nonfunctional were replaced with components that worked - with the exception of the Greens. I've made bad choices on some hardware I didn't research, but rather than early failures I just got something that doesn't play well with Linux, or substandard performance. A TV card comes to mind.

At any rate, I have typically had many years of good service from hard drives and pretty much anything else. Generally speaking my hardware becomes useless to me due to specs before I get a hard failure.
eccerr0r
Watchman


Joined: 01 Jul 2004
Posts: 7130
Location: almost Mile High in the USA

Posted: Mon Oct 31, 2016 8:34 pm

Definitely head load/unloading was an issue for the WD Greens, though I had fought with simply trying to get Linux to stop spinning up sleeping drives... that ended up being futile, as Linux really wants to write to disks, or at least I really wanted to make sure current metadata was saved. Thus I can see that head load/unload, even if the drive stays spun up, would be an issue, as it's hard to keep Linux from touching the disks. That said, this is the only disk I know of that has this bad behavior, apart from laptop drives.

The SATA disks I'm currently using for production/testing are:

WD 500GB Blue ("Desktop") WD5000AAKS-65V0A0 (a WD5000AAKS-00YGA0 was the failing disk). Funny that my RAID array is a LOT faster now that the 00YGA0 has been dropped, but it could also have been the 4.4 kernel (I was running 3.17).
HGST Desktop 500GB HDP725050GLA360 (I had to RMA one for infant mortality, and I also had a HDS725050KLA360 fail, but that was a refurbished disk, so that was just me being cheap).
Toshiba Desktop 2TB PH3200U-1I72 - alas, not enough hours on them yet to give a good recommendation.

I have another Seagate 2TB I want to eventually integrate, but 3x 2TB RAID5 is currently more storage than I need, so for now that disk is a cold spare. In fact the 4x 500GB RAID5 still exceeds my storage needs; I just need to clean up.

I only have one WD Green disk (2TB) and it's not part of any array. I saw the head load count ballooning, but shut off this feature. The computer the disk was installed in has failed, so I'm not racking up more hours on it, though it did survive quite a while of 24/7 operation as a PVR disk.
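
(The ballooning is easy to spot in the SMART attribute table if anyone wants to check their own Greens; a quick sketch, with /dev/sdb as a placeholder:)

Code:
# Load_Cycle_Count (attribute 193) is the head-parking counter
smartctl -A /dev/sdb | grep -i load_cycle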

I have a feeling that buying used HDDs is a crapshoot. If it really was a pull, it might be good; but I suspect a lot of used HDDs are actually ones that had a bad sector, which someone erased to make it go away, and then resold. I wish this were made clearer, but it'd hurt the resale value of these disks too.
_________________
Intel Core i7 2700K@ 4.1GHz/HD3000 graphics/8GB DDR3/180GB SSD
What am I supposed watching?
Anon-E-moose
Advocate


Joined: 23 May 2008
Posts: 3929
Location: Dallas area

Posted: Tue Nov 01, 2016 9:27 am

I've been using the WD Red 2TB series for a while, in a USB 3 RAID external box.
They've been in there for a couple of years now, no problem.
I'm using them for mirrored backup.

I bought them shortly after the "Red" series was announced; I don't know if they've gotten worse over time though.
_________________
Asus m5a99fx, FX 8320 - nouveau, oss4, rx550 for qemu passthrough
Acer laptop E5-575, i3-7100u - i965, alsa
---both---
5.0.13 zen kernel, profile 17.0 (no-pie) amd64-no-multilib
gcc 8.2.0, eudev, openrc, openbox, palemoon
Goverp
l33t


Joined: 07 Mar 2007
Posts: 668

Posted: Tue Nov 01, 2016 10:33 am

This is irrelevant, but it may be of interest:

My desktop machine has had a 4-disk mdadm RAID-5 setup populated with WD Green disks for something like 6 years. About 1 year after I set it up (so about 2010-11) it started throwing hardware problems on one drive. SMART data showed nothing, and WD's disk test tool (the serious one that runs in MS-DOS, not the toy that runs in Windows) couldn't find any problem.

Fortunately, at the same time a few other people were having RAID hardware problems, so plenty of advice was available. I rebuilt the disk, and about a week later the same thing would happen. Then another drive, on a different cable. Then I happened to install a new kernel (somewhere in the 3.8 series, but I'm not sure exactly when), and coincidentally the problems stopped (I'm still using the same WD disks I started with, including the ones that threw errors, and my spare drive is still on the shelf). And coincidentally, most of the discussions of RAID errors in the fora ended.
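
(In case it helps anyone hitting the same thing: by "rebuilt the disk" I mean roughly the usual mdadm dance, sketched here with /dev/md0 and /dev/sdb1 as placeholder names:)

Code:
# see which member got kicked out of the array
mdadm --detail /dev/md0
# try to put it straight back; mdadm resyncs it against the others
mdadm /dev/md0 --re-add /dev/sdb1
# if --re-add is refused, drop it and add it back as a fresh member
mdadm /dev/md0 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1
# watch the resync progress
cat /proc/mdstat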
_________________
Greybeard