Western Digital has select
WD Gold Enterprise Class SATA Hard Drives on sale for the prices listed.
Shipping is free.
Thanks to Staff Member
f12_26 and Community Member
MistaClean for posting this deal.
Available: Amazon has select
WD Gold Enterprise Class SATA Hard Drives on sale for the prices listed.
Shipping is free.
Available: Newegg has select
WD Gold Enterprise Class SATA Hard Drives on sale for the prices listed.
Shipping is free.
Features:
- Quality and reliability with up to 2.5M hours MTBF(3) to help you store your data with confidence.
- Specifically designed for use in enterprise-class storage systems and data centers.
- Improve performance with our vibration protection technology.
- HelioSeal technology delivers high capacities with a low power draw (12TB & up).

(3) MTBF and AFR: Projected values for model numbers WD221KRYZ and WD202KRYZ. Final MTBF and AFR specifications will be based on a sample population and are estimated by statistical measurements and acceleration algorithms under typical operating conditions: a workload of 220TB/year and a device temperature of 40°C. Derating of MTBF and AFR will occur above these parameters, up to 550TB writes per year and 60°C device temperature. MTBF and AFR ratings do not predict an individual drive's reliability and do not constitute a warranty.
Top Comments
Red Pro = 5yr warranty, 1 million hrs MTBF, 300TB/yr workload, 7200rpm, a bit louder than Red Plus (36 dBA seek vs 29 dBA seek @ 14TB), slightly less power at seek than Red Plus (see below); the 20/22TB drives have some extra features (OptiNAND on the 20/22TB models, no ArmorCache)
Red Plus = 3yr warranty, 1 million hrs MTBF, 180TB/yr workload, 7200rpm (8TB WD80EFBX & up, though likely this is an "up to"), slightly less noise at the same drive sizes than Gold or Red Pro (see above), slightly more power at seek than Red Pro (6.2W vs 6.5W, same idle/sleep @ 14TB), only goes up to 14TB capacity
I just picked up 6 of the Gold 16tb with this sale (& paypal 12% CB). The 16tb still shows as available btw.
I wanted the 5yr warranty (+2 extra yrs with the Citi credit card), potentially better support/returns, and the higher MTBF rating/etc. The price was close enough. I would have considered the Red Pro if it had been $15/TB or less at the same time and the Gold hadn't been on sale, but I admit the (potentially inconsequential) perks of going Gold seemed nice. I imagine I would have been fine with the Red Plus, honestly... but at recent sale prices there didn't seem to be enough savings to justify giving up two years of warranty for questionable acoustic/power benefits.
Ultimately, I probably would have been fine going refurbished Seagate/MDD for much less, but oh well. Didn't want to mess with stress-testing them for early failures and then dealing with returns/exchanges.
Also, beware buying OEM drives, drives from tiny 3rd parties, and the potential extra hassle with returns.
81 Comments
So while the gold might be arguably more reliable on paper .. does it really matter in practice ?
MTBF doesn't mean "the drive will last that long."
In fact, in a system with a constant failure rate, the chance of any given unit reaching the MTBF is only 36.8% (meaning roughly two thirds of them fail sooner).
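That 36.8% figure falls out of the exponential survival model: with a constant failure rate, the probability of surviving to time t is exp(-t/MTBF), and at t = MTBF that is exactly e^-1. A quick sketch (the 2.5M-hour value is WD's spec from the listing above; the constant-rate assumption is the model's, not a property of any real drive):

```python
import math

# Constant-failure-rate (exponential) model: P(survive to t) = exp(-t / MTBF).
def survival_probability(t_hours: float, mtbf_hours: float) -> float:
    return math.exp(-t_hours / mtbf_hours)

mtbf = 2_500_000  # hours; the 2.5M-hour MTBF quoted for WD Gold
p = survival_probability(mtbf, mtbf)  # at t == MTBF this is exactly e^-1
print(f"P(reach MTBF)  = {p:.3f}")      # 0.368
print(f"P(fail sooner) = {1 - p:.3f}")  # 0.632
```

Note the MTBF only enters as a scale factor here; doubling it doubles the time at which two thirds of units have failed, but the "only ~37% reach the MTBF" statement holds for any constant rate.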
Here's Ars on why MTBF isn't great either:
https://arstechnica.com/informati...are-equal/
Among the things they point out: Seagate uses a different set of MTBF assumptions for enterprise drives than for non-enterprise ones.
Lastly, even Seagate appears to recognize it's a misleading measurement: people mistakenly read it as a claim that the drive will last 100 years or something (as seems to be the case here).
https://www.seagate.com/support/k...-174791en/
Anyway, all that said, the 10x better non-recoverable read errors per bits read spec is the more important one in my mind.
Thanks to RAID, if a single drive just dies, that's not such a huge issue BY ITSELF, even on huge drives.
But bit rot makes me care a LOT about non-recoverable bit errors, especially on large drives.
10^14 bits is 12.5 TB, so at a 1-in-10^14 URE spec, reading a full 16TB drive implies more than one expected error on average; the chance of every surviving (full) drive being read without a single URE is low, and the probability the array fails to rebuild is high.
10^15 is an entire order of magnitude better, which turns "the rebuild will probably hit an unreadable sector" into "the rebuild will probably complete cleanly."
If you're running RAID that can survive one more failure, then your 10^14 odds are still higher than you might like, while your 10^15 odds are very, very good.
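To put rough numbers on the per-drive odds (my own back-of-the-envelope arithmetic, not from the spec sheets; it treats every bit as an independent failure at exactly the rated URE rate, which real drives don't strictly obey):

```python
import math

# Probability of reading one full drive without a single unrecoverable
# read error (URE), modeling each bit as independent with the rated
# per-bit URE probability. log1p keeps the tiny rate numerically exact.
def p_clean_read(capacity_tb: float, ure_rate_per_bit: float) -> float:
    bits = capacity_tb * 1e12 * 8  # decimal TB -> bits
    return math.exp(bits * math.log1p(-ure_rate_per_bit))

for rate in (1e-14, 1e-15):
    print(f"URE rate {rate:.0e}: P(clean 16 TB read) = "
          f"{p_clean_read(16, rate):.2f}")
# -> ~0.28 per drive at 1e-14; ~0.88 per drive at 1e-15.
```

A rebuild has to read every surviving drive cleanly, so these per-drive probabilities multiply across the array; UREs also tend to cluster in practice, so treat this as a ballpark, not a forecast.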
https://www.anandtech.c