Did this coupon work for you?
Post Date | Sold By | Sale Price | Activity
---|---|---|---
04/24/23 | Amazon | $539 | 39
Sold By | Sale Price
---|---
Amazon | $599.99

Rating: 4 out of 5 stars
Reviews: 871 Amazon Reviews
Product Name: SABRENT 10 Bay 3.5" SATA Hard Drive Tray-Less Docking Station (USB 3.2 Type C and Type A) (DS-UCTB)
Manufacturer: SABRENT
Model Number: DS-UCTB
Product SKU: B09TV1XPDD
UPC: 840025252943
228 Comments
Featured Comments
The Mini PCs we normally see listed max out at 2.5Gbps networking, so this would be able to keep up and saturate the pipe. If you needed more bandwidth, you would need separate direct SATA connections, likely via some type of external SAS connection.
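As a rough sanity check on the bandwidth point above (the throughput figures here are illustrative assumptions, not specs from this product page):

```python
# All numbers are illustrative assumptions for a back-of-the-envelope check.
lan_gbps = 2.5            # typical Mini PC NIC speed mentioned above
hdd_mb_s = 200            # rough sequential throughput of one modern HDD

hdd_gbps = hdd_mb_s * 8 / 1000   # convert MB/s to Gbps
# A single spinning drive already comes close to filling a 2.5 GbE pipe,
# so a multi-drive dock can easily saturate it.
print(f"one HDD ~= {hdd_gbps:.1f} Gbps vs a {lan_gbps} Gbps LAN port")
```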
10 drives is a lot, unless you are going for extremely cheap small drives to fill the array. IMO it's better to use larger drives, since each drive consumes power to run. UGreen has a Kickstarter going right now with some really crazy deals on NASes that are supposed to ship in June. You might get more bang for your buck there.
Also, for anyone thinking of using this many drives: go with at least one parity disk, or even better, two. The chance of data loss increases as you add more and more drives. Not caring about movies on a single 10TB drive is fine; not caring about 180TB is going to be a much larger pain when you have to replace everything.
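The "more drives, more risk" point can be made concrete with a toy calculation (the 2% annual failure rate is an illustrative assumption, not a measured figure for any particular drive):

```python
# Assumed annual failure rate (AFR) per drive; real AFRs vary by model.
afr = 0.02

def p_any_failure(n_drives: int, rate: float = afr) -> float:
    # Assuming independent failures: P(at least one) = 1 - P(none fail).
    return 1 - (1 - rate) ** n_drives

for n in (1, 5, 10):
    print(f"{n:2d} drives: {p_any_failure(n):.1%} chance of at least one failure per year")
```

With 10 drives the odds of losing at least one in a year climb to roughly 18% under these assumptions, which is why parity (and backups) matter more as the array grows.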
I was checking what level of support it has from Sabrent (zero, they have really gone downhill with firmware updates) and there's a thread about how it doesn't have automatic power recovery to bring the drives back up after power loss.
Actually, I am not even sure of the reference, but Sabrent has been very well known in the SSD and PC component business for the last 5-10 years.
Software RAID has gotten to the point where hardware RAID isn't the best option anymore. Wendell of Level1Techs did a great video on the benefits of ZFS:
https://youtu.be/l55GfAwa8RI?si=
That's almost as bad as putting software RAID on top of hardware RAID.
Right, I am saying it works, but someone saying "hardware RAID is no longer recommended" is wrong. Crap integrated "hardware" RAID on a consumer motherboard is crap, and an enterprise Adaptec-based hardware RAID card is garbage (or was the last time I dealt with them). LSI/MegaRAID, or whoever owns them now, make solid products that enterprise servers use. For low-performance needs, non-offloaded software RAID is fine, but it uses CPU and can be prohibitive in some instances. I'm just saying the blanket statement about hardware RAID not being recommended is horribly inaccurate. There are a bunch of different use cases where, say, Oracle using ASM on a PB-scale Fibre Channel storage array would be appropriate. I guess all of this is outside the scope of this conversation. Dealing with multimillion-dollar PB-scale storage arrays gives me a different perspective. Sorry, I'm a storage nerd.
I know it's pedantic, but most people use G instead of g because the difference between B (bytes) and b (bits) is substantial.
Just get something like this, set it up in TrueNAS or even just plain ol' Linux, and set up a cron job to rsync/robocopy your files to another similarly sized drive once a day, or heck, once an hour if you want. If you can figure out RAID, you can certainly figure out how to do that.
This also has the benefit of additional protection from ransomware. If you have storage on a RAID array mapped as a drive in Windows, and get nailed with a virus, you better have a 3rd, offline copy somewhere.
It's quite the minefield looking for a multi-bay HDD enclosure, if there's even a reliable one out there. They all seem to be budget-oriented creations cobbled together with the bare minimum of R&D and testing, all with a tendency to randomly "drop" drives, requiring a full power cycle of the entire unit to get them back. Funnily enough, they all run perfectly with just one drive installed, which defeats the purpose.
Across the board, post-purchase support was non-existent. I ended up breaking down and getting myself a proper Synology, and it's been smooth sailing since.
...when all you need is a JBOD enclosure.
Power hog, lol. My R730 (48 cores, 128GB RAM) idles at 168 watts; historical peak 562 watts (measured May 22, 2023, tracking since April 8, 2020). A second R730 (72 cores, 2x Xeon E5-2699, 256GB RAM) idles at 220 watts; the historical peak on the first one was 573 watts on November 16, 2023 (I turned it on in March). I have multiple Quadro cards in each server and 16 NICs in each, connected to all kinds of stuff. In ESX I have hit 80GHz on one VM. The PC I am on now, just browsing the internet, no games, etc., draws 130-170 watts. It might get hot and loud, but for power draw and power management it's way better than you would think.
I still have some older hardware-RAID OS drives running, but a filesystem-based Btrfs mirror is my new boot/host format. I had a drive fail, replaced it, and it rebuilt onto the new drive in less than an hour.
RAID is largely obsolete. There are much better (i.e., faster, lower-power) parity systems these days.
Anyway, I thought I posted this weeks ago; perhaps it was a different post. But for $400 you can get this NetApp with 24 bays, and it's network-based, not the Type-C cable (or whatever) this post's unit uses.
https://www.ebay.com/itm/202404952486
Hahahhahaha. Backblaze. Boomers lol.
I really hope you are just trolling here. If not…
Here is a link to an enterprise storage array using NVMe, up to ~6PB or so. I even copied the portion out of the link that says RAID for you.
Servers running production workloads either boot from SAN or boot from local storage, then use mostly Fibre Channel, iSCSI, or NFS for remote shared storage. (Boot from SAN is a hassle except for very specific use cases; even then it's still a hassle.)
I don't know what your context is, but using Backblaze as any type of authoritative reference is really funny. RAID is largely obsolete?! "There are much better, i.e. faster, lower-power parity systems." RAID stands for Redundant Array of Independent Disks. If you are using more than one physical disk and data is being striped across them, then it's a RAID array. Whatever extra words or descriptions you are using are just ways to differentiate different implementations.
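The striping-plus-parity idea behind that definition can be shown with a toy XOR example (an educational sketch of the RAID 4/5 parity concept, not a real implementation):

```python
# Parity block = XOR of the data blocks, so any single lost block
# can be rebuilt from the surviving blocks plus parity.
def parity(blocks: list[bytes]) -> bytes:
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC"]   # stripes on three data "drives"
p = parity(data)                      # stripe on the parity "drive"

# Drive 1 dies: rebuild its stripe from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])
```

XOR-ing all data blocks with the parity block cancels everything out, which is why the survivors plus parity recover the missing stripe.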
You show me an enterprise-level storage array not using RAID and I'll explain why it isn't enterprise-level. NetApp, lol. Read an admin guide for, say, HPE 3PAR, Alletra, or Primera, and price one of those things out. http://buy.hpe.com/us/en/storage/...1013540069.
Capacity, HPE Alletra 9060: 1966 TiB (raw) / 6103 TiB (effective).
Capacity, HPE Alletra 9080: 1966 TiB (raw) / 6103 TiB (effective).
Effective capacity assumes a 4:1 estimated data compaction rate (including thin provisioning, deduplication, compression, and copy technologies) in a RAID 6 (10+2) configuration. Note TB vs TiB. Actual ratios will vary based on workload. See the HPE StoreMore guarantee for more information.
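Running the quoted figures through a quick bit of arithmetic (only the numbers quoted above are used; the gap between the overall ratio and the headline 4:1 reflects parity and other overheads):

```python
# Figures from the spec quote above.
raw_tib, effective_tib = 1966, 6103

overall_ratio = effective_tib / raw_tib   # effective capacity per raw TiB
usable_fraction = 10 / 12                  # RAID 6 (10+2): 2 of 12 blocks are parity

print(f"overall effective:raw ~= {overall_ratio:.2f}:1, "
      f"data fraction after parity ~= {usable_fraction:.0%}")
```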
If you arent just trolling…
You should go read more than just things that support your opinions. Once you come to realize that you are wrong, you should try to understand what inside you allowed you to speak with such an air of confidence and authority while being wrong. Most people who speak that way have various mechanisms that allow them to deflect or avoid the realization that they are wrong. They tend to develop more and more of them over time, which reinforces their opinions of themselves and keeps them from the true reality of their situations.
If you were trolling, good job, you got me. I would say you wasted my time, but I am a super storage nerd and am probably on ~20 hours' worth of calls a week talking about storage, designing solutions, doing performance consulting, etc. Backblaze... I am going to share this with some of my colleagues and we are all going to have a good laugh. Or hey, maybe I am wrong and I will learn something new that will help justify the $2,500 a day I charge. I like to be wrong; it means I have learned something new. It just doesn't happen that often.
I made the switch based on the advice of pros: in the modern day, CPUs are plenty powerful to handle parity and exceed all but the highest-end hardware RAID controllers. There is also the fact that rebuilding arrays when your RAID controller fails is sometimes impossible.
A ZFS pool is portable and can easily be imported on any machine.
That said, two TerraMaster D500-CS units are cheaper than this 10-bay.
What kind of hardware RAID cards? Enterprise ones with a stick or two of memory and a battery on them? Most "hardware" RAID functionality on motherboards and consumer cards will underperform software RAID on the same machine. That said, you can't really boot off a software RAID volume unless it's a RAID 1; then it just boots off one drive, loads the software stack, and does its software RAID stuff.

CPU offload is important for real workloads. With multiple high-speed disks, whatever software RAID you are using can chew up CPU, and if you do have a disk issue it can hang the entire OS (depending on how it fails). The likelihood of that happening is much lower with hardware RAID.

Ease of use is another important consideration. Real RAID controllers typically have a BIOS/GUI that is easy to understand and consistent; replacing a drive and troubleshooting are easy. Attempting to repair software RAID when you aren't good at it can lead to some pretty interesting data-loss scenarios.

Are you using write-through and no write cache? What happens if you have 20GB of pending write data sitting in cache, waiting to be dumped to disk, and the power goes off? No amount of software RAID is going to make that data magically come back. The software RAID might keep the volume and file system from getting corrupted, but it's not going to bring that data back. I don't know what your version of high-end is, but not losing data is typically the point, and the methods of not losing data vary. Software RAID, unless you avoid a write cache entirely, will lose in-flight data on power loss. If it does use a write cache and the power goes off, it can repair the filesystem, keep partially committed data from corrupting the file system, and make sure you don't have data errors when all of the disks are present. The different software RAID types typically have different levels of resiliency, performance, and protection.
But yeah, using software RAID is better than nothing from a cost perspective, if you know how to use, admin, and troubleshoot it. Otherwise stick with hardware RAID, or just back your stuff up.
LSI/MegaRAID controllers are the best, in my opinion. Broadcom bought them, so I don't know how good they are anymore. Adaptec RAID controllers are pretty crap compared to LSI (from a troubleshooting standpoint).
I am done rambling. Cheers.