Post Date | Sold By | Sale Price | Activity |
---|---|---|---|
04/24/23 | Amazon | $539.39 | |
Sold By | Sale Price |
---|---|
Amazon | $599.99 |
Rating: | (4 out of 5 stars) |
Reviews: | 871 Amazon Reviews |
Product Name: | SABRENT 10 Bay 3.5” SATA Hard Drive Tray Less Docking Station (USB 3.2 Type C and Type A) (DS-UCTB) |
Manufacturer: | SABRENT |
Model Number: | DS-UCTB |
Product SKU: | B09TV1XPDD |
UPC: | 840025252943 |
228 Comments
Featured Comments
The Mini PCs we normally see listed max out with 2.5Gbps networking. So this would be able to keep up and saturate the pipe. If you needed more bandwidth, having separate direct SATA connections would be needed, likely with some type of external SAS connection.
10 drives is a lot, unless you are going for extremely cheap small drives to fill the array. IMO it's better to use larger drives, since each drive consumes power to run. UGreen has a Kickstarter going right now with some really crazy deals on NASes that are supposed to ship in June. You might get more bang for your buck there.
Also, for anyone thinking of using this many drives: go with at least one parity disk, or even better, two. The chance of data loss increases as you add more and more drives. Not caring about movies on a single 10TB drive... fine. Not caring about 180TB, that's going to be a much larger pain to replace everything.
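The risk behind that advice can be roughed out: if each drive fails independently, the chance that at least one drive in the array dies in a year grows quickly with drive count. A back-of-envelope sketch; the 3% annual failure rate is an assumed illustrative figure, not from this thread:

```python
# Back-of-envelope: chance that at least one drive in an array fails in a year,
# assuming independent failures and a hypothetical 3% annual failure rate (AFR).
def p_any_failure(n_drives: int, afr: float = 0.03) -> float:
    return 1 - (1 - afr) ** n_drives

print(f"1 drive:   {p_any_failure(1):.1%}")   # 3.0%
print(f"10 drives: {p_any_failure(10):.1%}")  # 26.3%
```

So a 10-drive array without parity is roughly nine times more likely to lose something in a given year than a single drive, which is why one or two parity disks matter at this scale.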
I was checking what level of support it has from Sabrent (zero, they have really gone downhill with firmware updates) and there's a thread about how it doesn't have automatic power recovery to bring the drives back up after power loss.
Actually, I'm not even sure of the reference? But Sabrent has been very well known in the SSD and PC component business for the last 5-10 years.
WHY THE HELL would you go to all that work when you can just use the USB 10G port that's already there?
And I've tried sharing direct-connected ethernet devices to the rest of a network before. It's a huge pain in the ass if you've never done network config before. Nobody who is actually interested in buying this is gonna wanna get into that.
I'm confused why it's so great to have a USB 3.2 connection for something like this when you can get an SAS controller card for your machine... Unless you really just use a laptop all the time... But even then, you can't figure out how to organize drives in their own enclosures? You NEED access to TEN disks simultaneously at all times? It doesn't really matter to me, I was just hoping there would've been a better answer as to what's so great about this thing... You're definitely paying for what seems to be a marginal convenience...
I plug those into my satellite receiver and can record. It's 2GB per hour for HD and produces a .ts file (Transport Stream).
I was using HDDs in the computer and 3 aluminum Rosewill eSATA/USB enclosures, the nice ones with fans that can be turned off. No longer sold.
When I built my Ryzen 5950X I upgraded the storage to an Orico 5-bay USB 3.1 10 Gbps enclosure. It has 16TB drives in it, which is said to be the limit. Today a WD 18TB will arrive and I'll test whether it'll do 18TB. The old Rosewills go bigger than they claimed.
I'm dumping a ton of TV and movies into that thing. I'm using big drives in the computer to dump video to, then I transcode it and store it in the Orico 5-bay. I also move the originals there because I'm probably going to transcode everything to something else due to progress in formats and horsepower, etc.
I am going to keep duplicates of everything. I also have files from the 80s. I'm like ConcreteMan a few pages back, a data hoarder. If I download a motherboard manual then buy something else, I keep the PDF.
I run Linux and use a simple program to make two drives match, in case I screw up and don't create a duplicate.
My Orico will copy drive to drive at very good speed. I forgot exactly. But SATA is 6 Gbps and I have 2x 5 Gbps in the Orico. It works really well.
From the big drives in the computer to the Orico is usually 260 MB/s, up to 269-279 at times.
When I transcode the TS files to h.264/265 I can reduce a 2 hour movie to about 600MB.
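For scale, that compression works out to roughly a 7x bitrate reduction, using the 2 GB/hour TS figure from above (quick sanity-check arithmetic, decimal units):

```python
# Compare the satellite receiver's TS bitrate to the ~600 MB h.264/265 result.
ts_mbps = 2 * 8000 / 3600          # 2 GB/hour -> megabits per second (~4.44)
h264_mbps = 600 * 8 / (2 * 3600)   # 600 MB spread over a 2-hour movie (~0.67)
print(f"TS:    {ts_mbps:.2f} Mbps")          # 4.44 Mbps
print(f"h.264: {h264_mbps:.2f} Mbps")        # 0.67 Mbps
print(f"ratio: {ts_mbps / h264_mbps:.1f}x")  # 6.7x
```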
That's what I use a 5-Bay 10 Gbps enclosure for and I sure wish I'd have gotten the Sabrent 5-bay because the Orico has trays but the problem is the flimsy doors.
The plastic doors pop open but they do not eject the tray. You can't pull on the door, and it's just a bad design. It's sold as an aluminum enclosure, but the important part is junk plastic.
I don't stream all over the house even though everything is on two wired networks. One is security cameras and the other is Internet. I bridge the connections when needed.
Once compressed, I can put quite a few movies etc on a laptop drive and plug it into the receiver and play them back. I don't need a server or powered on drives.
That's another reason I wish I'd have gotten the Sabrent, individual power buttons. I only need one drive powered on most of the time, the drive I'm dumping transcodes to.
I can see a use for this 10-bay. I'm going to end up with three 3TB, two 6TB and two 8TB drives retired. It would be nice to have spare bays to copy things back if a drive does die.
The Orico can be daisy chained too, so space and bays can be added. I assume the Sabrent can do that too but check it...I have not verified that.
I did read all 11 pages and learned a little, but I hope this helps people figure out what they need, or don't need, like me.
I need an AI commercial remover.
That's a lot of work.
Your wall of text is very hard to read.
here's an article on just a few reasons why using multidisk usb storage is generally a bad idea (in the context of truenas, but is similar for other systems as well)
The deal with directly connected ethernet devices is that they should ALSO be connected to your 1G network. Use the onboard NIC on the server to connect it to your normal network switch (same with the workstation), and connect the high-speed network cards together directly. Then, on the workstation, map the network drives using the server's high-speed IP address, which should be in a different IP range: if you use 192.168.0.x or 10.0.0.x for your LAN, the high-speed static link should be 192.168.1.x or 10.0.1.x, so the machines know they are accessing two different networks. Note: if you call the server by name instead of IP, you'll get the slow IP and the data will flow over the slow connection on the workstation. (If you really want to, you can change this manually in the hosts file by mapping the server's name to the high-speed link's IP, but I've never bothered.)
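A minimal sketch of that layout on Linux (the interface names, addresses, and hostname below are examples, not anything from this thread; the commands need root):

```shell
# Server side: onboard NIC stays on the normal LAN (say 192.168.0.x via DHCP).
# Give the direct 10G link its own subnet so the OS routes bulk traffic over it.
ip addr add 192.168.1.10/24 dev enp5s0     # server end of the direct cable

# Workstation side:
ip addr add 192.168.1.20/24 dev enp5s0     # workstation end of the direct cable

# Map shares by the fast IP, or pin a name to it in the workstation's /etc/hosts:
#   192.168.1.10   server-fast
# ...then mount via server-fast instead of the server's LAN name.
```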
This statement is incredibly wrong. Relying on software RAID compared to a hardware RAID controller? Find a real server from Dell, HPE, whoever that uses software RAID. Hardware controllers have battery-backed cache, and replacing a hardware RAID controller happens quite a bit less often than, say, reinstalling your OS and then having to import/rebuild your software RAID container. Synology uses software RAID, but its OS will rebuild it automagically. I would like to know where you found that "hardware RAID is no longer recommended" because software can keep up and gives flexibility by not being paired with a specific controller or losing all of your data.
That's almost as bad as putting software RAID on top of hardware RAID.
That said, I understand that there are some vendors (such as HP and Dell) that *do* have servers that can be configured for hardware RAID from the factory, but they aren't really all that popular anymore, mostly due to the slow-moving offerings from those large enterprise SIs, who have invested heavily in licensing and integrating hardware RAID controllers onto their motherboards. Most data centers, enterprises, and medium/large businesses are typically using EITHER one or more traditional centralized file servers (with expansion via SAS DAEs, Fibre Channel, or sometimes iSCSI to one or more centralized heads with lots of RAM and maybe an SSD caching layer) running ZFS (either on some flavor of Linux or on a storage distribution that uses ZFS, like TrueNAS) for pools, redundancy, scrubbing, etc., OR some kind of decentralized filesystem such as Ceph/GlusterFS among FOSS offerings, or Weka or similar proprietary options. (Not that I'd ever recommend individuals license expensive high-performance clustered filesystems with 5+ servers for home data storage.)
Most Dell and HPE servers have hardware RAID. PERC (I think it's PowerEdge Expandable RAID Controller) cards for Dell; I don't know what the HPE version is called, even though I work for them. Depending on the OS, they might boot off local drives (i.e., the RAID controller) or a USB/flash drive for, say, ESX. Boot from SAN is just a bad idea most of the time. The shared storage for, say, ESX clusters is a SAN with Fibre Channel infrastructure or iSCSI.
I have had a few of the 8-bay Synology arrays. They all use software RAID and work pretty well unless you have a crap ton of tiny files. Parsing performance data for HPE arrays, a 160MB file when done can be 4 GB and like 75,000 files. I had drives fall off kind of often till I figured that out.
Don't think you are going to find a server with 10 bays cheaper. I think what he means is don't put them all in one software RAID array; make a RAID 0, one drive fails, and you are hosed. You could use them as individual disks instead of plugging in 10 USB drives. I believe the backplane will work with the drives you'd otherwise have to mess with the power cable on when plugging them into a PC with no backplane. 1 GB/s is kind of slow if you are hitting all drives at the same time.
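The "1 GB/s is kind of slow" point is easy to put numbers on (rough arithmetic; the ~250 MB/s per-drive figure is an assumed typical sequential speed for a modern large HDD, not from this thread):

```python
# Rough math on sharing one 10 Gbps USB link across 10 active drives.
link_mb_s = 1000          # ~1 GB/s usable on a 10 Gbps link (approximate)
drives = 10
per_drive = link_mb_s / drives
print(f"per-drive share: {per_drive:.0f} MB/s")   # 100 MB/s
# A modern large HDD can sustain roughly 250 MB/s sequentially, so with all
# ten bays busy the link, not the disks, becomes the bottleneck.
```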
I have it backed up by an old PC with JBOD drives.
there was a docker container I used to use to strip out commercials called "auto-comskip" but I can see the image for that was decommissioned -- it was a big game of cat and mouse but it did seem to work maybe 80% of the time... at least it didn't remove primary content from my usage.
I don't believe it exists. If I'm right, then I'd be looking forever for something that doesn't exist. Huge waste of time trying and failing to prove myself wrong on behalf of some random internet stranger.
This is why the burden of proof is on the one who claims something exists.
"You didn't look hard enough"
Called it in another comment. Every time, without fail.
SAN manufacturers are using their own controllers. And while those controllers do hardware offloading, they aren't RAID in any traditional form.
Try reading up on btrfs and ZFS if you haven't. Both are filesystems implementing software RAID that are used at enterprise scale and do that automagic rebuilding Synology can do. And why is that? It's because the latest versions of Synology's DSM use btrfs.
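For concreteness, the kind of enterprise-grade software redundancy being described is a one-liner with ZFS (a sketch; the pool name and device paths are placeholders, and creating a pool destroys existing data on the listed disks):

```shell
# Double-parity pool (raidz2): any two of the six drives can fail without data loss.
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
zpool status tank    # pool health, plus scrub/resilver progress after a replacement
zpool scrub tank     # periodic end-to-end integrity check of all stored data
```

No controller card is involved; the pool can be imported on any machine with ZFS, which is exactly the portability argument being made against hardware RAID.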
Is it impossible to still get new hardware with a hardware RAID controller? No, but that doesn't mean it isn't a dead-end technology that should be kicked to the curb. Japan just last year finally stopped requiring filings to the government be done on floppy disk, and it still relies extremely heavily on fax machines. Just because people can still buy floppy drives, disks, and fax machines new doesn't mean those technologies aren't dead and shouldn't be replaced with one of many better alternatives.