Post Date | Sold By | Sale Price | Activity
---|---|---|---
05/07/24 | Amazon | $31.75 | 1
05/06/24 | Amazon | $31.80 | 11
05/06/24 | Amazon | $31.75 (frontpage) | 76
05/05/24 | Amazon | $31.80 (frontpage) | 45
05/03/23 | Amazon | $42.99 | 2
03/06/23 | Amazon | $49.99 | 10
02/07/23 | B&H Photo Video | $48 (frontpage) | 55
01/02/23 | Walmart | $46.88 (popular) | 17
12/13/22 | Kingston | $58 (frontpage) | 54
11/28/22 | Walmart | $28.88 (popular) | 86
11/21/22 | Amazon | $52.78 | 8
Rating: 4.7 out of 5 stars
Reviews: 18,992 Amazon Reviews
Product Name: Kingston NV2 1TB M.2 2280 NVMe Internal SSD | PCIe 4.0 Gen 4x4 | Up to 3500 MB/s | SNV2S/1000G
Manufacturer: Kingston Digital, Inc.
Model Number: SNV2S/1000G
Product SKU: B0BBWH1R8H
UPC: 5704174985839
106 Comments
Featured Comments
Kingston NV2 SSD Review: Cheap But Risky
A generic budget SSD with irregular hardware
The 2TB Kingston NV2 is a dirt cheap NVMe SSD and not much more. Performance is fairly bad, the drive runs hot, and you cannot be certain of the hardware. It makes for a cheap secondary drive but is not ideal for laptops or for use as a primary drive.
Edit: just read the low reviews. One user said it died after 2 days, yikes
Imagine a 48 MB erase block that is 100% in use. No problem. Now delete 20 MB of files from that block. Those sections of the block are marked invalid, but they cannot be rewritten in place yet. The computer thinks you have 20 MB free, but to make that 20 MB usable again the controller has to do the following: it copies the 28 MB still in use to a new 48 MB block, then finds other nearly empty blocks and migrates their live data into this new block as well. Once the new 48 MB block is full, any blocks that are now 100% empty can be erased, and only then can they be written again.
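The relocation step above can be sketched as a toy model. This is a deliberate simplification (whole megabytes instead of flash pages, one victim block, invented names like `garbage_collect`), not how any real controller firmware is written:

```python
# Toy model of SSD garbage collection: deleted data is only *marked* invalid,
# and reclaiming it means copying the still-live data to a fresh block first.

BLOCK_MB = 48

class Block:
    def __init__(self):
        self.live_mb = 0      # data still valid
        self.invalid_mb = 0   # deleted data, not yet erasable

def delete(block, mb):
    """Mark mb of the block's data invalid; nothing is actually freed yet."""
    block.live_mb -= mb
    block.invalid_mb += mb

def garbage_collect(victim):
    """Copy the victim's live data into a fresh block, then erase the victim.
    Returns (fresh_block, mb_copied); the copies are extra physical writes."""
    fresh = Block()
    fresh.live_mb = victim.live_mb            # the 28 MB gets rewritten
    copied = victim.live_mb
    victim.live_mb = victim.invalid_mb = 0    # erase: whole block reusable again
    return fresh, copied

blk = Block()
blk.live_mb = BLOCK_MB                 # block 100% in use
delete(blk, 20)                        # OS now sees 20 MB "free"...
new_blk, copied = garbage_collect(blk) # ...but reclaiming it cost 28 MB of copies
print(copied)                          # 28
```

Note that freeing 20 MB of logical space required 28 MB of physical writes before a single byte of new data landed.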
So you see, there's a lot happening in the background. When the drive is full this becomes a major puzzle: your free space might be scattered across a hundred 48 MB blocks, and dozens of blocks may need to be rewritten just to free up a single 48 MB block for new writes.
48 MB erase blocks are fairly large for SSDs. What I described above is called write amplification: the drive physically writes more data to flash than the host asked it to write. It increases with larger erase blocks and with lower free space. When a drive is nearly full, every write can trigger 10 or more physical writes because of the background work involved.
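The scaling with free space can be made concrete with a back-of-the-envelope model (an idealized textbook approximation, not this drive's firmware): if reclaimed blocks are on average a fraction `u` full of live data, then writing one full block to flash only delivers `1 - u` of a block of new host data, so write amplification is roughly `1 / (1 - u)`:

```python
# Simple write-amplification estimate: each reclaimed block is `live_fraction`
# full of data that must be copied forward, so only (1 - live_fraction) of
# every physical block write carries new host data.

def write_amplification(live_fraction):
    """Physical writes per host write under the simplified model above."""
    return 1.0 / (1.0 - live_fraction)

for u in (0.5, 0.8, 0.9):
    print(f"blocks {u:.0%} live -> WA ~ {write_amplification(u):.1f}x")
```

At 90% live data this model gives roughly 10x, which lines up with the "10 or more writes per write" figure for a nearly full drive.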
All of this said, there's no reason a controller should fail at 60% utilization, or even at 100% if it's properly designed. I'm only describing the demanding process happening behind the scenes, and a generic SSD implementation at that; it's not exactly what happens on this drive, but it's similar.
Still unfortunate that a relatively modern drive can die suddenly. I have budget WD external HDDs that will apparently outlive this SSD. Seems ridiculous.