1TB Kingston NV2 M.2 2280 PCIe 4.0 x4 NVMe SSD Expired

$35.15
$80.99
+ Free Shipping
+49 Deal Score
41,631 Views
Amazon has 1TB Kingston NV2 M.2 2280 PCIe 4.0 x4 NVMe Solid State Drive (SNV2S/1000G) on sale for $60.99 - $25.82 savings at checkout = $35.17 (previously $31.76 with the larger $29.24 coupon). Shipping is free.
  • Note: $25.82 savings is automatically applied at final checkout. Expected to ship within 1-4 weeks.
Thanks to Deal Hunter phoinix for finding this deal.

Specs:
  • DRAM-less (64 MB HMB)
  • Sequential Read: 3,500 MB/s
  • Sequential Write: 2,100 MB/s
  • Endurance: 320 TBW
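
For scale, a rough worked example (the 20 GB/day figure is a hypothetical workload, not from the listing): 320 TBW = 320,000 GB of rated writes, and 320,000 GB ÷ 20 GB/day = 16,000 days, roughly 43 years, so write endurance is unlikely to be the limiting factor for a light-use secondary drive.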

Original Post

Edited May 8, 2024 at 03:02 PM
Amazon [amazon.com] has the 1TB Kingston NV2 M.2 2280 PCIe 4.0 (Gen 4x4) NVMe Internal SSD for $60.99 - $25.82 (coupon reduced from the original $29.24) when you 'clip' the coupon on the product page = $35.17 (was $31.76). Shipping is free. Usually ships within 1-4 weeks.

Price:
$49.24 lower (61% savings) than the $81 list price with the original $29.24 checkout coupon; the current $25.82 coupon makes it $35.17 (57% savings).

Customer reviews:
4.7⭐ / 18,992 global ratings
9,000+ bought in past month

amazon.com/dp/B0BBWH1R8H [amazon.com]


Price Intelligence

Model: Kingston NV2 PCIe 4.0 NVMe SSD 1TB Internal M.2 2280

Deal History (most recent first):

Post Date   Sold By           Sale Price   Activity
05/07/24    Amazon            $31.75       1
05/06/24    Amazon            $31.80       11
05/06/24    Amazon            $31.75       76 (frontpage)
05/05/24    Amazon            $31.80       45 (frontpage)
05/03/23    Amazon            $42.99       2
03/06/23    Amazon            $49.99       10
02/07/23    B&H Photo Video   $48          55 (frontpage)
01/02/23    Walmart           $46.88       17 (popular)
12/13/22    Kingston          $58          54 (frontpage)
11/28/22    Walmart           $28.88       86 (popular)
11/21/22    Amazon            $52.78       8

Featured Comments

Tom's Hardware review:
Kingston NV2 SSD Review: Cheap But Risky
A generic budget SSD with irregular hardware

The 2TB Kingston NV2 is a dirt cheap NVMe SSD and not much more. Performance is fairly bad, the drive runs hot, and you cannot be certain of the hardware. It makes for a cheap secondary drive but is not ideal for laptops or for use as a primary drive.
I got it the first time it popped up here, and it is slow. I'll use it for nonessential data. If you don't have a use for it, I think it's expensive even at $30.
The thing about buying a drive that you can't trust is that you can't always remember that you shouldn't trust it. Drives with a high chance of failure are almost worthless.

Relik (L4: Apprentice, 309 posts, 194 reputation, joined Jul 2017)
05-16-2024 at 03:44 PM
Quote from Peerless_Warrior :
Why does 60% utilization cause the drive to fail?

Edit: just read the low reviews. One user said it died after 2 days, yikes
High utilization in SSDs puts more strain on the controllers, particularly with QLC NAND. If the controller code has bugs, they will be found and triggered under high utilization. The reason is complex, but I'll try to simplify it. The controller is constantly moving things around, copying, and invalidating data. For Intel's 144-layer QLC NAND, the size of an erase block is 48 megabytes.

Imagine that a 48 MB block is 100% in use. No problem. Now delete 20 MB of files in that block. Those sections of the block are marked as invalidated, but they cannot be rewritten yet. The computer thinks you have 20 MB free. To make that 20 MB usable again, the controller takes the 28 MB still in use and moves it to a new 48 MB block, then finds data from other near-empty blocks and moves it into the same new block. Once that new 48 MB block is full, any blocks that are now 100% empty can be erased, and only then can they be written again.

So you see, there's a lot happening in the background. And when the drive is full, this becomes a major puzzle: your free space might be spread over a hundred 48 MB blocks, and dozens of blocks may need to be rewritten just to free up even one 48 MB block for new writes.

48 MB is a fairly large erase block for an SSD. What I described above is called write amplification. Write amplification increases with larger erase blocks and less free space. When a drive is near full, every host write can cause 10 or more physical writes because of this background work.

All of this said, there's no reason a controller should fail at 60% utilization, or even at 100%, if it's properly designed. I'm only describing the rigorous process happening behind the scenes. I also described a generic SSD implementation rather than exactly what happens on this drive, but it's similar.
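
To put numbers on the "puzzle problem" described above, here is a minimal Python sketch of the block-consolidation math. The 48 MB erase-block size comes from the post; the fill levels and the greedy pick-the-emptiest-block policy are illustrative assumptions, not Kingston NV2 firmware internals.

```python
# Toy model of the SSD garbage collection described above.
# Assumption: a greedy collector that always consolidates the
# emptiest blocks first; real controller firmware is far more involved.

ERASE_BLOCK_MB = 48  # erase-block size cited in the post (Intel 144-layer QLC)

def mb_copied_to_free_one_block(valid_mb_per_block, block_mb=ERASE_BLOCK_MB):
    """Return how many MB of still-valid data the controller must copy
    into fresh blocks before one whole erase block's worth of space
    (block_mb) becomes reclaimable."""
    copied = 0
    reclaimed = 0
    for valid in sorted(valid_mb_per_block):  # emptiest candidates first
        copied += valid                 # valid data moved to a new block
        reclaimed += block_mb - valid   # invalidated space recovered
        if reclaimed >= block_mb:
            break
    return copied

# Nearly full drive: each 48 MB block still holds 43 MB of valid data.
# Mostly empty drive: each block holds only 5 MB of valid data.
for label, drive in [("nearly full", [43] * 100), ("mostly empty", [5] * 100)]:
    copied = mb_copied_to_free_one_block(drive)
    print(f"{label}: copy {copied} MB to free one {ERASE_BLOCK_MB} MB block "
          f"-> ~{copied / ERASE_BLOCK_MB:.1f}x write amplification")
```

The nearly-full case works out to about 9x (copy 430 MB to reclaim one 48 MB block), right around the "10 or more writes" figure quoted above, while the mostly-empty drive pays almost no copy tax.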
Peerless_Warrior (Two Minds Became One, 11,907 posts, joined Dec 2011)
05-16-2024 at 08:04 PM
Quote from Relik :
High utilization in SSDs puts more strain on the controllers, particularly with QLC NAND. [snip]
I feel like I've read something similar while skimming my IT textbook. It was about LBA (logical block addressing) on storage media.

Still unfortunate that a relatively modern drive can die suddenly. I have budget WD external HDDs that will apparently outlive this SSD. Seems ridiculous.