Expired • Posted by SehoneyDP • Mar 15, 2024

NVIDIA GeForce RTX 3090 Founders Edition Dual Fan 24GB GDDR6X GPU Card (Refurb)

(Select Stores) + Free Store Pickup

$700

Micro Center
149 Comments • 118,831 Views
Deal Details
Select Micro Center Stores have NVIDIA GeForce RTX 3090 Founders Edition Dual Fan 24GB GDDR6X PCIe 4.0 Graphics Card (Refurbished, 9001G1362510RF2) on sale for $699.99. Select free store pickup where available.
  • Note: Availability for pickup will vary by location and is very limited.
Thanks to Deal Hunter SehoneyDP for sharing this deal.

Features:
  • 24GB GDDR6X 384-bit Memory
  • 7680 x 4320 Maximum Resolution
  • PCIe 4.0
  • Full Height, Triple Slot
  • DisplayPort 1.4a, HDMI 2.1

Editor's Notes

Written by jimmytx | Staff
  • About this Store:
    • All products come with 60 days of Complimentary Tech Support
    • This product may be returned within 30 days of purchase.


Community Voting

Deal Score: +84

Top Comments

If you have a 24GB card, just download koboldcpp which is a 250mb exec, and get a GGUF model off huggingface -- a 20B or 34B model is about 15GB, then run it. Total time, including installing card and drivers ~1hr.

Check out /r/localllama on reddit.
This card is great if you want to play with local language models. Tired of GPT refusing to answer your questions about how to dispose of that dead body in your freezer? Run your own local model that has been 'un-aligned' and it will let you know exactly which brand of acid to get at the hardware store to remove that pesky problem. All you need is a boatload of VRAM to run it.
It's a great card, but for gamers, the 4070 Ti Super with 3 years of warranty seems like a better choice.
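
For anyone who wants to try the koboldcpp route from the first top comment, here is a rough sketch of the setup in Python. The Hugging Face repo and file names are placeholders, and the koboldcpp flags are recalled from its README, so check python koboldcpp.py --help before relying on them.

    # Fetch a quantized GGUF model, then start the koboldcpp server on the 24GB card.
    # Repo and file names are placeholders; the flags are assumptions, verify with --help.
    import subprocess
    from huggingface_hub import hf_hub_download  # pip install huggingface_hub

    model_path = hf_hub_download(
        repo_id="SomeUser/Some-34B-GGUF",   # hypothetical model repo
        filename="some-34b.Q4_K_M.gguf",    # hypothetical ~15GB quant file
    )

    subprocess.run([
        "python", "koboldcpp.py",
        "--model", model_path,
        "--usecublas",          # CUDA acceleration on NVIDIA GPUs
        "--gpulayers", "99",    # offload every layer to the 24GB card
        "--contextsize", "8192",
    ])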

148 Comments

d4deal • Mar 16, 2024 • 1,026 Posts • Joined Mar 2005
Quote from YW55 :
You mean 3090, 4090Ti doesn't exist yet.
Thank you for kindly correcting my mistake. SD needs more friendly folks like you. We are all posting for free to help everyone find good deals together. nod
postitnote • Mar 16, 2024 • 134 Posts • Joined Aug 2010
3090s are pretty good if you want to run local LLMs. ollama is fine, but anything that uses exllamav2 works really well on nvidia cards if you can fit the entire model in vram. You can have multiple cards if you want to run the more complex models that only fit in 48GB or even 72GB+ vram.
AquaPicture2620 • Mar 16, 2024 • 556 Posts • Joined Jul 2018
Quote from postitnote :
3090s are pretty good if you want to run local LLMs. ollama is fine, but anything that uses exllamav2 works really well on nvidia cards if you can fit the entire model in vram. You can have multiple cards if you want to run the more complex models that only fit in 48GB or even 72GB+ vram.
Does exllamav2 support multiple gpus?
d4deal • Mar 16, 2024 • 1,026 Posts • Joined Mar 2005
Quote from shilderb :
VRAM's become my obsession since I started using generative AI through Stable Diffusion. That stuff will eat up as much as you can feed it. I'm currently on a 2070, and a 3-minute video at 48 frames a second takes me 16 hours to generate. Upgrading to 16GB should cut that in half to 8 hours, and an additional 8GB on top of that would probably bring it down to 4 hours... it's a big difference when you're trying to learn something that requires a lot of trial and error.
* For gaming, RTX 3090 Ti & 3090 are 20-30% slower than RTX 4080:

In GPU benchmarks, 3090 (210%) & 3090 Ti (232%) are slower than 4080 (290%) & 4090 (370%).
https://gpu.userbenchmark.com/Com...4136vs4081

* For AI, the 3090 Founders Edition has 24GB VRAM = 50% more than the 4080:

3090/Ti FE has 24GB VRAM (same as 4090), so they are better than 4080 (16GB) or 3080 (10GB) for AI.
I guess they were reference designs provided by Nvidia to game developers and video card OEMs for testing.
Last edited by d4deal March 16, 2024 at 11:57 AM.
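
On the VRAM point quoted above: if you are memory-bound in Stable Diffusion, the diffusers library has a couple of switches that trade speed for VRAM. A minimal sketch, assuming the common SD 1.5 checkpoint (swap in whatever model you actually run):

    # Cut Stable Diffusion VRAM use with fp16 weights and attention slicing.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example checkpoint
        torch_dtype=torch.float16,         # half precision roughly halves weight memory
    ).to("cuda")
    pipe.enable_attention_slicing()        # slower steps, noticeably less VRAM
    # pipe.enable_model_cpu_offload()      # even lower VRAM, needs the accelerate package

    image = pipe("a graphics card floating in space", num_inference_steps=30).images[0]
    image.save("out.png")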
Luigis3rdcousin • Mar 16, 2024 • 5,112 Posts • Joined Jul 2017
Quote from slimdunkin117 :
7900xt is the better choice
I think you meant the XTX version and gaming. If so, for gaming I would agree with you, but if you're doing any of the artificial intelligence stable diffusion type stuff, there's a bunch of tools out there that are Nvidia specific that will only run on Nvidia hardware, unfortunately. So a 3090 with 24 gigs of VRAM is very tempting for a lot of people right now
slimdunkin117 • Mar 16, 2024 • 1,686 Posts • Joined Sep 2018
Quote from Luigis3rdcousin :
I think you meant the XTX version and gaming. If so, for gaming I would agree with you, but if you're doing any of the artificial intelligence stable diffusion type stuff, there's a bunch of tools out there that are Nvidia specific that will only run on Nvidia hardware, unfortunately. So a 3090 with 24 gigs of VRAM is very tempting for a lot of people right now
Nope, the XT. The XTX would be a 4080 competitor.
HotsauceShoTYME • Mar 16, 2024 • 124 Posts • Joined Mar 2011
Quote from HappyAccident :
If you have a 24GB card, just download koboldcpp which is a 250mb exec, and get a GGUF model off huggingface -- a 20B or 34B model is about 15GB, then run it. Total time, including installing card and drivers ~1hr.

Check out /r/localllama on reddit.
So what you are saying is I can make a legit case to my company to get me a 4090 for my workstation.
postitnote • Mar 16, 2024 • 134 Posts • Joined Aug 2010
Quote from AquaPicture2620 :
Does exllamav2 support multiple gpus?
Yes, it works really well. That's what I use now.

The entire space is constantly changing, so like exllamav2 got an update last week to improve caching in a way that makes for more efficient vram usage, a feature that hasn't been implemented in other inference engines yet.

These things work so well, and with all the chatter on legislation on the dangers of AI, makes me think that we are a few years away from these GPUs being locked down so they can only be used for gaming. They'll make it so that the only access to AI you will have is through large companies that restrict the kinds of things you can do with them. We already see how there are export restrictions on 4090s to China. That's why there was such a jump in prices of 3090s in the past year imo.
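
For what it's worth, a minimal sketch of how multi-GPU loading looks with exllamav2's Python API; the class and argument names below follow its example scripts as remembered, so treat them as assumptions and check the current repo.

    # Load an EXL2-quantized model across two GPUs with exllamav2 (API assumed from
    # the project's examples -- verify against the version you install).
    from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer

    config = ExLlamaV2Config()
    config.model_dir = "/models/My-70B-exl2-4.0bpw"  # hypothetical local model path
    config.prepare()

    model = ExLlamaV2(config)
    tokenizer = ExLlamaV2Tokenizer(config)

    # Manual split: give each GPU a VRAM budget in GB (e.g. two 3090s).
    model.load(gpu_split=[20, 24])
    cache = ExLlamaV2Cache(model)

    # Or let it fill the GPUs automatically while the cache is allocated lazily:
    # cache = ExLlamaV2Cache(model, lazy=True)
    # model.load_autosplit(cache)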
AquaPicture2620 • Mar 16, 2024 • 556 Posts • Joined Jul 2018
Quote from postitnote :
Yes, it works really well. That's what I use now.

The entire space is constantly changing, so like exllamav2 got an update last week to improve caching in a way that makes for more efficient vram usage, a feature that hasn't been implemented in other inference engines yet.

These things work so well, and with all the chatter on legislation on the dangers of AI, makes me think that we are a few years away from these GPUs being locked down so they can only be used for gaming. They'll make it so that the only access to AI you will have is through large companies that restrict the kinds of things you can do with them. We already see how there are export restrictions on 4090s to China. That's why there was such a jump in prices of 3090s in the past year imo.
Cool, does it split weights (tensors) across gpus automatically, or do you have to split them yourself? Can't seem to find any documentation on it.
zhudazheng • Mar 16, 2024 • 31 Posts • Joined Sep 2021
Quote from HappyAccident :
This card is great if you want to play with local language models. Tired of GPT refusing to answer your questions about how to dispose of that dead body in your freezer? Run your own local model that has been 'un-aligned' and it will let you know exactly which brand of acid to get at the hardware store to remove that pesky problem. All you need is a boatload of VRAM to run it.
A bit surprised to see technical discussion about LLM alignment here lol
peteer01 • Mar 16, 2024 • 171 Posts • Joined Aug 2013
Quote from d4deal :
Per Micro Center, the 3090 Ti enables more CUDA cores (10,752 vs. 10,496) but needs less power (750w vs 850w) than the 3090.
The 3090 is 350W. The 3090Ti is 450W.
Mr.Keroro • Mar 16, 2024 • 751 Posts • Joined May 2009
Quote from sandwich :
The base 3090 has an issue with overheating vram modules on the backside. Be sure to have a fan pointing at the backplate to keep those modules cool
I'm advocating that the next 5000 series be liquid cooled as the standard. The 3000/4000 series cards take three slots to fit the huge air fins. I've seen MSI's liquid-cooled 4080 and it's amazing.
IIII • Mar 16, 2024 • 1,560 Posts • Joined Jul 2014
nevermind .............................
HappyAccident • Mar 16, 2024 • 511 Posts • Joined Feb 2021
Quote from zhudazheng :
A bit surprised to see technical discussion about LLM alignment here lol
This is just what I use as a quick alignment test for new checkpoints. Some people use raunchy questions, but I find this works better because a lot of custom-aligned models are built to be raunchy yet still ignore the practical things that get killed by alignment, like how to clear a drain using lye or whatever else GPT might think is too dangerous for the plebs.
Last edited by Mbilo March 16, 2024 at 06:35 PM.
DJRobNM • Mar 17, 2024 • 1,777 Posts • Joined Aug 2011
Quote from zhudazheng :
A bit surprised to see technical discussion about LLM alignment here lol
I'd be interested in knowing what people are using LLMs for...
