Slickdeals is community-supported. We may get paid by brands for deals, including promoted items.
Heads up, this deal has expired.
Expired • Posted by SehoneyDP • Mar 15, 2024

NVIDIA GeForce RTX 3090 Founders Edition Dual Fan 24GB GDDR6X GPU Card (Refurb)

(Select Stores) + Free Store Pickup

$700

Micro Center
149 Comments • 118,819 Views
Deal Details
Select Micro Center Stores have NVIDIA GeForce RTX 3090 Founders Edition Dual Fan 24GB GDDR6X PCIe 4.0 Graphics Card (Refurbished, 9001G1362510RF2) on sale for $699.99. Select free store pickup where available.
  • Note: Availability for pickup will vary by location and is very limited.
Thanks to Deal Hunter SehoneyDP for sharing this deal.

Features:
  • 24GB GDDR6X 384-bit Memory
  • 7680 x 4320 Maximum Resolution
  • PCIe 4.0
  • Full Height, Triple Slot
  • DisplayPort 1.4a, HDMI 2.1

Editor's Notes

Written by jimmytx | Staff
  • About this Store:
    • All products come with 60 days of Complimentary Tech Support
    • This product may be returned within 30 days of purchase.

Original Post

Written by SehoneyDP

Community Voting

Deal Score: +84 (Good Deal)

Top Comments

If you have a 24GB card, just download koboldcpp, which is a ~250MB executable, then grab a GGUF model off huggingface (a 20B or 34B model is about 15GB) and run it. Total time, including installing the card and drivers: ~1 hr.

Check out /r/localllama on reddit.
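
For anyone who wants to try the workflow described above, a minimal sketch in Python might look like the following. It assumes the huggingface_hub package is installed and that the koboldcpp executable is on your PATH; the repo_id and filename are illustrative placeholders, not specific recommendations, and the --model/--usecublas flags follow koboldcpp's README at the time of writing (check the current docs).

Code:
# Sketch: fetch a quantized GGUF model, then hand it to koboldcpp.
# repo_id/filename are hypothetical placeholders -- pick any GGUF repo
# on huggingface.co whose file fits in 24GB of VRAM.
import subprocess
from huggingface_hub import hf_hub_download  # pip install huggingface_hub

model_path = hf_hub_download(
    repo_id="SomeUser/Some-20B-GGUF",   # hypothetical repo
    filename="some-20b.Q4_K_M.gguf",    # ~15GB of quantized weights
)

# Launch koboldcpp with CUDA offload; it serves a local web UI.
subprocess.run(["koboldcpp", "--model", model_path, "--usecublas"])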
This card is great if you want to play with local language models. Tired of GPT refusing to answer your questions about how to dispose of that dead body in your freezer? Run your own local model that has been 'un-aligned' and it will let you know exactly which brand of acid to get at the hardware store to remove that pesky problem. All you need is a boatload of VRAM to run it.
It's a great card. But for gamers, seems 4070 ti super with 3 years of warranty is a better choice.

148 Comments


DJRobNM • Mar 17, 2024 • 1,777 Posts • Joined Aug 2011
Quote from zhudazheng :
A bit surprised to see technical discussion about LLM alignment here lol
I'd be interested in knowing what people are using LLM for...
ThirstyCruz (Pro) • Mar 17, 2024 • 2,796 Posts • Joined Jul 2020
Quote from postitnote :
Yes, it works really well. That's what I use now.

The entire space is constantly changing, so like exllamav2 got an update last week to improve caching in a way that makes for more efficient vram usage, a feature that hasn't been implemented in other inference engines yet.

These things work so well, and with all the chatter on legislation on the dangers of AI, makes me think that we are a few years away from these GPUs being locked down so they can only be used for gaming. They'll make it so that the only access to AI you will have is through large companies that restrict the kinds of things you can do with them. We already see how there are export restrictions on 4090s to China. That's why there was such a jump in prices of 3090s in the past year imo.
I hope not, but I'm positive something like this will happen in disguise. I don't fear AI at all; I fear the incredible shift of even more power/data to big companies and big money, and it will take years/decades for people to appreciate that.
postitnote • Mar 17, 2024 • 134 Posts • Joined Aug 2010
Quote from AquaPicture2620 :
Cool, does it split weights (tensors) across gpus automatically, or do you have to split them yourself? Can't seem to find any documentation on it.
Yes, there is an autosplit feature. Works really well.
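
For reference, here is roughly what that looks like with exllamav2's Python API, based on its example scripts; the project changes quickly, so treat the exact calls as an assumption and check the repo. The model directory is a placeholder.

Code:
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache

config = ExLlamaV2Config()
config.model_dir = "/path/to/exl2-model"  # placeholder: a local EXL2 quant
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # defer allocation until layers are placed
model.load_autosplit(cache)               # fills GPU 0, spills remaining layers to GPU 1, ...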
thaophuong73 • Mar 17, 2024 • 463 Posts • Joined May 2019
Meanwhile, Nvidia reported record quarterly profit.
Bearclawjohnson • Mar 17, 2024 • 151 Posts • Joined Nov 2016
If you like modding, you'll want the VRAM too. I mod Skyrim a lot and have seen my VRAM usage reach close to 20GB at times on my 7900 XTX.
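
If you want to watch VRAM usage yourself while a modded game runs, NVIDIA's NVML bindings make it a short script. Note this is NVIDIA-only, so it wouldn't cover the 7900 XTX mentioned above; it assumes the nvidia-ml-py package.

Code:
# Poll current VRAM usage on GPU 0 via NVML (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
mem = pynvml.nvmlDeviceGetMemoryInfo(handle)  # bytes: .total / .used / .free
print(f"VRAM used: {mem.used / 2**30:.1f} GiB of {mem.total / 2**30:.1f} GiB")
pynvml.nvmlShutdown()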
HappyAccident • Mar 17, 2024 • 511 Posts • Joined Feb 2021
Quote from DJRobNM :
I'd be interested in knowing what people are using LLM for...
Some people use them to learn (how else can you learn to use them without paying OpenAI a ton of money for API use?) -- you would be crazy to think it isn't a good idea to have on your resume that you are skilled in deployment and customization of large language models for specialty use cases.

Some people use them for 'waifus', which is a play on the word 'wife' in a specific cultural subset of men who idolize Japanese culture and want a realistic fictional mate that can conform to their specific needs.

Some people use them for D&D like ongoing roleplay adventures -- imagine having a book that is constantly written for your own tastes and that you can participate in.

Some people use them for custom solutions to problems that can't be solved any other way.

Some people use them to recreate the personalities of people who they admire or who have passed.

Some people just don't like being told 'no' by a company when they ask their product a question and this is their way of taking control.

I mean, it is literally your own thing that can understand language and that can be molded by you to do almost infinite things that involve fluency in language.

Speaking of languages, most models become multilingual without specific training -- they just learn how languages work, so even an English-trained model ends up being able to translate other languages by function of its inner workings.
MattB6434 • Mar 17, 2024 • 364 Posts • Joined Dec 2016
Quote from HappyAccident :
This card is great if you want to play with local language models. Tired of GPT refusing to answer your questions about how to dispose of that dead body in your freezer? Run your own local model that has been 'un-aligned' and it will let you know exactly which brand of acid to get at the hardware store to remove that pesky problem. All you need is a boatload of VRAM to run it.
This guy gets me.


bensonw • Mar 17, 2024 • 662 Posts • Joined Jan 2008
Good card overall. You can download the iCUE software to customize the lighting on this card. It does run hot, especially the hotspot; changing out the thermal pads will help. Like many have stated, once you start messing with AI, the 24GB will prove super useful. I spend more time with AI than gaming, and it can be quite interesting.
noobtech206 • Mar 17, 2024 • 3,064 Posts • Joined Dec 2014
Quote from bensonw :
Good card overall. You can download the iCUE software to customize the lighting on this card. It does run hot, especially the hotspot; changing out the thermal pads will help. Like many have stated, once you start messing with AI, the 24GB will prove super useful. I spend more time with AI than gaming, and it can be quite interesting.
Always keep a spare keycard in your back pocket. Don't be a casualty like Ex Machina…. or have an emergency hatch.
bensonw • Mar 17, 2024 • 662 Posts • Joined Jan 2008
Quote from noobtech206 :
Always keep a spare keycard in your back pocket. Don't be a casualty like Ex Machina…. or have an emergency hatch.
Haha, thanks for the warning. If I live long enough to see that day, my friend...
seier • Mar 17, 2024 • 585 Posts • Joined Apr 2015
Quote from xlongx :
It's a great card. But for gamers, seems 4070 ti super with 3 years of warranty is a better choice.
Refurbished is always risky. However, a 3090 does have more VRAM. $700 is far too big of a risk for me.
PowerfulClub201 • Mar 17, 2024 • 6 Posts • Joined Sep 2022
Has anyone used this or any other GPU in an external Thunderbolt enclosure for AI/LLM?

I have a Razer TB3 enclosure, and I'm considering installing a 3090/7900 XTX in it to use for AI with my TB3 laptop (rather than in a PC directly). I want something more portable.

Curious to learn vicariously from that experience so I can either proceed or avoid doing so.
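
One quick sanity check after cabling up an eGPU is to confirm the card is actually visible to your framework before running any real workload. A minimal sketch, assuming a CUDA build of PyTorch (a 7900 XTX would need the ROCm build instead, which exposes the same torch.cuda API):

Code:
# Verify the enclosure's GPU shows up before running any AI workload.
import torch

print("CUDA available:", torch.cuda.is_available())
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))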
Gb1908 • Mar 17, 2024 • 3,918 Posts • Joined Jul 2019
Boycott the cards that made bitcoin great
SteveS9108 • Mar 17, 2024 • 53 Posts • Joined Sep 2013
Wtf. Is this tech support?


twclocks • Mar 17, 2024 • 25 Posts • Joined Nov 2016
Quote from SehoneyDP :
I was definitely thinking the same thing; for just $100 more, the longevity should be worth it, right? I'm no expert, but the big difference seems to be the VRAM? 24GB vs 16GB: is there a big difference in consideration of that? Does it help with anything in particular?
You can find it open box for cheaper
YourPotatoness • Mar 17, 2024 • 53 Posts • Joined May 2020
Quote from jason879 :
Got one 3090 Ti last week for $799. Traded in a few AMD 50/60 series cards which I don't use anymore. Good card for AI workloads. I'm planning to replace my existing 4x Tesla P40 fleet with these.

It's super efficient at idle, consuming only 3W when not connected to any monitor.

Code:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 550.54.14      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090 Ti     Off |   00000000:1B:00.0 Off |                  Off |
|  0%   40C    P8              3W /  450W |   20592MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
Where did you trade in your GPU? I have a 2090 super here that I wanted to trade in.
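
The 3W idle figure in the quoted nvidia-smi table can also be read programmatically through NVML, which is the same interface nvidia-smi reports from. A minimal sketch, assuming the nvidia-ml-py package and an NVIDIA card at index 0:

Code:
# Read the GPU's current power draw via NVML (pip install nvidia-ml-py).
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
mw = pynvml.nvmlDeviceGetPowerUsage(handle)  # reported in milliwatts
print(f"Current draw: {mw / 1000:.1f} W")
pynvml.nvmlShutdown()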
