Expired • Posted by SehoneyDP • Mar 15, 2024

NVIDIA GeForce RTX 3090 Founders Edition Dual Fan 24GB GDDR6X GPU Card (Refurb)

(Select Stores) + Free Store Pickup

$700

Micro Center
149 Comments • 118,819 Views
Deal Details
Select Micro Center Stores have NVIDIA GeForce RTX 3090 Founders Edition Dual Fan 24GB GDDR6X PCIe 4.0 Graphics Card (Refurbished, 9001G1362510RF2) on sale for $699.99. Select free store pickup where available.
  • Note: Availability for pickup will vary by location and is very limited.
Thanks to Deal Hunter SehoneyDP for sharing this deal.

Features:
  • 24GB GDDR6X 384-bit Memory
  • 7680 x 4320 Maximum Resolution
  • PCIe 4.0
  • Full Height, Triple Slot
  • DisplayPort 1.4a, HDMI 2.1

Editor's Notes

Written by jimmytx | Staff
  • About this Store:
    • All products come with 60 days of Complimentary Tech Support
    • This product may be returned within 30 days of purchase.

Original Post

Written by SehoneyDP

Community Voting

Deal Score: +84 • Good Deal

Top Comments

If you have a 24GB card, just download koboldcpp, which is a 250MB exec, and get a GGUF model off Hugging Face -- a 20B or 34B model is about 15GB -- then run it. Total time, including installing the card and drivers, is ~1hr.

Check out /r/localllama on reddit.
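
A minimal sketch of that GGUF workflow using the llama-cpp-python bindings rather than koboldcpp itself (the model filename, parameters, and prompt are illustrative assumptions):

Code:
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a quantized GGUF downloaded from Hugging Face; n_gpu_layers=-1
# offloads every layer to the GPU, which a 24GB card can fit for ~15GB models
llm = Llama(model_path="model-34b.Q4_K_M.gguf", n_gpu_layers=-1, n_ctx=4096)

out = llm("Q: Why does local inference need so much VRAM? A:", max_tokens=128)
print(out["choices"][0]["text"])
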
This card is great if you want to play with local language models. Tired of GPT refusing to answer your questions about how to dispose of that dead body in your freezer? Run your own local model that has been 'un-aligned' and it will let you know exactly which brand of acid to get at the hardware store to remove that pesky problem. All you need is a boatload of VRAM to run it.
It's a great card. But for gamers, it seems the 4070 Ti Super with 3 years of warranty is a better choice.

148 Comments


slimdunkin117 • Mar 15, 2024 • 1,686 Posts • Joined Sep 2018


Quote from xlongx :
It's a great card. But for gamers, it seems the 4070 Ti Super with 3 years of warranty is a better choice.
7900 XT is the better choice
wpc • Mar 15, 2024 • 5,233 Posts • Joined Jun 2010
Quote from slimdunkin117 :
7900 XT is the better choice
Did you just bring up AMD in an NVIDIA deal thread? EEK!
playdc • Mar 15, 2024 • 210 Posts • Joined Oct 2009
Quote from duijver :
What is the investment time + hardware to play around with your own GPT / LLM model?

I can play with GPT to see what it spits out - I am curious to hear from someone that has done it.
Go to ollama.com. One click installs Ollama, then one command runs any large language model in its library.
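
For anyone who prefers scripting that, a minimal sketch with the ollama Python package (assumes the Ollama daemon is running and the model was already pulled; the model name is illustrative):

Code:
import ollama  # pip install ollama

# Assumes `ollama pull llama2` (or any model from its library) was run first
response = ollama.chat(
    model="llama2",
    messages=[{"role": "user", "content": "How much VRAM does a 7B model need?"}],
)
print(response["message"]["content"])
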
esy1219 • Mar 15, 2024 • 149 Posts • Joined Apr 2013
Anyone have any idea what refurbished could mean here? Did they just repackage the GPU, or did something significant have to be changed, like a RAM chip? Also, would a refurb GPU be worth the risk? I'm assuming, as others have stated, that going for the 4070 would be better since the performance would be similar?
Meteo (Pro) • Mar 15, 2024 • 886 Posts • Joined Apr 2007
Quote from duijver :
What is the investment time + hardware to play around with your own GPT / LLM model?

I can play with GPT to see what it spits out - I am curious to hear from someone that has done it.

Once you get things working, you should check out oobabooga's UI. It's really popular in the community: https://github.com/oobabooga/text...tion-webui

It allows you to do all sorts of things, from trying different models and adjusting parameters to fine-tuning.

Timless • Mar 15, 2024 • 2,497 Posts • Joined May 2018
Quote from wpc :
That would just be too much GPU power. All gamers would be jealous.
Do any games actually support NVLink?


duijver • Mar 16, 2024 • 2,346 Posts • Joined Apr 2006
Quote from HappyAccident :
If you have a 24GB card, just download koboldcpp, which is a 250MB exec, and get a GGUF model off Hugging Face -- a 20B or 34B model is about 15GB -- then run it. Total time, including installing the card and drivers, is ~1hr.

Check out /r/localllama on reddit.
Thank you. I went with LM Studio to start off and it runs really fast on a MacBook Pro with an M3 Pro. I grabbed Mistral 7B to start out with since I'm not sure how high I can go with a MBP.
duijver • Mar 16, 2024 • 2,346 Posts • Joined Apr 2006
Quote from jpswaynos :
If you're on Windows you can get set up in minutes: https://lmstudio.ai/

Hardware minimums are pretty low to run a 7B-parameter LLM, but they ramp up substantially if you want to run a 30B or 60B-parameter LLM and get more than a couple of tokens/s.
Thank you! LM Studio is great and super easy to get started with. It doesn't seem as powerful as the paid OpenAI or Perplexity... but I just started with a 7B, so that's expected without any tuning. Around 30 minutes of playing, including downloading the model and throwing some code into VS Code.
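
LM Studio can also expose the loaded model through a local OpenAI-compatible server; a minimal sketch with the openai Python client (the port is LM Studio's documented default, but treat the model name and key as placeholder assumptions):

Code:
from openai import OpenAI  # pip install openai

# LM Studio's local server mode defaults to localhost:1234; the API key
# is not checked, but the client requires some value
client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

resp = client.chat.completions.create(
    model="local-model",  # placeholder; LM Studio serves whatever model is loaded
    messages=[{"role": "user", "content": "What is a GGUF file?"}],
)
print(resp.choices[0].message.content)
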
jason879 • Mar 16, 2024 • 92 Posts • Joined Nov 2004
Got a 3090 Ti last week for $799. Traded in a few AMD 50/60-series cards I don't use anymore. Good card for AI workloads. I'm planning to replace my existing fleet of 4x Tesla P40s with these.

It's super efficient at idle, consuming only 3W when not connected to any monitor.

Code:
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 550.54.14      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 3090 Ti     Off |   00000000:1B:00.0 Off |                  Off |
|  0%   40C    P8              3W /  450W |   20592MiB /  24564MiB |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
Last edited by jason879 March 15, 2024 at 06:38 PM.
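
For scripting that same power check instead of eyeballing nvidia-smi, a minimal sketch with NVIDIA's pynvml bindings (assumes the official nvidia-ml-py package; GPU index 0 is illustrative):

Code:
import pynvml  # pip install nvidia-ml-py

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU on the system
name = pynvml.nvmlDeviceGetName(handle)
power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # API reports milliwatts
print(f"{name}: {power_w:.1f} W")
pynvml.nvmlShutdown()
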
HappyAccident • Mar 16, 2024 • 511 Posts • Joined Feb 2021


Quote from duijver :
Thank you. I went with LM Studio to start off and it runs really fast on a MacBook Pro with an M3 Pro. I grabbed Mistral 7B to start out with since I'm not sure how high I can go with a MBP.
With a MBP you can go as high as your system RAM. The key to language models on GPU vs. CPU is memory bandwidth. A 3090 with 384-bit GDDR6X will give you around 930GB/s; a 2021 MacBook Pro with an M1 Pro gives you 200GB/s. So you can expect a ~4.5x speedup just from memory bandwidth if you run the same model on a 3090 instead of a MBP. And if you look at how the models process data, the big roadblocks are the data-transfer steps.

If you know what is going on inside a transformer model (I barely have a rough idea), it is computing by comparing different possibilities against its set of weights to ultimately find a response that fits within its constraints (the parameters you set when you load it -- not sure how LM Studio does it, but if you see a 'temperature' slider, that set of options is what I am talking about). To do this it has to run through the layers over and over until it pops out a token. That requires vast amounts of data to be evaluated constantly, and thus memory bandwidth is the key limiting factor.

Hope I didn't get too much wrong with that description.

This is all related only to inference, by the way (having the models compute responses), not training (teaching the models how to compute responses), which has similar constraints but is a different process and can rely on different factors.
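
The back-of-the-envelope math behind that ~4.5x figure: each generated token streams roughly the whole model's weights through memory once, so bandwidth divided by model size gives a rough tokens-per-second ceiling (the 15GB figure is the quantized model size mentioned earlier in the thread):

Code:
# Rough ceiling: tokens/s ≈ memory bandwidth / model size
model_size_gb = 15  # ~20B-34B quantized GGUF, per the earlier comment
for name, bw_gbs in [("RTX 3090 (GDDR6X)", 930), ("MacBook Pro (M1 Pro)", 200)]:
    print(f"{name}: ~{bw_gbs / model_size_gb:.0f} tokens/s upper bound")
# RTX 3090: ~62 tok/s vs. M1 Pro: ~13 tok/s -- about 4.6x, in line with
# the ~4.5x speedup quoted above
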
Frank_Nitty • Mar 16, 2024 • 7,359 Posts • Joined Jul 2016
Not a bad price, but a 4080 Super at MSRP would be a better alternative. The 3090 has 24GB of VRAM I would never use in its entirety, and my 6950 XT still suits me just fine.
MR_FLY_GUY • Mar 16, 2024 • 2,186 Posts • Joined Feb 2010
Quote from SehoneyDP :
Ah that makes sense ty. I got a 1440p monitor for now, so probably don't really see a need. I do feel like I could use more for PCVR though. I'll have to look at which one is better for that. I can still wait until the 5000 series are out though tbh.
16GB of VRAM is perfect for any resolution. You don't need 24.
sky0102 • Mar 16, 2024 • 849 Posts • Joined Jul 2011
Man, this used to be over $2.5k at the peak of the pandemic and the mining craze, bundled in an HP rig with a 10th-gen Intel.


big819 • Mar 16, 2024 • 80 Posts • Joined Apr 2022
How is Micro Center refurb quality? Years ago they were pretty bad, right? I mean, after all these years, are they better now?
