expired Posted by SehoneyDP • Mar 15, 2024, 4:09 PM
NVIDIA GeForce RTX 3090 Founders Edition Dual Fan 24GB GDDR6X GPU Card (Refurb)
$700 (Select Stores) + Free Store Pickup
Micro Center
Top Comments
Check out /r/localllama on reddit.
149 Comments
Our community has rated this post as helpful. If you agree, why not thank xlongx
Also, for scientific workloads you may want more VRAM, though I'm not sure whether VRAM or raw compute matters more for that kind of thing.
Our community has rated this post as helpful. If you agree, why not thank HappyAccident
I can play with GPT to see what it spits out, but I'm curious to hear from someone who has actually done it.
I'm not sure my Precision 7810 (dual Xeon v3 beasts) can handle the power requirements of the 3090; the 4070 Ti Super might be worth looking at instead, as it's more in line with the 2080S in there now.
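The PSU question above can be sanity-checked with simple arithmetic. The GPU board-power figures below are NVIDIA's published specs; the PSU wattage and the draw of the rest of the system are assumptions for illustration, so check your own unit before buying:

```python
# Back-of-envelope PSU headroom check.
# GPU figures are NVIDIA's published total board power (TGP).
# PSU wattage and "rest of system" draw are ASSUMPTIONS -- measure your own.
GPU_TDP_W = {
    "RTX 2080 Super": 250,
    "RTX 4070 Ti Super": 285,
    "RTX 3090": 350,
}

def headroom(psu_w: int, rest_of_system_w: int, gpu: str) -> int:
    """Watts left over after the GPU and the rest of the system."""
    return psu_w - rest_of_system_w - GPU_TDP_W[gpu]

# Hypothetical example: a 685 W PSU with dual Xeons and drives drawing ~250 W.
print(headroom(685, 250, "RTX 3090"))         # 85 W left -- tight for spikes
print(headroom(685, 250, "RTX 4070 Ti Super"))  # 150 W left
```

Transient power spikes on the 3090 can exceed its rated TGP, so thin headroom like the first case is a real concern.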
Hardware minimums are pretty low for running a 7B-parameter LLM, but they ramp up substantially if you want to run a 30B or 60B-parameter model and get more than a couple of tokens/s.
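The scaling above is easy to see with a rough weight-memory estimate. This is a hypothetical sketch that only counts the model weights (params × bits per weight), ignoring KV cache and runtime overhead, so real requirements are somewhat higher:

```python
# Rough VRAM needed just for LLM weights, ignoring KV cache and overhead.
# A model stores ~params * (bits_per_weight / 8) bytes of weights.

def vram_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GB for a quantized model."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for size in (7, 30, 60):
    for bits in (16, 8, 4):
        print(f"{size}B @ {bits}-bit: ~{vram_gb(size, bits):.1f} GB")
```

By this estimate a 7B model at 4-bit fits in ~3.5 GB, while a 60B model needs ~30 GB even at 4-bit, which is why a 24 GB card like the 3090 is the sweet spot for the 30B class.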