Buildapc | submitted 01 Jan 2024
I have been teaching myself Linux on really old hardware. I am looking into building a new system so I can learn SDXL and maybe mess around a little with LLMs.

I have been reading as much as I can, but get a lot of conflicting info. Ideally I would like a build I can get started with without being at the bare minimum, if possible: just the best value at a realistic starting point. I am willing to save up more if it will save me from waiting forever while my PC is maxed out, and I want options to expand easily as I go. I don't mind using used hardware. I have also read a bit about cheap enterprise hardware being an option that can expand easily?

Any help would be awesome. Thank you in advance.

P.S. Happy New Year! Wishing everyone all the best. After the past few years, we could all use a better one.

j4k3 | 3 points | 11 months ago

For LLMs the bigger models are super important. I got a twelfth-gen i7 with a 16 GB 3080 Ti in a laptop. That is 20 logical cores and DDR5, along with the largest GPU option that was available a few months ago... short of spending $4k. I upgraded my RAM to the max of 64 GB within a week; I wish I had picked a laptop that could address 96+ GB of system memory. The laptop form factor sucks. The fans blow all the time, and battery life with this monster GPU is less than an hour if it is running at all. The power supply also doubles as a hotplate.

Most AI stuff works over your network in a web browser, or on localhost on your own machine, so towers are better. If you are training a LoRA you will absolutely cook a GPU to the point where it thermal throttles. I put my laptop in front of a window AC unit blowing max cold and it barely stays below 90°C. Towers and cooling are important, as are the number of available logical cores and RAM. You want the absolute max GPU you can afford.
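
If you want to see whether your GPU is actually creeping toward throttle territory during a run, you can just watch the temps while you train. A minimal sketch, assuming an NVIDIA card with nvidia-smi on the PATH and a single GPU (the 87°C warning threshold is my illustrative pick, not a spec):

```python
# Minimal sketch: poll GPU temperature and utilization during a long run.
# Assumes an NVIDIA GPU with nvidia-smi on PATH; single-GPU output assumed.
import subprocess
import time

while True:
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=temperature.gpu,utilization.gpu",
         "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    temp, util = out.split(", ")
    print(f"GPU: {temp} C, {util}% util")
    if int(temp) >= 87:  # warn a few degrees before typical throttle points
        print("Getting close to thermal throttle territory")
    time.sleep(10)
```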

If I could do this again, I would look into a real workstation with 256 GB+ of system memory and an enterprise CPU that supports the most current AVX-512 instructions possible (a supported feature in the Llama 2 model loaders), and I would get a 24 GB GPU.
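
If you want to check which AVX-512 extensions a machine actually reports before committing to it, here is a minimal sketch, assuming Linux (it reads /proc/cpuinfo, so it won't work on other OSes):

```python
# Minimal sketch: list the AVX-512 extensions a Linux CPU reports.
# Assumes Linux (/proc/cpuinfo); other OSes need a different source.
import re

with open("/proc/cpuinfo") as f:
    flags = set(re.findall(r"avx512\w*", f.read()))

print("AVX-512 extensions:", ", ".join(sorted(flags)) or "none reported")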

As far as I know the largest open source model right now is a 180B model. Every weight is 2 bytes at 16-bit precision, so you would need ~360 GB of memory to make that work. Do you need this? Maybe not, but I would LOVE to be able to try that model. After running a 70B and finding a few of them that I like, it is all I run. There is no comparison in output quality between even a 33B and a 70B. Bigger is much better. All the smaller stuff needs training and tweaking to make it work well. Don't trust benchmarks or basic reviews on YT. Ask someone who is actually using models in practice.
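
For anyone checking the math, a rough weights-only sketch of that estimate: 16-bit means 2 bytes per weight, and 4-bit quantization cuts that to roughly a quarter. Real memory use is higher once you add the KV cache and loader overhead.

```python
# Rough weights-only memory estimate: parameters x bytes per weight.
# Ignores KV cache, activations, and loader overhead, which add more.
def weights_gb(params_b: float, bytes_per_weight: float) -> float:
    return params_b * bytes_per_weight  # billions of params x bytes = GB

for params in (180, 70, 33):
    print(f"{params}B: ~{weights_gb(params, 2):.0f} GB at 16-bit, "
          f"~{weights_gb(params, 0.5):.0f} GB at 4-bit quant")
```

That is why a quantized 70B (~35 GB of weights) is about the practical ceiling for a 24 GB GPU plus a decent chunk of system RAM, while the 180B stays out of reach for most home builds.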