This type of thing is mostly used for inference with extremely large models, where a single GPU has far too little VRAM to even load the model into memory. I doubt people are expecting this to perform particularly fast; they just want to get a model to run at all.
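For what it's worth, if the goal really is just "get it to run," something like Hugging Face's accelerate-backed device mapping already does the layer spilling for you. A rough sketch (the model id and dtype below are placeholders, not a recommendation):

    # Let accelerate place layers on the GPU first, then overflow to CPU RAM
    # and finally to disk. Slow, but it gets a too-big model to load at all.
    import torch
    from transformers import AutoModelForCausalLM

    model_id = "some-large-open-weight-model"  # placeholder id

    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",          # fill GPU, then CPU, then disk
        torch_dtype=torch.float16,  # half precision roughly halves the footprint
        offload_folder="offload",   # where weights go when nothing else fits
    )

    print(model.hf_device_map)  # shows which layers ended up on which device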
I'm a researcher in ML, and LLMs absolutely fall under ML. "Learning" in the term "Machine Learning" just means fitting the parameters of a model, so it's really just an optimization problem. In the case of an LLM, this means fitting the parameters of the transformer.
A model doesn't have to be intelligent to fall under the umbrella of ML. Linear least squares is considered ML; in fact, it's probably the first thing you'll do if you take an ML course at a university. Decision trees, nearest neighbor classifiers, and linear models all are machine learning models, despite the fact that nobody would consider them to be intelligent.
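As a toy example (made-up data), the "learning" in linear least squares is nothing more than solving for the parameters that minimize squared error:

    # Linear least squares as "machine learning": fit weights and a bias that
    # minimize squared error on the data. Nothing intelligent going on.
    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))                     # 100 samples, 3 features (made up)
    y = X @ np.array([2.0, -1.0, 0.5]) + 3.0 + rng.normal(scale=0.1, size=100)

    X_b = np.hstack([X, np.ones((100, 1))])           # append a bias column
    params, *_ = np.linalg.lstsq(X_b, y, rcond=None)  # "learn" the parameters

    print(params)  # roughly [2.0, -1.0, 0.5, 3.0]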
Yeah, I agree that it does help for some approaches that do require a lot of VRAM. If you're not on a tight schedule, this type of thing might be good enough to just get a model running.
I don't personally do anything that large; even the diffusion methods I've developed were able to fit on a 24GB card, but I know with the hype in multimodal stuff, VRAM needs can be pretty high.
I suspect this machine will be popular with hobbyists for running really large open weight LLMs.
I'm glad to hear yours have been holding up! Maybe my friends and I were just particularly unlucky.
The service manuals are available direct from Dell. For all the laptop's faults in my experience, I do appreciate that the SSDs are socketed, as are the RAM sticks on the 15. I do also appreciate that Dell sells replacement batteries (and they aren't glued in either!) as that's usually the first part to need a swap.
Useless is a strong term. I do a fair amount of research on a single 4090. Lots of problems can fit in <32 GB of VRAM. Even my 3060 is good enough to run small scale tests locally.
I'm in CV, and even with enterprise grade hardware, most folks I know are limited to 48GB (A40 and L40S, substantially cheaper and more accessible than A100/H100/H200). My advisor would always say that you should really try to set up a problem where you can iterate within a few days' worth of time on a single GPU, and lots of problems are still approachable that way. Of course you're not going to make the next SOTA VLM on a 5090, but not every problem is that big.
Yep this is the exact issue. This problem comes up frequently in a first discrete math or formal mathematics course in universities, as an example of how subtle mistakes can arise in induction.
Exactly, the assumption (known as the inductive hypothesis) is completely fine by itself and doesn't represent circular reasoning. The issue in the "proof" actually arises from the logic that comes after it, where they assume they can form two different overlapping sets by removing a different horse from the total set of horses. That fails when n=1: the two sets each contain a single, distinct horse, so they don't overlap at all.
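Spelled out in my own notation, the step in question is:

    A = \{h_1, \dots, h_n\}, \qquad B = \{h_2, \dots, h_{n+1}\}, \qquad
    A \cap B = \{h_2, \dots, h_n\}.
    % By the inductive hypothesis A and B are each monochromatic, and the overlap
    % is what ties their colors together. For n >= 2 the intersection is nonempty,
    % but for n = 1 we get A = \{h_1\}, B = \{h_2\}, and A \cap B = \emptyset,
    % so nothing links the color of h_1 to the color of h_2.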
I haven't used the XPS 13 personally, but my experience, and all my friends' experience, with the XPS lineup is that despite their apparent build quality, they're quite prone to failure. On my 15, the keyboard failed multiple times, as did one of the fans and eventually one Thunderbolt port, all within a span of 4 years.
They're beautiful machines that really should be high quality, but in practice, for whatever reason, they haven't lasted for me. On the plus side, Dell does at least offer service manuals, and lots of parts can be replaced by the user (on the 15 you can easily replace the fans, RAM, and SSDs, and with some work you can replace the top deck, display, and SD reader).
I'm fairly certain blockchain GPUs have very different requirements than those used for ML, let alone LLMs. In particular, they don't need anywhere near as much VRAM, generally don't require floating point math, and don't need features like tensor cores. Those "blockchain GPUs" likely didn't turn into ML GPUs.
ML has been around for a long time. People have been using GPUs in ML since AlexNet in 2012, not just after blockchain hype started to die down.
I think what they meant by that is "is this different wrt antitrust compared to Intel and x86?"
Intel both owns the x86 ISA and designs processors for it, though the situation is more favorable in that AMD owns x86-64 and obviously also designs their own processors.
I would say that in comparison to the standards used for top ML conferences, the paper is relatively light on the details. But nonetheless some folks have been able to reimplement portions of their techniques.
ML in general has a reproducibility crisis. Lots of papers are extremely hard to reproduce, even when they're open source, since the optimization process is partly random (ordering of batches, augmentations, nondeterminism in GPUs, etc.), and unfortunately, even with seeding, the randomness is not guaranteed to be consistent across platforms.
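For reference, the usual PyTorch seeding boilerplate looks roughly like the sketch below, and even with all of it in place, runs are still not guaranteed to match across GPUs, driver versions, or platforms:

    # Best-effort reproducibility in PyTorch; still not a cross-platform guarantee.
    import random
    import numpy as np
    import torch

    def seed_everything(seed: int = 42) -> None:
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)  # also seeds all CUDA devices
        torch.backends.cudnn.benchmark = False
        torch.backends.cudnn.deterministic = True
        # Warn (or error) when an op has no deterministic implementation.
        torch.use_deterministic_algorithms(True, warn_only=True)

    seed_everything(42)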
It really depends on what you're looking for. Are you just looking to learn how to print new materials, or do you have specific requirements for a project?
If it's the former, I'd say the easiest thing to try is PETG. It prints pretty reasonably on most printers, though it has stringing issues. It has different mechanical properties than PLA that make it suitable for other applications (for example, better temperature resistance and impact strength). It'll be much less frustrating than trying to dial in ABS for the first time.
ABS and TPU are both a pretty large step up in difficulty, but are quite good for functional parts. If you insist on learning one of these, pick whichever fits your projects better. For ABS you'll want an enclosure and a well-ventilated room (IMO I wouldn't stay in the same room as the printer) as it emits harmful chemicals during printing.