HackyHorse3000

joined 1 year ago
[–] HackyHorse3000 16 points 4 months ago

I think the main problem is applying LLMs outside the domain of "complete this sentence." They're fine for what they are, and trained on huge datasets they obviously appear impressive, but they don't know whether they're right or wrong, and they have to be evaluated differently. In most traditional applications of neural networks you have datasets with right and wrong answers; these aren't trained that way, because there is no "right" answer to "tell me a joke." So training has to be based on what would plausibly fill in the blank. That could be an actual joke, a bad joke, or a completely different topic; the training data makes no distinction. The biases, the incorrect answers, all the faults of that massive dataset are inherent in the model, and there's no fixing that. They are fundamentally different in their application, training, and evaluation methods from other neural networks that are actually effective at what they do, like image processing and identification. The scope of what they're trying to do with a finite dataset is unrealistic and entirely unconstrained, compared to more "traditional" neural networks, which are very narrow in scope precisely because of this issue.
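A minimal sketch of that difference (hypothetical, assuming PyTorch; the toy vocabulary and values are made up purely for illustration):

```python
# Hypothetical toy setup (assumes PyTorch); nothing here is real model code.
import torch
import torch.nn.functional as F

# Traditional supervised setup: every example has a known correct label,
# so the loss directly measures "right vs. wrong".
logits = torch.randn(1, 10)        # model scores over 10 classes
label = torch.tensor([3])          # ground truth: class 3 IS the answer
supervised_loss = F.cross_entropy(logits, label)

# LLM setup: identical loss function, but the target is just whatever
# token happened to come next in the training text. A good joke, a bad
# joke, or an off-topic reply are all equally "correct" if the corpus
# contains them.
vocab = {"tell": 0, "me": 1, "a": 2, "joke": 3, "knock": 4, "orange": 5}
next_token_logits = torch.randn(1, len(vocab))
next_token = torch.tensor([vocab["knock"]])   # the corpus happened to say "knock"
lm_loss = F.cross_entropy(next_token_logits, next_token)

print(supervised_loss.item(), lm_loss.item())
```

The point of the sketch: both setups use the exact same loss function, but only the first one ever encodes a notion of "correct".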

[–] HackyHorse3000 26 points 4 months ago (2 children)

That's the thing, though: that's not comparable, and it misses the point entirely. "AI" in this context, and in current conversations about it, refers specifically to LLMs. They will not improve to the point of general intelligence, because that is not how they work. Hallucinations are inevitable with the current architectures and methods, and the models lack an inherent understanding of concepts in general. It's the same reason they can't do math or logic problems that aren't common in the training set. It's not intelligence. Modern computers are built on the same principles and architectures as those calculators were, just iterated upon extensively; no such leap is possible with large language models. They are entirely reliant on a finite pool of data, which they try to mimic as effectively as possible; they are not learning or understanding concepts the way a "Full-AI" would need to in order to actually be reliable or to generate new ideas.
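A toy sketch of why likelihood-only mimicry produces hallucinations (hypothetical; the corpus and bigram sampler are made up for illustration):

```python
import random

# Tiny training "corpus" -- every sentence is fine on its own.
corpus = [
    "the moon orbits the earth",
    "the earth orbits the sun",
    "the sun is a star",
]

# Bigram table: which words followed which in the corpus.
follows = {}
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows.setdefault(a, []).append(b)

# Sample a continuation: every individual step was seen in training,
# but chains like "the earth orbits the moon" are reachable even
# though no source sentence ever said that.
word, out = "the", ["the"]
for _ in range(5):
    if word not in follows:
        break
    word = random.choice(follows[word])
    out.append(word)
print(" ".join(out))
```

Every step the sampler takes is a continuation that really occurred in the training data, yet it can chain them into statements no source ever made, and nothing in the procedure can flag that.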

[–] HackyHorse3000 16 points 6 months ago (4 children)

Their sibling is 27 in this case, I think well past the age of "gullible young".

[–] HackyHorse3000 5 points 7 months ago (2 children)

Looks like a Lego rocket with a cloud lamp as exhaust, with a dank dragon to boot

[–] HackyHorse3000 1 point 9 months ago

Kagi basically did that with Small Web

[–] HackyHorse3000 49 points 10 months ago (5 children)

Neal.fun is perfect for this exact scenario.

[–] HackyHorse3000 7 points 11 months ago (1 children)

Yeah, that's generally my take as well. You can't exactly expect people to make, curate, and host content without any kind of funding, and despite what people may claim, only a very low percentage of people will actively pay for what they consume.

[–] HackyHorse3000 1 point 1 year ago* (last edited 1 year ago)

Fair point, I did misread that. But it seems you're acting in bad faith with just one source again. Any search of the published literature provides evidence for the efficacy and cost-effectiveness of masks as an adjunct preventative measure. It seems rather like cherry-picking to trust the one place that goes against the grain, no?

[–] HackyHorse3000 2 points 1 year ago (2 children)

I know you're being combative, so it's unlikely, but did you actually read both sources? One is a review of around 70 studies from before and during the pandemic, some unpublished. The other is a review of 5,000 articles which found statistically significant results.

[–] HackyHorse3000 12 points 1 year ago

You'd love Technology Connections on YouTube if you haven't seen him already.