jacksilver

joined 1 year ago
[–] jacksilver 12 points 5 days ago

Also "win + - > or <-" to move a tile to left or right side.

[–] jacksilver 6 points 5 days ago

Definitely sounds like it could be real. If I had to guess, they're mounting a drive (or another partition) and it's defaulting to read-only. Restarting resets the original permissions because they only updated the file permissions, not the mount configuration.

Also reads like some of my frustrations when first getting into Linux (and the issues I occasionally run into still).
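If that guess is right, the fix lives in the mount configuration rather than in file permissions. A minimal sketch, assuming the drive is mounted at /mnt/data (the path, UUID, and filesystem type are placeholders, not from the original story):

```
# remount read-write for the current session only
sudo mount -o remount,rw /mnt/data

# to make it survive reboots, mark it rw in /etc/fstab, e.g.:
# UUID=<your-drive-uuid>  /mnt/data  ext4  defaults,rw  0  2
```

Without the fstab change, any chmod/chown on the files gets undone the moment the drive is mounted read-only again at boot.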

[–] jacksilver 2 points 5 days ago

Yeah, that's what I was thinking. Just need to throw some foil on it and you've got a very expensive new buddy.

[–] jacksilver 19 points 6 days ago (6 children)

Anyone have a high level breakdown of what this update contains?

[–] jacksilver 9 points 6 days ago

These are just the estimates to train the model, so they don't account for the cost of developing the training system, collecting the data, etc. It's pure processing cost, and the numbers are still staggeringly large.

[–] jacksilver 2 points 6 days ago

To be fair, the casting of both Tilda Swinton as The Ancient One and Johnny Depp as Tonto was criticized when those movies were released. Probably not to the level of Halle Bailey's casting, or by the same people, but both were definitely seen as whitewashing.

It's also likely The Ancient One's casting got as much attention as it did due to the political nature of the change (seen as appeasing China over its history regarding Tibet).

[–] jacksilver -1 points 6 days ago (1 children)

I think you're missing the point. No LLM can do math, most humans can. No LLM can learn new information, all humans can and do (maybe to varying degrees, but still).

And just to clarify what I mean by "not able to do math": there's a lack of understanding of how numbers work, so combining numbers or values outside the training data can easily trip them up. Since it's prediction-based, exponents/trig functions/etc. will quickly produce errors with large values.
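To make that concrete, here's the kind of arithmetic that's trivial for a few lines of code but would have to be predicted token-by-token by an LLM (a minimal Python sketch; the specific values are arbitrary examples I picked, not from any benchmark):

```python
import math

# Exact answers a calculator produces instantly, but which a pure
# next-token predictor must "guess" digit-by-digit unless similar
# values happened to appear in its training text.
print(7 ** 23)              # exact large integer exponentiation
print(math.sin(123456789))  # trig of a large, arbitrary argument
```
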

[–] jacksilver 10 points 1 week ago (3 children)

Here's an easy way we're different: we can learn new things. LLMs are static models; that's why OpenAI mentions knowledge cut-off dates for their models.

Another is that LLMs can't do math. Deep learning models are limited to their input domain; when you ask an LLM to do math outside its training data, it's almost guaranteed to fail.

Yes, they are very impressive models, but they're a long way from AGI.

[–] jacksilver 3 points 1 week ago

Yeah, dGPUs have been for niche applications for decades. I didn't read the article, but the parent comment is vastly overestimating iGPU capabilities.

[–] jacksilver 11 points 1 week ago (4 children)

LLMs do suck at math. If you look into it, the o1 models actually escape the LLM output and write a Python function to calculate the answer. I've been able to break their math abilities by asking for calculations that use math not in the standard Python library.

I know someone also wrote a Wolfram integration to help solve LLMs' math problems.
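The "escape the LLM output" pattern can be sketched like this (purely illustrative — the variable names and harness are my own, not OpenAI's actual tooling):

```python
import math

# Hedged sketch: instead of trusting the model's arithmetic, a harness
# executes code the model emitted as text and reports the exact result.
model_output = "result = math.factorial(25)"  # pretend the model wrote this

namespace = {"math": math}
exec(model_output, namespace)  # the harness runs the snippet
print(namespace["result"])     # exact answer, no token-by-token guessing
```

This also shows why asking for math outside the standard library breaks it: if the generated snippet imports a package the execution sandbox doesn't have, the escape hatch fails.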

[–] jacksilver 3 points 1 week ago

Even if we ignore the fact that it's talking about emissions goals while the metrics are in Celsius, a good graphic would include an indication of whether they're meeting their goals, either by splitting into two groupings or adding a column that provides a quick way to see that information.

[–] jacksilver 4 points 1 week ago (1 children)

Not sure if you're serious, but they were making a joke: Intel, which makes chips, is a competitor to TSMC, the chip manufacturer from the article.

So they played on that relationship by treating the word Intel in your "thanks for the Intel" comment as meaning the company.
