UncommentedCode

[–] [email protected] 5 points 1 year ago

Any hyrule well

[–] [email protected] 16 points 1 year ago

The issue I have with LLMs is that while they are great at certain tasks, they are bad at anything, let's call it, factual, due to their nature.

I can, for example, use one to quickly draft up an email or a piece of Python code, and I can immediately see whether or not the response it generated is actually what I want.

If I ask it what the hottest day in a given country was, or ask it to explain something, I have absolutely no idea whether it's bullshit or not, and I have to double-check it anyway.

I think the learning curve with LLMs as a tool is knowing when to use them and when to rely on other sources instead.

[–] [email protected] 3 points 1 year ago

Well, in those cases I just scrolled past. It wasn't a common enough occurrence for me.

[–] [email protected] 21 points 1 year ago* (last edited 1 year ago) (3 children)

Ironically, that was one of the features I actually really liked. Seeing the same post two or three times didn't really matter to me, since if it was posted in different communities, there was a wider variety of responses and perspectives (or I could just scroll past it).

Also, it let me discover new communities that I wasn't aware existed.