[–] [email protected] 2 points 3 hours ago (1 children)

Because the tools are here and not going away

I agree with this on a global scale; I was thinking about it on a personal scale. In the context of the entire world, I do think the tools will be around for a long time before they ever fall out of use.

The actually useful shit LLMs can do.

I'll be the first to admit I don't know many use cases for LLMs. I don't use them, so I haven't explored what they can do. Since my experience is only my own, I'm certain there are uses of LLMs that I haven't considered. I'm personally of the opinion that I won't gain anything from LLMs that I can't get elsewhere; however, if a tool helps you more than any other method, then that tool could absolutely be useful.

[–] [email protected] 2 points 2 hours ago (1 children)

My 2 cents on this.

I never used LLMs until recently, not for moral or ideological reasons but because I had never felt much need to. I also remember that when ChatGPT originally came out it asked for my phone number, and that's a hard no from me.

But a few months ago I decided to give it another go (no phone number required now), and found it quite useful sometimes. However, before I explain how I use it and why I find it useful, I have to point out that this is only the case because of how crap search engines are nowadays, with pages and pages of trash results and articles.

Basically, I use it as a rudimentary search engine to help me solve technical problems sometimes, or to clear something up that I'm having a hard time finding good results for. In this way, it's also useful to get a rudimentary understanding of something, especially when you don't even know what terms to use to begin searching for something in the first place. However, this has the obvious limitation that you can't get info for things that are more recent than the training data.

Another thing I can think of is that it might be quite useful if you want to learn and practice another language, since language is what it does best, and it can work as a sort of pen pal, fixing your mistakes if you ask it to.

In addition to all that, I've seen people make what are essentially text-based adventure games that allow much more freedom than traditional ones, since you don't have to plan everything yourself; you can just give it a setting and a set of rules to follow, and it will mould the story as the player progresses. Basically DnD.
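For anyone wondering what that looks like in practice, here's a rough sketch, assuming Python and the official openai client (the model name, setting, and rules are just placeholders, and any chat-style LLM API would work the same way): the setting and rules go into the system prompt, and each player action and model reply gets appended to the conversation so the story carries forward.

```python
# Minimal text-adventure loop: the system prompt holds the setting and rules,
# and the growing message list is what lets the model keep the story consistent.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

messages = [
    {
        "role": "system",
        "content": (
            "You are the narrator of a text adventure set in a rain-soaked port town. "
            "Rules: describe each scene in a short paragraph, never act on the player's "
            "behalf, and end every reply with 'What do you do?'"
        ),
    }
]

while True:
    action = input("> ")
    if action.lower() in {"quit", "exit"}:
        break
    messages.append({"role": "user", "content": action})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=messages,
    )
    text = reply.choices[0].message.content
    print(text)
    messages.append({"role": "assistant", "content": text})
```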

[–] [email protected] 1 points 4 minutes ago

Basically, I use it as a rudimentary search engine

The other day I had a very obscure query where the web page results were very few and completely useless. Reluctantly, I looked at Google's LLM-generated "AI Overview" or whatever it's called. What it came up with was completely plausible, but utter bullshit. After a quick look I could see that it had taken text answering a similar question and just woven some of the words I was looking for into the answer in a plausible way. Utterly useless; it just ended up wasting my time checking that it was useless.

Another thing I can think of, is that it might be quite useful if you want to learn and practice another language

No, it's terrible at that. Google's translation tool uses an LLM-based design. It's terrible because it doesn't understand the context of a word or phrase.

For instance, a guy might say to his mate: "Hey, you mad cunt!" Plug that into an LLM translation and you don't know what it might come up with. In some languages it actually translates to something that will translate back to "Hey, you mad cunt". In Spanish it goes for "Oye, maldita sea", which is basically "Hey, dammit", which is not the sense it was used in at all. Shorten that to "Hey, you mad?" and you get the problem that "mad" could mean crazy or angry, depending on the context and the dialect. If you were talking with a human, they might ask you for context cues before translating, but an LLM just picks the most probable translation and goes with that.

If you use a longer conversational interface, it will get more context, but then you run into the problem that there's no intelligence there. You're basically conversing with the equivalent of a zombie. Something's animating the body, but the spark of life is gone. It's also designed never to be angry, never to be sad, never to be jealous; it's always perky and pleasant. So it might help you learn a language a bit, but you're learning the zombified version of the language.

Basically DnD.

D&D by the world's worst DM. The key thing a DM brings to a game is that they're telling a story. They've thought about a plot. They have interesting characters that advance that plot. They get to know the players so they know how to subvert their expectations. The hardest thing for a DM to deal with is a player doing something unexpected. When that happens, they need to adjust everything so that what happens still fits the world they're imagining, and try to nudge the players back to the story they've built. An LLM will just happily continue generating text that meets the heuristics of a story. But that basically means the players have no real agency. Nothing they do has real consequences, because you can't affect the plot of the story when there's no plot to begin with.

And what if you just use an LLM for dialogue in a game where the story/plot was written by a human? That's fine until the LLM generates plausible dialogue that's "wrong". Say the player is investigating a murder and talks to a guard. In a proper game, the guard might not know the answer, or might know the answer and lie, or might know the answer but not be willing or able to tell the player. But if you put an LLM in there, it can generate a plausible response from a guard, and that response might match one of those scenarios, but it has no concept that this guard is "an honest but dumb guard" or "a manipulative guard who was part of the plot". If the player comes and talks to the guard again, will they still be that same character, or will the LLM just generate more plausible guard dialogue that goes against the previous "personality" of that guard?