this post was submitted on 24 May 2024
609 points (97.1% liked)

Technology

33645 readers
98 users here now

This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in DM before posting product reviews or ads. All such posts otherwise are subject to removal.


Rules:

1: All Lemmy rules apply

2: Do not post low effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants of Big Tech CEOs like Elon Musk are unwelcome (does not include posts about their companies affecting wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 5 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] Downcount 30 points 1 month ago (4 children)

The funny thing is, if you point out its mistakes, it often does better on subsequent attempts.

Or it gets stuck in an endless loop, alternating between two different but equally wrong solutions.

Me: This is my system, version x. I want to achieve this.

ChatGPT: Here's the solution.

Me: But this only works with version y of the given system, not x.

ChatGPT: Try this.

Me: This is using a method that never existed in the framework.

ChatGPT:

[–] [email protected] 13 points 1 month ago
  1. "Oh, I see the problem. In order to correct (what went wrong with the last implementation), we can (complete code re-implementation which also doesn't work)"
  2. Goto 1
[–] UberMentch 8 points 1 month ago (1 children)

I used to have this issue more often as well. I've had good results recently by **not** pointing out mistakes in replies, but by going back to the message before GPT's response and editing it to say "do not include y."
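In API terms, that amounts to resending a revised conversation rather than appending a correction, so the bad attempt never stays in the context window. A minimal sketch, assuming the official `openai` Python package (v1+); the model name and prompts are purely illustrative:

```python
# Minimal sketch: revise-and-resend instead of correcting in a reply.
# Assumes the official `openai` package (v1+) and OPENAI_API_KEY in the env.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single-message conversation and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

first_try = ask("Write a parser for this log format.")

# Bad output? Instead of replying "don't use y", revise the original
# prompt and start over -- the wrong attempt never enters the context.
second_try = ask("Write a parser for this log format. Do not use y.")
```

Editing a message in the ChatGPT UI does effectively the same thing: the conversation forks from the edited message, so the model never sees its earlier wrong answer.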

[–] [email protected] 5 points 1 month ago

Agreed, I send my first prompt, review the output, smack my head “obviously it couldn’t read my mind on that missing requirement”, and go back and edit the first prompt as if I really was a competent and clear communicator all along.

It’s actually not a bad strategy, because the model can make some adept assumptions about requirements that would otherwise have seemed pertinent to spell out. So instead of typing out every requirement you can think of, you speech-to-text* a half-assed prompt and then know exactly what to fix a few seconds later.

*[ad] the free Ecco Dictate app on iOS, TypingMind’s built-in dictation… anything using OpenAI Whisper has godly accuracy. Btw, TypingMind is great - stick in GPT-4o and Claude 3 Opus API keys and boom.
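Under the hood, those dictation tools are roughly one API call. A minimal sketch, assuming the official `openai` Python package; the audio file name is made up:

```python
# Minimal sketch: transcribe a rough voice memo with OpenAI's Whisper API.
# Assumes the official `openai` package (v1+) and OPENAI_API_KEY in the env;
# the file name is illustrative.
from openai import OpenAI

client = OpenAI()

with open("half_assed_prompt.m4a", "rb") as audio:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",  # OpenAI's hosted Whisper model
        file=audio,
    )

print(transcript.text)  # paste this as the rough first draft of your prompt
```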

[–] Boozilla 4 points 1 month ago (1 children)

Ha! That definitely happens sometimes, too.

[–] [email protected] 1 points 1 month ago

But only sometimes. Not often enough to keep me from finding it more useful than not.

[–] BrianTheeBiscuiteer 2 points 1 month ago

While explaining BTRFS, I've seen ChatGPT contradict itself in the middle of a paragraph. When I call it out, it apologizes and then contradicts itself again with slightly different verbiage.