this post was submitted on 02 Oct 2023
155 points (88.9% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask via DM before posting product reviews or ads; otherwise, such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. This helps blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts, unless essential, are disallowed

founded 5 years ago
[–] [email protected] 1 points 1 year ago (1 children)

You can't feed it perceptions any more than you can feed me your perceptions. You give it text, and the quality of the output is determined by how the LLM has been trained to understand that text. If by "feeding it perceptions" you mean what it's trained on, I have to remind you that the reality GPT is trained on is the one dictated by the internet, with all of its biases. The internet is not a reflection of reality; it's how many people escape from reality and share information, and it's highly subject to survivorship bias. If information doesn't appear on the internet, GPT is unaware of it.

To give an example: if GPT gives you a bad output and you tell it that it's a bad output, it will apologise. This seems smart, but it isn't really. It doesn't actually feel remorse; it's generating a likely-sounding response based on how it has interpreted your text.
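To make that concrete, here is a minimal sketch of why the "apology" is just conditioned text. Chat-style LLM APIs are stateless: the model sees a serialized transcript each turn and predicts a plausible next reply. The `fake_llm` function below is a hypothetical stand-in (not any real API) that crudely pattern-matches the last message, but the structure of the loop mirrors how real chat endpoints are used:

```python
# Sketch of a stateless multi-turn chat. The "model" never remembers
# anything or feels anything; it only sees the transcript it's given.

def fake_llm(messages: list[dict]) -> str:
    # Hypothetical stand-in for a real model call. A real LLM predicts
    # the next reply from the transcript alone; here we pattern-match
    # the last user turn to illustrate the same idea.
    last = messages[-1]["content"].lower()
    if "wrong" in last or "bad" in last:
        return "Apologies, you're right. Let me correct that."
    return "Paris is the capital of France."

transcript = [{"role": "user", "content": "What is the capital of France?"}]
transcript.append({"role": "assistant", "content": fake_llm(transcript)})

# The user pushes back; the whole transcript is fed in again.
transcript.append({"role": "user", "content": "That's wrong."})
reply = fake_llm(transcript)  # the "apology" is just conditioned output
```

The apology appears because "user complains → assistant apologises" is a common pattern in the training data and the transcript, not because anything was felt.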

[–] LemmysMum 1 points 1 year ago* (last edited 1 year ago) (1 children)

We're not talking about perceptions as in making an AI literally perceive anything. I can feed you prompts and ideas of my own and get an output no different from what I'd get using AI tools; the difference is that AI tools have already gathered the collective knowledge you'd get from, say, doing a Photoshop course, taking an art class, reading an encyclopaedia or a novel, going to school for music theory, etc.

[–] [email protected] 1 points 1 year ago (1 children)

I get that part, but I think what gets taken more seriously is how "human" the responses seem, which is a testament to how good the LLM is. But that's set dressing when GPT has been known to give incorrect, outdated, or contradictory answers. Not always, but unless you know what kind of answer to expect, you have to verify what it tells you, which means you'll spend half your time fact-checking the LLM.

[–] LemmysMum 1 points 1 year ago* (last edited 1 year ago)

Exactly. How is the end result not the user's own if they need to craft, modify, adjust, and manipulate the prompts, inputs, and outputs of the AI to produce something new or coherent?

It's just a tool: one that will improve access to human knowledge and improve each individual's ability to create and produce more complex works with less effort. Each of those works will, in turn, feed back into the algorithms, expanding the knowledge and capacity of both AI and human ingenuity.
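The craft-adjust-judge workflow described in this thread can be sketched as a simple refinement loop. Both `generate` and `acceptable` below are hypothetical stand-ins (a trivial stub model and the user's own taste, respectively); the point is only the shape of the iteration, not any real tool's API:

```python
# Sketch of iterative prompt refinement: the user, not the model,
# supplies the judgement and the adjustments.

def generate(prompt: str) -> str:
    # Stub "model": in reality this would be an image model or LLM.
    return f"output({prompt})"

def acceptable(output: str) -> bool:
    # Stand-in for the user's own judgement of the result.
    return "detailed" in output

prompt = "a landscape"
for _ in range(5):                   # bounded number of refinement attempts
    result = generate(prompt)
    if acceptable(result):
        break
    prompt += ", more detailed"      # the user adjusts the prompt and retries
```

With the stub above, the first output is rejected, the prompt is refined once, and the second output passes: the final result reflects the user's iteration as much as the tool's generation.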