this post was submitted on 23 Sep 2024
174 points (94.8% liked)

Technology

top 22 comments
[–] [email protected] 89 points 1 month ago

OpenAI: Here's a new model that can think in steps and reason about things!

User: How did you conclude this is the correct answer?

OpenAI: No! Not like that! banhammer

[–] [email protected] 87 points 1 month ago (3 children)
[–] glitchdx 25 points 1 month ago (1 children)

Did anyone ever actually assume that "open" wasn't a lie?

[–] [email protected] 38 points 1 month ago (1 children)

When I first heard about it, I thought it was some open source project because of the name. :(

[–] Womble 12 points 1 month ago

It was, originally. GPT-2 was eventually released after some pushback from OpenAI, and the models prior to that were fully released immediately. It's been apparent for quite a while that OpenAI has been transitioning from a non-profit org interested in pushing technology forward to a VC-backed, monopoly-seeking company. The big Altman putsch/counter-putsch was just the solidifying of that.

[–] TheBat 15 points 1 month ago

Open, not like a library, but like Sandworm's mouth.

[–] T00l_shed 12 points 1 month ago
[–] [email protected] 75 points 1 month ago (2 children)

Please be applicable to business accounts, Please be applicable to business accounts, Please be applicable to business accounts, Please be applicable to business accounts,

I want to get rid of this shit so bad. If another junior dev submits a shit MR they can't explain because they had ChatGPT write it, I'm going to explode. Also, the number of AI executives we have in charge of our manufacturing company is somehow higher than the number we have in charge of manufacturing, and guess what?! They're all MBAs who haven't written a goddamn line of code in their lives but have become professional "prompt engineers".

[–] yemmly 42 points 1 month ago (1 children)

Every time I hear someone talking up prompt engineering, I feel like I should say something. But I don’t.

[–] elrik 28 points 1 month ago

"Prompt engineering" must be the easiest job to replace with AI. You can simply ask an LLM to generate and refine prompts.
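The loop the comment describes can be sketched in a few lines. This is a minimal, hypothetical sketch: `call_llm` stands in for whatever chat-completion API you'd actually use, and is stubbed here so the example runs without network access or credentials.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; a real version would hit a chat API.

    Stubbed: pretend the model 'improves' the prompt by appending guidance.
    """
    return prompt + "\nBe concise and cite sources."


def refine_prompt(task: str, rounds: int = 2) -> str:
    """Ask the 'model' to iteratively rewrite a prompt for a task."""
    prompt = f"Write a prompt that makes an assistant do this task well: {task}"
    for _ in range(rounds):
        prompt = call_llm(prompt)
    return prompt


print(refine_prompt("summarize a changelog"))
```

Swap the stub for a real API call and the "prompt engineer" is out of the loop entirely.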

[–] [email protected] 9 points 1 month ago (3 children)

Do they not test them before submission?

[–] [email protected] 18 points 1 month ago

I've met someone employed as a dev who not only didn't know that the compiler generates an executable file, but spent a month trying to change the code without noticing that none of their changes had any effect whatsoever (because they kept running an old build of mine).

[–] SlopppyEngineer 13 points 1 month ago (1 children)

They probably tested under ideal circumstances, and their stuff breaks down as soon as it comes anywhere near an edge case.

[–] [email protected] 10 points 1 month ago (1 children)

I would be really interested in learning a language; the AI-assistance method actually meshes very well with my learning style. But I would never submit anything to anyone that I wasn't certain was good, working code. My brain wouldn't let me do it. Now I just need to choose a language.

[–] [email protected] 15 points 1 month ago (1 children)

I applaud your ethics. But you don't know how close you are to falling from grace.

Just yesterday I had to remove perfectly tested, sensible, non-AI code from our production system, not because it did not do what the author intended, but because what the author intended was flawed. And this is exactly what AI also cannot teach you right now: taking a step back to realize that your code might be right, but your intentions are not.

Definitely keep at it. But be aware that you will do the wrong things even with perfectly working code.

[–] SlopppyEngineer 4 points 1 month ago

Yeah, the code can work flawlessly in test, but after a few months of production there are a lot more records or files and the code starts to have issues.

[–] [email protected] 4 points 1 month ago

They probably don't know how to get it to run.

[–] [email protected] 11 points 1 month ago (2 children)

I don't understand why it's so hard to sandbox an LLM's configuration data from its training data.

[–] [email protected] 10 points 1 month ago

Because it's all one thing. The promise of AI is that you can throw basically anything at it, and you don't need to understand exactly how or why it makes the connections it does; you just adjust the weights until the output kinda looks alright.

There are many structural hacks used to get better results (and, in this case, some form of reasoning), but ultimately they mostly rely on connecting multiple nets together, retrying queries, and so on. There are no human-understandable settings. A neural network is basically one input and one output (unless you're training it).
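To illustrate the point: at inference time, a trained net is just a fixed function from one input to one output, and the weights are opaque numbers rather than human-readable "settings". Here is a toy 2-2-1 network with made-up weights (no training shown, values purely illustrative):

```python
import math

# Arbitrary, frozen weights: nothing here maps to a tweakable "config".
W1 = [[0.5, -0.3], [0.8, 0.1]]  # input -> hidden layer
W2 = [1.2, -0.7]                # hidden -> output layer


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def forward(x: list) -> float:
    """One input vector in, one scalar out; the middle is opaque."""
    hidden = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))


print(forward([1.0, 0.0]))  # a single number between 0 and 1
```

There's no knob in `W1` or `W2` you could label "behave safely" or "hide the reasoning"; behavior only changes by retraining, which is why separating "configuration" from training data is hard.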

[–] [email protected] 1 points 1 month ago (1 children)

What do you mean by "configuration data?"

[–] [email protected] 2 points 1 month ago (1 children)

The data used to configure it.

[–] [email protected] 1 points 1 month ago

Do you mean finetune data?

A model's configuration data is training data.