this post was submitted on 11 Jun 2023
1 points (100.0% liked)

Programming


“* People ask LLMs to write code
* LLMs recommend imports that don't actually exist
* Attackers work out what these imports' names are, and create & upload them with malicious payloads
* People using LLM-written code then auto-add malware themselves”
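One cheap defense against the last step of that chain is to check whether the suggested import names even resolve in your environment before reaching for `pip install`. A minimal sketch using the standard library's `importlib.util.find_spec` (the module names in the example are made up for illustration, not real packages):

```python
import importlib.util

def audit_imports(module_names):
    """Return the module names that don't resolve in the current environment.

    A name that doesn't resolve locally is not proof of hallucination,
    but it is exactly the case where blindly running `pip install <name>`
    could pull in a squatted, malicious package.
    """
    unresolved = []
    for name in module_names:
        # find_spec returns None when no importable module by that name exists
        if importlib.util.find_spec(name) is None:
            unresolved.append(name)
    return unresolved

# "qt_ui_helpers" stands in for a hallucinated import name
suspicious = audit_imports(["json", "qt_ui_helpers"])
```

Anything returned by `audit_imports` deserves a manual look on PyPI (publish date, download counts, maintainer) before it gets installed.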

[–] [email protected] 0 points 1 year ago (2 children)

ChatGPT and similar LLMs don't really "know" anything. They can only predict what an answer should look like. This means they can't be trusted on their own, and their answers should be reviewed before being used, because anything they produce will sound correct by default.

[–] [email protected] 1 points 1 year ago (1 children)

And the devs copy+pasting code from it are probably aware that it doesn't know anything, and that it is likely synthesizing something based on StackOverflow, which they happily copy+pasted from a few months ago.

If the libraries ChatGPT suggests work ~80% of the time, this leaves an opportunity for someone to provide a malicious "solution" for the other 20%.

[–] [email protected] 1 points 1 year ago

This is pretty much my experience. It did a pretty good job with the grunt work of setting up a Qt UI in Python, but something like 5/20 imports were wrong.