this post was submitted on 21 Sep 2024
84 points (71.4% liked)

Please remove this if it's not allowed

I see a lot of people here who get mad at AI-generated code, and I am wondering why. I wrote a couple of bash scripts with the help of ChatGPT, and if anything, I think it's great.

Now, I obviously didn't tell it to write the entire script by itself. That would be a horrible idea. Instead, I asked it questions along the way and tested its output before putting it in my scripts.

I am fairly competent at writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know Bash's syntax. I could have used any other language I know, but I chose Bash because it made the most sense: it ships with most Linux distros out of the box, so there's no need to install another interpreter or compiler. I don't like Bash because of its, dare I say, weird syntax, but it fit my purpose, so I chose it. I had never written anything of this complexity in Bash before, just a bunch of commands on separate lines so I wouldn't have to type them one after another. But this project required some fairly advanced features. I wasn't motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I couldn't find how to easily pass values into a function and return from it, remove a trailing slash from a directory path, loop over an array, catch errors from a previous command, or separate the letters and numbers in a string.
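For what it's worth, each of those tasks has a short, standard Bash idiom. A sketch of all of them in one script (the paths and variable names here are made up for illustration):

```shell
#!/usr/bin/env bash

# Pass values in as positional arguments; "return" a result via stdout.
strip_trailing_slash() {
    local path="$1"
    printf '%s\n' "${path%/}"   # parameter expansion drops one trailing slash
}

dir="$(strip_trailing_slash "/home/user/docs/")"
echo "$dir"                     # /home/user/docs

# Loop over an array.
files=("a.txt" "b.txt" "c.txt")
for f in "${files[@]}"; do
    echo "$f"
done

# Catch an error from the previous command via its exit status.
if ! ls /nonexistent/path 2>/dev/null; then
    echo "previous command failed"
fi

# Separate the letters and numbers in a string like "ab123".
s="ab123"
letters="${s//[0-9]/}"          # ab
digits="${s//[^0-9]/}"          # 123
echo "$letters $digits"         # ab 123
```

The common thread is parameter expansion (`${var%pattern}`, `${var//pattern/}`) and exit statuses, which is exactly the part of Bash that generic tutorials tend to skip.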

That is where ChatGPT helped greatly. I would ask it to write these pieces of code as I needed them, then test its output with various inputs to see if it worked as expected. If not, I would tell it which case failed, and it would revise the code before I put it in my scripts.

Thanks to ChatGPT, someone with zero knowledge of Bash can quickly and easily write fairly advanced Bash. I don't think I could have written what I wrote this quickly the old-fashioned way; I would have gotten there eventually, but it would have taken far too long. With ChatGPT I can just write it quickly and move on. If I ever want to learn Bash and am motivated, I will certainly take the time to learn it properly.

What do you think? What negative experiences have you had with AI chatbots that made you hate them?

[–] simplymath 45 points 2 months ago* (last edited 2 months ago) (3 children)

People who use LLMs to write code (incorrectly) perceived their code to be more secure than code written by expert humans.

https://arxiv.org/abs/2211.03622

[–] [email protected] 3 points 2 months ago

Lol.

We literally had an applicant use AI in an interview. They failed the same step twice, and at the end, when we asked how confident they were in their code, they said "100%" (we were hoping they'd say they wanted time to write tests). Oh, and my coworker and I each found two different bugs just by reading the code. That candidate didn't move on to the next round. We've had applicants write buggy code before, but they at least said they'd want to write some tests before they were confident, and they didn't use AI at all.

I thought that was just a one-off; it's sad if it's actually more common.

[–] [email protected] 2 points 2 months ago (1 children)

OP was able to write a bash script that works... on his machine 🤷. That's a far cry from having to review and ship code to production, whether in FOSS or private development.

[–] [email protected] 4 points 2 months ago (1 children)

I also noticed they were talking about passing arguments to a custom function? That's a day-one lesson if you already program. But this was something they couldn't find with a regular search?

Maybe I misunderstood something.

[–] [email protected] 4 points 2 months ago

Exactly. If you understand that functions are just commands, then it's quite easy to extrapolate how to pass arguments to that function:

function my_func () {
    echo "$1" "$2" "$3"  # prints a b c
}

my_func a b c

Once you understand that core concept, a lot of Bash makes way more sense. Oh, and most of the syntax I provided above is completely unnecessary, because Bash...
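For instance, a minimal sketch of the same thing: the `function` keyword is optional, and the parentheses alone are enough to declare a function.

```shell
# Same function as above, in the shorter POSIX-style form.
my_func() {
    echo "$1" "$2" "$3"
}

my_func a b c    # prints: a b c
```

And because a function is invoked exactly like any other command, `my_func a b c` works the same way `ls -l /tmp` does: name first, arguments after.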

[–] [email protected] 1 points 2 months ago (2 children)

Hmm, I'm having trouble understanding the syntax of your statement.

Is it (People who use LLMs to write code incorrectly) (perceived their code to be more secure) (than code written by expert humans.)

Or is it (People who use LLMs to write code) (incorrectly perceived their code to be more secure) (than code written by expert humans.)

[–] simplymath 1 points 2 months ago

I intended B, but A is also true, no?

[–] [email protected] 1 points 2 months ago

The "statement" was taken from the study.

We conduct the first large-scale user study examining how users interact with an AI Code assistant to solve a variety of security related tasks across different programming languages. Overall, we find that participants who had access to an AI assistant based on OpenAI's codex-davinci-002 model wrote significantly less secure code than those without access. Additionally, participants with access to an AI assistant were more likely to believe they wrote secure code than those without access to the AI assistant. Furthermore, we find that participants who trusted the AI less and engaged more with the language and format of their prompts (e.g. re-phrasing, adjusting temperature) provided code with fewer security vulnerabilities. Finally, in order to better inform the design of future AI-based Code assistants, we provide an in-depth analysis of participants' language and interaction behavior, as well as release our user interface as an instrument to conduct similar studies in the future.