[–] kromem 1 points 1 year ago (1 children)

I couldn't disagree with your colleagues more, and I suspect they're reporting experiences from a fresh codebase, or from early on after the tool's release.

With a mature codebase, it feeds a lot of the existing code in as context, so suggestions match your naming conventions, style, etc.

It could definitely use integration with a linter so it doesn't generate subtle bugs where generated names don't match actual methods/variables, but it's become remarkably good, particularly in the past few weeks.

BTW, if you want more mileage out of ChatGPT, I would actually encourage it to be extremely verbose with comments. You can always strip them out later, but because of the way generative models work, the things it generates along the way influence where it ends up. There's a whole technique called "chain of thought" prompting where you have it work through a problem in detailed steps, and you'll probably get much better results by instructing it to work through what needs to be done in a comment preceding the code than by just having it write the code.
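For example, here's a rough sketch of what that comment-first style can look like (the task and the function name are just made up for illustration, in Rust since that's what my examples tend to be in):

```rust
// Hypothetical example of the comment-first ("chain of thought") style you
// might ask the model to produce. Task: merge overlapping intervals.
//
// Plan, written out before any code:
// 1. Sort intervals by start so overlapping ranges become adjacent.
// 2. Walk the sorted list, keeping track of the last merged interval.
// 3. If the next interval starts before the last one ends, extend it;
//    otherwise push it as a new interval.
fn merge_intervals(mut intervals: Vec<(i64, i64)>) -> Vec<(i64, i64)> {
    if intervals.is_empty() {
        return Vec::new();
    }
    intervals.sort(); // step 1: tuples sort by start, then end
    let mut merged = vec![intervals[0]];
    for &(start, end) in &intervals[1..] { // step 2
        let last = merged.last_mut().unwrap();
        if start <= last.1 {
            last.1 = last.1.max(end); // step 3: overlap -> extend
        } else {
            merged.push((start, end)); // no overlap -> new interval
        }
    }
    merged
}

fn main() {
    // Prints [(1, 6), (8, 10)]
    println!("{:?}", merge_intervals(vec![(1, 3), (2, 6), (8, 10)]));
}
```

The point isn't the code itself; it's that having the model spell the plan out in the comment first tends to keep the code that follows on track.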

And yes, I'm particularly excited to see where the Llama models go, especially as edge hardware is increasingly tailored for AI workloads over the next few years.

[–] [email protected] 1 points 1 year ago

> It could definitely use integration with a linter so it doesn't generate subtle bugs where generated names don't match actual methods/variables, but it's become remarkably good, particularly in the past few weeks.

Maybe I should try it again. I doubt, though, that it really helps me: I'm a fast typist, and I don't like being interrupted all the time by something wrong (or not really useful) while I'm in a creative phase (a good LSP like rust-analyzer seems to be the sweet spot, I think). And something like Copilot seems to just confuse me all the time, either by showing plainly wrong stuff, or by triggering a loop of "what does it want? ah, makes sense -> but why this way? that way is better" (and then I write it how I would've done it anyway), so I'll just skip it, at least for more complex stuff.

But it would be interesting to see how it does with code that's a little less exotic/less on the bleeding edge of the language, like typical frontend or backend stuff.

In what context are you using it where it provides good results?

> I would actually encourage it to be extremely verbose with comments

Yeah, I don't know. I'm not writing code to feed it to an LLM; I like to write it for humans, with good function docs (for humans). I hope that some day an LLM will be smart enough to pick up that context on its own, and that day may come soon enough, but until then I don't see a real benefit of LLMs for code (other than as (imprecise) boilerplate generators).