this post was submitted on 24 Jun 2023
30 points (85.7% liked)

submitted 2 years ago* (last edited 2 years ago) by rarkgrames to c/[email protected]
 

Over the last year I've been learning Swift and starting to put together some iOS apps. I'd definitely class myself as a Swift beginner.

I'm currently building an app and today I used ChatGPT to help with a function I needed to write. I found myself wondering if somehow I was "cheating". In the past I would have used YouTube videos, online tutorials and Stack Overflow, and adapted what I found to work for my particular use case.

Is using ChatGPT different? ChatGPT explains the code it writes, and the code often still needs fettling to get it working, which makes me think it is a useful learning tool. As long as I take the time to read the explanations given and make sure I understand what the code is doing, it's probably a good thing on balance.

I was just wondering what other people's thoughts are?

Also, as a side note, I found that chucking code I had written into ChatGPT and asking it to comment every line was pretty successful and a big time saver :D

Edit: Thanks everyone for the insightful and considered replies.

I think the general consensus is basically where my head was at: use it as a tool like you would SO or other resources, but be aware the code may be incorrect. In reality there will be work required to adapt it and integrate it into your current project (very much like SO), and that's where your programming skills really come into play.

I think I still have imposter syndrome when it comes to development, which is maybe where the question was coming from in my mind :D

top 44 comments
[–] [email protected] 21 points 2 years ago* (last edited 2 years ago) (1 children)

No, it's not cheating, but please don't blindly trust it either. Random people on the internet can be wrong too, but at least other people can correct them when they are. What ChatGPT outputs is fresh for your eyes only; nobody else has checked it.

Edit: typo

[–] [email protected] 10 points 2 years ago (2 children)

Agreed. While I've never used ChatGPT on an actual project, I've tested it on theoretical problems and I've never seen it give an answer that didn't have a problem.

So I would treat it like any answer on Stack Overflow: use it as a start, but definitely customize it and fix any edge cases.

[–] [email protected] 2 points 2 years ago

I've never used ChatGPT (the workflow sounds tedious) but I have used GitHub Copilot for personal stuff. The free ChatGPT has weird rights to your queries, whereas GH Copilot doesn't snarf up your code. It genuinely saves a ton of time if you treat it like an in-line Stack Overflow query. It never gets it 100% right, but it can crap out boilerplate like nobody's business.

[–] [email protected] 1 points 2 years ago

It also seems to depend a lot on how popular the thing you're asking about is. I asked it some questions about Docker and it helped me understand some nuances between different commands in Dockerfiles that I was having trouble with; Docker is pretty widely used. I then asked it some questions about how to use the jpackage command from Gradle and it couldn't help at all.

[–] [email protected] 16 points 2 years ago (1 children)

Over time you'll realize ChatGPT has giant holes.

As a developer you use tools every day: a rapid GUI tool, a compiler, APIs, things you probably couldn't build on your own. Even under MS-DOS you were relying on BIOS or MS-DOS interrupts, and the PC handled a lot of stuff for you.

So it's just another tool. It doesn't do everything, so treat it as one thing in your pouch. Don't rely on it too much, and be mindful of IP concerns: legally, AI is like a monkey with a camera, and you can't copyright whatever it creates.

[–] [email protected] 6 points 2 years ago (2 children)

This! Not even 2 months ago I had a classmate who was convinced he could prompt ChatGPT into writing a program that could encrypt and then decrypt files and store the key securely. It didn't do a single one of those things, so we've got a long way to go until we get ChatGPT operating systems.

[–] [email protected] 7 points 2 years ago (2 children)

My favorite test for ChatGPT is to ask it to write a function to divide two numbers in 6502 assembly. Not only is there no DIV instruction to rely on, but the 6502 is very register starved, so you get a lot of screwups. Here's one example of what ChatGPT generated:

; Inputs:
; - A: dividend
; - X: divisor

DIVISION:
    LDX #0       ; initialize quotient to 0
    CPX #0       ; check for divide-by-zero error
    BEQ DIVIDE_BY_ZERO
    CLC          ; clear carry flag
DIV_LOOP:
    CMP X,A      ; compare divisor with dividend
    BCC DIV_DONE ; if divisor is greater, division is complete
    SEC          ; set carry flag
    ROL A        ; shift dividend left
    INX          ; increment quotient
    JMP DIV_LOOP ; continue division
DIV_DONE:
    RTS          ; return from subroutine
DIVIDE_BY_ZERO:
    ; handle divide-by-zero error here
    RTS

You can see it immediately overwrites the divisor with the quotient, so this thing will always give a divide by zero error. But even if it didn't do that, CMP X,A is an invalid instruction. But even if that wasn't invalid, multiplying the dividend by two (and adding one) is nonsense.
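
For reference, the shape a correct routine has to take is plain binary long division: shift a dividend bit into the remainder, subtract the divisor when it fits, record a quotient bit. Here's a quick C sketch of that loop, just my own illustration of the algorithm a 6502 version would have to spell out by hand with A, X and memory:

#include <stdint.h>

// Unsigned 8-bit binary long division: shift-and-subtract, no DIV instruction needed.
void divide8(uint8_t dividend, uint8_t divisor,
             uint8_t *quotient, uint8_t *remainder) {
    uint8_t q = 0, r = 0;
    for (int bit = 7; bit >= 0; bit--) {
        // Shift the next dividend bit into the running remainder
        r = (uint8_t)((r << 1) | ((dividend >> bit) & 1));
        // If the divisor fits, subtract it and set this quotient bit
        if (divisor != 0 && r >= divisor) {
            r -= divisor;
            q |= (uint8_t)(1 << bit);
        }
    }
    *quotient = q;   // with divisor == 0 this just yields q = 0, r = dividend
    *remainder = r;
}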

[–] [email protected] 8 points 2 years ago (3 children)

Honestly I still don't get it. Every dialog with ChatGPT where I try to do something meaningful ends with ChatGPT hallucinating. It answers general questions, but it makes something up every time. I ask for a list of command-line renderers, and it returns a list with a few renderers that have no CLI interface. I ask about a library that does something, and it returns 5 libraries, one of which definitely can't do it. And so on, and so on. ChatGPT is good at trivial tasks, but I don't need help with trivial tasks; I can do trivial tasks myself... Sorry for the rant.

[–] [email protected] 4 points 2 years ago

No, you aren't the only one. I've prompted ChatGPT before for SFML library commands, and every time it's given me ones that either don't work anymore or just never existed.

[–] [email protected] 4 points 2 years ago

That’s what (most) people don’t understand. It’s a language model. It’s not an expert system and it’s not a magical know-it-all oracle. It’s supposed to give you an answer like a random human would. But people trust it much more than they would trust a random stranger, because “it is an AI”…

[–] [email protected] 4 points 2 years ago* (last edited 2 years ago)

That's because ChatGPT and LLMs are not oracles. They don't take into account whether the text they generate is factually correct, because that's not the task they're trained for. They're only trained to generate the next statistically most likely word, then the next word, and then the next one...

You can take a parrot to a math class, have it listen to lessons for a few months and then you can "have a conversation" about math with it. The parrot won't have a deep (or any) understanding of math, but it will gladly replicate phrases it has heard. Many of those phrases could be mathematical facts, but just because the parrot can recite the phrases, doesn't mean it understands their meaning, or that it could even count 3+3.

LLMs are the same. They're excellent at reciting known phrases, even combining popular phrases into novel ones, but even then the model lacks any understanding behind the words and sentences it produces.

If you give an LLM a task in which your objective is to receive factually correct information, you might as well be asking a parrot - the answer may well be factually correct, but it just as well might be a hallucination. In both cases the responsibility of fact checking falls 100% on your shoulders.

So even though LLMs aren't good for information retrieval, they're exceptionally good at text generation. The ideal use cases for LLMs thus lie in the domain of text generation, not information retrieval or facts. If you recognize and understand this, you're all set to use ChatGPT effectively, because you know what kind of questions it's good for, and what kind of questions it's absolutely useless for.

[–] [email protected] 1 points 2 years ago

I've only ever done x86 assembly, but oh lord, that does not look like it can really do much. Yet it still somehow has like 20 lines.

[–] colonial 4 points 2 years ago

I recently took an "intro to C" course at my university despite already having some experience (they wouldn't let me test out), so I ended up helping a few of my classmates. Some had made the rookie mistake of "posting the assignment into ChatGPT and hitting enter," whereupon their faces were eaten by nasal demons.

Here's the worst example I saw, with my comments:

char* getName() {
    // Dollar store ass buffer
    char name[1];

    printf("Enter your name: ");
    // STACK GOES BOOM
    scanf("%s", name);
    
    // Returning stack-allocated data, very naughty
    return name;
}

Sighs
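
For contrast, a version that doesn't eat anyone's face might look roughly like this (just a sketch; the 64-byte buffer and the heap copy are my own choices, not anything from the assignment):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

char* getName(void) {
    // A buffer that can actually hold a name
    char buffer[64];

    printf("Enter your name: ");
    // Bounded read: at most 63 chars plus the terminating NUL, so the stack stays intact
    if (scanf("%63s", buffer) != 1) {
        return NULL;
    }

    // Return a heap copy instead of a pointer to stack memory; the caller frees it
    char *copy = malloc(strlen(buffer) + 1);
    if (copy != NULL) {
        strcpy(copy, buffer);
    }
    return copy;
}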

[–] [email protected] 14 points 2 years ago

Yes and no. If your goal is to learn how to code manually, then you are "cheating" in that you may not learn as much.

If your goal is to learn how to utilize AI to assist you in daily tasks, I would say you're not.

If your goal is to provide value for others through how much you can produce in a given amount of time, then you're definitely not.

[–] [email protected] 14 points 2 years ago* (last edited 2 years ago) (1 children)

No, it's not cheating. But you are expected to understand what your code does and how.

And this brings us to the explanations it provides. Keep in mind that these AI tools excel at producing content that seems right. But they may very well be hallucinating. And just as with code, small details and exact concepts matter.

I would therefore recommend you to verify your final code against official documentation, to make sure you actually understand.

In the end, as long as you don't trust the AI, neither for solutions nor for knowledge, it's just another tool. Use it as it fits.

[–] [email protected] 4 points 2 years ago

I’d go as far as saying you should know what every line of code does, or you risk the whole thing having unexpected side effects. When you understand what the code is doing, you know which parts you should test.

[–] TeaHands 14 points 2 years ago* (last edited 2 years ago)

If you understand the code and are able to adapt it to your needs, it's no different from copy-pasting from other sources, imo. It's just a time saver.

If you get to the point where you're blindly trusting it with no ability to understand what it's doing, then you have a problem. But that applies to Stack Overflow too.

[–] [email protected] 13 points 2 years ago (1 children)

I'm dealing with a new service written by someone who extensively cut and pasted from ChatGPT, got it to "almost done -- just needs all the operational excellence type stuff to put it into production", and left the project.

Honestly we should have just scrapped it and rewritten it. It's barely coherent and filled with basic bugs that have wasted so much time.

I feel this style of sloppy coding workflow is maybe better suited to front-end code or a simple CRUD API for saving state, where you can immediately see if something works as intended, than to backend services that have to handle common-sense business logic like "don't explode if there is no inventory" and so on.

For this dev, I think he was new to the language and got into a tight feedback loop of hacking stuff together with ChatGPT without trying to really understand each line of code. I think he didn't learn as much as he would have by applying himself to reading the library and language documentation, and so is still a weak dev, even though we gave him an opportunity to grow with a small greenfield service and several months to write it.

[–] [email protected] 3 points 2 years ago* (last edited 2 years ago)

I wouldn't consider the bugs ChatGPT's fault, per se. The same could happen by blindly copy/pasting from SO or a template GitHub project. If you are copy/pasting from anywhere, it's even more important that you have good automated tests with good coverage, and that you take extra time to understand what you pasted.

One of the things I do is generate high level tests first, and then the implementation code. This way I know it works, and I can spend extra time reviewing the test code first to make sure it has the correct goal(s).
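
As a toy sketch of that order of work (a made-up little clamp() helper, not code from any real project): the test gets written and reviewed first, then the implementation is generated to make it pass.

#include <assert.h>

// Step 1: write (or generate) the high-level test first and review it carefully.
int clamp(int value, int lo, int hi); // implementation comes later

int main(void) {
    assert(clamp(5, 0, 10) == 5);   // value already in range
    assert(clamp(-3, 0, 10) == 0);  // below range clamps to lo
    assert(clamp(42, 0, 10) == 10); // above range clamps to hi
    return 0;
}

// Step 2: generate the implementation and make the tests pass.
int clamp(int value, int lo, int hi) {
    if (value < lo) return lo;
    if (value > hi) return hi;
    return value;
}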

Learning is another matter. Personally, ChatGPT has greatly accelerated my learning of libraries and other languages. I've also used it to help me grok a block of complex code, and to automatically comment and refactor complex code into something more understandable. But it can also be used as a crutch.

[–] [email protected] 13 points 2 years ago

Cheating who?

[–] [email protected] 11 points 2 years ago (1 children)

Programming pays well because it's hard. Just keep in mind that if AI is making it easy for you, it's making it easy for a lot of people who could easily replace you.

Use it as a tool, but know what it's doing, and be able to do it yourself after you learn from it.

Personally, I generally struggle through on my own first and then ask it to critique. Great teachers don't just give you the code to copy.

By analogy, you need to be able to hand fly this plane when the autopilot dies; those are the pilots who get the jobs.

[–] [email protected] 2 points 2 years ago* (last edited 2 years ago) (1 children)

Trying yourself first seems like the best approach. There are people who recommend not Googling the answer until you have tried all the options and looked at the official documentation, as an “exercise” in problem-solving without being fed the answer, because you won't always have it.

I’m in a situation like that. I currently work for a huge bank that requires a lot of custom configuration and uses its own framework for a lot of stuff. So most of the problems people have can't be searched online, as they're company-specific. I see new workers there struggle a lot because they don't try to understand what's wrong and just want to be fed a copy-paste solution to make the problem go away.

[–] [email protected] 1 points 2 years ago

I see new workers there struggle a lot because they don't try to understand what's wrong and just want to be fed a copy-paste solution to make the problem go away.

I see that in my students a lot, as well. I've been hammering away at the idea that the goal (in school, but frankly in general) is not actually to solve the problem. It's to learn to solve the problem. And every experience you have, success or failure, is learning to solve problems. The 10 ways you failed to solve this problem are all solutions to other problems that you now know.

"The expert has failed more times than the beginner has even tried."

I fear, however, the pervasive Pride in the Craft that existed 30 years ago is now something observed only by a minority.

[–] [email protected] 11 points 2 years ago (1 children)

I wrote a fairly detailed spec for some software and told it what dependencies to use, what it should do, and what command-line options it should use. The base was a decent starting point, but after several hours of back-and-forth, after actually reading the code, I realized it had completely misinterpreted my spec somehow and implemented a similar feature in a completely broken way, as well as making a few mistakes/redundancies elsewhere. I tried to coach it to fix these issues, but it just couldn't cope.

I spent about 3 hours getting this base code generated, and about 5 hours re-writing it and implementing the features properly. The reason I turned to ChatGPT is because I needed this software written by the end of the day, and I didn't have time to read all the different docs for the dependencies I needed to use to write it. It likely would have taken me at least 2 days to write this program myself. It was an interesting learning experience, but my only ChatGPT usage in the future is likely to be with individual code blocks.

You really need to pay attention if you're using LLMs to generate code. I've found it usually gets at least one thing wrong, and sometimes multiple things horribly wrong. Don't rely on it; look for other sources to corroborate all of its explanations. Additionally, please do not feed proprietary, copyrighted code into ChatGPT. The software I was writing was released under a free license. OpenAI will use it as training data unless you use their API and opt out of it. ChatGPT isn't really a tool; it's a service which is using you as much as you're using it.

[–] [email protected] 1 points 2 years ago* (last edited 2 years ago) (1 children)

I had similar experiences, and nowadays I just ask for samples of how to do things in isolation, then I piece them together myself.

One example was trying to create some git hooks to avoid copy-pasting something on every commit. After trying to correct it again and again, I just decided to start fresh and ask for a generic sample. It finally gave me a correct one, but I did the work to customize it for my needs and test it.

[–] [email protected] 1 points 2 years ago

The software I was writing was interdependent with other functions, so it needed to understand the surrounding context, which was difficult to coach it on. Its strength is definitely in isolated examples.

I've been using Kagi's Discuss Document feature for some things, like understanding documentation or an API. It's pretty useful. Also works on videos/files like PDFs.

[–] [email protected] 10 points 2 years ago

I really don't think so. You are asking it how to write a function. It explains how the function works and sometimes even how to expand on it. You still have to integrate that function into your program yourself and tailor it to the purpose of the program. It's far quicker than Stack Overflow giving you 8 functions that don't work.

[–] axtualdave 8 points 2 years ago (1 children)

ChatGPT is, at least for the moment, just a really fancy snippet repository with a search function that works really well.

Is re-using code someone else wrote cheating? Nah.

But, no matter where you get the code from (cough Stackoverflow), if you use it without understanding what it's doing, you're not doing yourself any favors.

[–] [email protected] 9 points 2 years ago (1 children)

I just want to add that ChatGPT is a "really fancy snippet repository" that sometimes randomly lies to you.

[–] axtualdave 6 points 2 years ago

As a generative language model, I am incapable of lying, but sometimes, I am very, very wrong. /s

[–] [email protected] 8 points 2 years ago

I would view ChatGPT as just an extension of Stack Overflow and Google. At the end of the day you still have to plug the code into your broader code base, and that's what makes a good programmer. That, and debugging the issues you get afterwards.

[–] [email protected] 6 points 2 years ago

No, it's not cheating (unless you are using it to do your homework, I guess). It's a tool and like any other we learn how to use it appropriately.

But one needs to be aware of other ethical concerns related to using AI-generated code. The discussion revolves around companies (OpenAI, GitHub, etc.) training their models on code written by people who have not consented to its use as training data. In some cases the licensing is clear and allows such use, but in other cases it's debatable (I'm not that involved in those discussions, so I can't provide more details).

When creating software, the value we bring is the understanding of a problem and the ability to ask the correct questions that will bring us to a good solution. In simple scenarios, even a machine can do what we do and we should definitely use the machine instead of spending time on that.

[–] [email protected] 5 points 2 years ago

I've never seen utilizing advancing tools as "cheating", but I can understand why purists might scoff at it. You should always be running checks and making sure everything is legit before deployment anyway, so I have a hard time seeing it as anything but Autopilot+.

[–] [email protected] 4 points 2 years ago

It’s no more cheating than scrubbing through StackOverflow posts for help. Just a lot quicker.

[–] [email protected] 3 points 2 years ago

It's only cheating if you pretend you didn't use it. ChatGPT provides output based on what you ask, and you are responsible for the end result, which will not be perfect and is only as good as your ability to understand and use it. It's a super effective power tool and aid, but hiding that you're using it is like saying you code with just a keyboard and no monitor.

[–] [email protected] 3 points 2 years ago

Back in the day, people looked things up in books. Then the internet came along and you didn't need those heavy books anymore; you just typed your question into a search engine. Today we use ChatGPT to do the "searching" for us (obviously it's not actually searching the internet, but you get what I mean). It's just another step in making coding, and learning to code, easier and more accessible.

[–] [email protected] 3 points 2 years ago

Cheating? What test or game are you playing where it might be considered cheating? Honestly, I feel it's a legitimate tool for building applications, and it can help teach you along the way. If you can stomach using a Microsoft tool, then Bing Chat might be an even better option. It's the same technology with, IMO, a better data set.

[–] [email protected] 2 points 2 years ago* (last edited 2 years ago)

It depends on what you're wanting to do and on what you define as 'cheating'. I'd expect you'd get better at debugging massive amounts of hallucinated code, but I don't think it'd generally improve your skills in software design/engineering/architecture. It might help you learn about breaking down software and integration, though.

[–] sznio 1 points 2 years ago

ChatGPT is like a car. It makes you go faster, but if you use it every time you need to go somewhere, you'll eventually face repercussions. Keep exercising your programming muscle.

[–] [email protected] -1 points 2 years ago

Not using it will make it awfully hard to compete with all the devs who ARE using it.

Asking ChatGPT to write comments is a GREAT idea!
