this post was submitted on 13 Nov 2023
233 points (96.0% liked)
Technology
What do you do when ChatGPT just makes shit up or answers incorrectly to yes-or-no questions? You'd have no way of knowing it was wrong.
ChatGPT is most useful when you may not know the right answer, but you know a wrong answer when you see one. It's very useful for technical issues. Much quicker for troubleshooting than searching page after page for a solution.
It’s actually great at troubleshooting Linux stuff, weirdly enough lol
Web/full-stack development?
Yeah, that makes sense. The success rate might fall off a cliff in more complex software projects, e.g. applications whose designs go beyond 10 UML boxes, with hundreds of thousands of lines, especially ones not written in JS/Python.
Can you post the app?
Bing AI provides reference in the "more precise" version
Not the other commenter:
I usually have an idea about the thing I'm asking, and if not then I'll look up the topics mentioned after some guided brainstorming
I've also found that asking the same question again, after resetting the chat, can give you an idea of what is happening
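That re-ask-and-compare tactic can be sketched as a simple self-consistency check (this is just an illustration of the idea, not any real API — the function name and inputs here are made up): collect the model's answers to the same question from several reset sessions, then see how often they agree. Low agreement is a hint the model is guessing.

```python
from collections import Counter

def consistency_check(answers):
    """Given answers to the same question from independent fresh chats,
    return the most common answer and the fraction of runs that gave it."""
    normalized = [a.strip().lower() for a in answers]
    top, count = Counter(normalized).most_common(1)[0]
    return top, count / len(normalized)

# In practice `answers` would come from re-asking in reset sessions.
top, agreement = consistency_check(["Yes, it's safe.", "yes, it's safe.", "No."])
```

If agreement is low across resets, treat the answer as a guess and go verify it elsewhere.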
While this is an important thing to understand about AI, it's an overstated issue once understood. For most questions I ask AI, it doesn't matter if it's correct as long as it pulls some half-useful info to get me on track (e.g. programming). For other questions, I only ask it if I need to figure out where to look next, which it will usually do just fine.
The first page of my search results is all AI generated garbage articles anyway, at least I know what I am getting with GPT and can take it as such.
Yup, as long as you are aware that it could be wrong and look at it critically, LLMs at GPT scale are very useful tools. The best way I've heard it described is as a lightning-fast intern who often gets things wrong but will always give it a go.
So long as you're calibrated to "how might this be wrong" when looking at the results it is exceptionally useful.
I'm curious what you use it for, because I try to use it daily for IT related queries and it gets less than half of what I ask correct. I basically have to fact check almost everything it tells me which kind of defeats the purpose. It does shine when I need really abstract instructions though, the other day I asked it how to get into a PERC controller on some old server and Google had nothing helpful, and ChatGPT laid out the instructions to get in there and rebuild a disk perfectly. So while it has some usefulness I generally can't really trust it fully.
The point you have to remember is that it is trained on bulk data from out there in a very inefficient manner: it needs to see thousands of examples before it starts to get any sort of understanding of something. If you ask it "how do I do {common task} in {popular language}" you will generally get excellent results, but the further you stray from that, the more error-prone it gets.
Still, it is often good at getting you on the right track when you are unsure where to start, and it's fantastic for learning a new language. I've been using it extensively while learning C#, where I know what I want to code but not exactly how to use existing features to do it.
But generally you can't (shouldn't) trust web search results fully either. At the end of the day, the onus is on you as the user to do your due diligence.
I've seen ChatGPT give me wrong information, and sometimes it would be bad to execute the code or command it generated, but I know enough to ask "are you sure that's correct?". Hell, you can just challenge it each time, or open a new session and ask it "what does this code do: insert-code-it-generated-here".
You shouldn't just paste a search result command from stack overflow into your terminal either. And at least with chatgpt you can ask it to explain the command or code in detail and it will walk you through what each step does.
Also, pasting that command from Stack Overflow into ChatGPT and adding your specific context around it is HUGE. That's why I say they are different products/use cases, but they work well in concert. They just don't work well combined together like Bing and Google have been doing.
edit: I guess lemmy escapes certain characters and it ate my post.
That works for some things, but ChatGPT makes a lot of shit up
ChatGPT is not a search engine. It takes random shit from the Internet and stitches it together. It can often get things wrong in my experience. It's best to always fact check.
I thought ChatGPT can't search the internet and is using a LLM snapshot from 2021?
And I thought Bing's ChatGPT model is allowed to search the internet live?
Doesn't that make Bing's version of ChatGPT superior?
This was recently updated for paid users. You can now browse the internet, upload files and images, and they’ve also unlocked APIs by giving it tokens. It’s getting closer to being fully multi-modal quite quickly.
I feel like you are lying, because I cannot see where you can enable that feature.
Keyword searches worked fine and pulled up exactly what I wanted for years, I swear to god. Somewhere in the last decade though websites have gamed the system and now I can't find anything no matter how I word my search. It's depressing.
I prefer that Stack Overflow looks the same as it did way back when. And Stack is usually where I find my answers.
I use ChatGPT every day too. Because Google is being such a shit about YouTube I am in the process of moving away from Google altogether. I use DuckDuckGo for search, which indirectly uses Bing. It's mostly OK. Sometimes I'm forced to try Google, it usually doesn't help. But for programming, yeah, StackOverflow feels downright regressive now.
I'm honestly kind of surprised about this news, considering how horrible Google's results are now.
I've found this to be kind of subjective. Bing/Bard is more current than ChatGPT but yet I just find ChatGPT to be better. It's snappier and more conversant with context. It seems to understand you when you chide it for not quite doing what you asked it to do, and it responds in kind. I mostly use it for programming to be fair, but even for other stuff, ChatGPT just somehow feels more... real? I can't quite put my finger on it.
There was a short time where Bing chat was kind of frighteningly real. Took them five seconds to nerf that shit and it's never been anywhere near the same.
Edit: I expect this answer to be out of date within 3 months. Things keep moving.
GPT-4 on ChatGPT was recently (last week-ish) updated to include data up to April 2023.