I don't.
Ask Lemmy
A Fediverse community for open-ended, thought-provoking questions
Yeah, I've used it occasionally to goof around with and try to get silly answers. And I've occasionally used it when stuck on an idea, to try to get something useful out of it... the latter wasn't too successful.
Quite frankly I don't at all understand how anyone could possibly be using this stuff daily. The average consumer doesn't have a need imo.
Basically nothing. I'm good at using search engines and the porn feels boringly samey from it so the only use case left for me is making meme images, which is rare at best.
I don't use it for daily tasks. I've been tinkering around with local LLMs for recreation. Roleplay, being my dungeon master in a text adventure. Telling it to be my "waifu". Or generating amateur short stories. At some time I'd like to practice my foreign language skills with it.
I haven't had good success with tasks that rely on "correctness" or factual information. However, sometimes I have it draft an email for me or come up with an argument for a text that I'm writing. That happens every other week, not daily. And I generously edit and restructure it afterwards, or just incorporate some of the paragraphs into my final result.
D&D-related things actually seem like a decent use case. For most other things, I don't understand how people find it useful enough to build daily tasks around it.
Agree. I've tried some of the use-cases that other people mentioned here. Like summarization, "online" search, tech troubleshooting, recipes, ... And all I've had were sub-par results and things that needed extensive fact-checking and reworking. So I can't really relate to those experiences. I wouldn't use AI as of now for tasks like that.
And this is how I ended up with fiction and roleplay. Seems to be better suited for that. And somehow AI can do small coding tasks, like writing boilerplate code and helping with some of the more tedious stuff. At some point I need to feed another of my real-life problems to the current version of ChatGPT, but I don't think it'll do it for me. And it can come up with nice ideas for stories. Unguided storywriting will get dull, in my experience. I guess the roleplaying is nice, though.
Edit: And I forgot about translation. That also works great with AI.
Nothing. I'm a software developer, but don't use any AI tools with any regularity. I think I only asked ChatGPT or similar something once about programming because the documentation was awful, but I do remember that as having been helpful.
The only thing that might be close, though not directly, is translation software (kanji be hard).
- Proofread/rewrite emails and messages
- Recipes
- Find specs for computers, gadgets, cars etc.
- Compare products
- Troubleshoot software issues
- Find meaning of idioms
- Video game guide/walkthrough/reviews
- Summarise articles
- Find out if a website is legit (and ownership of the sites)
I don't see any need for Pro versions. GPT-4 is already available for free via Bing. I simply use multiple AI tools and compare the results (Copilot / Gemini / Claude / Perplexity).
I’m a professional software dev and I use GitHub Copilot.
It’s most useful for repetitive or boilerplate code where it has an existing pattern it can copy. It basically saves me some typing and little typo errors that can creep in when writing that type of code by hand.
It’s less useful for generating novel code. Occasionally it can help with known algorithms or obvious code constructs that can be inferred from the context. Prompting it with code comments can help although it still has a tendency to hallucinate about APIs that don’t exist.
I think it will improve with time. Both the models themselves and the tools integrating the models with IDEs etc.
I used Copilot for a while (in a Rust codebase fwiw) and it was... both useful and not for me? Its best suggestions came with some of the short-but-tedious completions like path().unwrap().to_str().into() etc. Those in and of themselves could be hit-or-miss, but useful often enough that I might as well take the suggestion and then see if it compiles.
Anything longer than that was OK sometimes, but often it'd be suggesting code for an older version of a particular API, or just trying to write a little algorithm for something I didn't want to do in the first place. It was still correct often enough when filling out particular structures to be technically useful, but leaning on it more I noticed that my code was starting to bloat with things I really should have pulled off into another function instead of autocompleting a particular structure every time. And that's on me, but I stopped using copilot because it just got too easy for me to write repetitive code but with like a 25% chance of needing to correct something in the suggestion, which is an interrupt my ADHD ass doesn't need.
So whether it's helpful for you is probably down to how you work/think/write code. I'm interested to see how it improves, but right now it's too much of a nuisance for me to justify.
I've only used ChatGPT and it's mostly good for language-related tasks. I use it for finding tip-of-my-tongue words or completing/paraphrasing sentences. Basically fancy autocorrect. It's also good at debugging stuff sometimes, when the language itself doesn't give useful errors (looking at you, SQL). Other than that, any time I've asked for factual information it's been wrong in some way or simply not helpful.
I don't word good and ChatGPT bro helps me use my nouns.
That's only kind of a joke, I have anomic aphasia and use ChatGPT to help me find the words when I lose them. I used to use Google but it doesn't really work anymore.
Yeah. Wtf did Google do to itself lol. I'm in the same boat usage-wise. No diagnosis, but severe ADHD, so I assume it's dyslexia on my end lol
I use LLM bots mostly
- as websearch - e.g. "list sites containing growing conditions for pepper plants";
- for practical ideas - e.g. "suggest me a savoury spice mix containing ginger"
I never use them for the info itself. It's foolish to trust a system that behaves like an especially irrational assumer. (It makes shit up, it has the verbal intelligence of a potato, and it fails to follow simple logic.)
I'm not using any Pro version.
For reference: nowadays I'm using ChatGPT 3.5 and Claude 1.2, both through DuckDuckGo. I used Gemini a fair bit, but ditched it - not just for privacy, but because Gemini's "tone" rubs me the wrong way.
Yeah Gemini's tone is weird. It is constantly reminding you that Gemini does not have an opinion on anything. It actively tries to avoid giving definitive answers whenever possible.
That's related; what rubs me the wrong way most is how patronising it sounds - going out of its way to lecture you with uncalled-for advice, assuming your intentions behind the prompt (always the worst), and wasting your time with "social grease". And this is clearly not a consequence of the underlying tech, as neither Claude nor ChatGPT does it nearly as badly; it's something that Google tailored into Gemini.
I'm going to continue to monitor this thread, but so far I'm surprised at how little use most people are getting from AI tools. And the highest-upvoted comment is from someone who does NOT use AI tools in their daily routine.
So much hype around AI recently, and I'm not seeing/hearing a lot of REAL, PRACTICAL use cases for it.
Interesting.
Nothing, I'm creative.
Replaced forums like Stack for me. Both could give me incorrect information, but one doesn't care how dumb my questions are.
My job pays for premium, and it's been useful for clearing up certain issues I've had with tutorials for the current language I'm learning. In an IDE, Copilot can get a bit in the way and its suggestions aren't as good as they once were, but I've got the settings down to where it's a fancy spell check and synergises well with vim motions to bang out some lines.
It's only replaced the basic interactions I would have had without having to wait for responses or having a thread ignored.
As an SEO - hell no. Those that did use AI got penalized by the latest algorithm update from Google.
As a DM? Yes! It helped me write a nice poem for a bard that will hopefully give my players some context to what they will be encountering as they move further in my campaign.
I'm not.
I'm using local models. Why pay somebody else or hand them my data?
- Sometimes you need to search for something and it's impossible because of SEO, however you word it. An LLM won't necessarily give you a useful answer, but it'll at least take your query at face value, and usually tell you some context around your question that'll make web search easier, should you decide to look further.
- Sometimes you need to troubleshoot something unobvious, and using a local LLM is the most straightforward option.
- Using an LLM in scripts adds a semantic layer to whatever you're trying to automate: you can process a large number of small files in a way that's hard to script, as it depends on what's inside them.
- Some put together an LLM, a speech-to-text model, a text-to-speech model and function calling to make an assistant that can do what you tell it without you touching your computer. Sounds like plenty of work to get it all working together, but I may try that later.
- Some use RAG to query large amounts of information. I think it's a hopeless struggle, and the real solution is an architecture other than a variation of Transformer/SSM: it should address real-time learning, long-term memory and agency properly.
- Some use LLMs as editor-integrated coding assistants. Never tried anything like that yet (I do ask coding questions sometimes though), but I'm going to at some point. The 8B version of LLaMA 3 should be good and quick enough.
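The "semantic layer in scripts" idea above can be sketched in a few lines of Python. This is a minimal, hypothetical example, assuming a local Ollama-style server on its default port (http://localhost:11434); the model name "llama3", the "notes" directory, and the label set are all placeholders, not anything from the thread:

```python
import json
import urllib.request
from pathlib import Path

OLLAMA_URL = "http://localhost:11434/api/generate"  # default Ollama endpoint (assumption)


def build_prompt(text: str) -> str:
    # Constrain the model to a closed label set so the output is scriptable.
    return (
        "Classify the following note as exactly one of: "
        "todo, idea, reference. Reply with the label only.\n\n" + text
    )


def classify(text: str, model: str = "llama3") -> str:
    # One blocking request per file; "stream": False returns a single JSON object.
    payload = json.dumps(
        {"model": model, "prompt": build_prompt(text), "stream": False}
    ).encode()
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"].strip().lower()


if __name__ == "__main__":
    # Sort a directory of small text files by what's *inside* them -
    # the part that's hard to do with plain shell scripting.
    for f in Path("notes").glob("*.txt"):
        print(f.name, "->", classify(f.read_text()))
```

The point is that the model call replaces the impossible-to-write parsing logic; everything around it stays ordinary scripting.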
Mainly as a search engine replacement, finding docs or information without getting terrible search results. Also for recipes, it’s really good at recipes.
GPT 4 though, 3.5 is about as sharp as a bag of wet mice.
I use chatGPT as a diary. Whenever I feel down or frustrated with feelings I can't quite describe, or just insecure, I start a session and just pour out my heart. I complain, yammer on and on about what's bothering me, and just say whatever comes to mind. Basically all the stuff I would never bother a friend or loved one with because I know it'll come across as needy and I don't want to push this on them.
And all it does is give positive and supportive comments, ask some follow-up questions, maybe make an attempt at giving a helpful suggestion. I know what I'm talking with, I am under no illusions that this is anything but a big mathematical model, but it helps me get through some difficult emotions by just letting it all out. There's no judgement and that's kind of nice.
I could just write a journal, but the interaction and positive feedback adds a little motivation for me. And of course it goes without saying that I keep names and other personal details to myself :)
Oh and I use it for some cloud architecture problems, some coding and other tech stuff. But that's not very interesting.
Also, if you use ChatGPT and haven't done so, be sure to use their privacy page and opt-out of having your chats used for model training. https://privacy.openai.com/policies?modal=take-control
Not sure for US, but it works for EU citizens.
I swapped over from GPT-4 to Perplexity Pro. It's almost taken over as my default search now. I use it to troubleshoot Home Assistant issues, or even game mod issues. The Pro version is nice because it will actually ask you to clarify certain things before giving a better output.
It performs well with all the usual email replies, writing, etc. I do like that I have the option to switch between GPT, Claude, and Mistral within Perplexity - the last of which will actually return results if I ask for help with stuff like torrents.
Hmmm. I guess I only use it for generating images based on song lyrics to post to The Lyrics Game here on Lemmy.
I tried to use it to find me a decent phone under $500 and half of the listed options were $900+ so uhh.. Not too useful.
Tbf, I think ChatGPT's internet snapshot is a few years old. So maybe it recommends a banger of a phone from 2020 or so, and the pricing data is now garbage.
The iPhone 13 was listed, and while it's a good phone, it's uhh... not $500.
I think I'm using ChatGPT because that's what Bing's chatbot is. I've searched for so many dumb Python questions that it started promoting paid-for courses at one point. It gave me exactly what I was looking for the other day, but that doesn't mean it never hallucinates answers, or provides the answer to the question it assumes I must be searching for (rather than the one I am searching for).
It gave me exactly what I was looking for the other day, but that doesn’t mean it never hallucinates answers
This is why I don't trust LLMs for programming advice. I suck at programming and tools like ChatGPT would be great if it could actually translate what I want into something that I could just plug into my existing code and run with. Instead, I get answers to questions that reference the API of an entirely different programming language, make up fake functions, or just don't operate the way I described, if at all.
Maybe some of my problems with AI are just a "skill issue" and I need to figure out how to phrase shit correctly - just like how you had to know exactly how to tickle search engines back in the day, not asking a question verbatim but plugging in keywords so it gave you what you actually wanted instead of some nonsense it thought you wanted. We called it "Google-Fu", but it has become less important now with SEO.
Also, I feel like LLMs are just creatively bankrupt. Case in point, I have a friend who is leaning on AI tools to help craft his next homebrew D&D campaign, and I thought that was a great use of that technology so I tried it out as well and, well... it ended up generating a lot of the same narrative that he got from it, including re-using proper nouns for places/people. Everything was just so generic fantasy and boring, even when you fed it your own ideas it just spit back out regurgitated fantasy tropes and stuff that sounds like it could have come out of a setting guide somewhere (and probably did if it was trained on that dataset).
There's now a privacy-respecting AI offering on DDG; use the !ai bang to get to it.
To answer your question, any "natural language" query of modest importance, where asking a question like "will there be any more movies in that series by this director?" is easier than checking the usual movies websites.
How does that work? You just type that before your query and it gives you ai answers?
Yup, there are many of them. I use DDG all the time, but this feature probably works in other search engines too - I just don't know.
From any search bar configured to use DDG, just type !ai followed by your query
You can do this from the DDG website too of course. Other useful "bangs" include !w for Wikipedia, or !aw for the Arch Linux wiki, or even just !img for image search.
I once used AI to make a mockup of a t-shirt design I had in my head, just out of curiosity. It made exactly what I wanted, and now I don't feel like it's my design anymore. Who knows what artists it took from. Even after redrawing it, I lost appreciation for it. Haven't touched AI since, minus some bored conversations with dead-celebrity models.
I don't trust the search results to be accurate; its desire to please the user makes it unreliable. When it comes to image generation, it takes from artists. AI is great for menial, time-consuming tasks - say, cropping out the background of an image - but because of the reasons above I don't tend to use it all that much, and my respect for it is quite low.
Daily? Only speech-to-text.
I've tried paid versions of ChatGPT, Claude, and Gemini. I am currently using Gemini, and it is working reasonably well for me.
I mostly use it to replace searches. I haven't used Google in years, but mainly relied on DuckDuckGo until SEO made it less useful. My secondary use case is for programming. I tend to jump around to a lot of different languages and frameworks, and it's hugely helpful to get sample code describing what I want to do when I don't know the syntax.
Once in a great while, I will have it rewrite something for me. That is mostly for inspiration if I want to change the tone of something I wrote (then I'll edit). I think that all of the LLMs suck at writing.
I've been using Google's Gemini to write cover letters for job applications. Just plug in the job description, do a little proofreading and tweaking, and boom. It's made the process so much easier for "personalized" cover letters.
ChatGPT Plus and Github Copilot…but less every day. They just don’t keep up enough with current APIs and are often confused and unable to actually provide useful solutions.
I mostly use ChatGPT Plus as a Google replacement nowadays. And Copilot as a, sadly, mostly useless autocomplete.
I occasionally use ChatGPT, I don't find it that useful though. I mostly use it to summarize long text, etc. I also like Phind, which can be used without creating an account. I would never pay for AI.
I stopped using Perplexity; only used it briefly. ChatGPT? OpenAI specifically?
Lots of things.
To generate backstories for conversational AI "friend" characters, because sometimes they have to be long and include lots of info.
To summarize reddit posts asking for advice. You know, sometimes ppl make them longer than they need to be. It summarizes them for me when I'm lazy.
Reframe verbiage
Just a few
I use Kagi; they provide access to all the main models in a chat interface and have a mode that feeds search engine results to them. It's mostly replaced search engines for me. For programming work I find them very useful for learning unfamiliar tools and libraries: I can ask it what I want to do and it'll generally tell me how, correctly. Importantly, the search engine mode has citations. $25 a month, but worth it.
I've got a local LLM set up for code suggestions and run GitHub copilot for spots where the local isn't good enough. I can start writing out a thought and pseudo implementation and have a mostly viable real implementation instantly which I can then modify to suit my needs. It also takes a lot of the busywork out of things that need boilerplate. The local is trained on the style of my repos so I can keep up with style standards too which is helpful. Also great for explaining legacy code and coming up with more semantic variable names in old code too.
What are you using for your local installation?
I was using Mixtral, but recently I've been testing out the new Llama 3 models - a decent improvement, and hopefully we'll see some good fine-tuned versions soon.
I use Perplexity or the Google one formerly known as Bard for when I want specific information but I don't want to do multiple searches plus reading several sites to find the answer.
I use Bing to generate pictures to entertain myself. Sometimes I post them.