I prefer to call them LLMs (Large Language Models). It’s how they are referred to in the industry and I think it’s far more accurate than “AI”
Thank you, it's frustrating seeing (almost) everyone call them AI. If/when actual AI comes into existence I think a lot of people are going to miss the implications as they've become used to every LLM and its grandmother being called AI.
AI is correct though; an LLM is a form of AI. AI is just the more general term, and LLM is more specific.
But I want to be an obnoxious pedantic asshole!
Then you'll fit right in.
I’ve never done this and I guess I need to go yell at a cloud somewhere if this is about to become a thing.
Better yet, yell into the cloud, let an LLM respond
Understandable! I wouldn't want to just talk to a chat bot either, whilst thinking I'm talking to a friend.
The way I use it is mostly to get a starting point from which I'll edit further. Sometimes the generated response is bang on, though, and I admit I have just copy-pasted.
I'd be pretty mad if I knew someone was sending personal texts/emails to OpenAI
Did you use AI to write this post?
No actually! It's not a problem for me to write text per se. Actually it's a significant part of my job to write guidelines, documentation, etc.
What's difficult about replying to people is putting my opinions in relation to the other's expectations.
I tell people I have "phone anxiety"... but it sucks. Family, friends, new acquaintances... it doesn't matter, trying to reply or answer a phone can feel like torture sometimes. Have absolutely lost a few friends over this. You're not alone
....................................... Literally never.
And it's never once crossed my mind.
And if one of my friends told me they did this to talk to me, I think I'd just stop talking to them, because I want to talk to them. If I wanted to be friends with a computer, I'd get a Tamagotchi.
No, and I'd say it's probably not the solution to your problem that you think it is.
Reading the rest of these comments, I can't help but agree. If I found out a friend, family member, or coworker was answering me with chatgpt I'd be pretty pissed. Not only would they be feeding my private conversation to a third party, but they can't even be bothered to formulate an answer to me. What am I, chopped liver? If others find out you're doing this, it might be pretty bad for you.
Additionally, you yourself aren't getting better at answering emails and messages. You'll give people the wrong impression about how you are as a person, and the difference between the two tones could be confusing or make them suspicious - not that you're using chatgpt, but that there's something fake.
This is in the same ballpark as digital friends or significant others. Those don't help with isolation, they just make you more isolated. Using chatgpt like this doesn't make you a better communicator, it just stops you from practicing that skill.
No. I have the same problem you do, which is harming my friendships and networking.
But I definitely am not going to reach for the solution you did. Because if anyone notices, it will effectively nuke that relationship from orbit.
Putting myself in the position of a friend who realized that you were using GPT or something to form thoughts...
I'd be impressed that you found that solution, and then I'd want to check to be sure that the things you said were true.
Like, if I found out that 90% of your life as I knew it was just mistakes the computer made that you didn't bother to edit, I'd be bummed and betrayed, and it would turn out the way you said.
On the other hand, if everything you sent is true to life and you formed the computer's responses into your personality, I'd be very much impressed that you used this novel tool to keep in contact and overcome the frozen state that had kept you from responding before.
@Usernameblankface that's a kind and generous interpretation, and I hope it's the one OP's friends will come to.
I suspect it's likely to be seen as an outsourcing of the friendship, though.
I've never used it, but damn are people here judgy. I don't understand how it's a personal insult if someone used it in the way you're describing. As long as your actual thoughts and emotions are what you send, who cares if you used a tool to express them.
Anxiety is rough. I wish people were more understanding.
Thank you! I probably could have been more elaborate in the OP, but it doesn't seem like people really paid attention to it regardless. I don't just plop in a message I received and go with whatever response. I sanitize the received message of personal information as much as possible, then I let the LLM know what I want to say, and then use the response as a starting point which I'll further edit. Admittedly, sometimes I get something that is just bang on and I'll copy-paste. But it rarely happens, since the model can't match my personal writing style.
As you recognise, it's still my thoughts and feelings. It's akin to having a secretary writing drafts for you maybe? Not that I would know anything about having a secretary, ha!
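For what it's worth, if you wanted to script the sanitizing step instead of doing it by hand, a toy sketch could look something like this (the patterns and placeholder names are just made-up examples; in practice I do this manually before pasting anything into the chat):

```python
# Illustrative only: a rough stand-in for the manual "sanitize" step.
# The patterns and names below are made up; real scrubbing is done by
# hand and catches far more than this.
import re

def scrub(text: str) -> str:
    """Replace obvious personal details with neutral placeholders."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[email]", text)   # email addresses
    text = re.sub(r"\+?\d[\d\s().-]{7,}\d", "[phone]", text)      # phone-like numbers
    for name in ("Alice", "Bob"):  # names I'd swap out by hand in a real message
        text = text.replace(name, "[name]")
    return text

message = "Hi, it's Alice. My email is [email protected] and my number is +31 6 1234 5678."
print(scrub(message))
# -> "Hi, it's [name]. My email is [email] and my number is [phone]."
```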
This sounds like a plot to a horror movie. It all starts out with good intent, but pretty soon you notice your AI responses seem a little off. You try to correct it but it in turn corrects you. You reach out to family and friends but they dislike your ‘new’ tone and are concerned about your sudden change in behavior…
The one time I drafted an email using AI, I was told off for it being "incredibly inappropriate", so heck no. I have no idea what was inappropriate either; it looked fine to me. Spooky that I can't notice the issues, so I don't touch it.
If you're using it right then there'd be no way for the recipient to even tell whether you'd used it, though. Did you forget to edit a line that began with "As a large language model"?
Once you know someone is using it, it's very easy to know when you're reading AI generated text. It lacks tone and any sense of identity.
While I don't mind it in theory, I am left with the feeling of "well if you can't be bothered with this conversation..."
😆
"I'm not a cat!"
- says a cat through the Webcam.
First of all, I can really empathize with your anxieties. I lost contact with a few penpals years ago because of similar issues and I still hate myself for it.
I don't use ChatGPT for writing my replies, because my English is crap and my manner of writing is distinct enough that any friend can immediately tell a real response from a generated one (not enough smileys, for one :)
But I still have similar anxieties. So if I feel anxious about writing something, I do sometimes give a general description of the original mail ("A friend of mine wrote about her mother's funeral", "a family member lost his cat", etc.) and give it the reply I've written so far (names and personal details removed).
I then explain that I feel anxious about my reply and worry if I hit the right tone. I never ask it to write for me, only to give critique where necessary and advice on how to improve (for good measure I always add some snide remarks on how it sounds too fake to ever pass as a human so don't even bother trying, which it always takes in good humor because.. well.. AI :)
I ignore most of the suggestions because they sound like a corporate HR communiqué. But what's more important is that it usually tries to tell me that I was thoughtful and considerate, and that that little light-hearted joke at the end was just sweet enough to add a personal touch without coming across as insensitive.
Just to get some positive feedback, even from software that was designed specifically for that purpose, gives me that little push to hit the send button and hope for the best. I wouldn't dare to ask someone else for advice because it would be an admission of how weak and insecure I feel about expressing myself in the first place, which would ramp up my anxiety by making it a 'big thing'.
Anyway, I can understand the animosity people show against AI. And I'm happy for those who don't need or want it.
PS: This reply was 100% written without any use of AI, directly or indirectly. I did spend a good half hour on it before feeling confident enough to hit "Post" :)
This is pretty much how I use it as well!! I wasn't very detailed in the OP.
And yes, the positive feedback is gold!
my resume is 90% chatGPT... the information is true, but i could never write in that style. it got me two jobs, so i know it works.
i used it a couple of times to rewrite stuff given a context. like i wrote the email but it came out in a vague passive-aggressive tone, and letting chatGPT rewrite it reworded it to be more appropriate for the context.
Solution: Write everything in a passive aggressive tone to vent out your frustrations, let the bots do the cleanup.
New problem: Get used to speaking in a passive aggressive tone. Oh shit.
I use it whenever I need to write in Corporate Speak. Resume, cover letter, important email.
I also avoid putting in sensitive information, so it needs editing. I found that usually it will leave me places that need specific information, (name here) for example.
It is soooo much better than smashing out some sloppy attempts and rewording it until I get the style right.
Zero. It's important to me to be personal in my interactions.
I try not to. With work email, you should keep it as short and to the point as possible; no one really has time to read an essay when they're trying to get their job done.
Part of the reason I use Lemmy is for writing practice, because I want to prove that, as a person, I can't be replaced by an AI. This place basically forces me to think on my feet to write quickly on an ever-changing set of random topics and get my point across clearly and effectively.
I am against this. You are using chatbots to avoid your problem, and I don't think that's healthy.
Showing ChatGPT how to respond to my messages sounds like more work than just replying to them myself.
I don't even use predictive text.
Never.
I mostly just use it for laughs. I'll usually ask GPT to explain things from the nihilistic viewpoint and get amazing results.
I also use it to rewrite emails that I need to send for work. I have a tendency to over-explain things and use a cold tone when I write, so sometimes I'll tell it "rewrite this to be more concise and empathetic" and it does a really good job of cleaning it up.
Maybe what you're doing with artificial intelligence isn't exactly a good idea.
Never. I have no issue formulating replies; I just tend to not reply immediately and then forget.
When I text people I don't know well, I use Goblin Tools, which uses ChatGPT to "translate" how I speak into neurotypical speak, which generally keeps them from hating me for how I come across without all the added fillers. It's also great for professional emails and texts because it makes me look a lot "smarter" thanks to all the buzzwords and phrases it adds for me.
I have a problem with writing out my thoughts in a concise way that flows well. I can't think of the correct word. I think it starts with "C". So I use ChatGPT like so:
- I write my thoughts out as a stream of consciousness.
- I tell ChatGPT what I am trying to communicate.
- I paste the stream of consciousness.
- It assembles it as a reply formatted as a message or email.
- I read over it to ensure it got everything correct and worded everything the correct way.
- I tell it what I want changed or explain why I don't like a certain part, and it adjusts as needed.
Then I edit the output as I need. I don't always do the editing and sometimes just send the output; it depends on how I feel and how well it does. I am thinking I am just going to start appending a default "Due to my brain injury, ChatGPT may have assisted me in composing this message" in my email signature, with a link to a screenshot of my process on imgur or something.
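If you wanted to script that same loop instead of pasting into the chat window, a rough sketch with the OpenAI Python SDK might look something like this (purely illustrative; the model name and prompt wording are placeholders, and my actual workflow is just the regular ChatGPT interface as described above):

```python
# Illustrative sketch only: turn rough stream-of-consciousness notes into a
# tidied message. Model name and prompts are placeholder assumptions, not a
# recommendation.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def draft_reply(intent: str, notes: str) -> str:
    """Ask the model to assemble a message from rough notes."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": ("Rewrite the user's rough notes as a clear, friendly "
                         "message. Keep every fact; do not invent details.")},
            {"role": "user",
             "content": f"What I'm trying to communicate: {intent}\n\n"
                        f"My rough notes:\n{notes}"},
        ],
    )
    return response.choices[0].message.content

# I'd still read it over and ask for changes before sending, as described above.
print(draft_reply(
    intent="Tell my landlord the kitchen tap is leaking and ask for a repair.",
    notes="tap dripping since tuesday, tried tightening, getting worse, "
          "home thursday afternoon if a plumber can come",
))
```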
I look at it like a psychologist or speech pathologist helping me write/assemble a letter. It's awesome.
And I can usually tell immediately when something has been written by ChatGPT lol. Unless they've gone through and edited the whole thing.
If you can't genuinely talk with me without the need for an llm then I'd say we weren't really friends to begin with.
never