this post was submitted on 15 Feb 2024
29 points (87.2% liked)
Autism
you are viewing a single comment's thread
I'm no expert either, but I once trained and ran an AI chatbot of my own. With a decently powerful Nvidia GPU it could output a message every 20 seconds or so (which is still too slow if you want to keep the conversation at a decent pace). I also tried it without a GPU, just running on my CPU (on a PC with an AMD GPU, which for ML purposes is about the same as not having one), and it was of course noticeably slower: about 3 minutes per message, give or take.
And bear in mind, this was with an old and comparatively tiny model; the replies it produced hardly made any sense most of the time. Something like Pi would be much more demanding.
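If anyone's curious what that kind of GPU-vs-CPU comparison looks like in practice, here's a rough sketch (not my original setup, just an illustration using Hugging Face transformers; the model name and prompt are placeholders, with GPT-2 standing in for a small, older model):

```python
# Sketch: time local text generation on GPU vs CPU.
# Assumes the `transformers` and `torch` packages are installed.
import time
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder for a small, older chat model
tokenizer = AutoTokenizer.from_pretrained(model_name)
prompt = "Hello, how are you today?"

# Try CUDA first if available, then CPU, and compare generation times.
devices = (["cuda"] if torch.cuda.is_available() else []) + ["cpu"]
for device in devices:
    model = AutoModelForCausalLM.from_pretrained(model_name).to(device)
    inputs = tokenizer(prompt, return_tensors="pt").to(device)

    start = time.time()
    output = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
    elapsed = time.time() - start

    print(f"{device}: {elapsed:.1f}s for one reply")
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Even a toy comparison like this usually shows the GPU finishing a reply many times faster than the CPU, which matches my experience of seconds vs minutes per message.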