this post was submitted on 15 Feb 2024

Autism

It’s called Pi and it’s a conversational AI made to be more of a personal assistant. In the bit of time I’ve used it, it’s done far better than I expected at reframing and simplifying my thoughts when I’m overwhelmed.

Obviously, talking to a real person is much better if possible, but the reality is some of us don’t have the finances to pay for therapy or other ways to cope with the anxiety/depression that so often comes with ASD. What are your thoughts on this?

top 11 comments
[–] [email protected] 19 points 4 months ago (1 children)

The idea is great. I'm not in a position to check it rn, but if it's not self-hosted, it's probably a huge security risk.

[–] [email protected] 9 points 4 months ago (3 children)

That’s a huge concern for me too. They do explicitly state in the Privacy Policy that your data will never be sold or shared with third parties for advertising purposes, but that only means so much. It would be nice to see a full list of the exact companies/services they use behind the scenes. Regardless, I really look forward to the day I can self-host something this powerful myself.

[–] [email protected] 7 points 4 months ago (1 children)

That might be true at this moment (i.e., they haven't had a data breach yet, or decided that selling your data is more profitable).

I would absolutely prefer something self-hosted. If it's small, it can run on a Pi. If it needs GPU power, one could host it for their friend group or family and recoup the cost and effort that way.

But I honestly don't think running an already-trained conversational AI (for one person) should be that demanding. We'd need an ML specialist to confirm that, though; some really know what they're talking about.

[–] [email protected] 4 points 4 months ago (3 children)

Agreed. I’ve dabbled in it some, but I’m no expert; maybe someone else could chime in. I just haven’t found anything that works quite as well as Pi yet, and it was really intriguing to say the least. You can even talk to it verbally, back and forth like a phone call.

[–] TheBluePillock 3 points 4 months ago

I would love to be corrected, but when I looked into it, it sounded like you'd probably want 32 GB of VRAM or better for actual chat ability. You have to have enough memory to load the model, and anything not handled by your GPU takes a major performance hit. Then, you probably want to aim for a 72-billion-parameter model. That's a decently conversational level and maybe close to the one you're using (but it's possible theirs is bigger? I'm just guessing). I think 34B models are comparatively more prone to hallucination and inaccuracy. It sounded like 32 GB of VRAM was kind of the entry point for the 72B models, so I stopped looking, because I can't afford that.

So somebody with more experience or knowledge can hopefully correct me or give a better explanation, but just in case, maybe this is a helpful starting point for someone.

You can download models on huggingface.co and interact with them through a web-ui like this one.
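To sanity-check the numbers above, here's a rough back-of-the-envelope estimate of how much memory a model's weights need. The 20% overhead factor for activations and KV cache is a loose assumption, not a measured figure:

```python
def model_memory_gb(n_params: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough memory needed to hold a model's weights, with ~20%
    extra assumed for activations and KV cache."""
    weight_bytes = n_params * bits_per_weight / 8
    return round(weight_bytes * overhead / 1e9, 1)

# A 72B-parameter model at full 16-bit precision:
print(model_memory_gb(72e9, 16))  # ~172.8 GB, far beyond consumer GPUs

# The same model quantized to 4 bits per weight:
print(model_memory_gb(72e9, 4))   # ~43.2 GB, roughly the 32+ GB ballpark
```

This is why quantized models are the usual route for self-hosting: cutting each weight from 16 bits to 4 shrinks the footprint by 4x, at some cost in quality.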

[–] [email protected] 2 points 4 months ago

That's pretty awesome. I only know of Mycroft, which is an assistant like Siri and also only partially self-hosted. I haven't had the patience to dabble with this yet. My forte is Fediverse instances, a RasPi smart TV, and home automation. I have used AI for image recognition in Nextcloud (both self-hosted, obviously), but that's it.

[–] [email protected] 2 points 4 months ago

I am no expert either, but I once trained and ran an AI chat bot of my own. With a decently powerful Nvidia GPU it could output a message every 20-ish seconds (which is still too slow if you want to keep a conversation at a decent pace). I also tried it without a GPU, just running on my CPU (on a PC with an AMD GPU, which is about the same as not having one for ML applications), and it was of course noticeably slower: about 3 minutes per message, give or take.

And bear in mind, this was with an old and comparatively tiny model; something like Pi would be much more demanding. Even so, the replies my model produced hardly made any sense most of the time.

[–] [email protected] 1 points 4 months ago

Not sure what model it's running or how powerful your computer is, but it's pretty easy to run an LLM yourself, even without a great video card. I'm currently running a Docker image called Serge, which lets you download and run several models in a web UI. There are plenty that will run reasonably well on an average computer. There are some other options as well, but that's the one I've found to be the easiest.

If you can figure out Docker, I think they even have a Docker Compose file you can just run.
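For what it's worth, many self-hosted LLM servers expose an OpenAI-style chat API, so talking to one from a script is simple. A minimal sketch of building such a request; the endpoint, port, and model name here are placeholder assumptions, not Serge specifics:

```python
import json

# Hypothetical local endpoint; the port and path depend on the server you run.
LOCAL_API = "http://localhost:8080/v1/chat/completions"

def build_chat_request(user_message: str, model: str = "local-model") -> str:
    """Build the JSON body for an OpenAI-style chat request, a shape
    many self-hosted servers accept."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return json.dumps(payload)

body = build_chat_request("Help me reframe a stressful thought.")
# POST `body` to LOCAL_API with any HTTP client to get a completion back.
```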

[–] Falcon 1 points 4 months ago (1 children)

Let’s put it this way, I’d be surprised if they didn’t have a backup of each single one of your messages.

[–] [email protected] 1 points 4 months ago

Thankfully you don’t need an account to use it. It does look like they store anonymized messages to “make the service better,” according to their terms. Honestly, I don’t know how you’d improve a service like that without some sense of how your users are actually using it.

[–] [email protected] 4 points 4 months ago* (last edited 4 months ago)

Like others have said, I am a bit concerned by the privacy implications, but I like how nice the model is. Definitely wouldn't consider it an alternative to therapy or even real conversations (the sentences it generates look very fake at times, plus I wouldn't want to trust a machine's advice anyway), but it's pleasant to talk to, and that's probably all that matters.

Great if you have some time to kill or maybe need something to take your mind off whatever you've been thinking too much about; just gotta be careful about what you reveal to it.

EDIT: as I was chatting I got a prompt asking me to log in to continue the conversation. I logged in with my Google account, kept going for a bit, then signed off and closed the conversation. After I told it I was going to close the tab, it greeted me with my real name without me ever having mentioned it in the convo (of course, it took it from Google), but it still weirded the heck out of me. Be careful with what you share lol.