this post was submitted on 21 Jan 2024
ADHD
you are viewing a single comment's thread
I'm using my own offline, open source AI. It isn't reliable as a primary source, but it usually helps me work past the dead ends I hit on my own. I wouldn't try to use it cold for something you need right away, but if you have a summer off or some downtime, I would start exploring.

You need to understand how to communicate directly with a given model. They are all a little different in their nuances, and they all require much more open and direct communication than a typical human conversation. You also need to understand the AI alignment problem, so that you are better equipped to spot when the model is hallucinating or going off the rails. For general assistant-type tasks, you need the largest models you can possibly run.

A model can't give you specific details accurately, but it can explain complex conceptual ideas in unique ways tailored to you specifically. For instance, in computer science, all the processes running on your computer are given CPU time by a scheduler algorithm. Asking an AI to write a scheduler, or to tell you when certain changes were made to the scheduler in the Linux kernel, is going to generate bad results, but asking it to help you understand the difference between fair scheduling and real-time scheduling will likely generate good results (there's a small sketch of that below). Even these "good" results are not primary-source quality and should not be trusted directly. Still, when I am struggling to understand something like the scheduler, I can ask specific questions about whatever I'm stuck on and usually figure out the details I need in the process. It's like having a really smart friend to talk things out with, even though they are not an expert in what you're studying.
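To make that scheduler example concrete, here's the kind of small illustration I end up with after talking a concept through. This is a minimal sketch of my own (not model output), using Python's os module on Linux: SCHED_OTHER is the kernel's default fair policy (CFS), while SCHED_FIFO is a real-time policy where a higher-priority task always preempts a lower-priority one.

```python
# Fair vs. real-time scheduling on Linux, via Python's os module.
import os

PID = 0  # 0 means "the calling process"

print("current policy:", os.sched_getscheduler(PID))

# Fair scheduling: the default policy; the priority field must be 0.
os.sched_setscheduler(PID, os.SCHED_OTHER, os.sched_param(0))

# Real-time scheduling: fixed priorities 1-99; needs root / CAP_SYS_NICE.
try:
    os.sched_setscheduler(PID, os.SCHED_FIFO, os.sched_param(10))
    print("switched to SCHED_FIFO at priority 10")
except PermissionError:
    print("real-time policies require elevated privileges")
```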
I always feel this need to make intuitive connections in order to learn, and AI helps me do that. I'm running quantized versions of the Mixtral 8x7B and Llama 2 70B models from Hugging Face, entirely offline, with Oobabooga's Text Generation WebUI on a 12th-gen Intel i7 with 64GB of system memory and a 16GB GPU. It takes every bit of that hardware to run models this large offline.
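If you skip the web UI, this is roughly what loading one of those quantized models looks like with llama-cpp-python. It's a minimal sketch, not my exact setup: the GGUF filename and the number of layers offloaded to the 16GB GPU are placeholders you'd tune for your own quantization and hardware.

```python
# Minimal sketch: loading a quantized Mixtral 8x7B GGUF offline with
# llama-cpp-python. The model path and n_gpu_layers are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="mixtral-8x7b-instruct.Q4_K_M.gguf",  # quantized weights (hypothetical filename)
    n_gpu_layers=18,  # offload whatever fits in 16GB of VRAM
    n_ctx=4096,       # context window
)

out = llm(
    "Explain, conceptually, how fair scheduling differs from "
    "real-time scheduling in the Linux kernel.",
    max_tokens=400,
)
print(out["choices"][0]["text"])
```

The web UI wraps a loader like this one (among several others) behind a browser interface, so the knobs are the same even if you never touch code.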