this post was submitted on 01 Oct 2024
81 points (80.5% liked)

Asklemmy

[–] hoshikarakitaridia 1 points 4 weeks ago* (last edited 4 weeks ago) (1 children)

This might be a wild take, but people always make AI out to be way more primitive than it is.

Yes, in its most basic form an LLM can be described as an auto-complete for conversations. But let's be real: the amount of optimization and adjustment applied before and after the fact is pretty complex, and the way the AI works is already pretty close to a brain. Hell, that's where we started out: emulating a brain. And you can look into this; the basis for AI is usually neural networks, which learn to give specific parts of an input a specific amount of weight when generating the output. And when the output is not what we want, the network slowly adjusts those weights to get closer.
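The weight-adjustment idea above can be sketched in a few lines. This is a toy single-neuron example (the names `predict` and `train_step` are made up for illustration), not how any production LLM is actually trained, but the principle of nudging weights toward a target is the same one, scaled up billions of times:

```python
def predict(weights, inputs):
    # Weighted sum: each part of the input counts more or less
    # depending on its weight.
    return sum(w * x for w, x in zip(weights, inputs))

def train_step(weights, inputs, target, lr=0.1):
    # When the output misses the target, nudge every weight a little
    # in the direction that shrinks the error.
    error = target - predict(weights, inputs)
    return [w + lr * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(100):
    weights = train_step(weights, [1.0, 2.0], target=1.0)

# After repeated small adjustments, the output converges toward 1.0.
print(round(predict(weights, [1.0, 2.0]), 3))
```

Real networks stack millions of these units and use calculus (backpropagation) to decide which direction each nudge should go, but the "adjust the weights until the output gets closer" loop is the core of it.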

Our brain works the same way in its most basic form. We use electric signals, and we think in associative patterns. When an electric signal enters one node, that node is connected via stronger or weaker bridges to other nodes, forming our associations. Those bridges are exactly what we emulate when we use nodes with weighted connections in artificial neural networks.

Quality-wise, our AI output is pretty good right now, but integrity- and security-wise it's pretty bad (hallucinations, not following prompts, etc.). Saying it performs at the level of a three-year-old is simultaneously underselling and overselling how AI performs. We should be aware that just because it's AI doesn't mean it's good, but it doesn't mean it's bad either. It just means there's a feature (which is hopefully optional), and then we can decide whether it's helpful or not.

I do music production and I need cover art. As a student, I can't afford to commission good artwork every now and then, so AI is the way to go, and it's been nailing it.

As a software developer, I've come to appreciate that after about two years of bad code-completion AIs, there's finally one that is a net positive for me.

AI is just like anything else: it's a tool that brings change. How that change manifests depends on us as a collective. Let's punish bad or dangerous AI (Copilot, Tesla self-driving, etc.), let's promote good AI (Gmail text completion, ChatGPT, code completion, image generators), and let's also realize that the best things we can get out of AI won't hit the ceiling of human work for a while. But if human work costs too much, or you need quick pointers, at least you know where to start.

[–] [email protected] 1 points 4 weeks ago

This shows so many gross misconceptions and with such utter conviction, I'm not even sure where to start. And as you seem to have decided you like to get free stuff that is the result of AI trained off the work of others without them receiving any compensation, nothing I say will likely change your opinion because you have an emotional stake in not acknowledging the problems of AI.