trollbearpig

joined 1 year ago
[–] trollbearpig 10 points 6 months ago* (last edited 6 months ago) (8 children)

Hahaha, I don't know why people are so shocked. I'm sure we will see something useful come out of AI any day now, just like with crypto hahaha.

In the meantime, it's obvious these companies are using AI as an excuse to bypass laws and regulations, and people are cheering them on ... They are bypassing copyright laws (in a direct attack on open source) with their autocomplete bots, but we shouldn't worry, it's not copyright infringement because the LLMs are smart (right?), so that makes it okay ... They are using this to steal the work of real artists through image generation bots, but people love this for some reason. And they are using this to bypass the few privacy laws in place now, like Facebook/Meta could ever have another incentive.

Maybe I'm an extremist, but if the only useful thing we are getting from this is mediocre code autocomplete that works sometimes, I think the price we are paying is way too high. Too bad I'm in the minority here.

[–] trollbearpig 3 points 6 months ago* (last edited 6 months ago)

Not that you are wrong, but I think we should keep using Taylor Swift as the face of this because:

  1. She is the worst offender in this case, even if not the only one.
  2. She is on the "left" (what passes for left in the US; a leftist billionaire is obviously a contradiction). So this is a clear signal from us that this is not about us vs. them. This is an issue even when it's done by someone on our "side" (like Taylor Swift is on our side lol, but to MAGAs and similar extremists she is).
  3. At the end of the day, any measure stopping Taylor Swift from polluting with her stupid jet will also help us stop all the other assholes.
  4. We don't owe shit to Taylor Swift or any other celebrity, fuck her. We can talk after she stops being a deca-millionaire; in the meantime, fuck her lol.

So get mad at her, use her bad image on this issue to push for change, and seriously, fuck her almost as hard as any other rich asshole. The fact that she is slightly better than the people pushing for a return to feudalism doesn't make her a good person lol.

[–] trollbearpig 5 points 6 months ago

You are not wrong, but I think you are missing the point a bit. What everyone in the thread is saying is that he got no pushback from conservatives. Instead they are defending him like they are in a cult lol. So the fact that normal people like us do attack him, like he deserves, is beside the point of the conversation.

I'm assuming you are just confused here lol, or maybe you are trolling to derail the conversation. That's why people are downvoting you; sadly, a lot of people who argue in bad faith sound like you.

[–] trollbearpig 8 points 6 months ago

Nah man, they don't freeze the model because they think we will ruin it with our racism hahaha, that's just their PR bullshit. They freeze them because they don't know how to make the thing learn in real time like a human. We only know how to use backpropagation to train them. And this is expected, we haven't solved the hard problem of the mind no matter what these companies say.

Don't get me wrong, backpropagation is an amazing algorithm and the results for autocomplete are honestly better than I expected (though remember that a lot of this is just underpaid workers in Africa who pick good training data). But our current understanding of how humans learn points to neuroplasticity as the main mechanism. And then here come all these AI grifters/companies saying that somehow backpropagation produces the same results. And I haven't seen a single decent argument for this.
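To show what I mean by "frozen" (this is just my own toy sketch with a one-layer logistic model, obviously nothing like the scale of a real LLM): backpropagation only adjusts the weights during a separate training phase. Once training stops, inference is a pure forward pass and nothing in the model ever changes again.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # simple linearly separable task

w = np.zeros(2)
b = 0.0
lr = 0.5

def forward(X, w, b):
    # Sigmoid output: probability that the label is 1.
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))

# Training phase: gradients flow backwards, weights change.
for _ in range(200):
    p = forward(X, w, b)
    grad_w = X.T @ (p - y) / len(y)  # gradient of mean cross-entropy loss
    grad_b = np.mean(p - y)
    w -= lr * grad_w
    b -= lr * grad_b

# "Frozen" phase: deployment is just the forward pass; w and b never move,
# no matter how many inputs the model sees.
w_before = w.copy()
preds = (forward(X, w, b) > 0.5).astype(float)
assert np.array_equal(w, w_before)  # inference updated nothing
```

A human brain has no split like this: it keeps rewiring while it is being used, which is the whole neuroplasticity point.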

[–] trollbearpig 15 points 6 months ago* (last edited 6 months ago) (2 children)

Sorry, but no man. Or rather, what evidence do you have that LLMs are anything like a human brain? Just because we call them neural networks doesn't mean they are networks of neurons ... You are falling for the same fallacy as the people who argue that the Nazis were socialists, or as someone claiming that North Korea is a democratic country.

Perceptrons are not neurons. Activation functions are not the same as the action potential of real neurons. LLMs don't have anything resembling neuroplasticity. And it shows: the only way to have a conversation with an LLM is to feed it the full conversation as context, because the things don't have anything resembling memory.
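This is easy to see in how every chat frontend actually works (a sketch of my own with a fake stand-in model, not any real API): the model call itself is stateless, so the wrapper has to concatenate the entire history into every single call. Drop the history and the "memory" vanishes.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model: it can only "know" what is in the prompt.
    if "What is my name?" in prompt:
        if "my name is Alice" in prompt:
            return "Your name is Alice."
        return "You never told me your name."
    return "Okay."

history = []

def chat(user_msg: str) -> str:
    # Every turn resends the WHOLE conversation; this is the only "memory".
    history.append(f"User: {user_msg}")
    reply = fake_llm("\n".join(history))
    history.append(f"Assistant: {reply}")
    return reply

chat("Hi, my name is Alice.")
full = chat("What is my name?")       # works only because history is resent
alone = fake_llm("What is my name?")  # bare stateless call: the name is gone
```

Nothing in the model changed between the two calls; all the apparent memory lives in the prompt the wrapper rebuilds each turn.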

As I said in another comment, you can always say "you can't prove LLMs don't think". And sure, I can't prove a negative. But come on man, you are the ones making wild claims like "LLMs are just like brains", you are the ones that need to provide proof of such wild claims. And the fact that this is complex technology is not an argument.

[–] trollbearpig 21 points 6 months ago* (last edited 6 months ago) (4 children)

Man, I hate these semantic arguments hahaha. I mean yeah, if we define AI as anything remotely intelligent done by a computer, then sure, it's AI. But then so is an if statement in code. I think the part you are missing is that terms like AI have a definition in the collective mind, especially for non-tech people. And companies are using them on purpose to confuse people (just like Tesla's "self driving", funnily enough hahaha).

These companies are now trying to say to the rest of society "no, it's not us who are lying. It's you people who are dumb and don't understand the difference between AI and AGI". But they don't get to redefine what words mean to the rest of us just to suit their marketing campaigns. Plus they are clearly doing this to imply that their dumb AIs will someday become AGIs, which is nonsense.

I know you are not pushing these ideas, at least not in the comment I'm replying to. But you are helping these corporations push their agenda, even if by accident, every time you fall into these semantic games hahaha. I mean, ask yourself: what did the person you answered gain from being told that? Do they understand "AIs" better or anything like that? Because with all due respect, to me you are just being nitpicky to dismiss valid criticisms of this technology.

[–] trollbearpig 1 points 6 months ago (2 children)

I'm pretty sure you are just trolling. But if you really want to learn about the topic, go read what fair use is and isn't, or ask a lawyer. Fair use is much, much more limited than you people think it is. Even memes and gameplay videos fall short of fair use most of the time; it's just that everyone looks the other way. This shows that copyright laws are a mess hahaha, but that's another topic.

[–] trollbearpig 1 points 6 months ago (5 children)

Lol, ok dude. Then they are rampant copyright infringement machines dude ... Nice argument lol

[–] trollbearpig 0 points 6 months ago* (last edited 6 months ago) (8 children)

Look man. If I go and read the Linux kernel code (for example), and then go and program my own closed source kernel (assuming I was good enough for that lol), and then my kernel becomes popular (it's ok to dream, right?), then any lawyer worth their salary will sue me, because my beautiful kernel is not a clean room implementation. In practice it's almost impossible to prove, unless I go and tell everyone I was reading the Linux kernel hahaha. But for LLMs there is nothing to prove, they did "read" the code (or rather are indexing the code ...). So yes dude, this would be plagiarism for a human too.

[–] trollbearpig 11 points 6 months ago* (last edited 6 months ago)

First of all man, chill lol. Second of all, nice projection there. I'm saying that the "AIs" are overhyped, and that they are being used to justify rampant plagiarism by Microsoft (OpenAI), Google, Meta and the like. This is not the same as saying the technology is useless, though honestly I only use LLMs for autocomplete when coding, and even then it's meh.

And third, dude, what makes you think we have to prove to you that AI is dumb? Way to shift the burden of proof lol. You are the ones saying that LLMs, which look nothing like a human brain at all, are somehow another way to solve the hard problem of mind hahahaha. Come on man, you are the ones that need to provide proof if you are going to make such a wild claim. Your entire post is "you can't prove that LLMs don't think". And yeah, I can't prove a negative. Doesn't mean you are right though.

[–] trollbearpig 20 points 6 months ago* (last edited 6 months ago) (16 children)

Come on man. This is exactly what we have been saying all this time. These "AIs" are not creating novel text or ideas. They are just regurgitating the text they were fed back in similar contexts. It's just that they don't repeat things verbatim, because they use statistics to predict the next word. And guess what, that's plagiarism by any real world standard you pick, no matter what the tech scammers keep saying. The fact that the laws haven't caught up doesn't change the reality of the mass plagiarism we are seeing ...
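The "statistics to predict the next word" core is easy to demo with a toy bigram model (my own sketch; real LLMs are vastly bigger transformers, but the principle is the same): it only ever emits words in orders it saw during training, recombined by frequency, which is exactly why the output is remixed training data rather than verbatim quotes.

```python
from collections import Counter, defaultdict

training_text = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count which word follows which in the training text.
counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    # Pick the statistically most likely follower seen in training.
    return counts[word].most_common(1)[0][0]

# Generate by repeatedly predicting the most likely next word.
out = ["the"]
for _ in range(4):
    out.append(predict_next(out[-1]))
# "the cat sat on the" -- every transition comes straight from the data
```

Every word choice here is a lookup into co-occurrence statistics of the training text; there is no "idea" anywhere, just which word tends to follow which.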

And people like you keep insisting that the "AIs" are stealing ideas, not verbatim copies of the words, like that makes it okay. Except LLMs have no concept of ideas, and you people keep repeating that they do even when shown evidence, like this post, that they don't think. And even if they did, repeat after me: this would still be plagiarism if a human did it. Stop excusing the big tech companies, man.

[–] trollbearpig 31 points 6 months ago* (last edited 6 months ago)

Man, can you people stop with this BS? One thing would be if the photos were recovered from local storage using some "forensics" to retrieve files from the free sectors of the hard drive. But that's not what is going on here. These photos are coming back from iCloud after Apple pinky swore they had deleted them. The conclusion here is that Apple is keeping your photos (and most likely all your data) in their cloud even after you explicitly tell them to delete them.

I can't believe people with "tech literacy" are just repeating Apple's excuse like gospel and then lecturing other people ...
