lily33

joined 1 year ago
[–] lily33 3 points 1 year ago

Whether a specific reason for defederating is a good idea depends on the instance IMO. I don't think a "general purpose" instance should defederate on ideological grounds.

That said, they should defederate instances whose members are too disruptive. But right now we only speculate how hexbear members will behave. We will only know once they actually federate.

1
submitted 1 year ago* (last edited 1 year ago) by lily33 to c/[email protected]
 

I'm trying to subscribe to this instance from kbin.social, but it seems it can't federate. Other people have reported it won't federate with some Lemmy instances either.

PS. I don't think kbin.social has defederated this instance, though I don't know if there's a way to check that.

[–] lily33 -4 points 1 year ago (2 children)

If you give me several paragraphs instead of a single sentence, do you still think it's impossible to tell?

[–] lily33 5 points 1 year ago* (last edited 1 year ago)

It's not its biological origin that makes the brain hard to understand, but its complexity. For example, we understand how the heart works pretty well.

While LLMs are nowhere near as complex as a brain, they're complex enough to be extremely difficult to understand.

But then there comes the question: if they're so difficult to understand, how did people make them in the first place?

The way they did it actually bears some similarities to evolution. They created an "empty" model - a large neural network that wasn't doing anything useful or meaningful. But it depended on billions of parameters, and if you tweak a parameter, its behavior changes slightly.

Then they expended an enormous amount of computing power tweaking parameters, each tweak slightly improving its ability to model language. While doing this, they didn't know what each number meant. They didn't know how or why each tweak was improving the model - just that each tweak was making an improvement.

Unlike evolution, each tweak isn't random. There's an algorithm called back-propagation that can tell you how to tweak the neural network to make it predict some known data slightly better. But unfortunately it doesn't tell you why that tweak is good, or what each parameter change means. That's why we don't understand how LLMs work.
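To make the "tweaking" concrete, here's a toy sketch (my own made-up, one-parameter example - nothing like a real LLM or real back-propagation code): each step measures the prediction error and nudges the parameter in the direction that reduces it, without the update ever saying what the parameter "means".

```python
import random

# Toy "model": predict the next number as w * previous number.
w = random.random()                          # start with a meaningless parameter
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up training pairs

learning_rate = 0.01
for step in range(1000):
    x, target = random.choice(data)
    error = w * x - target
    gradient = error * x                 # what back-propagation computes here
    w -= learning_rate * gradient        # the "tweak": error shrinks slightly

print(w)  # ends up near 2.0, found purely by accumulated blind tweaks
```

A real LLM does essentially this, except with billions of parameters updated at once, which is why no single tweak can be read as an explanation.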

One final clarification: it's not a complete black box. We do have some understanding of how LLMs work, mostly at a high level - kind of like we have some basic understanding of how a brain works. We understand LLMs much better than brains, of course.

[–] lily33 5 points 1 year ago* (last edited 1 year ago)

It's not that nobody took the time to understand. Researchers have been trying to "un-blackbox" neural networks pretty much since those have been around. It's just an extremely complex problem.

Logistic regression (which is like a neural network but with just one node) is pretty well understood - but even then it can sometimes learn pretty unintuitive coefficients, and it can be tricky to understand why.
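As a minimal sketch of what that looks like (made-up data, using scikit-learn - not any specific study): when two features are strongly correlated, the model can split the weight between them almost arbitrarily, so the individual coefficients look strange even though the predictions are fine.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
x1 = rng.normal(size=500)
x2 = x1 + rng.normal(scale=0.1, size=500)     # nearly a copy of x1
y = (x1 + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(np.column_stack([x1, x2]), y)
print(model.coef_)  # the two coefficients are the model's whole "explanation",
                    # and how the weight splits between x1 and x2 is hard to interpret
```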

With LLMs - which are enormous by comparison - understanding how they work in detail simply isn't a tractable problem.

[–] lily33 1 points 1 year ago* (last edited 1 year ago) (6 children)

I don't see how that affects my point.

  • Today's AI detector can't tell apart the output of today's LLM.
  • Future AI detector WILL be able to tell apart the output of today's LLM.
  • Of course, future AI detector won't be able to tell apart the output of future LLM.

So at any point in time, only recent text could be "contaminated". The claim that "all text after 2023 is forever contaminated" just isn't true. Researchers would simply have to be a bit more careful about including it.

[–] lily33 6 points 1 year ago (11 children)

Not really. If it's truly impossible to tell the text apart, then it doesn't really pose a problem for training AI. Otherwise, next-gen AI will be able to tell apart text generated by current-gen AI, and it will get filtered out. So only the most recent data will have unfiltered shitty AI-generated stuff, but they don't train AI on super-recent text anyway.

[–] lily33 0 points 1 year ago

They don't redistribute. They learn information about the material they've been trained on - not the material itself* - and can use it to generate material they've never seen.

  • Bigger models seem to memorize some of the material and can infringe, but that's not really the goal.
[–] lily33 4 points 1 year ago* (last edited 1 year ago)

Language models actually do learn things, in the sense that the information encoded in the trained model isn't usually* taken directly from the training data; instead, it's information that describes the training data, but is new. That's why it can generate text that's never appeared in the data.

  • the bigger models seem to remember some of the data and can reproduce it verbatim; but that's not really the goal.
[–] lily33 2 points 1 year ago* (last edited 1 year ago) (2 children)

It's specifically distribution of the work or derivatives that copyright prevents.

So you could make an argument that an LLM that's memorized the book and can reproduce (parts of) it upon request is infringing. But one that's merely trained on the book, but hasn't memorized it, should be fine.

[–] lily33 1 points 1 year ago* (last edited 1 year ago)

It's actually a real problem on Reddit, where people spin up fake users to manipulate votes. Reddit hasn't published exactly how they detect that, but one way is to look for bad voting patterns, like one account systematically upvoting or downvoting another. But you pretty much can't do that without knowing the votes.
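Just to illustrate the kind of check that's possible when you do know the votes (hypothetical data and my own rough sketch, not any site's actual detection method):

```python
from collections import Counter

# each vote: (voting_account, author_of_the_voted_post)
votes = [("sock_puppet", "main_account")] * 40 + [
    ("random_user", "main_account"),
    ("random_user", "someone_else"),
]

pair_counts = Counter(votes)
total_by_voter = Counter(voter for voter, _ in votes)

for (voter, author), n in pair_counts.items():
    share = n / total_by_voter[voter]
    if n >= 20 and share > 0.9:          # arbitrary thresholds, for illustration
        print(f"{voter} cast {share:.0%} of their votes on {author}'s posts")
```

Whether anyone can run a check like this depends entirely on who can see the votes, which was the point above.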

[–] lily33 3 points 1 year ago (2 children)

True - but it'll be much easier to detect.

25
submitted 1 year ago* (last edited 1 year ago) by lily33 to c/[email protected]
 

I'm looking for an open-source alternative to ChatGPT which is community-driven. I have seen some open-source large language models, but they're usually still made by some organizations and published after the fact. Instead, I'm looking for one where anyone can participate: discuss ideas on how to improve the model, write code, or donate computational resources to build it. Is there such a project?
