IncognitoErgoSum

joined 1 year ago
[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (3 children)

> You will never move a boat with nuclear,

I assume you haven't heard of aircraft carriers and nuclear submarines.

Also, electricity from nuclear plants can be stored in batteries and capacitors and then used to move electric vehicles (including boats, planes, and tractors), so I don't know what the hell you're even talking about.

> Eat less meat! How hard is it to compute! So turn off your stupid AI and eat less meat. Do it now, stop eating meat.

I've actually cut my meat consumption way down.

That being said, a person using AI consumes an absolutely minuscule amount of power compared to a person eating a steak. One steak (~20 kWh) is equivalent to about 67 hours of full-time AI usage (300 W for an Nvidia A100 at max capacity), and most of the time a person spends using an AI is spent idling while they type and read, so realistically it's a lot longer than that.
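For anyone who wants to check my math, here's the back-of-the-envelope calculation in Python (the ~20 kWh per steak and 300 W per GPU figures are rough assumed estimates, not measurements):

```python
# Rough sanity check: hours of max-load GPU time per steak.
# Assumed figures: ~20 kWh embodied energy per steak,
# ~300 W (0.3 kW) draw for one Nvidia A100 running flat out.
steak_kwh = 20.0
gpu_kw = 0.3
hours_per_steak = steak_kwh / gpu_kw
print(f"{hours_per_steak:.0f} hours of GPU time per steak")  # ~67 hours
```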

Again, your hypothetical data center smashers are going after AI because they hate AI, not because they care about the environment. There are better targets for ecoterrorism. Like my car's tires, internet tough guy.

[–] [email protected] 1 points 1 year ago (12 children)

I'm looking at it with a computer science degree and experience with AI programming libraries.

And yes, it's a machine that simulates neurons using math. We simulate physics with math all the way down to the quantum foam. I don't know what your point is. Whether it's simulated neurons or real neurons, it learns concepts, and concepts cannot be copyrighted.

I have a sneaking suspicion that, since you switched tactics from googling the wrong flowchart to accusing me of not caring about workers over a contract dispute that's completely unrelated to any of the copyright issues I'm talking about, you at least suspect that I know what I'm talking about.

Anyway, since you're arguing based on personal convenience and not fact, I can't really trust anything that you say anyway, because we're on entirely different wavelengths. You've already pretty much indicated that even if I were to convince you I'm right, you'd still go on doing exactly what you're doing, because you're on a crusade to save a small group of your peers from automation, and damn the rest of us.

Best of luck to you.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (14 children)

I get it, then.

It's more about the utilitarian goal of convincing the public of something because it's convenient for you if they believe it, in order to protect yourself and your immediate peers from automation, as opposed to actually seeking the truth and sticking with established legal precedent.

Legally, your class action lawsuit doesn't really have a leg to stand on, but you might manage to win anyway if you can depend on the ignorance of the judge and the jury about how AI actually works, and prejudice them against it. If you can get people to think of computer scientists and AI researchers as "tech bros" instead of scientists with PhDs, you might be able to get them to dismiss what they say as "hype" and "fairy tales".

[–] [email protected] 1 points 1 year ago (16 children)

But if it makes you happy, how about a license à la Creative Commons that allows an individual to train an AI on the copyrighted work for non-profit purposes, but restricts corporations from doing so with an AI used for profit, and treats any work created by that AI as non-copyrighted.

Honestly, I think keeping the output of AI non-copyrighted is probably the best of both worlds, because it allows individuals to use AI as an expressive tool (you keep separating "creatives" from "average people", which I take issue with) while making it impractical for large media companies to use.

At any rate, the reason copyright restrictions would just kill open source AI is that it strikes me as incredibly unlikely that you're going to be able to stop corporations from training AI on media they own outright. Disney has a massive library of media that it can use as training data, and no amount of stopping open source AI users from training AI on copyrighted works is going to prevent Disney from doing that (same goes for Warner Bros, etc.). Disney, which is known for exploiting its own workers, will almost certainly use that AI to replace its animators completely, and it'll be within its legal rights to do so, since it owns all the copyrights.

Now consider companies like Adobe, Artstation, and just about any other website that you can upload art to. When you sign up for those sites, you agree to their user agreement, which has standard boilerplate language that gives them a sublicensable right to use your work however they see fit (or "for business purposes", which means the same thing). In other words, if you've ever uploaded your work anywhere, you've already given someone else the legal right to train an AI on your work (even under a creative interpretation of copyright law that allows concepts and styles to be copyrighted), which means they're just going to build their own AI and then sell it back to you for a monthly fee.

But artists and writers should be compensated every time someone uses an AI trained on their work, right? Well, let's look at ChatGPT for a moment. I have open source code out there on GitHub, which was almost certainly included in ChatGPT's training data. Therefore, when someone uses ChatGPT for anything (since the training data doesn't go into a database; it just makes tiny, tiny little changes to neuron connection weights), they're using my copyrighted work, and thus they owe me a royalty. Who better to handle that royalty check than OpenAI? So now you get on there and use ChatGPT, making use of my work, and some of the "royalty fee" they're now charging goes to me. Similarly, ChatGPT has been trained on some of whatever text you've added to the internet (comments, writing, whatever, it doesn't matter), so when I use it, you get royalties. So far so good.

Now OpenAI charges us both, keeps a big commission, and we both pay them $50/month for the privilege of access to all that knowledge, and we both make $20/month because people are using it, for a net -$30/month. Who wins? OpenAI. With a compensation scheme, the big corporations win every time and the rest of us lose, because it costs money to do it, and open source can't do it at all. Better to skip the middleman and say: here's an AI that we all contributed to, and we all have access to it.
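To make those toy numbers concrete (all of these figures are made up for the sake of argument; they're not real OpenAI pricing):

```python
# Hypothetical royalty scheme: everyone pays a subscription, everyone
# earns royalties on their training contributions, and the middleman
# keeps the difference.
subscription_fee = 50.0   # $/month each of us pays (made-up number)
royalties_earned = 20.0   # $/month each of us receives (made-up number)
net_per_person = royalties_earned - subscription_fee
print(f"net per person: ${net_per_person:.0f}/month")  # $-30/month
```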

So again, what specifically is your plan to slow down deployment? Because onerous copyright restrictions aren't going to stop any of the people who need to be stopped, but they will absolutely stop the people competing with those people.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (18 children)

Lots to unpack here.

First of all, the physical process of human inspiration is that a human looks at something, their optic nerves fire, those impulses activate other neurons in the brain, and an idea forms. That's exactly how an AI takes "inspiration" from images. This stuff about free will and consciousness is metaphysics. There's no meaningful difference in the actual process.

Secondly, let's look at this:

> SAG-AFTRA just got a contract offer that says background performers would get their likeness scanned and have it belong to the studio FOREVER so that they can simply generate these performers through AI.

> This is what is happening RIGHT NOW. And you want to compare the output of an AI to a human's blood, sweat, and tears, and argue that copyright protections would HURT people rather than help them avoid exploitation.

I'll say right off that I don't appreciate the "you're a bad person" schtick. Switching to personal attacks stinks of desperation. Plus, your personal attack on me isn't even correct, because I don't approve of the situation you described any more than you do. The reason they're trying to slip that into those people's contracts is that those people already own their likenesses under existing copyright law. That is, you don't have to come up with a funny interpretation of copyright law where concepts can be copyrighted but only if a machine learns them. They need a license to use those people's likenesses regardless of whether they use an AI or Photoshop or just have a painter do it. Using AI doesn't get them out of that; if it did, they wouldn't need to try to put it into the contract.

In other words, they aren't using an AI to attack anyone; they're using a powerful bargaining position to try to get people to sign away an established right they already have according to copyright law. That has absolutely nothing to do with anything I'm talking about here, except that you want to attach it to what I'm talking about so you can have something to rage about.

And here's the thing. None of you people ever gave a shit when anybody else's job was automated away. Cashiers have had their work automated away recently and all I hear is "ThAt'S oKaY bEcAuSe tHeIr jOb sUcKs!!!!!!111" Artists have been actually violating the real copyright of other artists (NOT JUST LEARNING CONCEPTS) with fanart (which is a DERIVATIVE WORK OF A COPYRIGHTED CHARACTER) for god only knows how long and there's certainly never been a big outcry about that.

It sucks to be the ones looking down the business end of automation. I know that because, as a computer programmer, I am too. On the other hand, I can see past the end of my own nose, and I know how amazing it would be if lots of regular people suddenly had the ability to do the things that I do, so I'm not going to sit there and creatively interpret copyright law in an attempt to prevent that from happening. If you're worried about the effects of automation, you need to start thinking about things like universal healthcare and universal income, not just ESTABLISH SPECIAL PROTECTIONS FOR A TINY SUBSET OF PEOPLE WHOM YOU HAPPEN TO LIKE.

It just seems a bit convenient, and (dare I say) selfish, that the point in history where we need to start smashing the machines happens to be right now. Why not the printing press, or the cotton gin, or machines that build railroads, or looms, or robots in factories, or grocery store kiosks? The transition sucked for all those people as well. It's going to suck for artists, and it'll suck for me, but in the end we can pull through and be better off for it, rather than killing the technology in its infancy and calling everyone a monster who doesn't believe that you and you alone ought to have special privileges.

We need to be using the political clout we have to push us toward a workable post-scarcity economy, as opposed to trying to preserve a single, tiny bit of scarcity so a small group of people can continue to do something while everybody else is automated away and we all end up ruled by a bunch of rent-seeking corporations. Your gatekeeping of the ability of people to do art isn't going to prevent any of that.

P.S. We seem to be at the very beginning of a major climate disaster these last couple weeks, so we're probably all equally fucked anyway.

[–] [email protected] 2 points 1 year ago (5 children)

Oh boy! Link, please!

[–] [email protected] 4 points 1 year ago

> Upvoting your own material is a Michael Scott thing to do.

- PabloDiscobar

- Michael Scott

[–] [email protected] 3 points 1 year ago (21 children)

I don't believe that current AIs should have rights. They aren't conscious.

My point was purely that AIs learn concepts and that concepts aren't copyrightable. Encoding concepts into neurons (that is, learning) doesn't require consciousness.

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (7 children)

So to clarify, are you making the claim that nothing that's simulated with vector mathematics can have emergent properties? And that AIs like GPT and Stable Diffusion don't contain simulated neurons?

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago)

> If what you're going to give me is an oversimplified analogy that puts too much faith in what AI devs are trying to sell and not enough faith in what a human brain is doing, then don't bother because I will dismiss it as a fairy tale.

I'm curious, how do you feel about global warming? Do you pick and choose the scientists you listen to? You know that the people who develop these AIs are computer scientists and researchers, right?

If you're a global warming denier, at least you're consistent. But if out of one side of your mouth you're calling what AI researchers talk about a "fairy tale", and out of the other side of your mouth you're criticizing other people for ignoring science when it suits them, then maybe you need to take time for introspection.

You can stop reading here. The rest of this is for people who are actually curious, and you've clearly made up your mind. Until you've actually learned a bit about how they actually work, though, you have absolutely no business opining about how policies ought to apply to them, because your views are rooted in misconceptions.

In any case, curious folks, I'm sure there are fancy flowcharts around about how data flows through the human brain as well. The human brain is arranged in groups of neurons that feed back into each other, whereas an AI neural network is arranged in more ordered layers. Their structure isn't precisely the same. Notably, an AI (at least, as they are commonly structured right now) doesn't experience "time" per se, because once it's been trained, its neural connections don't change anymore. As it turns out, consciousness isn't necessary for learning and reasoning, contrary to what the parent comment seems to think.

Human brains and neural networks are similar in the way that I explained in my original comment: neither of them stores a database, neither of them does statistical analysis or takes averages, and both learn concepts by making modifications to their neural connections (a human does this all the time, whereas an AI does this only while it's being trained). The actual neural network in the diagram that OP googled and pasted in here lives in the "feed forward" boxes. That's where the actual reasoning and learning is being done. Since this particular diagram covers the entire system and not the layers of the feed-forward network itself, it's not even the right diagram to be comparing to the human brain (although again, the structures wouldn't match up exactly).
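If you're curious what actually lives inside one of those "feed forward" boxes, here's a minimal PyTorch sketch, assuming the common two-layer MLP variant (the layer sizes here are illustrative, not pulled from any particular model):

```python
import torch
import torch.nn as nn

class FeedForward(nn.Module):
    """One feed-forward block: two learned weight matrices with a
    nonlinearity between them. The learning lives in these weights."""
    def __init__(self, d_model=512, d_hidden=2048):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_model, d_hidden),  # weighted sums of the inputs
            nn.ReLU(),                     # nonlinearity
            nn.Linear(d_hidden, d_model),  # project back down
        )

    def forward(self, x):
        return self.net(x)

x = torch.randn(1, 512)        # one token's embedding vector
print(FeedForward()(x).shape)  # torch.Size([1, 512])
```

Note that there's no database lookup or averaging step anywhere in there; it's all learned connection weights.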

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (25 children)

I'm willing to, but if I take the time to do that, are you going to listen to my answer, or just dismiss everything I say and go back to thinking what you want to think?

Also, a couple of preliminary questions to help me explain things:

- What's your level of familiarity with the source material? How much experience do you have writing or modifying code that deals with neural networks?
- My own familiarity lies mostly with PyTorch. Do you use that or something else?
- If you don't have any direct familiarity with programming with neural networks, do you have enough familiarity to at least know what some of those boxes mean, or do I need to explain them all?

Most importantly, when I say that neural networks like GPT-* use artificial neurons, are you objecting to that statement?

I need to know what it is I'm explaining.

[–] [email protected] 3 points 1 year ago (36 children)

> Except an AI is not taking inspiration, it's compiling information to determine mathematical averages.

The AIs we're talking about are neural networks. They don't do statistics, they don't have databases, and they don't take mathematical averages. They simulate neurons, and their ability to learn concepts emerges from that, the same way the human brain's does. Nothing about an artificial neuron ever takes an average of anything, reads any database, or does any statistical calculations. If an artificial neural network can be said to be doing those things, then so is the human brain.
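Here's what a single artificial neuron actually computes, written out by hand in PyTorch (the numbers are arbitrary):

```python
import torch

# One artificial neuron: a weighted sum of its inputs plus a bias,
# passed through a nonlinearity. No database lookup, no averaging,
# no statistics; just this, repeated billions of times.
inputs = torch.tensor([0.2, -1.3, 0.7])    # activations from upstream neurons
weights = torch.tensor([0.5, 0.1, -0.9])   # learned connection strengths
bias = torch.tensor(0.05)                  # learned offset

activation = torch.relu(inputs @ weights + bias)
print(activation)  # tensor(0.), since the weighted sum here is negative
```

"Learning" is just nudging those weights, which is why the training data never ends up stored anywhere.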

There is nothing magical about how human neurons work. Researchers are already growing small networks out of animal neurons and using them the same way that we use artificial neural networks.

There are a lot of "how AI works" articles out there that put things in layman's terms (and use phrases like "statistical analysis" and "mathematical averages"), and unfortunately people (including many very smart people) extrapolate from the incorrect information in those articles and end up making bad assumptions about how AI actually works.

> A human being is paid for the work they do, an AI program's creator is paid for the work it did. And if that creator used copyrighted work, then he should be having to get permission to use it, because he's profiting off this AI program.

If an artist uses a copyrighted work on their mood board or as inspiration, then they should pay for that, because they're making a profit from that copyrighted work. Human beings should, as you said, be paid for the work they do. Right? If an artist goes to art school, they should pay all of the artists whose work they learned from, right? If a teacher teaches children in a class, that teacher should be paid a royalty each time those children make use of the knowledge they were taught, right? (I sense a sidetrack coming: yes, teachers are horribly underpaid and we desperately need to fix that, so please don't misconstrue that previous sentence.)

There's a reason we don't copyright facts, styles, and concepts.

Oh, and if you want to talk about something that stores an actual database of scraped data, makes mathematical and statistical inferences, and reproduces things exactly, look no further than Google. It's already been determined in court that what Google does is fair use.
