this post was submitted on 26 Feb 2024
1486 points (94.9% liked)

Microblog Memes

5856 readers
2638 users here now

A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.

Rules:

  1. Please put at least one word relevant to the post in the post title.
  2. Be nice.
  3. No advertising, brand promotion or guerilla marketing.
  4. Posters are encouraged to link to the toot or tweet etc in the description of posts.

Related communities:

founded 1 year ago
MODERATORS
 
you are viewing a single comment's thread
[–] daltotron 8 points 9 months ago (3 children)

You know, here's an interesting kind of aside that I haven't seen talked about anywhere at all, but I'd like to ask everyone here about it to get their thoughts.

I don't think AI is generally going to just replace artists wholesale, or take over without some sort of editing; that editing will probably necessitate a kind of creative process, and that's probably going to be adjacent to what lots of artists already do. AI as a tool, rather than as a replacement. We saw this with the shift from 2D to 3D in animation. Yes, this was accompanied by, and incentivized by, a lack of unionization in the 3D workforce, but the convergence of these mediums, even only fairly recently, has bolstered artists' ability to make much smaller projects work on a larger scale than they previously would've been able to. If you really need evidence of this, you can look at much earlier Newgrounds stuff vs. the later work. There are fewer people using that site now, and the userbase has probably aged up substantially over time, but I do think it's fair to say that the quality of the work has gone up (quality obviously being subjective). Basically, Blender is pretty good software; it's very cool and good.

SO, to the point: if this is the case, and artists are able to substantially cut down on their workload while still producing similar, larger, or better outputs, will this actually affect art as an industry? Is there a pre-allocated volume of art that public consciousness will allow to exist? In which case, the number of artists would go down. Or is it more the case that there is only a pre-allocated amount of capital that can be given to art? In which case, the number of artists might stay the same, and we might just see larger volumes of art in general? I think historically the latter is the case, but that might have changed; or, more realistically, I think it would depend on external economic factors.

[–] Theharpyeagle 2 points 9 months ago (1 children)

The issue is that most art is not made just for display. Concept art, corporate art, icons, stock art, and more are how artists make their bread and butter. You might not be able to sell an unedited AI image as a print (yet), but making, say, 100 icons for a mid-budget mobile game goes from a small freelance job for an artist to no job at all. Same for someone who makes all that stock art you see on news articles or random blogs. The truth is, the vast majority of the art we look at every day isn't meant to be critically scrutinized, but it still requires artists to make. AI art dramatically reduces the small but numerous jobs for artists, who already struggle to make a living.

The contentious part is that all of this AI was trained on decades of living artists' work (and associated descriptions provided for accessibility) without their permission, and now it is actively, not theoretically, replacing their jobs. Now artists are hesitant to even post the art they want to make for fear that models will be trained to reproduce their style.

[–] daltotron 3 points 9 months ago (1 children)

I mean, two questions here. Did artists really want those jobs in the first place, for one? I would think that, along those lines, this is just kind of the long end of a process that has been taking place throughout the whole of the 20th century. It used to be that you'd have to get someone to paint your billboard or your glass storefront, and that you'd have to hire skilled draftsmen to draw up blueprints on huge boards for basically every product. Now we're at the point where you only hire an artist to draw something if you really want something that looks very original, for some reason, because otherwise you can probably just get it from a stock library and make whatever you want with stock assets, even without AI. You might also still be looking to artists for product design, but that's maybe going to be less and less the case as you get design processes that are driven more by committee and consumer feedback.

So along those lines, the total amount of work available to artists would be dwindling all the time, basically because the total amount of art floating around in culture (or at least, the total amount monetizable by culture) has remained the same for much of the 20th century, and automation has simply made it easier to get rid of artists.

Second question, right, is... I dunno, I forgot it. Damn. There's probably also some theoretical point about capitalism, and how this is just the mechanism through which it's working at current, and not the fault of the technology specifically so much as the organization and forces of the market, but I feel like everyone's already mostly made that point, and it wasn't gonna be my second question. I dunno, maybe if I cook long enough I'll remember what it was gonna be.

[–] [email protected] -1 points 9 months ago (1 children)

I'd argue that those are still jobs lost to automation; they just weren't mourned like jobs are today, because they weren't lost after decades of huge swathes of jobs being taken over by automation.

When billboard painters were replaced, there were 1000 other places they could go. When today's Fiverr app artist loses their small niche income to AI, they're losing what was already their last-ditch artistic income. It's the same issue, but the scale makes it hit WAY harder now.

[–] daltotron 1 points 9 months ago

Yeah, you're probably right. I still see plenty of artists who are able to scrape by on a combination of commissions and furry porn. But I think the bigger problem generally is just that, as wealth inequality grows and the cost of living gets worse and worse (which doesn't seem like something that's gonna get better globally, because of climate change), I don't think the consumer market is going to be able to step up and substitute some of this billboard work for "actual", or more free, artistic work. Otherwise, I think that might've been the case, more.

The efficiency gains, what would otherwise let people save more money to spend on things like artists, are all being sucked up by rich dudes, and not your average joe. Sucks omega hard.

[–] [email protected] 2 points 9 months ago (3 children)

Got vehemently disagreed with, without counter-argument, for making the point that 'AI' art already requires a decent amount of human input and knowledgeable tinkering to get an adequate result.

I, for example, can't sit down and make Midjourney output a human with only five fingers per hand. I'm sure I wouldn't have to look too hard to find a tutorial on how to solve that hurdle, and the effort is no doubt a lot less than painting it myself. But my point stands that 'AI'/LLMs aren't doing diddly useful squat on their own, and won't be for a while, because so far they just do not understand abstract reasoning, and so need humans to accommodate that element.

I recall an excellent article that pointed out 'AI' doesn't understand a prompt that says 'no giraffes' because people do not label every image on the internet that does not contain giraffes with 'no giraffes'.

So, waffle coming to a close, I absolutely agree with you. As it stands - and likely for a long while yet - 'AI'/LLMs are just a tool that can be helpful to artists in certain situations.

The point of the OP microblog still stands though; our system prioritises made up money trees over actual human life.

[–] [email protected] 4 points 9 months ago (1 children)

Minor nitpick, but negative weights exist for the purpose of excluding a result. Yeah, people don't list "no giraffe" as a tag, but if you apply a negative weight to "giraffe", the AI will try to exclude any result that might count as a giraffe.
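
For anyone curious how that works mechanically: in Stable-Diffusion-style samplers, the negative prompt is typically substituted for the empty "unconditional" prompt in classifier-free guidance, so each denoising step is steered away from whatever the negative prompt describes. A toy numpy sketch of that guidance step (the vectors and prompts here are made up for illustration, not real model predictions):

```python
import numpy as np

def cfg_step(pred_cond, pred_neg, guidance_scale=7.5):
    """One classifier-free guidance step: start from the noise prediction
    conditioned on the negative prompt and push toward the prediction for
    the positive prompt, moving away from the negative concept."""
    return pred_neg + guidance_scale * (pred_cond - pred_neg)

# Pretend the model predicts these 2-D "noise" vectors for the positive
# prompt ("a savanna") and the negative prompt ("giraffe").
pred_cond = np.array([1.0, 0.2])
pred_neg = np.array([0.4, 0.8])

guided = cfg_step(pred_cond, pred_neg, guidance_scale=2.0)
```

With a guidance scale above 1, the result is pushed along the direction from the negative prediction toward the positive one, i.e. away from "giraffe".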

[–] [email protected] 1 points 9 months ago (1 children)

Fair enough; I don't know enough about it.

Does my point still stand up though? It requires a human to tweak these things - to prompt. The 'AI' isn't imagining it up on its own.

[–] [email protected] 2 points 9 months ago

Ye, that's why I said my nitpick was minor. AI art is now my main preferred way of creating art, and it still takes me days of work and plenty of time in Photoshop tweaking my results to get the final result I want.

[–] FlyingSquid 2 points 9 months ago

> Got vehemently disagreed with, without counter-argument, for making the point that 'AI' art already requires a decent amount of human input and knowledgeable tinkering to get an adequate result.

I think that is a good point for now, but I think we are also going toward a point where that won't be much of a hurdle.

The issue will be turning what AI can create into something people actually want to watch. And that will definitely still take humans until an AGI emerges.

[–] daltotron 1 points 9 months ago* (last edited 9 months ago)

You know what's interesting: I bet a lot of those problems with, say, Stable Diffusion and generative models would be solved if they were more capable of trusting their prompters to have some measure of artistic ability, rather than only being able to take in keywords. It'd be much easier, I would think, to interface with something that makes images through the language of images.

Drawing thumbnails, or stick figures, or basic shapes and forms would, I think, make it much easier to interact with the model and get what you want out of it. You could just draw a stick figure of a hand, and blam: proper number of fingers, and all that. It's really funny to me that a core problem with much of this technology, I think, is basically that it's become separated from the artistic methods it's meant to assist. We could do a great deal with what already exists, without the need to endlessly scale it up (apparently the only form of concrete progression the space is capable of), if only we were willing to focus on making it more usable, and were perhaps more open to artistic input. Alas, this is not to be.

But I suppose that would happen to anything that falls victim to being "the next big thing" in the tech space, like crypto, or NFTs, or what have you. Just turns into a pile of shit. The midas touch, but instead of gold, things turn to shit. The shitass touch.

Edit: oh yeah, I also agree with all of what you said; this shit kinda sucks bunk as it is, but mostly people have a problem with capitalism. I think this problem in particular gets a lot of air because of how much influence artists and writers (creatives) have over the airwaves generally, as high-profile communicators, and because this is kind of the main problem capitalism is confronting them with in this particular moment. It's also just kind of a high-profile thing; everyone's dumping into it rn, which sucks.

[–] dejected_warp_core 2 points 9 months ago* (last edited 9 months ago)

I agree.

The most succinct way I can make this argument to the layperson is that "AI", as it exists today, is terrifyingly good at mimicry. But that's all it can do. Attributing more to this synthetic neural network makes about as much sense as saying a parrot understands grammar and syntax because it can perfectly reproduce a few words in the right context, or with the right prompt.

From this vantage point, we can clearly see how this technology is severely limited. It can be asked to synthesize new outputs, but that's merely an extrapolation of the input training set. While this isn't all that different from what people can do, and often do, it's not a fully rational intelligence that solves problems outside that framing. For that, one needs a general intelligence, capable of extrapolating meaning from context and generating novel concepts.

Moreover, if you want an AI to generate something, you first need to define the general ballpark for the right answer(s). Data gathering, cleaning, and categorization (tagging) make up a big labor problem that feeds into the AI itself. So there are also a lot of real-world problems that don't fit this model, for a whole bunch of reasons: not having a working dataset at all, information that doesn't digitize well, or areas too small to properly feed this process in the first place. People function just fine in those spaces, so again, we can see a gap that is not easily closed.