this post was submitted on 14 Aug 2024
-53 points (14.7% liked)

Technology

[–] [email protected] 0 points 4 months ago

In the long run, sure.

In the near term? No, not by a long shot.

There are some tasks we can automate, and that will happen. That's been a very long-running trend, though; it's nothing new. People generally don't write machine language by physically flipping switches these days; many decades of automation have happened since then.

I also don't think that a slightly-tweaked latent diffusion model, of the present "generative AI" form, will get all that far, either. The fundamental problem is that taking an incomplete specification in human language and translating it to a precise set of rules in machine language, drawing on knowledge of the real world, isn't something I expect you can do very effectively by training on an existing corpus.
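A toy illustration of how under-specified human language is (the data here is just made up for the example): even a request as simple as "sort these names" hides a decision that two equally reasonable programs resolve differently.

```python
# A vague request like "sort these names" hides real decisions.
# Both readings below are defensible, and they disagree.
names = ["alice", "Bob"]

print(sorted(names))                 # bytewise ordering: uppercase sorts first
print(sorted(names, key=str.lower))  # case-insensitive: plain alphabetical

# → ['Bob', 'alice'] versus ['alice', 'Bob']
```

Neither output is "wrong" -- the English sentence just never said which one was wanted, and a precise program has to pick.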

The existing generative AIs work well on tasks where you have a large training corpus that maps from something like human language to an image. The resulting images don't have many hard constraints on their precision; you can illustrate that by generating a batch of ten images for a given prompt: they might all look different, but a fair number will look decent enough.

I think that some of that is because humans typically process images and language in a way that is pretty permissive of errors; we rely heavily on context and our past knowledge of the real world to come up with the correct meaning. An image just needs to "cue" our memories and understanding of the world. We can see images that are distorted or stylized, or see pixel art, and recognize it for what it is.

But...that's not what a CPU does. Machine language is not very tolerant of errors.
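A minimal sketch of that asymmetry (the function names here are invented for the example): a one-character slip in code silently changes its meaning, while a comparable amount of noise in human-readable text costs the reader nothing.

```python
def mean(xs):
    # intended behavior: true division
    return sum(xs) / len(xs)

def mean_typo(xs):
    # one-character error: "/" became "//", so the result is
    # silently truncated toward an integer -- no crash, just wrong
    return sum(xs) // len(xs)

print(mean([1, 2, 2]))       # → 1.666...
print(mean_typo([1, 2, 2]))  # → 1

# A human reader shrugs off a similar amount of noise:
print("Teh quick brwon fox jumps over the lzay dog")
```

The garbled sentence is still instantly legible; the garbled function just quietly returns the wrong number.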

So I'd expect a generative AI to be decent at putting out content intended to be consumed by humans -- and we have, in fact, had a number of impressive examples of that working. But I'd expect it to be less good at putting out content intended to be consumed by a CPU.

I think that that lack of tolerance for error, plus the need to pull in information from the real world, is going to make translating human language to machine language a worse match for this approach than translating human language to human language, or human language to a human-consumable image.