this post was submitted on 26 Feb 2024
1486 points (94.9% liked)
Microblog Memes
Huh? Image AI to semantic formatting, then consumption is trivial now
Could you give me an example that uses live feeds of video data, or feeds the output to another system? As far as I'm aware (I could be very wrong! Not an expert), the only things that come close to that are OCR and similar character-recognition systems. Describing in machine-readable, actionable terms what's happening in an image isn't a thing, as far as I know.
No, not live video, that didn't seem to be the topic.
But if you had the horsepower, I don't think it's impossible, based on what I've worked with. From a bottleneck standpoint, it's mostly about snipping frames and distributing the images.
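For what it's worth, a minimal sketch of what "snipping and distributing" frames might look like, assuming OpenCV for capture and a simple in-process queue; the sampling interval and the worker are placeholders, not anything specific from this thread:

```python
import queue
import threading

import cv2  # assumption: OpenCV, with a webcam or video file as the source

frame_queue = queue.Queue(maxsize=32)

def snip_frames(source=0, every_n=30):
    """Grab every Nth frame from a video source and hand it off for downstream inference."""
    cap = cv2.VideoCapture(source)
    i = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_n == 0:
            try:
                frame_queue.put_nowait(frame)  # drop frames if the workers fall behind
            except queue.Full:
                pass
        i += 1
    cap.release()

def worker():
    """Placeholder consumer; in practice this would run the image model on each frame."""
    while True:
        frame = frame_queue.get()
        # ... send `frame` to the captioning / embedding model here ...
        frame_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
snip_frames()
```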
Well, that'd be a prerequisite for a transformer model making decisions for a ship-scuttling robot, which is why I brought it up.
It is. That's actually the basis of multimodal transformers - they have a shared embedding space for multiple modes of data (e.g. text and images). If you encode data and take those embeddings, you suddenly have a vector describing the contents of your input.
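For a concrete sketch of that shared embedding space, here's roughly what it looks like with a CLIP-style model via Hugging Face `transformers`; the model checkpoint and the example labels are mine, purely illustrative:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# CLIP maps images and text into the same embedding space,
# so an image can be scored directly against text descriptions.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("frame.jpg")  # e.g. a frame snipped from a video feed
labels = ["a ship being scuttled", "a cargo ship at sea", "an empty harbor"]  # illustrative labels

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# Similarity logits between the image embedding and each text embedding.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.2f}")
```

The point isn't this particular model, just that the image embedding is an ordinary vector, so any downstream system can consume it like any other machine-readable signal.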