I'm fine watching porn without subtitles
Opensource
A community for discussion about open source software! Ask questions, share knowledge, share news, or post interesting stuff related to it!
Why are you using VLC for porn? You download porn?!
my state banned pornhub so I made a big ass stash just in case, so yeah I guess. I also have a stash of music from YouTube in case they ever fully block yt-dlp, so I'm just a general data hoarder.
Still no live audio encoding without CLI (unless you stream to yourself), so no plug and play with Dolby/DTS
Encoding params still max out at 512 kbps on every codec without CLI.
Can't switch audio backends live (minor inconvenience, tbh)
Creates a barely usable, non-standard M3A format when saving a playlist.
I think those are about my only complaints about VLC. The default subtitles are solid, especially with multiple text boxes for signs. Playback has been solid for ages. It handles lots of tracks well, and it doesn't just wrap ffmpeg, so it's very useful for testing or debugging your setup against mplayer or mpv.
accessibility is honestly the first good use of ai. i hope they can find a way to make them better than youtube's automatic captions though.
While LLMs are truly impressive feats of engineering, it's really annoying to witness the tech hype train once again.
The app Be My Eyes pivoted from crowd-sourced assistance for the blind to using AI, and it's just fantastic. AI is truly helping lots of people in certain applications.
There are other good uses of AI. Medicine. Genetics. Research, even into humanities like history.
The problem always was the grifters who insist on calling any program more complicated than adding two numbers "AI" in the first place, shoving random technologies into random products just to further their cancerous sales shell game.
The problem is mostly CEOs and salespeople thinking they are software engineers and scientists.
I know Jeff Geerling on YouTube uses OpenAI's Whisper to generate captions for his videos instead of relying on YouTube's. Apparently they are much better than YouTube's, being nearly flawless. My guess is that Google wants to minimize the compute it spends processing videos to save money.
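For the curious, the output side of that workflow is simple: Whisper-style transcription gives you timed segments, and turning those into an .srt caption file is a few lines. A sketch — the sample segment dicts below are invented for illustration, but they mirror the `start`/`end`/`text` fields Whisper's `transcribe()` result uses:

```python
# Sketch: render Whisper-style transcription segments as SubRip (.srt) captions.
# The `sample` data below is hypothetical; real Whisper output has the same fields.

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = round(seconds * 1000)
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

def segments_to_srt(segments) -> str:
    """Render a list of {start, end, text} segments as numbered SRT blocks."""
    blocks = []
    for i, seg in enumerate(segments, start=1):
        blocks.append(
            f"{i}\n"
            f"{srt_timestamp(seg['start'])} --> {srt_timestamp(seg['end'])}\n"
            f"{seg['text'].strip()}\n"
        )
    return "\n".join(blocks)

sample = [
    {"start": 0.0, "end": 2.5, "text": " Hello and welcome back."},
    {"start": 2.5, "end": 5.0, "text": " Today we're testing captions."},
]
print(segments_to_srt(sample))
```

The resulting file can be dropped next to the video and picked up by VLC (or most other players) automatically.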
I know people are gonna freak out about the AI part in this.
But as a person with hearing difficulties this would be revolutionary. So much shit I usually just can't watch because OpenSubtitles doesn't have any subtitles for it.
I agree that this is a nice thing, just gotta point out that there are several other good websites for subtitles. Here are the ones I use frequently:
https://subdl.com/
https://www.podnapisi.net/
https://www.subf2m.co/
And if you didn't know, there are two opensubtitles websites:
https://www.opensubtitles.com/
https://www.opensubtitles.org/
Not sure if the .com one is supposed to be a more modern frontend for the .org or something, but I've found different subtitles on each, so it's worth using both.
The most important part is that it’s a local ~~LLM~~ model running on your machine. The problem with AI is less about LLMs themselves, and more about their control and application by unethical companies and governments in a world driven by profit and power. And it’s none of those things, it’s just some open source code running on your device. So that’s cool and good.
Also the incessant amounts of power/energy that they consume.
Running an LLM locally takes less power than playing a video game.
Training the models themselves also takes a lot of power.
Yeah, transcription is one of the only good uses for LLMs imo. Of course they can still produce nonsense, but bad subtitles are better than none at all.
Just an important note: speech-to-text models aren't LLMs, which are literally "conversational" or "text generation from other text" models. Things like https://github.com/openai/whisper are their own, separate types of models, specifically for transcription.
That being said, I totally agree, accessibility is an objectively good use for "AI"
That's not what LLMs are, but it's a marketing buzzword in the end I guess. What you linked is a transformer-based sequence-to-sequence model, exactly the same principle as ChatGPT and all the others.
I wouldn't say it is a good use of AI, more like one of the few barely acceptable ones. Can we accept lies and hallucinations just because the alternative is nothing at all? And how much energy/CO2 emissions should we be willing to waste on this?
Now if only I could get it to play nice with my Chromecast... But I'm sure that's on Google.
Or shitty mDNS implementations
Et tu, Brute?
VLC: automatic subtitle generation and translation based on local, open source AI models running on your machine, working offline, and supporting numerous languages!
Oh, so it's basically like YouTube's auto-generated subtitles. Never mind.
Hopefully better than YouTube's, those are often pretty bad, especially for non-English videos.
YouTube's removal of community captions was the first time I really started to hate YouTube's management: they removed an accessibility feature for no good reason, making my experience significantly worse. I still haven't found a replacement for it (at least, one that actually works).
and if you are forced to use the auto-generated ones remember no [__] swearing either! as we all know disabled people are small children who need to be coddled!
I am still waiting for seek previews
MPC-BE
All hail the peak humanity levels of VLC devs.
FOSS FTW
Perhaps we could also get a built-in AI tool for automatic subtitle synchronization?
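A crude, non-AI version of that sync fix is just shifting every timestamp in the subtitle file by a fixed offset. A minimal sketch — a real "automatic" tool would estimate the offset from the audio, while here it is supplied by hand, and the helper name is made up:

```python
import re

# Sketch: shift every HH:MM:SS,mmm timestamp in SRT text by `offset` seconds.
TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift_srt(text: str, offset: float) -> str:
    def bump(m):
        total = ((int(m[1]) * 3600 + int(m[2]) * 60 + int(m[3])) * 1000
                 + int(m[4]) + round(offset * 1000))
        total = max(total, 0)  # clamp instead of producing negative times
        h, rem = divmod(total, 3_600_000)
        mnt, rem = divmod(rem, 60_000)
        s, ms = divmod(rem, 1_000)
        return f"{h:02d}:{mnt:02d}:{s:02d},{ms:03d}"
    return TS.sub(bump, text)

# Delay subtitles that fire 1.5 s too early:
print(shift_srt("00:00:01,000 --> 00:00:02,000", 1.5))  # 00:00:02,500 --> 00:00:03,500
```

An AI-assisted version would replace the hand-picked offset with one inferred from speech activity, but the rewrite step stays the same.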
I know AI has some PR issues at the moment but I can’t see how this could possibly be interpreted as a net negative here.
In most cases, people will go for (manually) written subtitles rather than autogenerated ones, so the use case here would most often be where better, human-created subs aren't available.
I just can’t see AI / autogenerated subtitles of any kind taking jobs from humans because they will always be worse/less accurate in some way.
Solving problems related to accessibility is a worthy goal.
I've been waiting for ~~this~~ break-free playback for a long time. Just play Dark Side of the Moon without breaks in between tracks. Surely a single thread could look ahead and see the next track doesn't need any different codecs launched, it's technically identical to the current track, there's no need to have a break. /rant
And yet they still can't seek backwards
Iirc this is because of how they've optimized the file reading process; it genuinely might be more work to add efficient frame-by-frame backwards seeking than this AI subtitle feature.
That said, jfc please just add backwards seeking. It is so painful to use VLC for reviewing footage. I don't care how "inefficient" it is, my computer can handle any operation on a 100mb file.
If you have time to read the issue thread about it, it's infuriating. There are multiple viable suggestions that are dismissed because they don't work in certain edge cases where it would be impossible for any method at all to work, and which they could simply fail gracefully for.
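The keyframe problem underlying that thread can be shown with a toy model: most video frames depend on earlier frames, so displaying any frame means decoding forward from the nearest preceding keyframe. Stepping forward costs one decode; stepping backward can cost dozens. The frame layout below is invented for illustration, not VLC's actual internals:

```python
# Toy model: frames are decodable only from the nearest earlier keyframe (I-frame).
keyframes = [0, 30, 60, 90]  # hypothetical I-frame positions, one every 30 frames

def decodes_needed_to_show(target: int, keyframes) -> int:
    """Frames that must be decoded (keyframe through target) to display `target`."""
    kf = max(k for k in keyframes if k <= target)
    return target - kf + 1

# Stepping back from frame 59 to 58 still re-decodes 29 frames from keyframe 30:
print(decodes_needed_to_show(58, keyframes))  # 29
# Landing exactly on a keyframe is cheap:
print(decodes_needed_to_show(60, keyframes))  # 1
```

This is why backwards frame-stepping is genuinely more work than it looks, though, as the comment above argues, the cost is trivial for small files.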
I don't mind the idea, but I would be curious where the training data comes from. You can't just train them off of the user's (unsubtitled) videos, because you need subtitles to know if the output is right or wrong. I checked their twitter post, but it didn't seem to help.
Subtitles aren't a unique dataset; it's just audio to text.
They may have to give it some special training to be able to understand audio mixed by the Chris Nolan school of wtf are they saying.