this post was submitted on 03 Aug 2023
106 points (63.1% liked)
Technology
Let me add that I don't think we're at the be-all and end-all of audio. I can imagine things that could hypothetically be done, given more investment in audio playback, that would create a better experience than one can get today.
When you hear audio from a given point, part of how you detect the location of the source comes from the sound interacting with your outer ears, which have a distinct shape, so what actually reaches your inner ear is slightly unique to each individual. Currently, if you're listening to a static audio file, it's the same for everyone. One could hypothetically ship hardware that fits inside the ear and builds an acoustic model of a given individual's ear, then render audio that reflects their specific ears. Audio could then be played back that sounds as if it's actually coming from a given point in space relative to the listener. That's not a drop-in improvement for existing audio, because you'd need 3D location information for the individual sources in the audio. But if audio companies wanted to sell a fancier experience for audio that does have that information, they could leverage it.
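To make the idea concrete, here's a minimal sketch of the non-personalized version of this: positioning a mono source using only an interaural time difference (Woodworth's spherical-head approximation) and a crude level difference. Everything here (the head radius, the gains, the function names) is an illustrative assumption; a personalized system would instead convolve the source with a head-related transfer function (HRTF) measured for that individual's ears.

```python
import math

SPEED_OF_SOUND = 343.0   # m/s
HEAD_RADIUS = 0.0875     # m, a rough average head radius (assumed)
SAMPLE_RATE = 44100

def binaural_delays(azimuth_deg):
    """Interaural time difference via the Woodworth approximation,
    itd = (r / c) * (az + sin(az)); returns (left, right) delay in samples."""
    az = math.radians(azimuth_deg)
    itd = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    delay_samples = itd * SAMPLE_RATE
    # Positive azimuth = source to the right: the left ear hears it later.
    if delay_samples >= 0:
        return delay_samples, 0.0
    return 0.0, -delay_samples

def spatialize(mono, azimuth_deg):
    """Crude spatialization of a mono sample list: per-ear delay plus a
    fixed level difference. A real system would convolve with an HRTF."""
    left_d, right_d = binaural_delays(azimuth_deg)

    def shift(samples, d):
        return [0.0] * int(round(d)) + samples

    # Simple level difference: attenuate the far (delayed) ear.
    gain_l = 1.0 if left_d == 0 else 0.7
    gain_r = 1.0 if right_d == 0 else 0.7
    left = [s * gain_l for s in shift(mono, left_d)]
    right = [s * gain_r for s in shift(mono, right_d)]
    return left, right
```

Because everyone's pinna filters sound differently, the generic model above is exactly what a personalized in-ear measurement could improve on.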
For decades, audio playback devices have tried to produce visual effects that synchronize with music. In my opinion they haven't done a phenomenal job, even at basic stuff like beat detection, and so clubs and the like have people who rig up DMX512 gear with manually-created annotations to make effects happen at a given point. Audio tracks today don't have a standard format for such annotations; if I go buy an album, it doesn't come with anything like that. One could produce a standard for it and rig up various gear, like strobes or colored lights, or even do this in VR, to stimulate the other senses in time with the audio.
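The "basic stuff like beat detection" mentioned above can be sketched with a naive energy-based detector: flag a window as a beat when its energy spikes above the running average of recent windows. This is an illustrative toy, not how production beat trackers work (those use onset detection and tempo estimation); all names and thresholds here are assumptions.

```python
def detect_beats(samples, sample_rate=44100, window=1024, threshold=1.5):
    """Naive energy-based beat detector: report the time (in seconds) of
    each window whose energy exceeds `threshold` times the average energy
    of the preceding ~1 second of windows."""
    energies = []
    beats = []
    history = 43  # roughly one second of 1024-sample windows at 44.1 kHz
    for i in range(0, len(samples) - window, window):
        e = sum(s * s for s in samples[i:i + window])
        if energies:
            recent = energies[-history:]
            avg = sum(recent) / len(recent)
            if avg > 0 and e > threshold * avg:
                beats.append(i / sample_rate)
        energies.append(e)
    return beats
```

The detected beat times are exactly the kind of annotation a standard track format could ship precomputed, so lighting gear wouldn't have to guess in real time.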
I suspect that very few people listen to audio in an environment where they can hear absolutely zero background sound when their audio isn't playing. You can get decent passive sound isolation, but it only goes so far; even good passive headphones still let fairly quiet sounds through. Active noise cancellation devices are improving, but they don't get one to the point of inaudibility either, and I haven't seen anything that does both good active and good passive cancellation, so using active noise cancellation means giving up good passive isolation.
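The core reason active cancellation alone can't reach inaudibility can be shown in a few lines: ANC works by emitting the phase-inverted noise, and any latency or gain error in that inverted signal leaves an audible residual. This is an idealized sketch with assumed names; real ANC operates on a live microphone feed with adaptive filtering, not a known noise buffer.

```python
def cancel(noise, anti_noise_gain=1.0, latency_samples=0):
    """Idealized active noise cancellation: add the phase-inverted noise
    back onto itself. With zero latency and unity gain the residual is
    exact silence; any latency or gain mismatch leaves residue, which is
    why real ANC attenuates rather than silences."""
    anti = [0.0] * latency_samples + [-s * anti_noise_gain for s in noise]
    return [noise[i] + anti[i] for i in range(len(noise))]
```

Since higher frequencies have shorter periods, the same few samples of latency produce a proportionally larger phase error, which is one reason ANC handles low rumble far better than hiss.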
My point is that I think that there are remaining areas for audio hardware companies to explore to try to create better experiences. I just don't think that playing audio at a sampling frequency hundreds of times above the frequencies that humans can hear is really a fantastic area to be banging on.
Isn't that what Atmos is supposed to do? Although currently we don't have personalized HRTFs for it.